Analysis of Algorithms
Minimum Spanning Trees
Andres Mendez-Vazquez
November 8, 2015
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal’s algorithm
2 Kruskal’s Algorithm
Directly from the previous Corollary
3 Prim’s Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
Originally
We had a Graph without weights
[Figure: an unweighted graph on nine vertices, labeled 1-9]
Then
Now, the graph has weights on its edges
[Figure: a weighted graph on vertices a-i]
Finally, the optimization problem
We want to find
min_T ∑_{(u,v)∈T} w(u, v)

Where T ⊆ E such that T is acyclic and connects all the vertices.
[Figure: the weighted graph from the previous slide]
This problem is called
The minimum spanning tree problem
When do you need minimum spanning trees?
In power distribution
We want to connect points x and y with the minimum amount of cable.
In a wireless network
Given a collection of mobile beacons we want to maintain the minimum
connection overhead between all of them.
Some Applications
Tracking the genetic variance of age- and gender-associated Staphylococcus aureus
Some Applications
What?
Urban Tapestries is an interactive location-based wireless application
allowing users to access and publish location-specific multimedia content.
Using an MST, we can create paths for public multimedia shows that are
not too exhausting.
These models can be seen as
Connected, undirected graphs G = (V , E)
E is the set of possible connections between pairs of beacons.
Each of these edges (u, v) has a weight w(u, v) specifying the cost
of connecting u and v.
Growing a Minimum Spanning Tree
There are two classic algorithms: Prim's and Kruskal's
Both Kruskal's and Prim's algorithms use a greedy approach.
Basic greedy idea
Prior to each iteration, A is a subset of some minimum spanning tree.
At each step, we determine an edge (u, v) that can be added to A
such that A ∪ {(u, v)} is also a subset of a minimum spanning tree.
Generic minimum spanning tree algorithm
A Generic Code
Generic-MST(G, w)
1 A = ∅
2 while A does not form a spanning tree
3 do find an edge (u, v) that is safe for A
4 A = A ∪ {(u, v)}
5 return A
This has the following loop invariance
Initialization: After line 1, A trivially satisfies the loop invariant.
Maintenance: The loop only adds safe edges.
Termination: The final A contains all the edges in a minimum spanning
tree.
Some basic definitions for the Greedy Choice
A cut (S, V − S) is a partition of V
Then (u, v) in E crosses the cut (S, V − S) if one end point is in S
and the other is in V − S.
We say that a cut respects A if no edge in A crosses the cut.
A light edge is an edge crossing the cut with minimum weight with
respect to the other edges crossing the cut.
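To make these definitions concrete, here is a small sketch (not part of the slides; the graph and names are hypothetical) that lists the edges crossing a cut and picks out a light edge:

```python
def crossing_edges(edges, S):
    """Edges (u, v, w) with exactly one endpoint in S: they cross (S, V - S)."""
    return [(u, v, w) for (u, v, w) in edges if (u in S) != (v in S)]

def light_edge(edges, S):
    """A minimum-weight edge crossing the cut (S, V - S)."""
    return min(crossing_edges(edges, S), key=lambda e: e[2])

# Hypothetical example: the cut separates {a, b} from {c, d}.
edges = [("a", "b", 4), ("b", "c", 8), ("a", "c", 11), ("c", "d", 2)]
S = {"a", "b"}
print(light_edge(edges, S))  # ('b', 'c', 8): lighter than the other crossing edge (a, c, 11)
```

Note that (a, b, 4) and (c, d, 2) do not cross the cut, so they do not compete for being the light edge.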
The Greedy Choice
Remark
The following algorithms are based on the Greedy Choice.
Which Greedy Choice?
The way we add edges to the set of edges belonging to the Minimum
Spanning Tree.
They are known as
Safe Edges
Recognizing safe edges
Theorem for Recognizing Safe Edges (23.1)
Let G = (V , E) be a connected, undirected graph with weights w defined
on E. Let A ⊆ E that is included in a MST for G, let (S, V − S) be any
cut of G that respects A, and let (u, v) be a light edge crossing
(S, V − S). Then, edge (u, v) is safe for A.
[Figure: a cut (S, V − S) of a weighted graph that respects A; the light edge crossing the cut is safe for A]
Observations
Notice that
At any point in the execution of the algorithm the graph
GA = (V , A) is a forest, and each of the connected components
of GA is a tree.
Thus
Any safe edge (u, v) for A connects distinct components of GA, since
A ∪ {(u, v)} must be acyclic.
The basic corollary
Corollary 23.2
Let G = (V , E) be a connected, undirected graph with real-valued weight
function w defined on E. Let A be a subset of E that is included in some
minimum spanning tree for G, and let C = (Vc, Ec) be a connected
component (tree) in the forest GA = (V , A). If (u, v) is a light edge
connecting C to some other component in GA, then (u, v) is safe for A.
Proof
The cut (Vc, V − Vc) respects A, and (u, v) is a light edge for this cut.
Therefore, (u, v) is safe for A.
Kruskal’s Algorithm
Algorithm
MST-KRUSKAL(G, w)
1 A = ∅
2 for each vertex v ∈ V [G]
3 do Make-Set(v)
4 sort the edges of E into non-decreasing order by weight w
5 for each edge (u, v) ∈ E taken in non-decreasing order by weight
6 do if FIND-SET(u) ≠ FIND-SET(v)
7 then A = A ∪ {(u, v)}
8 Union(u,v)
9 return A
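A runnable Python sketch of MST-KRUSKAL (not from the slides): a simple `parent` dictionary stands in for MAKE-SET / FIND-SET / UNION, and the example graph is hypothetical.

```python
def mst_kruskal(vertices, edges):
    """edges: list of (w, u, v) tuples. Returns the MST edge set A."""
    parent = {v: v for v in vertices}           # MAKE-SET for each vertex

    def find(x):                                # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    A = []
    for w, u, v in sorted(edges):               # non-decreasing order by weight
        if find(u) != find(v):                  # distinct components: safe edge
            A.append((u, v))
            parent[find(u)] = find(v)           # UNION
    return A

# Hypothetical example graph.
vertices = ["a", "b", "c", "d"]
edges = [(1, "a", "b"), (4, "b", "c"), (3, "a", "c"), (2, "c", "d")]
print(mst_kruskal(vertices, edges))  # [('a', 'b'), ('c', 'd'), ('a', 'c')]
```

Edge (b, c) is rejected: by the time it is examined, b and c are already in the same component.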
Let us run the Algorithm
We have as an input the following graph
[Figure: the weighted input graph on vertices a-i]
Let us run the Algorithm
1st step: every vertex is its own set!!!
[Figure: each vertex in its own singleton component]
Let us run the Algorithm
Given (f, g) with weight 1
Question: FIND-SET(f) = FIND-SET(g)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(f, g)}, next FIND-SET(f) = FIND-SET(i)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(f, i)}, next FIND-SET(c) = FIND-SET(f)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(c, f)}, next FIND-SET(a) = FIND-SET(d)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(a, d)}, next FIND-SET(b) = FIND-SET(e)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(b, e)}, next FIND-SET(e) = FIND-SET(i)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(e, i)}, next FIND-SET(b) = FIND-SET(f)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A, next FIND-SET(b) = FIND-SET(c)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A, next FIND-SET(d) = FIND-SET(e)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(d, e)}, next FIND-SET(a) = FIND-SET(b)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A, next FIND-SET(e) = FIND-SET(g)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A, next FIND-SET(g) = FIND-SET(h)?
[Figure: the graph at this step]
Let us run the Algorithm
Then A = A ∪ {(g, h)}
[Figure: the finished minimum spanning tree]
Complexity
Explanation
Line 1. Initializing the set A takes O(1) time.
Line 4. Sorting the edges takes O(E log E).
Lines 5 to 8. The for loop performs:
O(E) FIND-SET and UNION operations.
Along with the |V | MAKE-SET operations, these take a total of O((V + E)α(V )) time,
where α is the very slowly growing inverse of Ackermann's function.
Thus
Given that G is connected, we have |E| ≥ |V | − 1, and so the
disjoint-set operations take O(Eα(V )) time and
α(|V |) = O(log V ) = O(log E).
The total running time of Kruskal’s algorithm is O(E log E), but
observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that
log |E| = O (log V ), and so we can restate the running time of the
algorithm as O(E log V ).
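The α(V) bound above assumes a disjoint-set forest with union by rank and path compression. A minimal sketch of that structure (not part of the slides):

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self, elements):
        self.parent = {x: x for x in elements}   # MAKE-SET for every element
        self.rank = {x: 0 for x in elements}

    def find(self, x):                           # FIND-SET with path compression
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):                       # UNION by rank
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:        # attach the shorter tree
            rx, ry = ry, rx                      # under the taller one
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

With both heuristics, any sequence of m operations on n elements runs in O(m α(n)) amortized time, which is what the analysis above relies on.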
Prim’s Algorithm
Prim’s algorithm operates much like Dijkstra’s algorithm
The tree starts from an arbitrary root vertex r.
At each step, a light edge is added to the tree A that connects A to
an isolated vertex of GA = (V , A).
When the algorithm terminates, the edges in A form a minimum
spanning tree.
Problem
Important
In order to implement Prim’s algorithm efficiently, we need a fast way to
select a new edge to add to the tree formed by the edges in A.
For this, we use a min-priority queue Q
During execution of the algorithm, all vertices that are not in the tree
reside in a min-priority queue Q based on a key attribute.
There is a field key for every vertex v
It is the minimum weight of any edge connecting v to a vertex in the
minimum spanning tree (THE LIGHT EDGE!!!).
By convention, v.key = ∞ if there is no such edge.
The algorithm
Pseudo-code
MST-PRIM(G, w, r)
1 for each u ∈ V [G]
2 u.key = ∞
3 u.π = NIL
4 r.key = 0
5 Q = V [G]
6 while Q ≠ ∅
7 u =Extract-Min(Q)
8 for each v ∈ Adj [u]
9 if v ∈ Q and w (u, v) < v.key
10 π [v] = u
11 v.key = w (u, v) (an implicit DECREASE-KEY in Q)
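A runnable Python sketch of MST-PRIM (not from the slides). Since `heapq` has no DECREASE-KEY operation, this sketch pushes a fresh entry on each key update and skips stale entries at extraction time (a common lazy-deletion substitute; the asymptotics remain O(E log V)). The example graph is hypothetical.

```python
import heapq

def mst_prim(adj, r):
    """adj: {u: [(v, w), ...]} adjacency lists. Returns {v: parent} for the MST rooted at r."""
    pi = {v: None for v in adj}              # predecessor field (NIL = None)
    key = {v: float("inf") for v in adj}     # light-edge weight into the tree
    key[r] = 0
    Q = [(0, r)]                             # min-priority queue of (key, vertex)
    in_tree = set()
    while Q:
        _, u = heapq.heappop(Q)              # EXTRACT-MIN
        if u in in_tree:
            continue                          # stale entry: key was lowered later
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                pi[v] = u                     # record the light edge
                key[v] = w                    # implicit DECREASE-KEY
                heapq.heappush(Q, (w, v))
    return pi

# Hypothetical example graph.
adj = {"a": [("b", 1), ("c", 3)],
       "b": [("a", 1), ("c", 2)],
       "c": [("a", 3), ("b", 2)]}
print(mst_prim(adj, "a"))  # {'a': None, 'b': 'a', 'c': 'b'}
```

Note how c's key drops from 3 to 2 once b joins the tree, so the MST uses edge (b, c) rather than (a, c).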
Explanation
Observations
1 A = {(v, π[v]) : v ∈ V − {r} − Q}.
2 The vertices already placed into the minimum spanning tree are those
in V − Q.
3 For all vertices v ∈ Q, if π[v] ≠ NIL, then key[v] < ∞ and key[v] is
the weight of a light edge (v, π[v]) connecting v to some vertex
already placed into the minimum spanning tree.
Let us run the Algorithm
We have as an input the following graph
[Figure: the weighted input graph on vertices a-i]
Let us run the Algorithm
Select r = b
[Figure: the graph with root b selected]
Let us run the Algorithm
Extract b from the priority queue Q
[Figure: the graph at this step]
Let us run the Algorithm
Update the predecessor of a and its key to 12 from ∞
[Figure: the graph at this step]
Note: the red color represents the field π[v].
Let us run the Algorithm
Update the predecessor of c and its key to 9 from ∞
[Figure: the graph at this step]
Let us run the Algorithm
Update the predecessor of e and its key to 5 from ∞
[Figure: the graph at this step]
Let us run the Algorithm
Update the predecessor of f and its key to 8 from ∞
[Figure: the graph at this step]
Let us run the Algorithm
Extract e, then update adjacent vertices
[Figure: the graph at this step]
Let us run the Algorithm
Extract i from the priority queue Q
[Figure: the graph at this step]
Let us run the Algorithm
Update adjacent vertices
[Figure: the graph at this step]
Let us run the Algorithm
Extract f and update adjacent vertices
[Figure: the graph at this step]
Let us run the Algorithm
Extract g and update
[Figure: the graph at this step]
Let us run the Algorithm
Extract c and no update
[Figure: the graph at this step]
Let us run the Algorithm
Extract d and update key at 1
[Figure: the graph at this step]
Let us run the Algorithm
Extract a and no update
[Figure: the graph at this step]
Let us run the Algorithm
Extract h
[Figure: the finished minimum spanning tree]
Complexity I
Complexity analysis
The performance of Prim’s algorithm depends on how we implement
the min-priority queue Q.
If Q is a binary min-heap, the BUILD-MIN-HEAP procedure can perform
the initialization in lines 1 to 5 in O(|V |) time.
The body of the while loop is executed |V | times; since each
EXTRACT-MIN operation takes O(log V ) time, the total time for all
calls to EXTRACT-MIN is O(V log V ).
The for loop in lines 8 to 11 is executed O(E) times altogether, since
the sum of the lengths of all adjacency lists is 2|E|.
Complexity II
Complexity analysis (continuation)
Within the for loop, the test for membership in Q in line 9 can be
implemented in constant time.
The assignment in line 11 involves an implicit DECREASE-KEY
operation on the min-heap, which can be implemented in a binary
min-heap in O(log V ) time. Thus, the total time for Prim’s algorithm
is:
O(V log V + E log V ) = O(E log V )
If you use Fibonacci Heaps
Complexity analysis
EXTRACT-MIN operation in O(log V ) amortized time.
DECREASE-KEY operation (to implement line 11) in O(1) amortized
time.
If we use a Fibonacci Heap to implement the min-priority queue Q we
get a running time of O(E + V log V ).
Faster Algorithms
Linear Time Algorithms
Karger, Klein & Tarjan (1995) proposed a linear time randomized
algorithm.
The Fastest (O (Eα (E, V ))) by Bernard Chazelle (2000) is based on
the soft heap, an approximate priority queue.
Chazelle has also written essays about music and politics
Linear-time algorithms in special cases
If the graph is dense, i.e. log log log V ≤ |E|/|V |, then a deterministic
algorithm by Fredman and Tarjan finds the MST in time O (E).
Applications
Minimum spanning trees have direct applications in the design of
networks
Telecommunications networks
Transportation networks
Water supply networks
Electrical grids
As a subroutine in
Machine Learning/Big Data Cluster Analysis
Network communications use the Spanning Tree Protocol (STP)
Image registration and segmentation
Circuit design: implementing efficient multiple constant
multiplications, as used in finite impulse response filters.
Etc.
67 / 69
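The cluster-analysis use above can be sketched concretely: single-linkage clustering into k groups is exactly an MST computation followed by deleting the k − 1 heaviest tree edges. A minimal sketch using Kruskal's algorithm (the point set and k here are made up for illustration):

```python
def find(parent, x):
    """Find the set representative of x, with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) on vertices 0..n-1.
    Returns the MST edge list, assuming the graph is connected."""
    parent = list(range(n))
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:            # the edge is safe: it joins two components
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def single_linkage(n, edges, k):
    """Cluster into k groups: build the MST, keep only its n-k
    lightest edges, and label vertices by connected component."""
    mst = kruskal_mst(n, edges)
    kept = sorted(mst)[: n - k]
    parent = list(range(n))
    for _, u, v in kept:
        parent[find(parent, u)] = find(parent, v)
    return [find(parent, x) for x in range(n)]
```

For example, on a path graph whose weights are 1, 1, 10, 1, asking for k = 2 clusters cuts the weight-10 edge and separates the two ends.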
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal’s algorithm
2 Kruskal’s Algorithm
Directly from the previous Corollary
3 Prim’s Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
68 / 69
Exercises
From Cormen’s book, solve:
23.1-3
23.1-5
23.1-7
23.1-9
23.2-2
23.2-3
23.2-5
23.2-7
69 / 69

19 Minimum Spanning Trees

  • 1.
    Analysis of Algorithms MinimumSpanning Trees Andres Mendez-Vazquez November 8, 2015 1 / 69
  • 2.
    Outline 1 Spanning trees Basicconcepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 2 / 69
  • 3.
    Outline 1 Spanning trees Basicconcepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 3 / 69
  • 4.
    Originally We had aGraph without weights 2 3 5 1 4 6 7 9 8 4 / 69
  • 5.
    Then Now, we havehave weights b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 5 / 69
  • 6.
    Finally, the optimizationproblem We want to find min T (u,v)∈T w(u, v) Where T ⊆ E such that T is acyclic and connects all the vertices. 2 3 5 1 4 6 7 9 8 12 4 5 11 8 9 2 1 17 8 2 21 This problem is called The minimum spanning tree problem 6 / 69
  • 7.
    Finally, the optimizationproblem We want to find min T (u,v)∈T w(u, v) Where T ⊆ E such that T is acyclic and connects all the vertices. 2 3 5 1 4 6 7 9 8 12 4 5 11 8 9 2 1 17 8 2 21 This problem is called The minimum spanning tree problem 6 / 69
  • 8.
    When do youneed minimum spanning trees? In power distribution We want to connect points x and y with the minimum amount of cable. In a wireless network Given a collection of mobile beacons we want to maintain the minimum connection overhead between all of them. 7 / 69
  • 9.
    When do youneed minimum spanning trees? In power distribution We want to connect points x and y with the minimum amount of cable. In a wireless network Given a collection of mobile beacons we want to maintain the minimum connection overhead between all of them. 7 / 69
  • 10.
    Some Applications Tracking theGenetic Variance of Age-Gender-Associated Staphylococcus Aureus 8 / 69
  • 11.
    Some Applications What? Urban Tapestriesis an interactive location-based wireless application allowing users to access and publish location-specific multimedia content. Using MST we can create paths for public multimedia shows that are no too exhausting 9 / 69
  • 12.
    These models canbe seen as Connected, undirected graphs G = (V , E) E is the set of possible connections between pairs of beacons. Each of the this edges (u, v) has a weight w(u, v) specifying the cost of connecting u and v. 10 / 69
  • 13.
    These models canbe seen as Connected, undirected graphs G = (V , E) E is the set of possible connections between pairs of beacons. Each of the this edges (u, v) has a weight w(u, v) specifying the cost of connecting u and v. 10 / 69
  • 14.
    Outline 1 Spanning trees Basicconcepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 11 / 69
  • 15.
    Growing a MinimumSpanning Tree There are two classic algorithms, Prim and Kruskal Both algorithms Kruskal and Prim use a greedy approach. Basic greedy idea Prior to each iteration, A is a subset of some minimum spanning tree. At each step, we determine an edge (u, v) that can be added to A such that A ∪ {(u, v)} is also a subset of a minimum spanning tree. 12 / 69
  • 16.
    Growing a MinimumSpanning Tree There are two classic algorithms, Prim and Kruskal Both algorithms Kruskal and Prim use a greedy approach. Basic greedy idea Prior to each iteration, A is a subset of some minimum spanning tree. At each step, we determine an edge (u, v) that can be added to A such that A ∪ {(u, v)} is also a subset of a minimum spanning tree. 12 / 69
  • 17.
    Growing a MinimumSpanning Tree There are two classic algorithms, Prim and Kruskal Both algorithms Kruskal and Prim use a greedy approach. Basic greedy idea Prior to each iteration, A is a subset of some minimum spanning tree. At each step, we determine an edge (u, v) that can be added to A such that A ∪ {(u, v)} is also a subset of a minimum spanning tree. 12 / 69
  • 18.
    Generic minimum spanningtree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 19.
    Generic minimum spanningtree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 20.
    Generic minimum spanningtree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 21.
    Generic minimum spanningtree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 22.
    Some basic definitionsfor the Greedy Choice A cut (S, V − S) is a partition of V Then (u, v) in E crosses the cut (S, V − S) if one end point is in S and the other is in V − S. We say that a cut respects A if no edge in A crosses the cut. A light edge is an edge crossing the cut with minimum weight with respect to the other edges crossing the cut. 14 / 69
  • 23.
    Some basic definitionsfor the Greedy Choice A cut (S, V − S) is a partition of V Then (u, v) in E crosses the cut (S, V − S) if one end point is in S and the other is in V − S. We say that a cut respects A if no edge in A crosses the cut. A light edge is an edge crossing the cut with minimum weight with respect to the other edges crossing the cut. 14 / 69
  • 24.
    Some basic definitionsfor the Greedy Choice A cut (S, V − S) is a partition of V Then (u, v) in E crosses the cut (S, V − S) if one end point is in S and the other is in V − S. We say that a cut respects A if no edge in A crosses the cut. A light edge is an edge crossing the cut with minimum weight with respect to the other edges crossing the cut. 14 / 69
  • 25.
    Outline 1 Spanning trees Basicconcepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 15 / 69
  • 26.
    The Greedy Choice Remark Thefollowing algorithms are based in the Greedy Choice. Which Greedy Choice? The way we add edges to the set of edges belonging to the Minimum Spanning Trees. They are known as Safe Edges 16 / 69
  • 27.
    The Greedy Choice Remark Thefollowing algorithms are based in the Greedy Choice. Which Greedy Choice? The way we add edges to the set of edges belonging to the Minimum Spanning Trees. They are known as Safe Edges 16 / 69
  • 28.
    The Greedy Choice Remark Thefollowing algorithms are based in the Greedy Choice. Which Greedy Choice? The way we add edges to the set of edges belonging to the Minimum Spanning Trees. They are known as Safe Edges 16 / 69
  • 29.
    Recognizing safe edges Theoremfor Recognizing Safe Edges (23.1) Let G = (V , E) be a connected, undirected graph with weights w defined on E. Let A ⊆ E that is included in a MST for G, let (S, V − S) be any cut of G that respects A, and let (u, v) be a light edge crossing (S, V − S). Then, edge (u, v) is safe for A. b c d i h g f ea 4 8 8 11 7 2 6 1 2 2 4 9 14 10 S V-S S V-S 17 / 69
  • 30.
    Observations Notice that At anypoint in the execution of the algorithm the graph GA = (V , A) is a forest, and each of the connected components of GA is a tree. Thus Any safe edge (u, v) for A connects distinct components of GA, since A ∪ {(u, v)} must be acyclic. 18 / 69
  • 31.
    Observations Notice that At anypoint in the execution of the algorithm the graph GA = (V , A) is a forest, and each of the connected components of GA is a tree. Thus Any safe edge (u, v) for A connects distinct components of GA, since A ∪ {(u, v)} must be acyclic. 18 / 69
  • 32.
    The basic corollary Corollary23.2 Let G = (V , E) be a connected, undirected graph with real-valued weight function w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, and let C = (Vc, Ec) be a connected component (tree) in the forest GA = (V , A). If (u, v) is a light edge connecting C to some other component in GA, then (u, v) is safe for A. Proof The cut (Vc, V − Vc) respects A, and (u, v) is a light edge for this cut. Therefore, (u, v) is safe for A. 19 / 69
  • 33.
    The basic corollary Corollary23.2 Let G = (V , E) be a connected, undirected graph with real-valued weight function w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, and let C = (Vc, Ec) be a connected component (tree) in the forest GA = (V , A). If (u, v) is a light edge connecting C to some other component in GA, then (u, v) is safe for A. Proof The cut (Vc, V − Vc) respects A, and (u, v) is a light edge for this cut. Therefore, (u, v) is safe for A. 19 / 69
  • 34.
    Outline 1 Spanning trees Basicconcepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 20 / 69
  • 35.
    Outline 1 Spanning trees Basicconcepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 21 / 69
  • 36.
    Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 37.
    Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 38.
    Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 39.
    Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 40.
    Let us runthe Algorithm We have as an input the following graph b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 23 / 69
  • 41.
    Let us runthe Algorithm 1st step everybody is a set!!! b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 24 / 69
  • 42.
    Let us runthe Algorithm Given (f , g) with weight 1 Question: FIND − SET(f ) = FIND − SET(g)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 25 / 69
  • 43.
    Let us runthe Algorithm Then A = A ∪ {(f , g)}, next FIND − SET(f ) = FIND − SET(i)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 26 / 69
  • 44.
    Let us runthe Algorithm Then A = A ∪ {(f , i)}, next FIND − SET(c) = FIND − SET(f )? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 27 / 69
  • 45.
    Let us runthe Algorithm Then A = A ∪ {(c, f )}, next FIND − SET(a) = FIND − SET(d)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 28 / 69
  • 46.
    Let us runthe Algorithm Then A = A ∪ {(a, d)}, next FIND − SET(b) = FIND − SET(e)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 29 / 69
  • 47.
    Let us runthe Algorithm Then A = A ∪ {(b, e)}, next FIND − SET(e) = FIND − SET(i)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 30 / 69
  • 48.
    Let us runthe Algorithm Then A = A ∪ {(e, i)}, next FIND − SET(b) = FIND − SET(f )? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 31 / 69
  • 49.
    Let us runthe Algorithm Then A = A, next FIND − SET(b) = FIND − SET(c)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 32 / 69
  • 50.
    Let us runthe Algorithm Then A = A, next FIND − SET(d) = FIND − SET(e)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 33 / 69
  • 51.
    Let us runthe Algorithm Then A = A ∪ {(d, e)}, next FIND − SET(a) = FIND − SET(b)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 34 / 69
  • 52.
    Let us runthe Algorithm Then A = A, next FIND − SET(e) = FIND − SET(g)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 35 / 69
  • 53.
    Let us runthe Algorithm Then A = A, next FIND − SET(g) = FIND − SET(h)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 36 / 69
  • 54.
    Let us runthe Algorithm Then A = A ∪ {(g, h)} b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 37 / 69
  • 55.
    Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 38 / 69
  • 56.
    Complexity Explanation Line 1. Initializingthe set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 57.
    Complexity Explanation Line 1. Initializingthe set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 58.
    Complexity Explanation Line 1. Initializingthe set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 59.
    Complexity Explanation Line 1. Initializingthe set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 60.
    Complexity Explanation Line 1. Initializingthe set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 61.
    Complexity Explanation Line 1. Initializingthe set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 62.
    Complexity Explanation Line 1. Initializingthe set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 63.
    Outline 1 Spanning trees Basicconcepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 40 / 69
  • 64.
    Prim’s Algorithm Prim’s algorithmoperates much like Dijkstra’s algorithm The tree starts from an arbitrary root vertex r. At each step, a light edge is added to the tree A that connects A to an isolated vertex of GA = (V , A). When the algorithm terminates, the edges in A form a minimum spanning tree. 41 / 69
  • 65.
    Prim’s Algorithm Prim’s algorithmoperates much like Dijkstra’s algorithm The tree starts from an arbitrary root vertex r. At each step, a light edge is added to the tree A that connects A to an isolated vertex of GA = (V , A). When the algorithm terminates, the edges in A form a minimum spanning tree. 41 / 69
  • 66.
    Prim’s Algorithm Prim’s algorithmoperates much like Dijkstra’s algorithm The tree starts from an arbitrary root vertex r. At each step, a light edge is added to the tree A that connects A to an isolated vertex of GA = (V , A). When the algorithm terminates, the edges in A form a minimum spanning tree. 41 / 69
  • 67.
    Problem Important In order toimplement Prim’s algorithm efficiently, we need a fast way to select a new edge to add to the tree formed by the edges in A. For this, we use a min-priority queue Q During execution of the algorithm, all vertices that are not in the tree reside in a min-priority queue Q based on a key attribute. There is a field key for every vertex v It is the minimum weight of any edge connecting v to a vertex in the minimum spanning tree (THE LIGHT EDGE!!!). By convention, v.key = ∞ if there is no such edge. 42 / 69
  • 68.
    Problem Important In order toimplement Prim’s algorithm efficiently, we need a fast way to select a new edge to add to the tree formed by the edges in A. For this, we use a min-priority queue Q During execution of the algorithm, all vertices that are not in the tree reside in a min-priority queue Q based on a key attribute. There is a field key for every vertex v It is the minimum weight of any edge connecting v to a vertex in the minimum spanning tree (THE LIGHT EDGE!!!). By convention, v.key = ∞ if there is no such edge. 42 / 69
  • 69.
    Problem Important In order toimplement Prim’s algorithm efficiently, we need a fast way to select a new edge to add to the tree formed by the edges in A. For this, we use a min-priority queue Q During execution of the algorithm, all vertices that are not in the tree reside in a min-priority queue Q based on a key attribute. There is a field key for every vertex v It is the minimum weight of any edge connecting v to a vertex in the minimum spanning tree (THE LIGHT EDGE!!!). By convention, v.key = ∞ if there is no such edge. 42 / 69
  • 70.
    The algorithm Pseudo-code MST-PRIM(G, w,r) 1 for each u ∈ V [G] 2 u.key = ∞ 3 u.π = NIL 4 r.key = 0 5 Q = V [G] 6 while Q = ∅ 7 u =Extract-Min(Q) 8 for each v ∈ Adj [u] 9 if v ∈ Q and w (u, v) < v.key 10 π [v] = u 11 v.key = w (u, v) an implicit decrease key in Q 43 / 69
  • 71.
    The algorithm Pseudo-code MST-PRIM(G, w,r) 1 for each u ∈ V [G] 2 u.key = ∞ 3 u.π = NIL 4 r.key = 0 5 Q = V [G] 6 while Q = ∅ 7 u =Extract-Min(Q) 8 for each v ∈ Adj [u] 9 if v ∈ Q and w (u, v) < v.key 10 π [v] = u 11 v.key = w (u, v) an implicit decrease key in Q 43 / 69
  • 72.
    The algorithm Pseudo-code MST-PRIM(G, w,r) 1 for each u ∈ V [G] 2 u.key = ∞ 3 u.π = NIL 4 r.key = 0 5 Q = V [G] 6 while Q = ∅ 7 u =Extract-Min(Q) 8 for each v ∈ Adj [u] 9 if v ∈ Q and w (u, v) < v.key 10 π [v] = u 11 v.key = w (u, v) an implicit decrease key in Q 43 / 69
  • 73.
    The algorithm Pseudo-code MST-PRIM(G, w,r) 1 for each u ∈ V [G] 2 u.key = ∞ 3 u.π = NIL 4 r.key = 0 5 Q = V [G] 6 while Q = ∅ 7 u =Extract-Min(Q) 8 for each v ∈ Adj [u] 9 if v ∈ Q and w (u, v) < v.key 10 π [v] = u 11 v.key = w (u, v) an implicit decrease key in Q 43 / 69
  • 74.
    Explanation Observations 1 A ={(v, π[v]) : v ∈ V − {r} − Q}. 2 The vertices already placed into the minimum spanning tree are those in V − Q. 3 For all vertices v ∈ Q, if π[v] = NIL, then key[v] < ∞ and key[v] is the weight of a light edge (v, π[v]) connecting v to some vertex already placed into the minimum spanning tree. 44 / 69
  • 75.
    Explanation Observations 1 A ={(v, π[v]) : v ∈ V − {r} − Q}. 2 The vertices already placed into the minimum spanning tree are those in V − Q. 3 For all vertices v ∈ Q, if π[v] = NIL, then key[v] < ∞ and key[v] is the weight of a light edge (v, π[v]) connecting v to some vertex already placed into the minimum spanning tree. 44 / 69
  • 76.
    Explanation Observations 1 A ={(v, π[v]) : v ∈ V − {r} − Q}. 2 The vertices already placed into the minimum spanning tree are those in V − Q. 3 For all vertices v ∈ Q, if π[v] = NIL, then key[v] < ∞ and key[v] is the weight of a light edge (v, π[v]) connecting v to some vertex already placed into the minimum spanning tree. 44 / 69
  • 77.
    Let us runthe Algorithm We have as an input the following graph b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 45 / 69
  • 78.
    Let us runthe Algorithm Select r =b b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 46 / 69
  • 79.
    Let us runthe Algorithm Extract b from the priority queue Q b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 47 / 69
  • 80.
    Let us runthe Algorithm Update the predecessor of a and its key to 12 from ∞ b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 Note: The RED color represent the field π [v] 48 / 69
  • 81.
    Let us runthe Algorithm Update the predecessor of c and its key to 9 from ∞ b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 49 / 69
  • 82.
    Let us runthe Algorithm Update the predecessor of e and its key to 5 from ∞ b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 50 / 69
  • 83.
    Let us runthe Algorithm Update the predecessor of f and its key to 8 from ∞ b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 51 / 69
  • 84.
    Let us runthe Algorithm Extract e, then update adjacent vertices b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 52 / 69
  • 85.
    Let us runthe Algorithm Extract i from the priority queue Q b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 53 / 69
  • 86.
    Let us runthe Algorithm Update adjacent vertices b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 54 / 69
  • 87.
    Let us runthe Algorithm Extract f and update adjacent vertices b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 55 / 69
  • 88.
    Let us runthe Algorithm Extract g and update b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 56 / 69
  • 89.
    Let us runthe Algorithm Extract c and no update b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 57 / 69
  • 90.
    Let us runthe Algorithm Extract d and update key at 1 b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 58 / 69
  • 91.
    Let us runthe Algorithm Extract a and no update b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 59 / 69
  • 92.
    Let us runthe Algorithm Extract h b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 60 / 69
  • 93.
    Complexity I Complexity analysis Theperformance of Prim’s algorithm depends on how we implement the min-priority queue Q. If Q is a binary min-heap, BUILD-MIN-HEAP procedure to perform the initialization in lines 1 to 5 will run in O(|V |) time. The body of the while loop is executed |V | times, and EXTRACT-MIN operation takes O(log V ) time, the total time for all calls to EXTRACT-MIN is O(V log V ). The for loop in lines 8 to 11 is executed O(E) times altogether, since the sum of the lengths of all adjacency lists is 2|E|. 61 / 69
Complexity II
Complexity analysis (continuation)
Within the for loop, the test for membership in Q in line 9 can be implemented in constant time. The assignment in line 11 involves an implicit DECREASE-KEY operation, which a binary min-heap supports in O(log V ) time. Thus, the total time for Prim’s algorithm is:
O(V log V + E log V ) = O(E log V )
62 / 69
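The analysis above can be sketched in code. This is a minimal Prim’s algorithm using Python’s heapq as the binary min-heap; since heapq has no DECREASE-KEY operation, the sketch pushes duplicate entries and skips vertices already extracted (lazy deletion), which preserves the O(E log V ) bound. The adjacency-dict representation and the function name are illustrative assumptions, not from the slides.

```python
import heapq

def prim_mst(graph, root):
    """Prim's algorithm with a binary min-heap (heapq).

    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns (total weight, list of MST edges as (parent, child, weight)).
    heapq lacks DECREASE-KEY, so we push duplicate entries and skip
    stale ones on extraction; the asymptotic cost stays O(E log V).
    """
    in_tree = set()
    total = 0
    mst_edges = []
    heap = [(0, root, None)]          # entries are (key, vertex, parent)
    while heap:
        key, u, parent = heapq.heappop(heap)   # EXTRACT-MIN
        if u in in_tree:                       # stale entry: skip it
            continue
        in_tree.add(u)
        total += key
        if parent is not None:
            mst_edges.append((parent, u, key))
        for v, w in graph[u]:                  # scan adjacency list of u
            if v not in in_tree:               # implicit DECREASE-KEY
                heapq.heappush(heap, (w, v, u))
    return total, mst_edges
```

For example, on a 4-vertex graph with edges a–b (1), b–c (2), a–c (4), c–d (3), the sketch returns total weight 6 with the three edges a–b, b–c, c–d.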
If you use Fibonacci Heaps
Complexity analysis
EXTRACT-MIN runs in O(log V ) amortized time.
DECREASE-KEY (to implement line 11) runs in O(1) amortized time.
Thus, if we use a Fibonacci heap to implement the min-priority queue Q, we get a running time of O(E + V log V ).
63 / 69
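A back-of-the-envelope comparison shows where the Fibonacci-heap bound pays off: on dense graphs with E = Θ(V²), O(E + V log V ) beats O(E log V ) by a log factor, while on sparse graphs (E = O(V )) both bounds are O(V log V ). The cost functions below are purely illustrative operation counts, not runtimes.

```python
import math

def binary_heap_cost(V, E):
    # dominant O(E log V) term for Prim with a binary min-heap
    return E * math.log2(V)

def fib_heap_cost(V, E):
    # O(E + V log V) for Prim with a Fibonacci heap
    return E + V * math.log2(V)

V = 1000
E_dense = V * (V - 1) // 2   # complete graph: E = Theta(V^2)
E_sparse = 2 * V             # sparse graph: E = O(V)

# Dense case: the Fibonacci heap saves a full log V factor.
# Sparse case: both bounds collapse to O(V log V).
dense_ratio = binary_heap_cost(V, E_dense) / fib_heap_cost(V, E_dense)
```

With V = 1000, the dense-graph ratio is roughly log₂ V ≈ 10, matching the asymptotic prediction.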
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal’s algorithm
2 Kruskal’s Algorithm
Directly from the previous Corollary
3 Prim’s Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
64 / 69
Faster Algorithms
Linear Time Algorithms
Karger, Klein & Tarjan (1995) proposed a linear-time randomized algorithm.
The fastest known deterministic algorithm, running in O (Eα (E, V )), by Bernard Chazelle (2000), is based on the soft heap, an approximate priority queue.
Chazelle has also written essays about music and politics.
Linear-time algorithms in special cases
If the graph is dense, i.e. log log log V ≤ E/V , then a deterministic algorithm by Fredman and Tarjan finds the MST in time O (E).
65 / 69
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal’s algorithm
2 Kruskal’s Algorithm
Directly from the previous Corollary
3 Prim’s Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
66 / 69
Applications
Minimum spanning trees have direct applications in the design of networks
Telecommunications networks
Transportation networks
Water supply networks
Electrical grids
As a subroutine in Machine Learning/Big Data
Cluster Analysis
Network communications use the Spanning Tree Protocol (STP)
Image registration and segmentation
Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse response filters
Etc.
67 / 69
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal’s algorithm
2 Kruskal’s Algorithm
Directly from the previous Corollary
3 Prim’s Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
68 / 69
Exercises
From Cormen’s book, solve
23.1-3
23.1-5
23.1-7
23.1-9
23.2-2
23.2-3
23.2-5
23.2-7
69 / 69