Like many topics in the history of the analysis of algorithms, the all-pairs shortest path problem has a long history (from the point of view of Computer Science). We can see initial results in the book “Studies in the Economics of Transportation” by Beckmann, McGuire, and Winsten (1956), where the notation that we use for the matrix-multiplication-like formulation was first introduced.
In these slides, we go through several solutions to the all-pairs shortest path problem, from a slow version to Johnson's algorithm.
2. Outline
1 Introduction
  Definition of the Problem
  Assumptions
  Observations
2 Structure of a Shortest Path
  Introduction
3 The Solution
  The Recursive Solution
  The Iterative Version
  Extended-Shortest-Paths
  Looking at the Algorithm as Matrix Multiplication
  Example
  We want something faster
4 A different dynamic-programming algorithm
  The Shortest Path Structure
  The Bottom-Up Solution
  Floyd-Warshall Algorithm
  Example
5 Other Solutions
  Johnson's Algorithm
6 Exercises
  You can try them
4. Problem
Definition
Given u and v, find the shortest path between them.
Now, what if you want ALL PAIRS!!!
Use every element of V as a source.
Clearly, you can fall back on the old single-source algorithms!!!
8. What can we use?
Use Dijkstra's |V| times!!!
  If all the weights are non-negative.
  Using Fibonacci heaps, this has O(V² log V + VE) complexity.
  Which equals O(V³) in the case E = O(V²), but with a large hidden constant c.
Use Bellman-Ford |V| times!!!
  If negative weights are allowed.
  Then, we have O(V² E).
  Which equals O(V⁴) in the case E = O(V²).
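The "run a single-source algorithm from every vertex" idea can be sketched as follows. This is a minimal illustration, not the slides' own code: it uses Python's binary heap (`heapq`), so each Dijkstra run is O(E log V) rather than the Fibonacci-heap bound quoted above, and the adjacency-list format (`adj[u]` = list of `(v, w)` pairs) is an assumption for the example.

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths from s; adj[u] = list of (v, w) with w >= 0."""
    dist = {u: float("inf") for u in adj}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale heap entry, already improved
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def all_pairs_by_dijkstra(adj):
    """Naive all-pairs: one Dijkstra run per source vertex."""
    return {s: dijkstra(adj, s) for s in adj}
```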
14. This is not Good For Large Problems
Problems
Computer Network Systems.
Aircraft Networks (e.g., flying time, fares).
Railroad network tables of distances between all pairs of cities for a road atlas.
Etc.
18. For more on this...
Something Notable
As with many things in the history of the analysis of algorithms, the all-pairs shortest path problem has a long history.
We have more from
“Studies in the Economics of Transportation” by Beckmann, McGuire, and Winsten (1956), where the notation that we use for the matrix-multiplication-like formulation was first used.
In addition
G. Tarry, Le problème des labyrinthes, Nouvelles Annales de Mathématiques (3) 14 (1895) 187–190 [English translation in: N.L. Biggs, E.K. Lloyd, R.J. Wilson, Graph Theory 1736–1936, Clarendon Press, Oxford, 1976, pp. 18–20] (for the theory behind depth-first search techniques).
Chr. Wiener, Ueber eine Aufgabe aus der Geometria situs, Mathematische Annalen 6 (1873) 29–30.
23. Assumptions Matrix Representation
Matrix Representation of a Graph
For this, each weight in the matrix has the following values:

    w_ij = 0        if i = j
           w(i, j)  if i ≠ j and (i, j) ∈ E
           ∞        if i ≠ j and (i, j) ∉ E

Then, we have

    W = [ w_11  w_12  ...  w_1,n-1  w_1n ]
        [  ⋮     ⋮          ⋮        ⋮   ]
        [ w_n1  w_n2  ...  w_n,n-1  w_nn ]

Important
There are no negative-weight cycles.
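Building W from an edge list can be sketched in a few lines. A minimal version, assuming vertices are labeled 0..n−1 and edges come as `(i, j, w)` triples (both names are assumptions for this illustration, not from the slides):

```python
INF = float("inf")

def weight_matrix(n, edges):
    """Build W: w[i][j] = 0 if i == j, w(i, j) if (i, j) is an edge, else infinity."""
    W = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in edges:
        W[i][j] = w  # directed edge i -> j
    return W
```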
26. Observations
Ah!!!
The next algorithm is a dynamic programming algorithm for the all-pairs shortest paths problem on a directed graph G = (V, E).
At the end, the algorithm will generate the following matrix:

    D = [ d_11  d_12  ...  d_1,n-1  d_1n ]
        [  ⋮     ⋮          ⋮        ⋮   ]
        [ d_n1  d_n2  ...  d_n,n-1  d_nn ]

Each entry d_ij = δ(i, j).
30. Structure of a Shortest Path
Consider Lemma 24.1
Given a weighted, directed graph G = (V, E), let p = ⟨v_1, v_2, ..., v_k⟩ be a shortest path (SP) from v_1 to v_k. Then,
p_ij = ⟨v_i, v_(i+1), ..., v_j⟩ is a shortest path from v_i to v_j, for 1 ≤ i ≤ j ≤ k.
We can do the following
Consider a shortest path p from vertex i to j; p contains at most m edges.
Then, we can use the Corollary to make a decomposition:

    i ⇝(p) k → j  ⟹  δ(i, j) = δ(i, k) + w_kj
32. Structure of a Shortest Path
Idea of Using Matrix Multiplication
We define the following concept based on the decomposition Corollary!!!
l^(m)_ij = minimum weight of any path from i to j that contains at most m edges, i.e.

    l^(m)_ij could be min_k { l^(m-1)_ik + w_kj }
36. Recursive Solution
Thus, we have that for paths with ZERO edges:

    l^(0)_ij = 0  if i = j
               ∞  if i ≠ j

Recursion Our Great Friend
Consider the previous definition and decomposition. Thus,

    l^(m)_ij = min( l^(m-1)_ij , min_{1≤k≤n} { l^(m-1)_ik + w_kj } )
             = min_{1≤k≤n} { l^(m-1)_ik + w_kj }
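One step of this recurrence can be sketched directly. A minimal illustration (assuming W and L^(m−1) are n×n lists of lists with float("inf") for missing edges); the second form of the recurrence suffices because w_jj = 0 makes the k = j term reproduce l^(m−1)_ij:

```python
INF = float("inf")

def extend(L_prev, W):
    """One recurrence step: l^(m)_ij = min over k of l^(m-1)_ik + w_kj."""
    n = len(W)
    L = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            L[i][j] = min(L_prev[i][k] + W[k][j] for k in range(n))
    return L
```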
40. Recursive Solution
Why? A simple notation problem

    l^(m)_ij = l^(m-1)_ij + 0 = l^(m-1)_ij + w_jj
42. Transforming it to an iterative one
What is δ(i, j)?
If you do not have negative-weight cycles, and δ(i, j) < ∞,
then the shortest path from vertex i to j has at most n − 1 edges:

    δ(i, j) = l^(n-1)_ij = l^(n)_ij = l^(n+1)_ij = l^(n+2)_ij = ...

Back to Matrix Multiplication
We have the matrix L^(m) = ( l^(m)_ij ).
Then, we can compute first L^(1), then L^(2), all the way to L^(n-1), which contains the actual shortest paths.
What is L^(1)?
First, we have that L^(1) = W, since l^(1)_ij = w_ij.
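The whole iterative scheme, L^(1) = W extended up to L^(n−1), can be sketched as below. This is a minimal illustration of the slow O(V⁴) method, with the recurrence inlined; it is not the Extended-Shortest-Paths pseudocode itself:

```python
INF = float("inf")

def slow_all_pairs_shortest_paths(W):
    """Compute L^(n-1) by applying the recurrence n-2 times, starting from L^(1) = W."""
    n = len(W)
    L = W
    for _ in range(n - 2):  # L^(1) -> L^(2) -> ... -> L^(n-1)
        L = [[min(L[i][k] + W[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return L
```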
48. Algorithm
Code
Extended-Shortest-Path(L, W)
1 n = L.rows
2 let L′ = ( l′_ij ) be a new n × n matrix
3 for i = 1 to n
4     for j = 1 to n
5         l′_ij = ∞
6         for k = 1 to n
7             l′_ij = min( l′_ij, l_ik + w_kj )
8 return L′
22 / 79
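As a concrete sketch, the pseudocode above can be rendered in Python as follows (a hypothetical transcription: 0-indexed vertices, with ∞ represented by float('inf')):

```python
INF = float('inf')

def extend_shortest_paths(L, W):
    """One relaxation pass: extend every shortest path by at most one edge.

    L and W are n x n lists of lists, with W[i][i] = 0 and
    W[i][j] = INF when the edge (i, j) does not exist.
    """
    n = len(L)
    Lp = [[INF] * n for _ in range(n)]  # the fresh matrix L'
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # l'_ij = min(l'_ij, l_ik + w_kj)
                if L[i][k] + W[k][j] < Lp[i][j]:
                    Lp[i][j] = L[i][k] + W[k][j]
    return Lp
```

One call costs Θ(n³). Note the result is written into a fresh matrix Lp, mirroring the L′ of the pseudocode, so the input L is never read after being partially overwritten.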
55. Look Alike Matrix Multiplication Operations
A Mapping That Can Be Made
L^(m−1) ⟹ A
W ⟹ B
L^(m) ⟹ C
min ⟹ +
+ ⟹ ·
∞ ⟹ 0
25 / 79
56. Look Alike Matrix Multiplication Operations
Using the previous notation, we can rewrite our previous algorithm as
Square-Matrix-Multiply(A, B)
1 n = A.rows
2 let C be a new n × n matrix
3 for i = 1 to n
4 for j = 1 to n
5 cij = 0
6 for k = 1 to n
7 cij = cij + aik · bkj
8 return C
26 / 79
58. Using the Analogy
Returning to the all-pairs shortest-paths problem
It is possible to compute the shortest paths by extending such a path edge
by edge.
Therefore
If we denote by A · B the “product” computed by Extended-Shortest-Path(A, B), we obtain the following.
28 / 79
60. Using the Analogy
We have that
L^(1) = L^(0) · W = W
L^(2) = L^(1) · W = W^2
...
L^(n−1) = L^(n−2) · W = W^(n−1)
29 / 79
63. The Final Algorithm
We have that
Slow-All-Pairs-Shortest-Paths(W)
1 n ← W.rows
2 L^(1) ← W
3 for m = 2 to n − 1
4     L^(m) ← Extend-Shortest-Paths(L^(m−1), W)
5 return L^(n−1)
30 / 79
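The whole slow algorithm can be sketched in Python (a hypothetical rendering with the extension step inlined; n − 2 passes of a Θ(n³) step give Θ(n⁴) total):

```python
INF = float('inf')

def slow_apsp(W):
    """Slow-All-Pairs-Shortest-Paths: repeatedly extend paths by one edge.

    W is n x n with W[i][i] = 0 and W[i][j] = INF for missing edges.
    """
    n = len(W)
    L = W  # L^(1) = W
    for _ in range(2, n):  # compute L^(2), ..., L^(n-1)
        # one (min, +) "multiplication" of L by W
        L = [[min(L[i][k] + W[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return L  # L^(n-1): the actual shortest-path weights
```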
73. Recall the following
We are interested only
In the matrix L^(n−1)
In addition
Remember, we do not have negative-weight cycles!!!
Therefore, we have the equation
δ(i, j) = l_ij^(n−1) = l_ij^(n) = l_ij^(n+1) = ... (2)
40 / 79
78. Something Faster
We want something faster!!! Observation!!!
L^(1) = W
L^(2) = W · W = W^2
L^(4) = W^2 · W^2 = W^4
L^(8) = W^4 · W^4 = W^8
...
L^(2^⌈lg(n−1)⌉) = W^(2^(⌈lg(n−1)⌉−1)) · W^(2^(⌈lg(n−1)⌉−1)) = W^(2^⌈lg(n−1)⌉)
Because
2^⌈lg(n−1)⌉ ≥ n − 1 ⟹ L^(2^⌈lg(n−1)⌉) = L^(n−1)
42 / 79
84. The Faster Algorithm
Using repeated squaring, we obtain
Faster-All-Pairs-Shortest-Paths(W)
1 n ← W.rows
2 L^(1) ← W
3 m ← 1
4 while m < n − 1
5     L^(2m) ← Extend-Shortest-Paths(L^(m), L^(m))
6     m ← 2m
7 return L^(m)
Complexity
If n = |V|, we have O(V^3 lg V): only ⌈lg(n − 1)⌉ calls to Extend-Shortest-Paths, each costing Θ(n^3).
43 / 79
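The repeated-squaring version can be sketched in Python as well (hypothetical names; squaring L against itself rather than multiplying by W is what cuts the pass count to ⌈lg(n − 1)⌉):

```python
INF = float('inf')

def _square(L):
    """L^(2m) = L^(m) "times" L^(m) under the (min, +) product."""
    n = len(L)
    return [[min(L[i][k] + L[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faster_apsp(W):
    """Repeated squaring: ceil(lg(n-1)) products, Theta(n^3 lg n) total."""
    n = len(W)
    L, m = W, 1
    while m < n - 1:
        L = _square(L)  # now L = L^(2m)
        m *= 2
    return L  # L^(m) with m >= n - 1, which equals L^(n-1)
```

Squaring is safe here because the diagonal of every L^(m) is 0, so paths shorter than 2m edges are preserved at each step.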
87. The Shortest Path Structure
Intermediate Vertex
For a path p = ⟨v1, v2, ..., vl⟩, an intermediate vertex is any vertex of p
other than v1 or vl.
Define
d_ij^(k) = the weight of a shortest path between i and j with all intermediate
vertices in the set {1, 2, ..., k}.
45 / 79
89. The Recursive Idea
Simply look at the following cases
Case I: k is not an intermediate vertex. Then any shortest path from i to
j with all intermediate vertices in {1, ..., k} has all its intermediate
vertices in {1, ..., k − 1}, so it is also a shortest path over {1, ..., k − 1}.
⟹ d_ij^(k) = d_ij^(k−1)
Case II: k is an intermediate vertex. Then the path decomposes as
i --p1--> k --p2--> j, and we can make the following statements using Lemma 24.1:
p1 is a shortest path from i to k with all intermediate vertices in the
set {1, ..., k − 1}.
p2 is a shortest path from k to j with all intermediate vertices in the
set {1, ..., k − 1}.
⟹ d_ij^(k) = d_ik^(k−1) + d_kj^(k−1)
46 / 79
96. The Recursive Solution
The Recursion
d_ij^(k) = { w_ij                                        if k = 0
           { min( d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) )  if k ≥ 1
Final answer when k = n
We recursively calculate D^(n) = ( d_ij^(n) ), where d_ij^(n) = δ(i, j) for all i, j ∈ V.
48 / 79
98. Thus, we have the following
Recursive Version
Recursive-Floyd-Warshall(W)
1 let D^(n) be a new n × n matrix
2 for i = 1 to n
3     for j = 1 to n
4         D^(n)[i, j] = Recursive-Part(i, j, n, W)
5 return D^(n)
49 / 79
101. Thus, we have the following
The Recursive-Part
Recursive-Part(i, j, k, W)
1 if k = 0
2     return W[i, j]
3 if k ≥ 1
4     t1 = Recursive-Part(i, j, k − 1, W)
5     t2 = Recursive-Part(i, k, k − 1, W) + Recursive-Part(k, j, k − 1, W)
6     if t1 ≤ t2
7         return t1
8     else
9         return t2
50 / 79
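A direct (and deliberately naive) Python transcription of Recursive-Part; in this hypothetical 0-indexed version, d_ij^(k) allows intermediate vertices {0, ..., k − 1} and the top-level call uses k = n:

```python
INF = float('inf')

def recursive_part(i, j, k, W):
    """d_ij^(k): shortest i -> j weight with intermediate vertices in {0, ..., k-1}.

    Exponential-time without memoization -- exactly why the bottom-up
    (iterative) version is preferred.
    """
    if k == 0:
        return W[i][j]
    t1 = recursive_part(i, j, k - 1, W)           # vertex k-1 not used
    t2 = (recursive_part(i, k - 1, k - 1, W)      # go through vertex k-1
          + recursive_part(k - 1, j, k - 1, W))
    return t1 if t1 <= t2 else t2
```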
106. Now
We want to use storage to eliminate the recursion
For this, we are going to use two matrices:
1 D^(k−1), the previous matrix.
2 D^(k), the new matrix based on the previous one.
Something Notable
With D^(0) = W, i.e. the weights of the edges that exist.
52 / 79
110. In addition, we want to rebuild the answer
For this, we have the predecessor matrix Π
Actually, we want to compute a sequence of matrices
Π^(0), Π^(1), ..., Π^(n)
Where
Π = Π^(n)
53 / 79
112. What are the elements in Π^(k)?
Each element in the matrix is as follows
π_ij^(k) = the predecessor of vertex j on a shortest path from vertex i with all
intermediate vertices in the set {1, 2, ..., k}
Thus, we have that
π_ij^(0) = { NULL  if i = j or w_ij = ∞
           { i     if i ≠ j and w_ij < ∞
54 / 79
114. Then
We have the following
For k ≥ 1, take a path i ~> k ~> j, where k ≠ j.
Then, if d_ij^(k−1) > d_ik^(k−1) + d_kj^(k−1):
For the predecessor of j, we choose the predecessor of j on a shortest path
from k with all intermediate vertices in the set {1, 2, ..., k − 1}.
Otherwise, if d_ij^(k−1) ≤ d_ik^(k−1) + d_kj^(k−1):
We choose the same predecessor of j that we chose on a shortest path
from i with all intermediate vertices in the set {1, 2, ..., k − 1}.
55 / 79
117. Formally
We have then
π_ij^(k) = { π_ij^(k−1)  if d_ij^(k−1) ≤ d_ik^(k−1) + d_kj^(k−1)
           { π_kj^(k−1)  if d_ij^(k−1) > d_ik^(k−1) + d_kj^(k−1)
56 / 79
119. Final Iterative Version of Floyd-Warshall (Correction by Diego - Class Tec 2015)
Floyd-Warshall(W)
1. n = W.rows
2. D^(0) = W, with Π^(0) initialized as defined before
3. for k = 1 to n
4.     let D^(k) = ( d_ij^(k) ) be a new n × n matrix
5.     let Π^(k) be a new n × n predecessor matrix
       Given each k, we update using D^(k−1)
6.     for i = 1 to n
7.         for j = 1 to n
8.             if d_ij^(k−1) ≤ d_ik^(k−1) + d_kj^(k−1)
9.                 d_ij^(k) = d_ij^(k−1)
10.                π_ij^(k) = π_ij^(k−1)
11.            else
12.                d_ij^(k) = d_ik^(k−1) + d_kj^(k−1)
13.                π_ij^(k) = π_kj^(k−1)
14. return D^(n) and Π^(n)
58 / 79
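The whole procedure can be sketched in Python (a hypothetical rendering: 0-indexed vertices, None standing in for NULL, and a single distance matrix updated in place, which is a standard and safe simplification for Floyd-Warshall):

```python
INF = float('inf')

def floyd_warshall(W):
    """Return (D, P): shortest-path weights and the predecessor matrix.

    W is n x n with W[i][i] = 0 and W[i][j] = INF for missing edges.
    Assumes no negative-weight cycles.
    """
    n = len(W)
    D = [row[:] for row in W]  # D^(0) = W
    # pi^(0)_ij: i if the edge (i, j) exists and i != j, NULL otherwise
    P = [[i if i != j and W[i][j] < INF else None for j in range(n)]
         for i in range(n)]
    for k in range(n):  # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = P[k][j]  # inherit j's predecessor from k's row
    return D, P
```

From P an actual shortest path is rebuilt by walking predecessors backwards from j until i is reached.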
123. Explanation
Lines 1 and 2
Initialization of the variables n and D^(0)
Line 3
In the loop, we solve the smaller problems first, from k = 1 to k = n
(recall that any shortest path has at most n − 1 edges)
Lines 4 and 5
Instantiation of the new matrices D^(k) and Π^(k), which describe the shortest
paths whose intermediate vertices lie in {1, ..., k}.
59 / 79
127. Explanation
Lines 6 and 7
This is done to go through all the possible combinations of i’s and j’s
Line 8
Deciding if d_ij^(k−1) ≤ d_ik^(k−1) + d_kj^(k−1)
60 / 79
136. Remarks
Something Notable
Because the comparison in line 8 takes O(1) time,
the complexity of Floyd-Warshall is
Time Complexity Θ(V^3)
We do not need elaborate data structures such as a Binary Heap or
Fibonacci Heap!!!
The hidden constant in the running time is quite small:
This makes the Floyd-Warshall Algorithm practical even for
moderate-sized graphs!!!
68 / 79
141. Johnson’s Algorithm
Observations
Used to find all-pairs shortest paths in sparse graphs by using Dijkstra’s algorithm.
It uses a re-weighting function to turn negative edge weights into
nonnegative ones so that Dijkstra’s algorithm can handle them.
It can also detect negative-weight cycles.
Therefore
It uses something to deal with negative-weight cycles.
Could it be a Bellman-Ford detector, as before?
Maybe we need to transform the weights in order to use them.
What we require
A re-weighting function ŵ(u, v) such that:
A shortest path by w is a shortest path by ŵ.
All edges are nonnegative under ŵ.
70 / 79
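Putting the observations together, a compact Python sketch of Johnson's algorithm (hypothetical names and edge-list representation; Bellman-Ford from an artificial source q computes h, then one Dijkstra per vertex runs on the re-weighted graph):

```python
import heapq

INF = float('inf')

def bellman_ford(n, edges, s):
    """Distances from s over (u, v, w) edge triples; None if a negative cycle exists."""
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return None  # negative-weight cycle detected
    return d

def dijkstra(n, adj, s):
    """Standard binary-heap Dijkstra; requires nonnegative weights."""
    d = [INF] * n
    d[s] = 0
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue  # stale entry
        for v, w in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

def johnson(n, edges):
    """All-pairs shortest paths; returns None on a negative-weight cycle."""
    # Artificial source q = n with zero-weight edges to every vertex
    aug = edges + [(n, v, 0) for v in range(n)]
    h = bellman_ford(n + 1, aug, n)
    if h is None:
        return None
    # Re-weight: w_hat(u, v) = w(u, v) + h[u] - h[v] >= 0
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    D = []
    for u in range(n):
        dh = dijkstra(n, adj, u)
        # Undo the re-weighting: delta(u, v) = dh[v] - h[u] + h[v]
        D.append([dh[v] - h[u] + h[v] if dh[v] < INF else INF
                  for v in range(n)])
    return D
```

With a Fibonacci heap this runs in O(V² lg V + V E); with the binary heap used here it is O(V E lg V), still attractive for sparse graphs.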
150. Proving Properties of Re-Weighting
Lemma 25.1
Given a weighted, directed graph G = (V, E) with weight function
w : E → R, let h : V → R be any function mapping vertices to real
numbers. For each edge (u, v) ∈ E, define
ŵ(u, v) = w(u, v) + h(u) − h(v)
Let p = ⟨v0, v1, ..., vk⟩ be any path from vertex v0 to vertex vk. Then:
1 p is a shortest path from v0 to vk with weight function w if and only if
it is a shortest path with weight function ŵ. That is, w(p) = δ(v0, vk)
if and only if ŵ(p) = δ̂(v0, vk).
2 Furthermore, G has a negative-weight cycle using weight function w
if and only if G has a negative-weight cycle using weight function ŵ.
71 / 79
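The first claim of Lemma 25.1 follows from a telescoping sum: along any path the h terms of interior vertices cancel, so ŵ(p) = w(p) + h(v0) − h(vk), and all paths between the same endpoints shift by the same constant. A minimal numeric sketch (the graph, weights, and h below are arbitrary illustrative values):

```python
# Check that re-weighting shifts every path between the same endpoints
# by the same constant h(v0) - h(vk), so shortest paths are preserved.

w = {(0, 1): 3, (1, 2): -2, (0, 2): 4}   # edge weights
h = {0: 0, 1: -1, 2: -3}                 # any vertex "potential" function

def path_weight(path, weights):
    """Sum of edge weights along a path given as a vertex list."""
    return sum(weights[(u, v)] for u, v in zip(path, path[1:]))

# Re-weighted edges: w_hat(u, v) = w(u, v) + h(u) - h(v)
w_hat = {(u, v): w[(u, v)] + h[u] - h[v] for (u, v) in w}

# Both 0->2 paths shift by exactly h[0] - h[2], as the lemma states.
for p in ([0, 1, 2], [0, 2]):
    assert path_weight(p, w_hat) == path_weight(p, w) + h[p[0]] - h[p[-1]]
```

Because the shift depends only on the endpoints, minimizing ŵ(p) and minimizing w(p) pick out the same paths.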
155. Using This Lemma
Select h
Such that ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0.
Then, we build a new graph G′
It has the following elements:
V′ = V ∪ {s}, where s is a new vertex.
E′ = E ∪ {(s, v) | v ∈ V}.
w(s, v) = 0 for all v ∈ V, in addition to all the other weights.
Select h
Simply select h(v) = δ(s, v), the shortest-path weight from s in G′.
72 / 79
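The construction above can be sketched directly: add a fresh source s with zero-weight edges to every vertex, run Bellman-Ford from s, and take h(v) = δ(s, v). A minimal Python sketch, assuming an edge-list representation of (u, v, weight) triples; the names `bellman_ford` and `compute_h` are illustrative:

```python
def bellman_ford(vertices, edges, source):
    """Return shortest-path distances from source, or None if a
    negative-weight cycle is reachable from it."""
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 relaxation rounds
        for u, v, wt in edges:
            if dist[u] + wt < dist[v]:
                dist[v] = dist[u] + wt
    for u, v, wt in edges:                  # one more pass detects cycles
        if dist[u] + wt < dist[v]:
            return None
    return dist

def compute_h(vertices, edges):
    """h(v) = delta(s, v) in the augmented graph G', or None on a
    negative-weight cycle."""
    s = object()                            # fresh vertex not in V
    aug_edges = edges + [(s, v, 0) for v in vertices]
    dist = bellman_ford(list(vertices) + [s], aug_edges, s)
    return None if dist is None else {v: dist[v] for v in vertices}
```

With this h, every re-weighted edge ŵ(u, v) = w(u, v) + h(u) − h(v) is non-negative, which is exactly the claim proved next.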
161. Example
[Figure: graph G with the original weight function w, the new source s
connected to every vertex by zero-weight edges, and h(v) = δ(s, v)
annotated at each vertex.]
73 / 79
162. Proof of Claim
Claim
ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0 (5)
By the Triangle Inequality
δ(s, v) ≤ δ(s, u) + w(u, v)
Then, since we selected h(v) = δ(s, v), we have:
h(v) ≤ h(u) + w(u, v) (6)
Finally, rearranging:
w(u, v) + h(u) − h(v) ≥ 0 (7)
74 / 79
166. Example
[Figure: the graph G after re-weighting with ŵ(u, v) = w(u, v) + h(u) − h(v);
every edge now carries a non-negative weight.]
75 / 79
167. Final Algorithm
Pseudo-Code
1. Compute G′, where: G′.V = G.V ∪ {s}, G′.E = G.E ∪ {(s, v) | v ∈ G.V} and
   w(s, v) = 0 for all v ∈ G.V
2. If Bellman-Ford(G′, w, s) == FALSE
3.    print “Graph contains a negative-weight cycle”
4. else for each vertex v ∈ G′.V
5.    set h(v) = v.d computed by Bellman-Ford
6. for each edge (u, v) ∈ G′.E
7.    ŵ(u, v) = w(u, v) + h(u) − h(v)
8. Let D = (duv) be a new n × n matrix
9. for each vertex u ∈ G.V
10.   run Dijkstra(G, ŵ, u) to compute δ̂(u, v) for all v ∈ G.V
11.   for each vertex v ∈ G.V
12.      duv = δ̂(u, v) + h(v) − h(u)
13. return D
76 / 79
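The pseudo-code above can be sketched end to end in Python. This is an illustrative sketch, assuming an edge-list input of (u, v, weight) triples and a binary-heap Dijkstra via `heapq` rather than Fibonacci heaps (so the running time is O(VE lg V) instead of O(V² lg V + VE)); the name `johnson` is ours:

```python
import heapq

def johnson(vertices, edges):
    """All-pairs shortest paths; returns {u: {v: delta(u, v)}} or
    None if the graph contains a negative-weight cycle."""
    # Lines 1-5: Bellman-Ford from a new source s with 0-weight edges
    # to every vertex; h(v) = delta(s, v).
    s = object()
    aug = edges + [(s, v, 0) for v in vertices]
    h = {v: float('inf') for v in vertices}
    h[s] = 0
    for _ in range(len(vertices)):          # |V'| - 1 = |V| rounds
        for u, v, wt in aug:
            if h[u] + wt < h[v]:
                h[v] = h[u] + wt
    if any(h[u] + wt < h[v] for u, v, wt in aug):
        return None                         # negative-weight cycle

    # Lines 6-7: re-weight every edge; all new weights are >= 0.
    adj = {v: [] for v in vertices}
    for u, v, wt in edges:
        adj[u].append((v, wt + h[u] - h[v]))

    # Lines 8-13: Dijkstra from every vertex, then undo the re-weighting.
    D = {}
    for u in vertices:
        dist = {v: float('inf') for v in vertices}
        dist[u] = 0
        pq = [(0, u)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist[x]:
                continue                    # stale queue entry
            for y, wt in adj[x]:
                if d + wt < dist[y]:
                    dist[y] = d + wt
                    heapq.heappush(pq, (dist[y], y))
        D[u] = {v: dist[v] + h[v] - h[u] for v in vertices}
    return D
```

Note how line 12’s correction δ̂(u, v) + h(v) − h(u) exactly cancels the telescoped shift introduced by the re-weighting, recovering the original shortest-path weights.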
175. Complexity
The Final Complexity
Times:
Θ(V + E) to compute G′
O(VE) to run Bellman-Ford
Θ(E) to compute ŵ
O(V² lg V + VE) to run Dijkstra’s algorithm |V| times using Fibonacci heaps
O(V²) to compute the D matrix
Total: O(V² lg V + VE)
If E = O(V²) ⟹ O(V³)
77 / 79
183. Outline
1 Introduction
Definition of the Problem
Assumptions
Observations
2 Structure of a Shortest Path
Introduction
3 The Solution
The Recursive Solution
The Iterative Version
Extended-Shortest-Paths
Looking at the Algorithm as Matrix Multiplication
Example
We want something faster
4 A different dynamic-programming algorithm
The Shortest Path Structure
The Bottom-Up Solution
Floyd-Warshall Algorithm
Example
5 Other Solutions
Johnson’s Algorithm
6 Exercises
You can try them
78 / 79