This document contains lecture notes on the design and analysis of algorithms. It covers topics like algorithm definition, complexity analysis, divide and conquer algorithms, greedy algorithms, dynamic programming, and NP-complete problems. The notes provide examples of algorithms like selection sort, towers of Hanoi, and generating permutations. Pseudocode is used to describe algorithms precisely yet readably.
Greedy algorithms make locally optimal choices at each step in the hopes of finding a global optimum. They have two key properties: the greedy choice property, where choices are not reconsidered, and optimal substructure, where optimal solutions contain optimal solutions to subproblems. The activity selection problem of scheduling non-conflicting activities is used as an example. It can be solved greedily by always selecting the activity with the earliest finish time. While greedy algorithms are simple, they do not always find the true optimal solution.
The document provides an introduction to algorithms and their analysis. It discusses the definition of an algorithm, its characteristics, types, and examples. It also covers the analysis of algorithms, including best-, average-, and worst-case analysis. Common asymptotic notations (Big-O, Omega, and Theta) are introduced for analyzing the time complexity of algorithms. Examples of analyzing for loops and other control statements are provided.
The document discusses algorithms and their analysis. It begins with definitions of algorithms and describes their key properties including definiteness, finiteness, and effectiveness. It then discusses different ways to specify algorithms using natural language, flowcharts, or pseudo-code. Examples are provided to illustrate selection sort and the Towers of Hanoi problem solved recursively. The document concludes with analyzing the time and space complexity of algorithms.
The document discusses hardware and software requirements needed to perform sequential pattern mining using various algorithms like Apriori, GSP, and PrefixSpan. It then summarizes different sequential pattern mining algorithms like AprioriAll, AprioriSome, and their forward and backward phases. Finally, it describes applying the AprioriSome algorithm to find frequent fuzzy sequential patterns from clustered transaction data by generating candidate sets and determining if itemsets meet the minimum support threshold.
The document discusses various algorithm design techniques including greedy algorithms, divide and conquer, and dynamic programming. It provides examples of greedy algorithms like job scheduling and activity selection. It also explains the divide and conquer approach with examples like merge sort, quicksort, and closest pair of points problems. Finally, it discusses running time analysis and big-O notation for classifying algorithms based on time complexity.
This document provides an outline for a course on algorithms and analysis of algorithms. It discusses greedy algorithms as one topic that will be covered in the course. Greedy algorithms make locally optimal choices at each step in the hope of finding a globally optimal solution. The document provides examples of problems that can be solved using greedy algorithms, such as coin changing, fractional knapsack, and minimum spanning trees. Common greedy algorithms like Kruskal's algorithm and Prim's algorithm are described for finding minimum spanning trees in graphs.
Design and analysis of algorithms helps in devising algorithms for solving different types of problems in computer science. It also helps in designing and analyzing the logic of how a program will work before the actual code is developed.
A heuristic is a technique for solving a problem faster than classic methods, or for finding an approximate solution when classic methods cannot find an exact one. It is a kind of shortcut: we typically trade optimality, completeness, accuracy, or precision for speed. In search algorithms, a heuristic function evaluates the available information at each branching step and decides which branch to follow.
The document discusses approximation algorithms for NP-hard problems. It begins with an introduction that defines approximation algorithms as algorithms that find feasible but not necessarily optimal solutions to optimization problems in polynomial time.
It then discusses different types of approximation schemes: absolute approximation, where the approximate solution is within an additive constant of optimal; epsilon (ε)-approximation, where the relative error of the approximate solution is at most ε; and polynomial-time approximation schemes, which run in polynomial time for any fixed ε.
The document provides examples of problems that admit absolute approximation algorithms, such as planar graph coloring and maximum programs stored on disks. It also discusses Graham's theorem, which proves that the longest-processing-time scheduling rule generates schedules within 1/3 of the optimal finish time (i.e., at most 4/3 times optimal).
The document discusses stacks and queues as linear data structures. A stack follows LIFO (last in first out) where the last element inserted is the first removed. Common stack operations are push to insert and pop to remove elements. Stacks can be implemented using arrays or linked lists. A queue follows FIFO (first in first out) where the first element inserted is the first removed. Common queue operations are enqueue to insert and dequeue to remove elements. Queues can also be implemented using arrays or linked lists. Circular queues and priority queues are also introduced.
Backtracking is a problem-solving technique that incrementally builds candidates to the solutions, and abandons each partial candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution. This document provides an overview of backtracking algorithms, including their history, how they work, examples of problems they can solve like the n-queens problem, advantages like finding optimal solutions, disadvantages like slow runtimes, and time complexities that depend on the specific problem.
The document discusses various algorithms design approaches and patterns including divide and conquer, greedy algorithms, dynamic programming, backtracking, and branch and bound. It provides examples of each along with pseudocode. Specific algorithms discussed include binary search, merge sort, knapsack problem, shortest path problems, and the traveling salesman problem. The document is authored by Ashwin Shiv, a second year computer science student at NIT Delhi.
Iris Hui-Ru Jiang, Fall 2014

CHAPTER 4
GREEDY ALGORITHMS
Outline
- Content:
  - Interval scheduling: the greedy algorithm stays ahead
  - Scheduling to minimize lateness: an exchange argument
  - Shortest paths
  - The minimum spanning tree problem
  - Implementing Kruskal's algorithm: union-find
- Reading: Chapter 4
Greedy Algorithms
- An algorithm is greedy if it builds up a solution in small steps, choosing a decision at each step myopically to optimize some underlying criterion.
- It is easy to invent greedy algorithms for almost any problem: they are intuitive and fast, but usually not optimal.
- It is challenging to prove that a greedy algorithm solves a nontrivial problem optimally. Two standard proof techniques:
  1. The greedy algorithm stays ahead.
  2. An exchange argument.
Interval Scheduling: The Greedy Algorithm Stays Ahead
The Interval Scheduling Problem
- Given: a set of requests {1, 2, …, n}, where the i-th request corresponds to an interval with start time s(i) and finish time f(i), i.e., interval i = [s(i), f(i)).
- Requests are compatible if their intervals do not overlap.
- Goal: find a compatible subset of the requests of maximum size.

[Figure: eight requests on a timeline from 0 to 11; the maximum compatible subset is {2, 5, 8}.]
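Since the intervals are half-open, [s(i), f(i)), two requests are compatible exactly when one finishes no later than the other starts; in particular, a request may begin at the very instant another ends. A minimal sketch of the compatibility test (the function name and sample pairs are mine, not from the slides):

```python
def compatible(a, b):
    """True if requests a and b, given as (start, finish) pairs over
    half-open intervals [s, f), do not overlap."""
    (s_a, f_a), (s_b, f_b) = a, b
    # Compatible iff one interval ends no later than the other begins.
    return f_a <= s_b or f_b <= s_a

print(compatible((0, 3), (3, 5)))  # True: [0, 3) ends exactly where [3, 5) begins
print(compatible((0, 4), (3, 5)))  # False: both contain the point 3
```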
Greedy Rule
- Template:
  - Repeat: use a simple rule to select a first request i1; once i1 is selected, reject all requests not compatible with i1.
  - Until we run out of requests.
- Q: How do we decide on a greedy rule that gives a good algorithm? Candidate rules:
  1. Earliest start time: min s(i)
  2. Shortest interval: min (f(i) - s(i))
  3. Fewest conflicts: min over i of |{j : j is not compatible with i}|
  4. Earliest finish time: min f(i)
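Rules 1 and 2 can each be defeated by small counterexamples, which a generic greedy loop makes concrete (the instances below are my own constructions, not from the slides):

```python
def greedy(requests, key):
    """Greedy selection: repeatedly take the remaining request minimizing `key`,
    then discard every request that overlaps it (including itself).
    Requests are (start, finish) pairs over half-open intervals [s, f)."""
    remaining = list(requests)
    chosen = []
    while remaining:
        i = min(remaining, key=key)
        chosen.append(i)
        # Keep only requests compatible with i; i overlaps itself, so it drops out.
        remaining = [j for j in remaining if j[1] <= i[0] or j[0] >= i[1]]
    return chosen

# Rule 1 (earliest start time) grabs the long interval (0, 10) and gets stuck:
a = [(0, 10), (1, 2), (3, 4)]
print(len(greedy(a, key=lambda r: r[0])))  # 1
print(len(greedy(a, key=lambda r: r[1])))  # 2 with earliest finish time

# Rule 2 (shortest interval) picks (3, 5), which blocks both of its neighbours:
b = [(0, 4), (3, 5), (4, 8)]
print(len(greedy(b, key=lambda r: r[1] - r[0])))  # 1
print(len(greedy(b, key=lambda r: r[1])))          # 2 with earliest finish time
```

Rule 3 (fewest conflicts) also fails, though its counterexample needs a larger instance; rule 4 is the one the slides go on to prove optimal.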
The Greedy Algorithm
- The 4th greedy rule (earliest finish time) leads to the optimal solution.
  - We first accept the request that finishes first.
  - Natural idea: our resource becomes free as soon as possible.
- The greedy algorithm:

Interval-Scheduling(R)
// R: undetermined requests; A: accepted requests
1. A = ∅
2. while (R is not empty) do
3.   choose a request i ∈ R with minimum f(i)  // greedy rule
4.   A = A ∪ {i}
5.   R = R - {i} - X, where X = {j : j ∈ R and j is not compatible with i}
6. return A

- Q: Feasible? A: Yes: line 5 ensures that A is a compatible set of requests.
- Q: Optimal? Efficient?
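A direct Python rendering of the pseudocode above; sorting once by finish time replaces the repeated minimum search of line 3, giving an O(n log n) scan (the sample requests are illustrative):

```python
def interval_scheduling(requests):
    """Earliest-finish-time greedy for interval scheduling.
    `requests` is a list of (start, finish) pairs over half-open
    intervals [s, f); returns a maximum-size compatible subset."""
    accepted = []                    # A in the pseudocode
    last_finish = float("-inf")
    for s, f in sorted(requests, key=lambda r: r[1]):  # minimum f(i) first
        if s >= last_finish:         # compatible with every accepted request
            accepted.append((s, f))
            last_finish = f
    return accepted

demo = [(1, 3), (2, 5), (4, 7), (1, 8), (5, 9), (8, 10)]
print(interval_scheduling(demo))  # [(1, 3), (4, 7), (8, 10)]
```

Tracking only the finish time of the last accepted request suffices because requests are scanned in order of increasing finish time, so a new request conflicts with A only if it starts before that finish time.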
The Greedy Algorithm Stays Ahead
- Q: How do we prove optimality?
  - Let O be an optimal solution. Prove A = O? Too strong: there may be several optimal solutions. Prove |A| = |O| instead.
- The greedy algorithm stays ahead: we compare the partial solutions of A and O, and show that the greedy algorithm is doing at least as well in a step-by-step fashion.
- Claim: Let A = {i1, …, ik} be the output of the greedy algorithm, in the order the requests were added, and let O = {j1, …, jm} be an optimal solution, in ascending order of start (equivalently, finish) times. Then for all indices r ≤ k, f(ir) ≤ f(jr).
- Pf: proof by induction.
  - Basis step: true for r = 1, since the greedy algorithm starts with a request of minimum finish time: f(i1) ≤ f(j1).
  - Induction step: assume the claim holds for r - 1, i.e., f(ir-1) ≤ f(jr-1). Since O is compatible, f(jr-1) ≤ s(jr). Hence f(ir-1) ≤ s(jr), so jr is still in R after ir-1 is selected in line 5. By line 3 (minimum finish time), f(ir) ≤ f(jr).

[Figure: intervals ir-1, ir and jr-1, jr illustrating the induction step. Q: can ir finish later than jr? No.]
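Because f(ir) ≤ f(jr) for every r, the greedy solution can always accommodate at least as many requests as O, so |A| = |O|. A brute-force sanity check of this conclusion on random small instances (the harness below is mine, not from the slides):

```python
import random
from itertools import combinations

def greedy_size(reqs):
    """Size of the earliest-finish-time greedy solution."""
    count, last = 0, float("-inf")
    for s, f in sorted(reqs, key=lambda r: r[1]):
        if s >= last:
            count, last = count + 1, f
    return count

def optimal_size(reqs):
    """Brute-force maximum compatible subset size (fine for tiny instances)."""
    for r in range(len(reqs), 0, -1):
        for sub in combinations(reqs, r):
            # Pairwise compatibility suffices for intervals.
            if all(a[1] <= b[0] or b[1] <= a[0]
                   for a, b in combinations(sub, 2)):
                return r
    return 0

random.seed(0)
for _ in range(200):
    reqs = [(s, s + random.randint(1, 5))
            for s in (random.randint(0, 15) for _ in range(7))]
    assert greedy_size(reqs) == optimal_size(reqs)
print("greedy matches the brute-force optimum on 200 random instances")
```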
6. IRIS H.-R. JIANG
The Interval Scheduling Problem
¨ Given: Set of requests {1, 2, …, n}, ith request corresponds an
interval with start time s(i) and finish time f(i)
¤ interval i: [s(i), f(i))
¨ Goal: Find a compatible subset of the requests with maximum size
Greedy algorithms
5
requests don’t overlap optimal
Time
0 1 2 3 4 5 6 7 8 9 10 11
6
7
8
5
1
2
3
4
Maximum compatible subset {2, 5, 8}
8
5
2
IRIS H.-R. JIANG
Greedy Rule
¨ Repeat
¤ Use a simple rule to select a first request i1.
¤ Once i1 is selected, reject all requests not compatible with i1.
¨ Until run out of requests.
¨ Q: How to decide a greedy rule for a good algorithm?
¨ A:
1. Earliest start time: min s(i)
2. Shortest interval: min {f(i) - s(i)}
3. Fewest conflicts: mini=1..n |{j: j is not compatible with i}|
4. Earliest finish time: min f(i)
Greedy algorithms
6
IRIS H.-R. JIANG
The Greedy Algorithm
¨ The 4th greedy rule leads to the optimal solution.
¤ We first accept the request that finish first
¤ Natural idea: Our resource becomes free ASAP
¨ The greedy algorithm:
¤ Q: Feasible?
¤ A: Yes! Line 5.
n A is a compatible set of requests.
¤ Q: Optimal? Efficient?
Greedy algorithms
7
Interval-Scheduling(R)
// R: undetermined requests; A: accepted requests
1. A = Æ;
2. while (R is not empty) do
3. choose a request iÎR with minimum f(i) // greedy rule
4. A = A + {i}
5. R = R – {i} – X, where X = {j: jÎR and j is not compatible with i)
6. return A
Æ: empty set = {}
The Greedy Algorithm Stays Ahead
- Q: How do we prove optimality?
  - Let O be an optimal solution. Do we prove A = O? It suffices to prove |A| = |O|.
- The greedy algorithm stays ahead:
  - We compare the partial solutions of A and O, and show that the greedy algorithm does at least as well in a step-by-step fashion.
- Let A = {i_1, …, i_k} be the output of the greedy algorithm, in the order the requests were added. Let O = {j_1, …, j_m} be an optimal solution, in ascending order of start (equivalently, finish) times.
  Claim: For all indices r ≤ k, we have f(i_r) ≤ f(j_r).
- Pf: By induction on r.
  - Basis step: true for r = 1, since the greedy rule picks the minimum finish time, so f(i_1) ≤ f(j_1).
  - Induction step: assume the claim is true for r - 1.
    - f(i_{r-1}) ≤ f(j_{r-1}) by the hypothesis.
    - O is compatible, so f(j_{r-1}) ≤ s(j_r).
    - Hence f(i_{r-1}) ≤ s(j_r), so j_r is still in R after i_{r-1} is selected in line 5.
    - By line 3, f(i_r) ≤ f(j_r).

[Figure: i_{r-1}, i_r drawn against j_{r-1}, j_r; can i_r finish later than j_r? No.]
The Greedy Algorithm Is Optimal
- The greedy algorithm returns an optimal set A.
- Pf: By contradiction.
  - If A is not optimal, an optimal set O must contain more requests, i.e., |O| = m > k = |A|.
  - Since f(i_k) ≤ f(j_k) (the greedy algorithm stays ahead) and m > k, there is a request j_{k+1} in O.
  - f(j_k) ≤ s(j_{k+1}), so f(i_k) ≤ s(j_{k+1}).
  - Hence j_{k+1} is compatible with i_k, so R should still contain j_{k+1}.
  - However, the greedy algorithm stops with request i_k, and it only stops when R is empty. Contradiction!

[Figure: A = {i_1, …, i_k} aligned above O = {j_1, …, j_k, j_{k+1}, …, j_m}.]
Implementation: The Greedy Algorithm
- Running time: from O(n^2) down to O(n log n)
  - Initialization:
    - O(n log n): sort R in ascending order of f(i)
    - O(n): construct S with S[i] = s(i)
  - Lines 3 and 5:
    - O(n) in total: scan the sorted R once
    - We do not explicitly delete all incompatible requests in line 5; during the scan we simply skip every request that starts before the most recently selected request finishes.

Interval-Scheduling(R)
// R: undetermined requests; A: accepted requests
1. A = ∅
2. while (R is not empty) do
3. choose a request i ∈ R with minimum f(i) // greedy rule
4. A = A ∪ {i}
5. R = R – {i} – X, where X = {j: j ∈ R and j is not compatible with i}
6. return A
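The O(n log n) version described above (sort by finish time once, then a single scan) can be sketched as follows; the names are my own:

```python
def interval_scheduling_fast(requests):
    """Sort by finish time, then scan once, skipping any request that
    starts before the last accepted request finishes."""
    accepted = []
    last_finish = float("-inf")
    for s, f in sorted(requests, key=lambda q: q[1]):   # O(n log n) sort on f(i)
        if s >= last_finish:       # compatible with everything accepted so far
            accepted.append((s, f))
            last_finish = f
    return accepted
```

This replaces the explicit deletions of line 5 with a constant-time comparison per request.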
The Interval Scheduling Problem

[Figure: nine intervals on a time line 0–16; the maximum compatible subset is {1, 3, 5, 8}.]
Interval Partitioning
(a.k.a. interval coloring)
What if We Have Multiple Resources?
- The interval partitioning problem:
  - a.k.a. the interval coloring problem: one resource = one color
  - Use as few resources as possible
- Given: A set of requests {1, 2, …, n}; the ith request corresponds to an interval with start time s(i) and finish time f(i)
  - interval i: [s(i), f(i))
- Goal: Partition the requests into a minimum number of compatible subsets, each corresponding to one resource

[Figure: intervals on a time line 0–13 scheduled on 3 resources.]
How Many Resources Are Required?
- The depth of a set of intervals is the maximum number of intervals that pass over any single point on the time line.
- In any instance of interval partitioning, the number of resources needed is at least the depth of the set of intervals. This gives a lower bound.
- Pf:
  - Suppose a set of intervals has depth d, and let I_1, …, I_d all pass over a common point on the time line.
  - Each of these intervals must be scheduled on a different resource, so the whole instance needs at least d resources.

[Figure: intervals on a time line 0–13 with depth 3.]
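The depth defined above can be computed with a simple sweep over start and finish events; a sketch, assuming half-open [s, f) intervals (so a finish at time t does not conflict with a start at t):

```python
def depth(intervals):
    """Maximum number of intervals passing over a single point."""
    # +1 at each start, -1 at each finish; tuple sort puts (-1) before (+1)
    # at equal times, matching the half-open [s, f) convention.
    events = [(s, 1) for s, f in intervals] + [(f, -1) for s, f in intervals]
    events.sort()
    best = cur = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best
```

For instance, depth([(0, 4), (1, 5), (2, 6), (5, 7)]) is 3, while the touching pair (0, 2), (2, 4) has depth 1.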
Can We Reach the Lower Bound?
- The depth d is a lower bound on the number of required resources.
- Q: Can we always schedule all requests using only d resources?
- A: Yes.
- Q: How do we prove optimality here?
- A: Find a bound that every possible solution must meet, and then show that the algorithm under consideration always achieves this bound.
The Greedy Algorithm
- Assign a label to each interval. Possible labels: {1, 2, …, d}.
- Overlapping intervals must receive different labels.
- Implementation:
  - Lines 3–5: find a label (resource) compatible with I_j and assign it
    - For each label, record the finish time of the last interval assigned to it
    - Compatibility check: s(I_j) ≥ the recorded finish time of the label
    - Use a priority queue to maintain the labels

Interval-Partitioning(R)
1. {I_1, …, I_n} = sort intervals in ascending order of their start times
2. for j from 1 to n do
3. exclude the labels of all assigned intervals that are not compatible with I_j
4. if (there is a nonexcluded label from {1, 2, …, d}) then
5. assign a nonexcluded label to I_j
6. else leave I_j unlabeled
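The priority-queue idea above can be sketched with Python's heapq, keyed by the recorded finish time of each label. In this sketch labels are allocated on demand rather than fixed to {1, …, d} up front, which uses the same number of labels; the names are my own:

```python
import heapq

def interval_partitioning(intervals):
    """Assign labels greedily in order of start time; returns a list of
    (interval, label) pairs."""
    assignment = []
    heap = []            # (finish time of last interval on this label, label)
    labels_used = 0
    for s, f in sorted(intervals):             # ascending start times (line 1)
        if heap and heap[0][0] <= s:           # some label's last interval ended
            _, label = heapq.heappop(heap)     # reuse that label (lines 3-5)
        else:
            labels_used += 1                   # every existing label conflicts
            label = labels_used
        assignment.append(((s, f), label))
        heapq.heappush(heap, (f, label))
    return assignment
```

On [(0, 4), (1, 5), (2, 6), (5, 7)] this opens labels 1, 2, 3 for the first three intervals and reuses label 1 for (5, 7), matching the depth of 3.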
The Interval Partitioning Problem

[Figure: ten intervals with depth 3 on a time line 0–13, partitioned into 3 labeled groups.]
Optimality (1/2)
- The greedy algorithm assigns every interval a label, and no two overlapping intervals receive the same label.
- Pf:
  1. No interval ends up unlabeled.
    - Suppose interval I_j overlaps t intervals earlier in the sorted list.
    - These t + 1 intervals pass over a common point, namely s(I_j).
    - Hence t + 1 ≤ d, so t ≤ d – 1; at least one of the d labels is not excluded, and that label can be assigned to I_j.
    - In other words, line 6 never executes!
Optimality (2/2)
- Pf (cont'd):
  2. No two overlapping intervals are assigned the same label.
    - Consider any two overlapping intervals I_i and I_j with i < j.
    - When I_j is considered (line 2), I_i is among the intervals whose labels are excluded (line 3).
    - Hence the algorithm will not assign to I_j the label used for I_i.
  - Since the algorithm uses at most d labels, the greedy algorithm always uses the minimum possible number of labels, i.e., it is optimal!
Scheduling to Minimize Lateness
(an exchange argument)
What If Each Request Has a Deadline?
- Given: A single resource available starting at time s, and a set of requests {1, 2, …, n}; request i requires a contiguous interval of length t_i and has a deadline d_i.
- Goal: Schedule all requests without overlap so as to minimize the maximum lateness.
  - Lateness: l_i = max{0, f(i) – d_i}.

[Figure: six requests with lengths and deadlines on a time line 0–15, scheduled with max lateness = 1.]
Greedy Rule
- Consider requests in some order. Candidate rules:
  1. Shortest interval first: process requests in ascending order of t_i
  2. Smallest slack first: process requests in ascending order of d_i – t_i
  3. Earliest deadline first: process requests in ascending order of d_i
- Counterexample for rule 1 (shortest interval first schedules job 1 first, making job 2 late; the order 2, 1 has no lateness):

  job | t_i | d_i
   1  |  1  | 100
   2  | 10  |  10

- Counterexample for rule 2 (job 2 has the smaller slack and goes first, so job 1 finishes at time 11 with lateness 9):

  job | t_i | d_i
   1  |  1  |  2
   2  | 10  |  10

- Intuition for rule 3: jobs with earlier deadlines get completed earlier.
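The counterexamples above can be checked by computing the maximum lateness of a back-to-back schedule in a given order; a small sketch, with jobs as (t_i, d_i) pairs and names of my own:

```python
def max_lateness(jobs, order, start=0):
    """Schedule jobs back to back in the given order; return max lateness."""
    f, worst = start, 0
    for idx in order:
        t, d = jobs[idx]
        f += t                      # f(i) of this job
        worst = max(worst, f - d)   # lateness is max{0, f(i) - d_i}
    return max(worst, 0)

shortest_first = [(1, 100), (10, 10)]   # rule 1 picks job 0 first
smallest_slack = [(1, 2), (10, 10)]     # rule 2 picks job 1 first (slack 0)
```

Here max_lateness(shortest_first, [0, 1]) is 1 while the order [1, 0] achieves 0, and max_lateness(smallest_slack, [1, 0]) is 9 while the earliest-deadline order [0, 1] achieves only 1.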
Minimizing Lateness
- Greedy rule: earliest deadline first!

Min-Lateness(R, s)
// f: the finish time of the last scheduled request
1. {d_1, …, d_n} = sort requests in ascending order of their deadlines
2. f = s
3. for i from 1 to n do
4. assign request i to the time interval from s(i) = f to f(i) = f + t_i
5. f = f + t_i
6. return the set of scheduled intervals [s(i), f(i)) for all i = 1..n

[Figure: the six requests from the previous example scheduled in deadline order starting at s; max lateness = 1.]
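Min-Lateness above becomes a short loop once the requests are sorted by deadline; a Python sketch with requests as (t_i, d_i) pairs (names are my own):

```python
def min_lateness(requests, s=0):
    """Earliest deadline first: sort by d_i, schedule back to back from s.
    Returns the scheduled intervals [s(i), f(i)) and the max lateness."""
    f = s                                               # line 2
    schedule, worst = [], 0
    for t, d in sorted(requests, key=lambda r: r[1]):   # line 1: sort by d_i
        schedule.append((f, f + t))                     # line 4: [s(i), f(i))
        f += t                                          # line 5
        worst = max(worst, f - d)
    return schedule, max(worst, 0)                      # line 6, plus lateness
```

For the rule-2 counterexample [(10, 10), (1, 2)], this schedules the d = 2 job first and achieves max lateness 1.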
No Idle Time
- Observation: the greedy schedule has no idle time; line 4 always starts the next request exactly when the previous one finishes.
- There is an optimal schedule with no idle time.

[Figure: a schedule with idle time vs. the same jobs (deadlines d = 4, 6, 12) compacted with no idle time.]
22. IRIS H.-R. JIANG
What If Each Request Has a Deadline?
¨ Given: A single resource is available starting at time s. A set of
requests {1, 2, …, n}, request i requires a contiguous interval of
length ti and has a deadline di.
¨ Goal: Schedule all requests without overlapping so as to minimize
the maximum lateness.
¤ Lateness: li = max{0, f(i) – di}.
Greedy algorithms
21
Time
0 1 2 3 4 5 6 7 8 9 10 11 12 13
6
5
4
1
3
2
14 15
deadline
length max lateness = 1
1 2 3 4 5 6
IRIS H.-R. JIANG
Greedy Rule
¨ Consider requests in some order.
1. Shortest interval first: Process requests in ascending order of ti
2. Smallest slack: Process requests in ascending order of di – ti
3. Earliest deadline first: Process requests in ascending order of di
Greedy algorithms
22
1 2
ti
di
1
100
10
10
1 2
ti
di
1
2
10
10
Jobs with earlier deadlines
get completed earlier
IRIS H.-R. JIANG
Minimizing Lateness
¨ Greedy rule: Earliest deadline first!
Greedy algorithms
23
Min-Lateness(R,s)
// f: the finishing time of the last scheduled request
1. {d1, …,dn} = sort requests in ascending order of their deadlines
2. f = s
3. for i from 1 to n do
4. assign request i to the time interval from s(i) = f to f(i) = f + ti
5. f = f + ti
6. return the set of scheduled intervals [s(i), f(i)) for all i = 1..n
Time
s
6
5
4
1
3
2
max lateness = 1
1 2 3 4 5 6
IRIS H.-R. JIANG
No Idle Time
¨ Observation: The greedy schedule has no idle time.
¤ Line 4!
¨ There is an optimal schedule with no idle time.
Greedy algorithms
24
Min-Lateness(R,s)
// f: the finishing time of the last scheduled request
1. {d1, …,dn} = sort requests in ascending order of their deadlines
2. f = s
3. for i from 1 to n do
4. assign request i to the time interval from s(i) = f to f(i) = f + ti
5. f = f + ti
6. return the set of scheduled intervals [s(i), f(i)) for all i = 1..n
0 1 2 3 4 5 6 7 8 9 10 11 12
d = 4 d = 6 d = 12
0 1 2 3 4 5 6 7 8 9 10 11 12
d = 4 d = 6 d = 12
23. IRIS H.-R. JIANG
What If Each Request Has a Deadline?
¨ Given: A single resource is available starting at time s. A set of
requests {1, 2, …, n}, request i requires a contiguous interval of
length ti and has a deadline di.
¨ Goal: Schedule all requests without overlapping so as to minimize
the maximum lateness.
¤ Lateness: li = max{0, f(i) – di}.
Greedy algorithms
21
[Figure: Gantt chart of jobs 1–6 on a time axis 0–15, each bar labeled with its length and deadline; max lateness = 1]
IRIS H.-R. JIANG
Greedy Rule
• Consider requests in some order.
1. Shortest interval first: Process requests in ascending order of t_i
2. Smallest slack: Process requests in ascending order of d_i − t_i
3. Earliest deadline first: Process requests in ascending order of d_i
Greedy algorithms
22
Counterexample for rule 1 (shortest interval first schedules job 1 first; job 2 then finishes at time 11, with lateness 1 instead of 0):

  Job | 1   | 2
  t_i | 1   | 10
  d_i | 100 | 10

Counterexample for rule 2 (smallest slack schedules job 2 first, since its slack is 0; job 1 then finishes at time 11, with lateness 9 instead of 1):

  Job | 1 | 2
  t_i | 1 | 10
  d_i | 2 | 10

Jobs with earlier deadlines
get completed earlier
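The two counterexamples above can be checked directly. A minimal sketch (the function name is ours, not from the slides) that schedules jobs back to back in a given order and reports the maximum lateness:

```python
# A minimal sketch (ours): schedule jobs contiguously from time s
# in a given order and report the maximum lateness.

def max_lateness(jobs, order, s=0):
    """jobs: list of (t_i, d_i); order: indices into jobs.
    Returns max over i of max{0, f(i) - d_i}."""
    f, worst = s, 0
    for i in order:
        t, d = jobs[i]
        f += t                     # request i finishes at time f
        worst = max(worst, f - d)  # lateness is max{0, f - d}
    return worst

# Counterexample for rule 1: shortest interval first picks job 1.
jobs1 = [(1, 100), (10, 10)]
print(max_lateness(jobs1, [0, 1]))  # shortest interval first -> 1
print(max_lateness(jobs1, [1, 0]))  # earliest deadline first -> 0

# Counterexample for rule 2: smallest slack d_i - t_i picks job 2.
jobs2 = [(1, 2), (10, 10)]
print(max_lateness(jobs2, [1, 0]))  # smallest slack first -> 9
print(max_lateness(jobs2, [0, 1]))  # earliest deadline first -> 1
```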
IRIS H.-R. JIANG
Minimizing Lateness
• Greedy rule: Earliest deadline first!
Greedy algorithms
23
Min-Lateness(R, s)
// f: the finishing time of the last scheduled request
1. {d_1, …, d_n} = sort requests in ascending order of their deadlines
2. f = s
3. for i from 1 to n do
4.   assign request i to the time interval from s(i) = f to f(i) = f + t_i
5.   f = f + t_i
6. return the set of scheduled intervals [s(i), f(i)) for all i = 1..n
[Figure: the earliest-deadline-first schedule of jobs 1–6 starting at time s; max lateness = 1]
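Min-Lateness translates almost line for line into Python. This sketch is our transcription, run on an illustrative six-job instance (not necessarily the one in the figure):

```python
# Our transcription of Min-Lateness(R, s): sort by deadline, then pack
# the requests back to back starting at s (no idle time, as on line 4).

def min_lateness(requests, s=0):
    """requests: list of (t_i, d_i) pairs.
    Returns (intervals, max_lateness) for the EDF schedule."""
    order = sorted(requests, key=lambda td: td[1])  # ascending deadlines
    f, intervals, worst = s, [], 0
    for t, d in order:
        intervals.append((f, f + t))  # [s(i), f(i)) with f(i) = f + t_i
        f += t
        worst = max(worst, f - d)     # lateness of request i
    return intervals, worst

# Illustrative six-job instance (ours):
ivs, L = min_lateness([(3, 6), (2, 8), (1, 9), (4, 9), (3, 14), (2, 15)])
print(L)       # 1
print(ivs[0])  # (0, 3)
```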
IRIS H.-R. JIANG
No Idle Time
• Observation: The greedy schedule has no idle time.
◦ Line 4!
• There is an optimal schedule with no idle time.
Greedy algorithms
24
Min-Lateness(R, s)
// f: the finishing time of the last scheduled request
1. {d_1, …, d_n} = sort requests in ascending order of their deadlines
2. f = s
3. for i from 1 to n do
4.   assign request i to the time interval from s(i) = f to f(i) = f + t_i
5.   f = f + t_i
6. return the set of scheduled intervals [s(i), f(i)) for all i = 1..n
[Figure: two schedules of three jobs with d = 4, 6, 12 on a time axis 0–12: one with idle time, one without]
IRIS H.-R. JIANG
No Inversions
• Exchange argument: Gradually transform an optimal solution
into the one found by the greedy algorithm without hurting its
quality.
• An inversion in schedule S is a pair of requests i and j such
that s(i) < s(j) but d_j < d_i.
• All schedules without inversions and without idle time have the
same maximum lateness.
• Pf:
◦ If two different schedules have neither inversions nor idle time,
then they can only differ in the order in which requests with
identical deadlines are scheduled.
◦ Consider such a deadline d. In both schedules, the jobs with
deadline d are all scheduled consecutively (after all jobs with
earlier deadlines and before all jobs with later deadlines).
◦ Among them, the last one has the greatest lateness, and this
lateness does not depend on the order of the requests.
Greedy algorithms
25
IRIS H.-R. JIANG
Optimality
• There is an optimal schedule with no inversions and no idle
time.
• Pf:
◦ There is an optimal schedule O without idle time. (done!)
1. If O has an inversion, there is a pair of jobs i and j such that j is
scheduled immediately after i and has d_j < d_i.
2. After swapping i and j, we get a schedule with one less
inversion.
3. The new swapped schedule has a maximum lateness no larger
than that of O.
▪ Other requests have the same lateness.
Greedy algorithms
26
[Figure: an adjacent inversion i, j (with d_j < d_i) before and after the swap; after the swap, i finishes where j previously did, at time f_i, and its lateness f_i − d_i is at most j's old lateness f_i − d_j]
IRIS H.-R. JIANG
Optimality: Exchange Argument
• Theorem: The greedy schedule S is optimal.
• Pf: Proof by contradiction
◦ Let O be an optimal schedule with the fewest inversions.
◦ Assume O has no idle time.
◦ If O has no inversions, then S and O have the same maximum
lateness. done!
◦ If O has an inversion, let i-j be an adjacent inversion.
▪ Swapping i and j does not increase the maximum lateness and
strictly decreases the number of inversions.
▪ This contradicts the choice of O.
Greedy algorithms
27
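The exchange argument can also be spot-checked by brute force. A sketch of ours that compares the EDF schedule against every permutation on small random instances:

```python
# A sketch (ours): exhaustively verify on small random instances that
# the EDF schedule matches the optimum over all n! request orders.
from itertools import permutations
from random import randint, seed

def max_lateness(jobs, order):
    """Schedule jobs (t, d) contiguously from time 0 in the given order."""
    f, worst = 0, 0
    for i in order:
        t, d = jobs[i]
        f += t
        worst = max(worst, f - d)
    return worst

seed(1)
for _ in range(100):
    jobs = [(randint(1, 5), randint(1, 12)) for _ in range(5)]
    edf = sorted(range(5), key=lambda i: jobs[i][1])  # earliest deadline first
    best = min(max_lateness(jobs, p) for p in permutations(range(5)))
    assert max_lateness(jobs, edf) == best            # EDF is optimal
print("EDF matched the optimum on all 100 instances")
```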
IRIS H.-R. JIANG
Summary: Greedy Analysis Strategies
• An algorithm is greedy if it builds up a solution in small steps,
choosing a decision at each step myopically to optimize some
underlying criterion.
• It’s challenging to prove that greedy algorithms solve a
nontrivial problem optimally.
1. The greedy algorithm stays ahead: Show that after each step of
the greedy algorithm, its partial solution is at least as good as the
corresponding part of any other solution.
2. An exchange argument: Gradually transform an optimal solution
into the one found by the greedy algorithm without hurting its
quality.
Greedy algorithms
28
Edsger W. Dijkstra 1959
Shortest Paths
29
Greedy algorithms
IRIS H.-R. JIANG
Edsger W. Dijkstra (1930–2002)
• 1972 recipient of the ACM Turing Award
Greedy algorithms
30
The question of whether computers can think is like
the question of whether submarines can swim.
http://www.cs.utexas.edu/users/EWD/obituary.html
If you want more effective programmers,
you will discover that they should not waste their time debugging;
they should not introduce the bugs to start with.
Program testing can be a very effective way to show the presence of bugs,
but it is hopelessly inadequate for showing their absence.
-- Turing Award Lecture 1972, “The Humble Programmer”
IRIS H.-R. JIANG
Google Map
Greedy algorithms
31
Shortest path from
San Francisco Shopping Centre to
Yosemite National Park
IRIS H.-R. JIANG
Floor Guide in a Shopping Mall
• Direct shoppers to their destinations in real time
Greedy algorithms
32
IRIS H.-R. JIANG
The Shortest Path Problem
• Given:
◦ Directed graph G = (V, E)
▪ Length ℓ_e = length of edge e = (u, v) ∈ E
▪ Distance; time; cost
▪ ℓ_e ≥ 0
▪ Q: What if undirected?
▪ A: 1 undirected edge = 2 directed ones
◦ Source s
• Goal:
◦ Shortest path P_v from s to each other node v ∈ V − {s}
▪ Length of path P: ℓ(P) = Σ_{e∈P} ℓ_e
Greedy algorithms
33
http://upload.wikimedia.org/wikipedia/commons/4/45/Dijksta_Anim.gif
ℓ(a→b) = ℓ(1→3→6→5) = 9 + 2 + 9 = 20
IRIS H.-R. JIANG
Dijkstra’s Algorithm
Greedy algorithms
34
E. W. Dijkstra: A note on two problems in connexion with graphs.
In Numerische Mathematik, 1 (1959), S. 269–271.
Dijkstra(G, ℓ)
// S: the set of explored nodes
// for each u ∈ S, we store a shortest-path distance d(u) from s to u
1. initialize S = {s}, d(s) = 0
2. while S ≠ V do
3.   select a node v ∉ S with at least one edge from S for which
4.   d'(v) = min_{e = (u, v): u ∈ S} d(u) + ℓ_e is as small as possible
5.   add v to S and define d(v) = d'(v)
(d'(v): the shortest path to some u in the explored part, followed by a
single edge (u, v))
[Figure: example run on a graph with nodes s, u, x, v, y, z and small edge lengths; S: the explored set. Initially d'(x) = 2, d'(y) = 4, d'(z) = 5; after further exploration, d'(y) = 3, d'(z) = 4]
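A runnable sketch of the algorithm above using Python's heapq as the priority queue. Since heapq has no ChangeKey, this version pushes duplicate entries and skips stale ones ("lazy deletion"), a common workaround; the example graph is hypothetical, not the one in the figure:

```python
# Dijkstra(G, l) with a binary heap (Python's heapq), using lazy
# deletion instead of ChangeKey.
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, length), ...]} with lengths >= 0.
    Returns d, the shortest-path distance from s to each reachable node."""
    d = {}                      # finalized distances: the explored set S
    pq = [(0, s)]               # candidate d'(v) values
    while pq:
        dv, v = heapq.heappop(pq)
        if v in d:              # stale entry: v was already explored
            continue
        d[v] = dv               # add v to S and define d(v) = d'(v)
        for w, le in graph.get(v, []):
            if w not in d:      # relax edge (v, w): update d'(w)
                heapq.heappush(pq, (dv + le, w))
    return d

# Hypothetical example graph (ours):
G = {'s': [('u', 1), ('x', 2)], 'u': [('v', 4), ('x', 3)],
     'x': [('u', 1), ('y', 2)], 'y': [('v', 1)], 'v': []}
print(dijkstra(G, 's'))  # {'s': 0, 'u': 1, 'x': 2, 'y': 4, 'v': 5}
```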
IRIS H.-R. JIANG
Correctness
• Loop invariant: Consider the set S at any point in the
algorithm’s execution. For each node u ∈ S, d(u) is the length
of the shortest s-u path P_u.
• Pf: Proof by induction on |S|
◦ Basis step: trivial for |S| = 1.
◦ Induction step: hypothesis: true for |S| = k ≥ 1.
▪ Grow S by adding v;
let (u, v) be the final edge on our s-v path P_v.
▪ By the induction hypothesis, P_u is the shortest s-u path.
▪ Consider any other s-v path P; P must leave S somewhere; let
y be the first node on P that is not in S, and x ∈ S be the node
just before y.
▪ P cannot be shorter than P_v because it is already at least as
long as P_v by the time it has left the set S.
▪ At iteration k + 1, d(v) = d'(v) = d(u) + ℓ_(u,v) ≤ d(x) + ℓ_(x,y) ≤ ℓ(P)
Greedy algorithms
35
[Figure: the set S containing s, u, x; the greedy path P_v ends with edge (u, v); an alternative path P leaves S at x via edge (x, y)]
The algorithm stays ahead!
Q: Why ℓ_e ≥ 0?
IRIS H.-R. JIANG
Implementation
• Q: How to do line 4 efficiently? d'(v) = min_{e = (u, v): u ∈ S} d(u) + ℓ_e
• A: Explicitly maintain d'(v) in the view of each unexplored node
v instead of S
◦ Next node to explore = node with minimum d'(v).
◦ When exploring v, update d'(w) for each outgoing edge (v, w), w ∉ S.
• Q: How?
• A: Implement a min priority queue nicely
Greedy algorithms
36

  Operation  | # calls in Dijkstra | Array | Binary heap | Fibonacci heap
  Insert     | O(n)                | O(n)  | Θ(lg n)     | Θ(1)
  ExtractMin | O(n)                | O(n)  | Θ(lg n)     | O(lg n)
  ChangeKey  | O(m)                | O(1)  | Θ(lg n)     | Θ(1)
  IsEmpty    | O(n)                | O(1)  | Θ(1)        | Θ(1)
  Total      |                     | O(n²) | O(m lg n)   | O(m + n lg n)
IRIS H.-R. JIANG
Example
Greedy algorithms
37
[Figure: step-by-step Dijkstra example; Cormen, 3rd ed.]
Binary Tree Application
Recap Heaps: Priority Queues
38
Greedy algorithms
IRIS H.-R. JIANG
Priority Queue
• In a priority queue (PQ)
◦ Each element has a priority (key)
◦ Only the element with the highest (or lowest) priority can be deleted
▪ Max priority queue, or min priority queue
◦ An element with arbitrary priority can be inserted into the queue
at any time
Greedy algorithms
39
  Operation  | Binary heap | Fibonacci heap
  FindMin    | Θ(1)        | Θ(1)
  ExtractMin | Θ(lg n)     | O(lg n)
  Insert     | Θ(lg n)     | Θ(1)
  ChangeKey  | Θ(lg n)     | Θ(1)
The time complexities are worst-case for the binary heap and amortized
for the Fibonacci heap.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.
Introduction to Algorithms, 2nd Edition. MIT Press and McGraw-Hill, 2001.
Fredman, M. L. & Tarjan, R. E. (1987). Fibonacci heaps and their uses in improved
network optimization algorithms. Journal of the ACM 34(3), pp. 596–615.
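Python's heapq module provides exactly this min-priority-queue interface over a plain list (it has no ChangeKey; the usual workaround is to push a fresh entry and ignore stale ones). A quick demonstration:

```python
# heapq: a binary min heap stored in a Python list.
import heapq

pq = []
for key in [21, 15, 20, 14, 10, 1, 2]:
    heapq.heappush(pq, key)   # Insert: O(lg n)
print(pq[0])                  # FindMin: O(1) -> prints 1
print(heapq.heappop(pq))      # ExtractMin: O(lg n) -> prints 1
print(heapq.heappop(pq))      # next minimum -> prints 2
```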
IRIS H.-R. JIANG
Heap
• Definition: A max (min) heap is
◦ A max (min) tree: key[parent] >= (<=) key[children]
◦ A complete binary tree
• Corollary: Who has the largest (smallest) key in a max (min) heap?
◦ Root!
• Example
Greedy algorithms
40
[Figure: example max heaps and min heaps on keys such as 14, 12, 10, 8, 7, 6, 4, 2]
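The parent/child inequality in the definition can be checked directly on the array representation, where the children of index i sit at 2i+1 and 2i+2. A minimal checker of ours:

```python
# A max heap stored as an array must satisfy key[parent] >= key[child]
# at every non-root index; parent of index i is (i - 1) // 2.
def is_max_heap(a):
    return all(a[(i - 1) // 2] >= a[i] for i in range(1, len(a)))

print(is_max_heap([21, 15, 20, 14, 10, 1, 2]))  # True
print(is_max_heap([2, 15, 20, 14, 10, 1]))      # False: root 2 < its children
```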
Deletion from a Max Heap (1/3)
- Maintain the heap property ⇒ trickle down if needed!
- Pop():
[Figure: Pop() on an array-based max heap with heapSize = 7 and capacity = 10. Initial array: [21, 15, 20, 14, 10, 1, 2]. Remove the root 21 and move the last element 2 to the root, giving [2, 15, 20, 14, 10, 1] with heapSize = 6; trickling 2 down past its larger child 20 yields [20, 15, 2, 14, 10, 1].]
Deletion from a Max Heap (2/3)
- Maintain the heap property ⇒ trickle down if needed!
- Pop():
[Figure: a second Pop() on [20, 15, 2, 14, 10, 1] with heapSize = 6. Remove the root 20 and move the last element 1 to the root, giving [1, 15, 2, 14, 10] with heapSize = 5; trickling 1 down swaps it with 15 and then with 14, yielding [15, 14, 2, 1, 10].]
Deletion from a Max Heap (3/3)
- Time complexity?
  - How many times do we trickle down in the worst case? Θ(lg n)
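The Pop() traces above follow the usual array implementation: save the root, move the last element to the root, shrink the heap, and trickle down. A sketch in Python (0-indexed; this is our rendering, not the notes' own code):

```python
def pop_max(a):
    """Delete and return the root of a nonempty 0-indexed max heap stored in list a."""
    top = a[0]
    last = a.pop()                  # remove the last element, shrinking heapSize
    if a:
        a[0] = last                 # move it to the root...
        i, n = 0, len(a)
        while True:                 # ...and trickle it down
            left, right = 2 * i + 1, 2 * i + 2
            largest = i
            if left < n and a[left] > a[largest]:
                largest = left
            if right < n and a[right] > a[largest]:
                largest = right
            if largest == i:
                break               # heap property restored
            a[i], a[largest] = a[largest], a[i]
            i = largest
    return top

heap = [21, 15, 20, 14, 10, 1, 2]        # the slides' example
assert pop_max(heap) == 21
assert heap == [20, 15, 2, 14, 10, 1]    # matches Deletion (1/3)
assert pop_max(heap) == 20
assert heap == [15, 14, 2, 1, 10]        # matches Deletion (2/3)
```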
Max Heapify
- Max (min) heapify = maintain the max (min) heap property.
  - This is what we do to trickle down the root in deletion.
  - Assume i's left and right subtrees are heaps, but key[i] may be < (>) key[children].
  - Heapify i = trickle down key[i] ⇒ the tree rooted at i is a heap.
[Figure: max heapifying the root of [1, 15, 2, 14, 10]: 1 swaps with 15 and then with 14, yielding [15, 14, 2, 1, 10].]
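Max heapify as defined above can be written recursively in the style of Cormen's MAX-HEAPIFY, adapted here to 0-indexed Python (a sketch under that assumption):

```python
def max_heapify(a, i, n=None):
    """Trickle a[i] down so the subtree rooted at i becomes a max heap.

    Precondition: the subtrees rooted at i's children are already max heaps.
    """
    if n is None:
        n = len(a)
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)   # recurse into the subtree we disturbed

heap = [1, 15, 2, 14, 10]            # only the root violates the heap property
max_heapify(heap, 0)
assert heap == [15, 14, 2, 1, 10]    # the trace shown in the figure
```

The recursion visits one node per level, which is where the Θ(lg n) worst case comes from.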
How to Build a Max Heap?
- How do we convert any complete binary tree to a max heap?
- Intuition: max heapify in a bottom-up manner.
  - Induction basis: leaves are already heaps.
  - Induction step: start at the parents of the leaves and work upward to the root.
  - Time complexity: O(n lg n); a tighter analysis shows the total work is Θ(n).
[Figure: building a max heap bottom-up on a 10-element example from Cormen; each step max heapifies one internal node, from the parents of the leaves up to the root, until the maximum, 16, reaches the root.]
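The bottom-up build above can be sketched as follows: in a 0-indexed array of n elements, the last internal node sits at index n // 2 - 1, and heapifying from there down to index 0 converts any array into a max heap. The input array follows Cormen's build-max-heap example (0-indexed here):

```python
def max_heapify(a, i, n):
    """Trickle a[i] down within a[0:n], assuming i's subtrees are max heaps."""
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)

def build_max_heap(a):
    """Heapify every internal node, from the parents of the leaves up to the root."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # leaves are already heaps
        max_heapify(a, i, n)

a = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]     # Cormen's example array
build_max_heap(a)
assert a[0] == 16                          # the maximum now sits at the root
assert a == [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
```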
Minimum Spanning Trees
- Prim's algorithm (Robert C. Prim, 1957; Dijkstra, 1959)
- Kruskal's algorithm (Joseph B. Kruskal, 1956)
- Reverse-delete
[Figure: an example undirected graph with nodes A-G and edge costs 5, 5, 6, 7, 7, 8, 8, 9, 9, 11, 15.]
Minimum Spanning Graphs
- Q: How can a cable TV company lay cable to a new neighborhood, of course, as cheaply as possible?
- A: Curiously and fortunately, this is a problem that many greedy algorithms solve optimally.
- Given:
  - An undirected graph G = (V, E) with a nonnegative cost ce ≥ 0 for each edge e = (u, v) ∈ E.
- Goal:
  - Find a subset of edges T ⊆ E so that the subgraph (V, T) is connected and the total cost Σe∈T ce is minimized.
[Figure: the example graph and a minimum-cost connected subgraph highlighted.]
Minimum Spanning Trees
- Q: Let T be a minimum-cost solution. What must (V, T) be?
- A:
  - By definition, (V, T) must be connected.
  - We show that it also contains no cycles.
  - Suppose it contained a cycle C, and let e be any edge on C.
  - We claim that (V, T – {e}) is still connected: any path that previously used e can go around C – {e} instead.
  - It follows that (V, T – {e}) is also a valid solution, but cheaper.
  - Hence, (V, T) is a tree.
- Goal:
  - Find a subset of edges T ⊆ E so that (V, T) is a tree and the total cost Σe∈T ce is minimized.
Minimum spanning trees!

[Figure: the example graph and its minimum spanning tree.]
Greedy Algorithms (1/3)
- Q: What will you do?
- All three greedy algorithms produce an MST.
- Kruskal's algorithm:
  - Start with T = {}.
  - Consider edges in ascending order of cost.
  - Insert edge e into T as long as it does not create a cycle; otherwise, discard e and continue.
[Figure: Kruskal's algorithm run on the example graph; the selected edges form an MST.]
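Kruskal's steps can be sketched directly, using a union-find structure to detect cycles. This is a minimal sketch on a small graph of our own (the slides' figure edges are not recoverable here):

```python
def kruskal(n, edges):
    """Kruskal's MST: scan edges by ascending cost, keep those joining two components.

    n: number of nodes 0..n-1; edges: list of (cost, u, v).
    Returns (total_cost, tree_edges).
    """
    parent = list(range(n))

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, tree = 0, []
    for cost, u, v in sorted(edges):    # ascending order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # adding (u, v) creates no cycle
            parent[ru] = rv
            total += cost
            tree.append((u, v))
    return total, tree

# A small example of our own (not the slides' figure):
edges = [(7, 0, 1), (5, 0, 3), (9, 1, 2), (8, 1, 3), (5, 2, 4), (6, 3, 4), (15, 2, 3)]
total, tree = kruskal(5, edges)
assert total == 23
assert len(tree) == 4                   # a spanning tree has |V| - 1 edges
```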
Greedy Algorithms (2/3)
- Q: What will you do?
- All three greedy algorithms produce an MST.
- Prim's algorithm (cf. Dijkstra's algorithm):
  - Start with a root node s.
  - Greedily grow a tree T from s outward.
  - At each step, add the cheapest edge e that has exactly one endpoint in T to the partial tree T.
[Figure: Prim's algorithm growing a tree on the example graph; the slide lists the node order D, A, F, B, E, C, G.]
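Prim's outward growth pairs naturally with the priority queue recapped earlier: keep the frontier edges in a min heap and always extract the cheapest one leaving the tree. A sketch using heapq, on a small graph of our own (not the slides' figure):

```python
import heapq

def prim(adj, s=0):
    """Prim's MST: grow a tree from s, always adding the cheapest edge leaving it.

    adj: adjacency list {u: [(cost, v), ...]} of a connected undirected graph.
    Returns the total MST cost.
    """
    visited = set()
    frontier = [(0, s)]                 # min priority queue of (cost, node)
    total = 0
    while frontier:
        cost, u = heapq.heappop(frontier)
        if u in visited:                # stale entry: u joined via a cheaper edge
            continue
        visited.add(u)
        total += cost
        for c, v in adj[u]:
            if v not in visited:
                heapq.heappush(frontier, (c, v))
    return total

# A small example of our own (not the slides' figure):
adj = {
    0: [(7, 1), (5, 3)],
    1: [(7, 0), (9, 2), (8, 3)],
    2: [(9, 1), (5, 4), (15, 3)],
    3: [(5, 0), (8, 1), (6, 4), (15, 2)],
    4: [(5, 2), (6, 3)],
}
assert prim(adj) == 23
```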
Greedy Algorithms (3/3)
- Q: What will you do?
- All three greedy algorithms produce an MST.
- Reverse-delete algorithm (the reverse of Kruskal's algorithm):
  - Start with T = E.
  - Consider edges in descending order of cost.
  - Delete edge e from T unless doing so would disconnect T.
[Figure: the reverse-delete algorithm on the example graph.]
Cut Property (1/2)
- Simplifying assumption: all edge costs ce are distinct.
- Q: When is it safe to include an edge in the MST?
- Cut Property: Let S be any subset of nodes, and let e = (v, w) be the minimum-cost edge with one end in S and the other in V – S. Then every MST contains e.
- Pf: Exchange argument!
  - Let T be a spanning tree that does not contain e. We need to show that T does not have the minimum possible cost.
  - Since T is a spanning tree, it must contain an edge f with one end in S and the other in V – S.
  - Since e is the cheapest edge with this property, we have ce < cf.
  - Hence, T – {f} + {e} is a spanning tree that is cheaper than T.
- Q: What's wrong with this proof?
- A: Take care with the definition!
  - A spanning tree must be connected and acyclic, and for an arbitrary crossing edge f, T – {f} + {e} need not be either.

[Figure: a cut S; edges e = (v, w), e' = (v', w'), f, and h cross between S and V – S.]