2. Introduction
• A greedy algorithm always makes the choice that
looks best at the moment, hoping that a locally
optimal choice will lead to a global optimum
• Like dynamic programming, it applies to
optimization problems
• It is usually easy to think up and implement
• Most problems for which they work well have two
properties
– Greedy choice property
– Optimal substructure
3. Greedy choice property
• A greedy algorithm never reconsiders its choices
• This is the main difference from dynamic programming
5. Activity Selection problem
• The activity selection problem is to select the maximum
number of activities that can be performed by a
single person or machine within a time frame, given
a set of activities each marked by a start time and
a finish time
• Formal definition
– number of activities: n
– start time of activity i is si
– finish time of activity i is fi
– non-conflicting activities i and j: si ≥ fj or sj ≥ fi
– Find a maximum set S of mutually non-conflicting activities
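The non-conflict condition above can be sketched as a small Python helper; the sample activities are illustrative, not from the slides:

```python
# Two activities are non-conflicting when one finishes before the
# other starts: si >= fj or sj >= fi (per the definition above).
def non_conflicting(s_i, f_i, s_j, f_j):
    return s_i >= f_j or s_j >= f_i

# Activity (1, 4) overlaps (2, 5), but may share an endpoint with (4, 6).
print(non_conflicting(1, 4, 2, 5))  # False
print(non_conflicting(1, 4, 4, 6))  # True
```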
6. Early Finish Greedy
• Activity Selection problem has optimal substructure
– Assume that activities are sorted by monotonically
increasing finish time
– Aij = Aik ∪ {ak} ∪ Akj
• Select the activity with the earliest finish time
• Eliminate the activities that conflict with it
• Repeat until no activities remain
12. Early Finish Greedy
Sort the set of activities by finishing time (f[i])
S = {1}
f = f[1]
for i = 2 to n
    if s[i] ≥ f
        S = S ∪ {i}
        f = f[i]
end for
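The pseudocode above can be written as a short runnable Python sketch; the activity list is an illustrative example, not from the slides:

```python
# Early-finish greedy: sort by finish time, then repeatedly take the
# earliest-finishing activity whose start is at or after the finish
# time of the last selected activity.
def select_activities(activities):
    activities = sorted(activities, key=lambda a: a[1])  # sort by f[i]
    selected = []
    last_finish = float("-inf")
    for start, finish in activities:
        if start >= last_finish:        # non-conflicting with selection
            selected.append((start, finish))
            last_finish = finish
    return selected

# Hypothetical (start, finish) pairs for illustration.
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11)]
```

Sorting dominates the cost, so the whole procedure runs in O(n log n).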
13. Cases of failure
• Greedy algorithms don’t always yield an optimal
solution
• Ex) How can a given amount of money be made with
the least number of coins of given denominations?
– Target amount: 6
– Denominations: 1, 3, 4
– Greedy solution: (4, 1, 1)
– Optimal solution: (3, 3)
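The failure case on this slide can be reproduced with a short Python sketch; the simple dynamic-programming check is an assumption added for comparison, not part of the slides:

```python
# Greedy coin change: always take the largest denomination that fits.
def greedy_coins(amount, denoms):
    coins = []
    for d in sorted(denoms, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

# Exhaustive DP check: best[a] holds a fewest-coin list summing to a.
def optimal_coins(amount, denoms):
    best = [[] for _ in range(amount + 1)]
    for a in range(1, amount + 1):
        candidates = [best[a - d] + [d] for d in denoms
                      if d <= a and (a == d or best[a - d])]
        best[a] = min(candidates, key=len) if candidates else []
    return best[amount]

print(greedy_coins(6, [1, 3, 4]))   # [4, 1, 1] — three coins
print(optimal_coins(6, [1, 3, 4]))  # [3, 3]    — two coins
```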
14. Conclusion
• Greedy algorithms are usually easy to think of, easy
to implement, and fast to run,
• but they may fail to produce the optimal solution
• Mathematical concepts may give you a recipe for
proving that a problem can be solved greedily,
but it ultimately comes down to the experience of
the programmer