
- 1. Review
- 2. Overview
  - Fundamentals of Analysis of Algorithm Efficiency
  - Algorithmic Techniques: Divide-and-Conquer, Decrease-and-Conquer, Dynamic Programming, Greedy Technique
  - Data Structures: Heaps; Graphs (adjacency matrices & adjacency linked lists); Trees
- 3. Fundamentals of Analysis of Algorithm Efficiency
  - Basic operations
  - Worst-, best-, and average-case time efficiencies
  - Orders of growth
  - Efficiency of non-recursive algorithms
  - Efficiency of recursive algorithms
- 4. Worst-Case, Best-Case, and Average-Case Efficiency
  - Worst-case efficiency: the number of times the basic operation is executed for the worst-case input of size n, i.e., the input for which the algorithm runs the longest among all possible inputs of size n.
  - Best-case efficiency: the number of times the basic operation is executed for the best-case input of size n, i.e., the input for which the algorithm runs the fastest among all possible inputs of size n.
  - Average-case efficiency: the number of times the basic operation is executed for a typical/random input. NOT the average of the worst and best cases. How do we find the average-case efficiency?
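As a minimal sketch of how the three cases differ, here is sequential search with its basic operation (the key comparison) counted explicitly; the function name and counter mechanism are illustrative choices, not from the slides:

```python
def sequential_search(a, key):
    """Linear scan returning (index, comparisons); the comparison of
    each element against key is the basic operation being counted."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == key:
            return i, comparisons
    return -1, comparisons

# Best case: key is the first element -> 1 comparison.
# Worst case: key is last or absent -> n comparisons.
# Average case (key present, each position equally likely): about (n + 1) / 2,
# which is NOT the average of the best and worst counts in general.
```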
- 5. Orders of Growth
  Three notations are used to compare orders of growth of algorithms:
  - O(g(n)): the class of functions f(n) that grow no faster than g(n)
  - Θ(g(n)): the class of functions f(n) that grow at the same rate as g(n)
  - Ω(g(n)): the class of functions f(n) that grow at least as fast as g(n)
- 6. Theorem
  - If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). The analogous assertions are true for the Ω- and Θ-notations.
  - Consequence: the algorithm’s overall efficiency is determined by the part with the larger order of growth.
  - Example: 5n² + 3n + 4 ∈ Θ(n²)
- 7. Using Limits for Comparing Orders of Growth
  - lim(n→∞) T(n)/g(n) = 0: order of growth of T(n) < order of growth of g(n)
  - lim(n→∞) T(n)/g(n) = c > 0: order of growth of T(n) = order of growth of g(n)
  - lim(n→∞) T(n)/g(n) = ∞: order of growth of T(n) > order of growth of g(n)
  - Examples: 10n vs. 2n²; n(n+1)/2 vs. n²; log_b n vs. log_c n
- 8. Summary of How to Establish Orders of Growth of an Algorithm
  - Method 1: Using limits.
  - Method 2: Using the theorem.
  - Method 3: Using the definitions of O-, Ω-, and Θ-notation.
- 9. Basic Efficiency Classes (from fast, high time efficiency, to slow, low time efficiency)
  - 1: constant
  - log n: logarithmic
  - n: linear
  - n log n: n-log-n
  - n²: quadratic
  - n³: cubic
  - 2ⁿ: exponential
  - n!: factorial
- 10. Time Efficiency Analysis of Nonrecursive Algorithms
  Steps in the mathematical analysis of nonrecursive algorithms:
  - Decide on a parameter n indicating input size
  - Identify the algorithm’s basic operation
  - Determine the worst, average, and best cases for inputs of size n
  - Set up a summation for C(n) reflecting the number of times the algorithm’s basic operation is executed
  - Simplify the summation using standard formulas (see Appendix A)
- 11. Time Efficiency Analysis of Recursive Algorithms
  - Decide on a parameter n indicating input size
  - Identify the algorithm’s basic operation
  - Determine the worst, average, and best cases for inputs of size n
  - Set up a recurrence relation and initial condition(s) for C(n), the number of times the basic operation is executed for an input of size n (alternatively, count recursive calls)
  - Solve the recurrence, or estimate the order of magnitude of the solution (see Appendix B)
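A small sketch of the recurrence setup, using recursive factorial as the example (the counter-list idiom is an illustrative choice): the basic operation is the multiplication, giving the recurrence C(n) = C(n−1) + 1 with initial condition C(0) = 0, whose solution is C(n) = n.

```python
def factorial(n, counter):
    """Recursive n!; counter[0] tallies the basic operation (multiplication)."""
    if n == 0:            # initial condition: C(0) = 0
        return 1
    counter[0] += 1       # recurrence: C(n) = C(n-1) + 1
    return n * factorial(n - 1, counter)
```

Running it confirms the closed form: for n = 5 the count is exactly 5.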
- 12. Master Theorem
  For T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^k):
  - 1. If a < b^k, then T(n) ∈ Θ(n^k)
  - 2. If a = b^k, then T(n) ∈ Θ(n^k lg n)
  - 3. If a > b^k, then T(n) ∈ Θ(n^(log_b a))
  - Note: the same results hold with O instead of Θ.
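The case analysis can be sketched as a tiny classifier; the function name and the returned notation strings are illustrative, not from the slides:

```python
def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) according to the three cases."""
    if a < b ** k:
        return f"Theta(n^{k})"
    if a == b ** k:
        return f"Theta(n^{k} log n)"
    return f"Theta(n^log_{b}({a}))"
```

For example, mergesort gives a = 2, b = 2, k = 1 (case 2, so Θ(n log n)), and binary search gives a = 1, b = 2, k = 0 (case 2, so Θ(log n)).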
- 13. Divide-and-Conquer
- 14. Three Steps of the Divide-and-Conquer Approach
  The most well-known algorithm design strategy:
  1. Divide the problem into two or more smaller subproblems.
  2. Conquer the subproblems by solving them recursively (or iteratively).
  3. Combine the solutions to the subproblems into the solution for the original problem.
- 15. Divide-and-Conquer Technique
  A problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2; the solutions to the two subproblems are combined into a solution to the original problem.
- 16. Divide-and-Conquer Examples
  - Sorting algorithms
    - Mergesort: In-place? Worst-case efficiency?
    - Quicksort: In-place? Worst-case, best-case, and average-case efficiency?
  - Binary tree algorithms
    - Definitions: What is a binary tree? A node’s/tree’s height? A node’s level?
    - Pre-order, post-order, and in-order traversal
    - Find the height; find the total number of leaves; …
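As a sketch of the divide-and-conquer pattern on the first example, a minimal mergesort (one common formulation; the slides leave the review questions open):

```python
def mergesort(a):
    """Divide-and-conquer sort: Theta(n log n) in all cases; not in place,
    since the merge step uses Theta(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2                                 # divide
    left, right = mergesort(a[:mid]), mergesort(a[mid:])  # conquer
    merged, i, j = [], 0, 0                           # combine (merge)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```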
- 17. Decrease-and-Conquer
- 18. Decrease and Conquer
  - Exploit the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem.
  - Solve the problem top down (recursively) or bottom up (iteratively).
  - Examples: a top-down (recursive) solution and a bottom-up (iterative) solution.
- 19. Examples of Decrease and Conquer
  - Decrease by one: the size of the problem is reduced by the same constant on each iteration/recursion of the algorithm.
    - Insertion sort: In-place? Worst-case, best-case, and average-case efficiency?
    - Graph search algorithms: DFS, BFS
  - Decrease by a constant factor: the size of the problem is reduced by the same constant factor on each iteration/recursion of the algorithm.
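A minimal sketch of the decrease-by-one idea on the first example, insertion sort (one common formulation answering the review questions in passing):

```python
def insertion_sort(a):
    """Decrease-by-one sort, in place: insert a[i] into the already-sorted
    prefix a[0..i-1]. Worst case Theta(n^2); best case (sorted input) Theta(n);
    average case Theta(n^2)."""
    for i in range(1, len(a)):
        v, j = a[i], i - 1
        while j >= 0 and a[j] > v:    # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v
    return a
```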
- 20. A Typical Decrease-by-One Technique
  A problem of size n is reduced to a subproblem of size n − 1; the solution to the subproblem yields a solution to the original problem.
- 21. A Typical Decrease-by-a-Constant-Factor (Half) Technique
  A problem of size n is reduced to a subproblem of size n/2; the solution to the subproblem yields a solution to the original problem.
- 22. What’s the Difference?
  Consider the problem of exponentiation: compute aⁿ.
  - Divide and conquer: aⁿ = a^(n/2) · a^(n/2)
  - Decrease by one: aⁿ = a^(n−1) · a (top down); aⁿ = a·a·a·…·a (bottom up)
  - Decrease by a constant factor: aⁿ = (a^(n/2))²
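The contrast can be sketched in code (function names are illustrative): decrease by one performs n multiplications, while decrease by a constant factor performs only Θ(log n), because each call discards half of the remaining work instead of one unit.

```python
def power_decrease_by_one(a, n):
    """Decrease by one: a^n = a^(n-1) * a, done bottom up; n multiplications."""
    result = 1
    for _ in range(n):
        result *= a
    return result

def power_decrease_by_half(a, n):
    """Decrease by a constant factor: a^n = (a^(n//2))^2, with one extra
    factor of a when n is odd; Theta(log n) multiplications."""
    if n == 0:
        return 1
    half = power_decrease_by_half(a, n // 2)
    return half * half * (a if n % 2 else 1)
```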
- 23. Depth-First Search
  - The idea: traverse “deeper” whenever possible. On reaching a dead end, the algorithm backs up one edge to the parent and tries to continue visiting unvisited vertices from there. Ties are broken by the alphabetic order of the vertices.
  - It is convenient to use a stack to trace the operation of depth-first search.
  - DFS forest/tree and the two orderings of DFS.
  - DFS can be implemented with graphs represented as:
    - Adjacency matrices: Θ(V²)
    - Adjacency linked lists: Θ(V+E)
  - Applications: topological sorting, checking connectivity, finding connected components
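A minimal recursive sketch (here the recursion stack plays the role of the explicit stack; the graph encoding as a dict of neighbor lists and the vertex names are illustrative assumptions):

```python
def dfs(graph, start, visited=None):
    """Depth-first search on an adjacency-list graph (dict: vertex -> list of
    neighbors), recording vertices in visit order; ties broken alphabetically.
    Theta(V + E) with adjacency lists."""
    if visited is None:
        visited = []
    visited.append(start)
    for w in sorted(graph[start]):   # alphabetical tie-break
        if w not in visited:
            dfs(graph, w, visited)
    return visited
```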
- 24. Breadth-First Search
  - The idea: traverse “wider” whenever possible. Discover all vertices at distance k from s (on level k) before discovering any vertices at distance k+1 (on level k+1). Similar to level-by-level tree traversals.
  - It is convenient to use a queue to trace the operation of breadth-first search.
  - BFS forest/tree and the one ordering of BFS.
  - BFS has the same efficiency as DFS and can be implemented with graphs represented as:
    - Adjacency matrices: Θ(V²)
    - Adjacency linked lists: Θ(V+E)
  - Applications: checking connectivity, finding connected components
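A minimal sketch using a queue, under the same illustrative adjacency-list encoding as above:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search: visit all vertices at distance k before any at
    distance k+1, using a FIFO queue. Theta(V + E) with adjacency lists."""
    visited, queue, seen = [], deque([start]), {start}
    while queue:
        v = queue.popleft()
        visited.append(v)
        for w in sorted(graph[v]):   # alphabetical tie-break
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return visited
```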
- 25. Heapsort
- 26. Heaps
  - Definition, representation, properties
  - Heap algorithms
    - Heap construction: top-down, bottom-up
    - Root deletion
  - Heapsort: In-place? Time efficiency?
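A sketch combining two of the listed topics: bottom-up heap construction (Θ(n)) followed by repeated root deletion gives heapsort, which is in place and Θ(n log n) in all cases. The 0-based array indexing is an implementation choice; textbook presentations often use 1-based indexing.

```python
def sift_down(h, i, n):
    """Restore the max-heap property for the subtree rooted at index i
    within h[:n] (children of i are at 2i+1 and 2i+2)."""
    while 2 * i + 1 < n:
        j = 2 * i + 1                      # left child
        if j + 1 < n and h[j + 1] > h[j]:
            j += 1                         # right child is larger
        if h[i] >= h[j]:
            break
        h[i], h[j] = h[j], h[i]
        i = j

def heapsort(a):
    """In-place heapsort: bottom-up construction, then n - 1 root deletions."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):    # bottom-up heap construction
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):        # repeated root deletion
        a[0], a[end] = a[end], a[0]        # move current max to its final slot
        sift_down(a, 0, end)
    return a
```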
- 27. Examples of Dynamic Programming Algorithms
  - Main idea: solve several smaller (overlapping) subproblems; record solutions in a table so that each subproblem is solved only once; the final state of the table will be (or contain) the solution.
  - Contrast with divide and conquer, where the subproblems do not overlap.
  - Computing binomial coefficients
  - Warshall’s algorithm for transitive closure
  - Floyd’s algorithm for all-pairs shortest paths
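A minimal sketch of the first example: binomial coefficients via the recurrence C(n, k) = C(n−1, k−1) + C(n−1, k), with the table making each overlapping subproblem solved exactly once.

```python
def binomial(n, k):
    """Dynamic programming for C(n, k): fill a (n+1) x (k+1) table row by row
    using C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1."""
    table = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                table[i][j] = 1                              # base cases
            else:
                table[i][j] = table[i-1][j-1] + table[i-1][j]  # recurrence
    return table[n][k]
```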
- 28. Greedy Algorithms
- 29. Greedy Algorithms
  - Construct a solution through a sequence of steps, each expanding the partially constructed solution obtained so far, until a complete solution to the problem is reached. The choice made at each step must be:
    - Feasible: satisfy the problem’s constraints
    - Locally optimal: be the best local choice among all feasible choices
    - Irrevocable: once made, the choice can’t be changed on subsequent steps
  - Greedy algorithms do not always yield optimal solutions.
- 30. Examples of the Greedy Strategy
  - Minimum Spanning Tree (MST)
    - Definition of spanning tree and MST
    - Prim’s algorithm
    - Kruskal’s algorithm
  - Single-source shortest paths
    - Dijkstra’s algorithm
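A compact sketch of the last example, Dijkstra's algorithm: at each step the greedy, irrevocable choice is to settle the unvisited vertex with the smallest tentative distance. The dict-of-(neighbor, weight)-lists encoding and vertex names are illustrative; this priority-queue variant assumes non-negative edge weights.

```python
import heapq

def dijkstra(graph, source):
    """Greedy single-source shortest paths: graph maps each vertex to a list
    of (neighbor, weight) pairs with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)       # greedy choice: closest frontier vertex
        if d > dist.get(v, float('inf')):
            continue                     # stale queue entry; v already settled
        for w, weight in graph.get(v, []):
            nd = d + weight
            if nd < dist.get(w, float('inf')):
                dist[w] = nd             # relax edge (v, w)
                heapq.heappush(heap, (nd, w))
    return dist
```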
- 31. P, NP, and NP-Complete Problems Tractable and intractable problems The class P The class NP The relationship between P and NP NP-complete problems 31
- 32. Backtracking and Branch-and-Bound
  - Both guarantee solving the problem exactly, but neither guarantees finding a solution in polynomial time.
  - Similarities and differences between backtracking and branch-and-bound
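A minimal backtracking sketch on the classic n-queens problem (the example problem and helper names are illustrative, not from the slides): partial placements are extended row by row, and a branch is abandoned, backing up, as soon as a constraint is violated.

```python
def n_queens(n):
    """Backtracking: extend a partial placement (one queen per row, stored as
    the column index) and prune any branch that violates a constraint.
    Exact, but worst-case exponential time."""
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))      # complete solution found
            return
        for col in range(n):
            # feasible iff no shared column and no shared diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(cols + [col])            # extend the partial solution
            # else: prune this branch and backtrack

    place([])
    return solutions
```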