
  1. Fundamentals of the Analysis of Algorithm Efficiency
  2. Introduction
     - Analysis is the separation of an intellectual or substantial whole into its constituent parts for individual study.
     - Here it means investigating an algorithm's efficiency with respect to running time and memory space.
     - An algorithm's time efficiency is principally measured as a function of its input size, by counting the number of times its basic operation is executed.
     - A basic operation is the operation that contributes the most toward the running time.
  3. Order of Growth
     - It is the rate of growth of the running time that really interests us.
     - Consider only the leading term of a formula: the lower-order terms are relatively insignificant for large n, and the leading term's constant coefficient is also ignored (constant factors are less significant than the rate of growth in determining computational efficiency for large inputs).
     - One algorithm is more efficient than another if its worst-case running time has a lower order of growth.
     - Because constant factors and lower-order terms are discarded, the estimate can be inaccurate for small inputs.
  4. Worst-Case, Best-Case & Average-Case Efficiencies
     - Running time depends not only on the input size but also on the specifics of a particular input (for example: sequential search).
     - The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n: an input of size n for which the algorithm runs the longest among all possible inputs of that size.
     - The best-case efficiency of an algorithm is its efficiency for the best-case input of size n: an input of size n for which the algorithm runs the fastest among all possible inputs of that size.
     - The average-case efficiency seeks to provide information about an algorithm's behavior on a typical or random input.
  5. Sequential Search
     Algorithm SequentialSearch(A[0..n-1], k)
     // Searches for a given value in a given array by sequential search
     // Input: An array A[0..n-1] and a search key k
     // Output: The index of the first element of A that matches k, or -1 if there are no matching elements
         i = 0
         while i < n and A[i] ≠ k do
             i = i + 1
         if i < n return i
         else return -1
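The pseudocode above can be sketched in Python (an illustration in the same spirit; the function name is my own):

```python
def sequential_search(a, k):
    """Return the index of the first element of a equal to k, or -1."""
    i = 0
    while i < len(a) and a[i] != k:   # basic operation: the key comparison a[i] != k
        i += 1
    return i if i < len(a) else -1
```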
  6. Sequential Search (contd.)
     - Worst case: no matching element, or the first matching element happens to be the last one on the list. C_worst(n) = n.
     - Best case: C_best(n) = 1.
     - Average case: to analyze the algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n.
     - The standard assumptions are that
       a. the probability of a successful search is p (0 ≤ p ≤ 1), and
       b. the probability of the first match occurring in the i-th position of the list is the same for every i.
     - For a successful search: the probability of the first match occurring in the i-th position of the list is p/n for every i.
     - For an unsuccessful search: the probability is (1 - p), and all n elements are compared.
  7. Sequential Search (contd.)
     - C_avg(n) = [1·p/n + 2·p/n + ... + i·p/n + ... + n·p/n] + n·(1 - p)
                = p/n · [1 + 2 + ... + i + ... + n] + n(1 - p)
                = p/n · n(n + 1)/2 + n(1 - p)
                = p(n + 1)/2 + n(1 - p)
     - If p = 1 (successful search), the average number of key comparisons made by sequential search is (n + 1)/2, i.e. the algorithm will inspect about half of the list's elements.
     - If p = 0 (unsuccessful search), the average number of key comparisons made by sequential search will be n.
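The closed form above can be checked numerically against the expectation it was derived from; a small Python sketch (the function name is my own):

```python
def avg_comparisons(n, p):
    """Exact expected comparisons under the slide's assumptions:
    a match at position i (1-based, probability p/n) costs i comparisons;
    an unsuccessful search (probability 1 - p) costs n comparisons."""
    return sum(i * p / n for i in range(1, n + 1)) + n * (1 - p)

# Closed form from the slide: p(n + 1)/2 + n(1 - p)
```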
  8. Asymptotic Notations
     - Used to formalize that an algorithm has running time or storage requirements that are "never more than," "always greater than," or "exactly" some amount.
     - Used to compare and rank the order of growth of an algorithm's basic operation count.
     - Three asymptotic notations: O (big oh), Ω (big omega), and Θ (big theta).
  9. O-notation (Big Oh)
     - Asymptotic upper bound.
     - For a given function g(n), we denote by O(g(n)) the set of functions:
       O(g(n)) = { f(n) : there exist positive constants c and n_0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n_0 }
     - For example: 100n + 5 ∈ O(n^2), since
       100n + 5 ≤ 100n + n = 101n ≤ 101n^2 (for all n ≥ 5)
     - By the definition, c = 101 and n_0 = 5.
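The witnesses c = 101 and n_0 = 5 can be spot-checked over a large range in Python (a numeric sanity check, not a proof; the helper name is my own):

```python
def bounded_above(f, g, c, n0, limit=10_000):
    """Check f(n) <= c * g(n) for all n in [n0, limit)."""
    return all(f(n) <= c * g(n) for n in range(n0, limit))

# f(n) = 100n + 5 against g(n) = n^2 with the slide's constants
ok = bounded_above(lambda n: 100 * n + 5, lambda n: n ** 2, c=101, n0=5)
```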
  10. Ω-notation
     - Asymptotic lower bound.
     - Ω(g(n)) represents the set of functions:
       Ω(g(n)) = { f(n) : there exist positive constants c and n_0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n_0 }
     - For example: n^3 ∈ Ω(n^2), since n^3 ≥ n^2 for all n ≥ 0.
     - By the definition, c = 1 and n_0 = 0.
  11. Θ-notation
     - Asymptotic tight bound.
     - Θ(g(n)) represents the set of functions:
       Θ(g(n)) = { f(n) : there exist positive constants c_1, c_2, and n_0 such that 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) for all n ≥ n_0 }
     - For example: ½n(n - 1) ∈ Θ(n^2).
       Upper bound: ½n(n - 1) = ½n^2 - ½n ≤ ½n^2 for all n ≥ 0.
       Lower bound: ½n^2 - ½n ≥ ½n^2 - ½n·½n = ¼n^2 for all n ≥ 2.
     - Hence we have c_1 = ¼, c_2 = ½, and n_0 = 2.
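The two-sided bound can likewise be spot-checked numerically (a sanity check under the constants derived above, not a proof):

```python
# f(n) = n(n-1)/2 sandwiched between c1*n^2 and c2*n^2
# with c1 = 1/4, c2 = 1/2, n0 = 2, as derived on the slide.
f = lambda n: n * (n - 1) / 2
c1, c2, n0 = 0.25, 0.5, 2
sandwiched = all(c1 * n ** 2 <= f(n) <= c2 * n ** 2 for n in range(n0, 10_000))
```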
  12. Mappings for n^2: the sets O(n^2), Ω(n^2), and Θ(n^2) (figure).
  13. Bounds of a Function
  14. Examples of sorting algorithms and their complexities
     - Insertion sort: O(n^2)
     - Selection sort: O(n^2)
     - Quick sort: O(n log n) on average
     - Merge sort: O(n log n)
  15. Time Efficiency Analysis
     - Example 1: C(n) = the number of times the comparison is executed
         C(n) = Σ_{i=1}^{n-1} 1 = n - 1 ∈ Θ(n)
     - Example 2:
         C_worst(n) = Σ_{i=0}^{n-2} Σ_{j=i+1}^{n-1} 1 = (n-1)^2 - (n-2)(n-1)/2 = (n-1)n/2 ≈ ½n^2 ∈ Θ(n^2)
  16. Recurrences
     - When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence equation or inequality.
     - A recurrence describes the overall running time on a problem of size n in terms of the running time on smaller inputs.
     - Special techniques are required to analyze the time and space required, for example: the iteration method, the substitution method, and the master theorem.
  17. Mathematical Analysis of Recursive Algorithms
     - Decide on a parameter indicating the input's size.
     - Identify the algorithm's basic operation.
     - Check whether the number of times the basic operation is executed can vary on different inputs of the same size.
     - Set up a recurrence relation with an appropriate initial condition.
     - Solve the recurrence, or at least ascertain the order of growth of its solution.
  18. Example 1
     - Compute the factorial function F(n) = n!
       // Computes n! recursively
       // Input: A non-negative integer n
       // Output: The value of n!
           if n = 0 return 1
           else return F(n-1) * n
     - Time efficiency: the number of multiplications (the basic operation) M(n) needed to compute F(n) must satisfy the equality
         M(n) = M(n-1) + 1  for n > 0
       (M(n-1) multiplications to compute F(n-1), plus 1 to multiply F(n-1) by n).
     - The initial condition makes the algorithm stop its recursive calls: if n = 0, return 1; hence M(0) = 0.
  19. Example 1 (contd.)
     - By the method of backward substitutions, we have
         M(n) = M(n-1) + 1                         substitute M(n-1) = M(n-2) + 1
              = [M(n-2) + 1] + 1 = M(n-2) + 2      substitute M(n-2) = M(n-3) + 1
              = [M(n-3) + 1] + 1 = M(n-3) + 3
              ...
              = M(n-i) + i = ... = M(n-n) + n
              = M(0) + n,  but M(0) = 0
              = n
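The result M(n) = n can be confirmed by instrumenting a recursive factorial; a Python sketch (the mutable counter argument is my own device for illustration):

```python
def factorial(n, count):
    """Recursive n!; count[0] accumulates the number of multiplications."""
    if n == 0:
        return 1
    count[0] += 1                      # one multiplication per recursive step
    return factorial(n - 1, count) * n
```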
  20. Example 2
  21. Example 2 (contd.)
     - Compute time efficiency:
  22. Example 3
     - Analysis of merge sort.
     - We set up the recurrence for T(n), the worst-case running time for merge sort on n numbers. Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows:
     - Divide: the divide step just computes the middle of the subarray, which takes constant time. Thus D(n) = Θ(1).
     - Conquer: we recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.
     - Combine: the merge procedure on an n-element subarray takes time Θ(n), so C(n) = Θ(n).
  23. Example 3 (contd.)
     - The worst-case running time T(n) can be described by the recurrence:
         T(n) = Θ(1)              if n = 1
         T(n) = 2T(n/2) + Θ(n)    if n > 1          ... (1.0)
       whose solution is claimed to be T(n) = Θ(n lg n).
     - The recurrence (1.0) can be rewritten as:
         T(n) = c                 if n = 1
         T(n) = 2T(n/2) + cn      if n > 1          ... (1.1)
       where the constant c represents the time required to solve problems of size 1, as well as the time per array element of the divide and combine steps.
  24. Example 3 (contd.)
     - Recursion tree for (1.1): (a) the root T(n) becomes cost cn with children T(n/2), T(n/2); (b) each n/2 node becomes cost cn/2 with children T(n/4); (c) the fully expanded tree (figure).
  25. Example 3 (contd.)
     - We continue expanding each node in the tree, breaking it into its constituent parts as determined by the recurrence, until the problem sizes get down to 1, each with a cost of c.
     - We add the costs across each level of the tree. The top level has total cost cn, the next level down has total cost c(n/2) + c(n/2) = cn, the level after that has total cost c(n/4) + c(n/4) + c(n/4) + c(n/4) = cn, and so on.
     - In general, level i below the top has 2^i nodes, each contributing a cost of c(n/2^i), so the i-th level below the top has total cost 2^i · c(n/2^i) = cn.
  26. Example 3 (contd.)
     - At the bottom level there are n nodes, each contributing a cost of c, for a total cost of cn (2^i = 2^{lg n} = n^{lg 2} = n nodes).
     - The longest path from the root to a leaf is n → n/2 → n/4 → ... → 1.
     - Since (1/2)^k · n = 1 when k = lg n, the height of the tree is lg n, and the total number of levels of the recursion tree is lg n + 1.
     - To compute the total cost represented by the recurrence (1.1), we simply add the costs of all levels. There are lg n + 1 levels, each costing cn, for a total of cn(lg n + 1) = cn lg n + cn.
     - Ignoring the low-order term and the constant c gives the desired result of Θ(n lg n).
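The level-by-level accounting can be observed directly by counting, in a merge sort implementation, one unit of work per element written during a merge; for n a power of two the total comes out to exactly n·lg n. A Python sketch (the cost-counting device is my own):

```python
def merge_sort(a, cost):
    """Sort a; cost[0] accumulates one unit per element written by a merge,
    i.e. the cn work done at each level of the recursion tree."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid], cost)
    right = merge_sort(a[mid:], cost)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    cost[0] += len(merged)        # this merge touches len(merged) elements
    return merged
```

For n = 8 the merges cost 4·2 + 2·4 + 1·8 = 24 = 8·lg 8 units, matching the cn-per-level argument.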
  27. Randomized Algorithms
     - A randomized algorithm does not require that the intermediate results of each step of execution be uniquely defined, depending only on the inputs and the results of the preceding steps.
     - It makes random choices, and these choices are made by a random number generator.
     - When a random number generator is called, it computes a number and returns its value.
     - When a sequence of calls is made to a random number generator, the sequence of numbers returned is random.
     - In practice, a pseudorandom number generator is used: an algorithm that produces numbers that merely appear random.
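The last point can be illustrated with Python's standard pseudorandom generator: two generators given the same seed produce the same "random" sequence, because the generator is itself a deterministic algorithm.

```python
import random

rng1 = random.Random(42)   # two independently seeded pseudorandom generators
rng2 = random.Random(42)
seq1 = [rng1.randint(0, 99) for _ in range(5)]
seq2 = [rng2.randint(0, 99) for _ in range(5)]
# same seed -> identical sequences
```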
  28. Algorithm Visualization
     - The use of images to convey useful information about algorithms.
     - Two principal variations:
       - Static algorithm visualization
       - Dynamic algorithm visualization (animation)
     - Desirable qualities: consistency, interactivity, clarity and conciseness, adaptability, user-friendliness, etc.
  29. Assignment
     1. Use the most appropriate notation among O, Ω, and Θ to indicate the time efficiency class of sequential search
        a. in the worst case
        b. in the best case
        c. in the average case
     2. Use the definitions of O, Ω, and Θ to determine whether the following assertions are true or false.
        a. n(n+1)/2 ∈ O(n^3)    b. n(n+1) ∈ O(n^2)
        c. n(n+1)/2 ∈ Θ(n^3)    d. n(n+1) ∈ Ω(n)
     3. Argue that the solution to the recurrence T(n) = T(n/3) + T(2n/3) + cn, where c is a constant, is O(n lg n) by appealing to a recursion tree.
  30. Thank You!
