Iteration, induction, and recursion



  1. Republic of Yemen, THAMAR UNIVERSITY, Faculty of Computer Science & Information Systems
     By Eng. Mohammed Hussein, Lecturer and Researcher at Thamar University
  2. Outlines
      1. Analysis of running time
      2. Iteration, induction, and recursion
  3. Analysis of running time
      An important criterion for the “goodness” of an algorithm is how long it takes to run on inputs of various sizes (its “running time”).
      When the algorithm involves recursion, we use a formula called a recurrence equation, which is an inductive definition that predicts how long the algorithm takes to run on inputs of different sizes.
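As an illustrative sketch (not from the slides), a recurrence equation for a recursive program that does constant work and makes one recursive call on an input one smaller, such as a recursive factorial, looks like this; a and b are assumed constants:

```latex
% Basis: constant time on the smallest input.
T(1) = a
% Inductive step: one recursive call on size n-1 plus constant work.
T(n) = T(n-1) + b \quad (n > 1)
% Unrolling the recurrence gives T(n) = a + (n-1)b, so T(n) is O(n).
```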
  4. Iteration, induction, and recursion
      Iteration, induction, and recursion are fundamental concepts that appear in many forms in data models, data structures, and algorithms.
      Iterative techniques: the simplest way to perform a sequence of operations repeatedly is to use an iterative construct such as the for-statement of C and C++.
      Recursive programs, which call themselves either directly or indirectly, can be simpler to write, analyze, and understand than their iterative counterparts in C and C++.
  5. Proofs of program correctness
      In computer science, we often wish to prove, formally or informally, that a statement F(n) about a program is true.
      The statement F(n) might, for example, describe what is true on the nth iteration of some loop or what is true for the nth recursive call to some function.
      Iteration: every beginning programmer learns to use iteration, employing some kind of looping construct such as the for- or while-statement of C.
      An example of an iterative sorting algorithm is selection sort.
  6. Inductive definitions
      An inductive definition consists of a basis and an inductive step.
      Many important concepts of computer science, especially those involving data models, are best defined by an induction in which we give a basis rule defining the simplest example or examples of the concept, and an inductive rule or rules, where we build larger instances of the concept from smaller ones.
  7. Notation: The Summation
      The Greek capital letter sigma is often used to denote a summation, as in Σ (i = 1 to n) i.
      This particular expression represents the sum of the integers from 1 to n; that is, it stands for the sum 1 + 2 + 3 + · · · + n.
      More generally, we can sum any function f(i) of the summation index i.
  8. Induction
      Induction rules:
      Basis: show F(0). The basis could instead be F(1), which here means 1 = 1 × 2/2.
      Hypothesis: assume F(k) holds for arbitrary k ≤ n.
      Step: show that F(n+1) follows.
      For example, the statement 1 + 2 + · · · + n = n(n+1)/2 can be proved true for all n ≥ 1 by an induction on n.
  9. Induction Example: Gaussian Closed Form
      Prove 1 + 2 + 3 + … + n = n(n+1)/2.
      Basis: if n = 0, then 0 = 0(0+1)/2.
      Inductive hypothesis: assume 1 + 2 + 3 + … + n = n(n+1)/2.
      Step (show true for n+1):
        1 + 2 + … + n + (n+1) = (1 + 2 + … + n) + (n+1)
                              = n(n+1)/2 + (n+1)
                              = [n(n+1) + 2(n+1)]/2
                              = (n+1)(n+2)/2
                              = (n+1)((n+1) + 1)/2
  10. A Template for All Inductions
      1. Specify the statement F(n) to be proved, for n ≥ i0. Specify what i0 is; often it is 0 or 1, but i0 could be any integer. Explain intuitively what n represents.
      2. State the basis case(s). These will be all the integers from i0 up to some integer j0. Often j0 = i0, but j0 could be larger.
      3. Prove each of the basis cases F(i0), F(i0 + 1), . . . , F(j0).
      4. Set up the inductive step by stating that you are assuming F(i0), F(i0 + 1), . . . , F(n) (the “inductive hypothesis”) and that you want to prove F(n + 1). State that you are assuming n ≥ j0; that is, n is at least as great as the highest basis case. Express F(n + 1) by substituting n + 1 for n in the statement F(n).
      5. Prove F(n + 1) under the assumptions mentioned in (4). If the induction is a weak, rather than complete, induction, then only F(n) will be used in the proof, but you are free to use any or all of the statements of the inductive hypothesis.
      6. Conclude that F(n) is true for all n ≥ i0 (but not necessarily for smaller n).
  11. Basic Recursion
      Base case: a value for which the function can be evaluated without recursion.
      Two fundamental rules:
      There must always be a base case.
      Each recursive call must be to a case that eventually leads toward a base case.
  12. Example Recursion (1/2)
      Problem: write an algorithm that will strip digits from an integer and print them out one by one.

void print_out(int n)
{
    if (n < 10)
        print_digit(n);      /* outputs a single digit to the terminal */
    else {
        print_out(n / 10);   /* recursively print the quotient */
        print_digit(n % 10); /* print the remainder (last digit) */
    }
}
  13. Example Recursion (2/2)
      Prove by induction that the recursive printing program works:
      Basis: if n has one digit, then the program is correct.
      Hypothesis: print_out works for all numbers of k or fewer digits.
      Case k+1: a (k+1)-digit number can be written as its first k digits followed by the least significant digit. The number expressed by the first k digits is exactly floor(n/10), which by hypothesis prints correctly; the last digit is n % 10; so the (k+1)-digit number is printed correctly.
      By induction, all numbers are correctly printed.
  14. Recursive programs
      Recursive programs are often more succinct or easier to understand than their iterative counterparts. More importantly, some problems are more easily solved by recursive programs than by iterative programs.
      A recursive function that implements a recursive definition will have a basis part and an inductive part.
      Frequently, the basis part checks for a simple kind of input that can be solved by the basis of the definition, with no recursive call needed.
      The inductive part of the function requires one or more recursive calls to itself and implements the inductive part of the definition.
  15. Recursion
      We don't need to know how recursion is being managed.
      Recursion is expensive in terms of space requirements; we avoid recursion if a simple loop will do.
      Last two rules:
      Assume all recursive calls work.
      Do not duplicate work by solving the identical problem in separate recursive calls.
      Evaluate fib(4) using a recursion tree, where fib(n) = fib(n-1) + fib(n-2).
  16. Arithmetic expressions
      Basis: a variable (rule 1) or a number (rule 2) is an arithmetic expression.
      Induction: if E1 and E2 are arithmetic expressions, then the following are also arithmetic expressions:
      1. (E1 + E2)
      2. (E1 − E2)
      3. (E1 × E2)
      4. (E1 / E2)
      5. If E is an arithmetic expression, then so is (−E).
      The operators +, −, ×, and / are said to be binary operators, because they take two arguments.
      Example derivation:
      i.   x              Basis rule (1)
      ii.  10             Basis rule (2)
      iii. (x + 10)       Recursive rule (1) on (i) and (ii)
      iv.  (−(x + 10))    Recursive rule (5) on (iii)
      v.   y              Basis rule (1)
      vi.  (y × (−(x + 10)))  Recursive rule (3) on (v) and (iv)
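The inductive definition maps directly onto a recursive data structure and evaluator. The following is a supplementary sketch (not from the slides) using a tagged union in C; the type and helper names are illustrative:

```c
#include <assert.h>
#include <stdlib.h>

/* An expression is either a number (basis) or an operator
   applied to subexpressions (induction). */
typedef enum { NUM, ADD, SUB, MUL, DIV, NEG } Kind;

typedef struct Expr {
    Kind kind;
    double value;              /* used when kind == NUM */
    struct Expr *left, *right; /* right is NULL for NEG */
} Expr;

static Expr *make(Kind k, double v, Expr *l, Expr *r) {
    Expr *e = malloc(sizeof(Expr));
    e->kind = k; e->value = v; e->left = l; e->right = r;
    return e;
}

/* Recursive evaluator: the basis handles NUM; the inductive
   part evaluates subexpressions and combines the results. */
double eval(const Expr *e) {
    switch (e->kind) {
    case NUM: return e->value; /* basis */
    case ADD: return eval(e->left) + eval(e->right);
    case SUB: return eval(e->left) - eval(e->right);
    case MUL: return eval(e->left) * eval(e->right);
    case DIV: return eval(e->left) / eval(e->right);
    case NEG: return -eval(e->left);
    }
    return 0;
}
```

For example, the slide's expression (y × (−(x + 10))) with x = 2 and y = 3 evaluates to 3 × −12 = −36.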
  17. Recursive example
      A recursive function that computes n!, given a positive integer n.
      This function is a direct transcription of the recursive definition of n!.
      Line (1) distinguishes the basis case from the inductive case.
      We assume that n ≥ 1, so the test of line (1) is really asking whether n = 1.
      If so, we apply the basis rule, 1! = 1, at line (2).
      If n > 1, then we apply the inductive rule, n! = n × (n − 1)!, at line (3).

int fact(int n)
{
(1)    if (n <= 1)
(2)        return 1;              /* basis */
       else
(3)        return n * fact(n-1);  /* induction */
}
  18. Euclid’s Algorithm - gcd
      Euclid’s algorithm is based on the fact that if u is greater than v, then the greatest common divisor of u and v is the same as the greatest common divisor of v and u % v.
      This description explains how to compute the greatest common divisor of two numbers by computing the greatest common divisor of two smaller numbers.
      We can implement this method directly in C++ simply by having the gcd function call itself with smaller arguments:

int gcd(int u, int v)
{
    if (v == 0)
        return u;
    else
        return gcd(v, u % v);
}

      Example call: cout << gcd(461952, 116298);
  19. Common mistakes in recursion
      One shouldn’t make a recursive call for a larger problem, since that might lead to a loop in which the program attempts to solve larger and larger problems.
      Not all programming environments support a general-purpose recursion facility, because of the intrinsic difficulties involved.
      When recursion is provided and used, it can be a source of unacceptable inefficiency.
  20. Sorting algorithms
      In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly.
      More formally, the output must satisfy two conditions:
      1. The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
      2. The output is a permutation (reordering) of the input.
  21. Popular sorting algorithms
      Bubble sort  Selection sort  Insertion sort  Shell sort  Comb sort  Merge sort  Heap sort  Quick sort  Counting sort  Bucket sort  Radix sort  Distribution sort  Tim sort
  22. Sorting algorithms
  23. Sorting algorithms classified by (1/2):
      Computational complexity (worst, average, and best behavior) of element comparisons in terms of the size of the list (n). For typical sorting algorithms, good behavior is O(n log n) and bad behavior is O(n²). Ideal behavior for a sort is O(n), but this is not possible in the average case. Comparison-based sorting algorithms, which evaluate the elements of the list via an abstract key comparison operation, need at least O(n log n) comparisons for most inputs.
      Memory usage (and use of other computer resources). Some sorting algorithms sort in place, needing only O(1) memory; sometimes O(log n) additional memory is considered “in place”.
      Recursion. Some algorithms are either recursive or non-recursive, while others may be both (e.g., merge sort).
  24. Sorting algorithms classified by (2/2):
      Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values).
      Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
      General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quick sort. Selection sorts include shaker sort and heap sort.
      Adaptability: whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known to be adaptive.
  25. Comparison sort algorithms

      Algorithm name    Method
      Selection sort    Selection
      Insertion sort    Insertion
      Merge sort        Merging
      Tim sort          Insertion & Merging
      Quick sort        Partitioning
      Heap sort         Selection
      Binary tree sort  Insertion
      Bubble sort       Exchanging
      Strand sort       Selection
  26. Sorting
      Sorting is a fundamental operation in computer science (many programs use it as an intermediate step), and as a result a large number of good sorting algorithms have been developed.
      Sorting: to sort a list of n elements, we need to permute the elements of the list so that they appear in nondecreasing order.
      3, 1, 4, 1, 5, 9, 2, 6, 5  →  1, 1, 2, 3, 4, 5, 5, 6, 9
      Thus, the sorted array has two 1’s, two 5’s, and one each of the numbers that appear once in the original array.
  27. Sorting problem
      Here is how we formally define the sorting problem:
      Input: a sequence of n numbers a1, a2, . . . , an.
      Output: a permutation (reordering) a1′, a2′, . . . , an′ of the input sequence such that a1′ ≤ a2′ ≤ · · · ≤ an′.
  28. Selection Sort Algorithm
      Suppose we have an array A of n integers that we wish to sort into nondecreasing order.
      We may do so by iterating a step in which a smallest element not yet part of the sorted portion of the array is found and exchanged with the element in the first position of the unsorted part of the array.
      In the first iteration, we find (“select”) a smallest element among the values found in the full array A[0..n-1] and exchange it with A[0].
      In the second iteration, we find a smallest element in A[1..n-1] and exchange it with A[1].
      We continue these iterations. At the start of the (i+1)st iteration, A[0..i-1] contains the i smallest elements in A sorted in nondecreasing order, and the remaining elements of the array are in no particular order.
  29. Selection Sort Algorithm
      The idea of the algorithm is quite simple.
      The array is imaginarily divided into two parts - a sorted one and an unsorted one.
      At the beginning, the sorted part is empty, while the unsorted one contains the whole array.
      At every step, the algorithm finds a minimal element in the unsorted part and adds it to the end of the sorted one.
      When the unsorted part becomes empty, the algorithm stops.
  30. Selection Sort Algorithm
      Lines (2) through (5) select a smallest element in the unsorted part of the array, A[i..n-1]. We begin by setting small to i in line (2).
      We set small to the index of the smallest element in A[i..n-1] via the for-loop of lines (3) through (5). small is set to j if A[j] has a smaller value than any of the array elements in the range A[i..j-1].
      In lines (6) to (8), we exchange the element in that position with the element in A[i].
      Notice that in order to swap two elements, we need a temporary place to store one of them. Thus, we move the value in A[small] to temp at line (6), move the value in A[i] to A[small] at line (7), and finally move the value originally in A[small] from temp to A[i] at line (8).
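The numbered lines the slide refers to belong to a SelectionSort function not reproduced in this transcript; the following is a hedged reconstruction consistent with the description (line numbers in the comments match the references above):

```c
#include <assert.h>

/* Selection sort on A[0..n-1], reconstructed to match the
   line numbers (1)-(8) referenced in the slide's description. */
void SelectionSort(int A[], int n)
{
    for (int i = 0; i < n - 1; i++) {   /* (1) outer loop over positions  */
        int small = i;                  /* (2) candidate index of smallest */
        for (int j = i + 1; j < n; j++) /* (3) scan unsorted part A[i..n-1] */
            if (A[j] < A[small])        /* (4) found a smaller element?   */
                small = j;              /* (5) remember its index         */
        int temp = A[small];            /* (6) save A[small]              */
        A[small] = A[i];                /* (7) move A[i] into its place   */
        A[i] = temp;                    /* (8) complete the swap          */
    }
}
```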
  31. Proving Properties of Programs
      The loop invariants of a program are often the most useful short explanation one can give of how the program works.
      So, the programmer should have a loop invariant in mind while writing a piece of code.
      That is, there must be a reason why a program works, and this reason often has to do with an inductive hypothesis that holds each time the program goes around a loop or each time it performs a recursive call.
      We shall see a technique for explaining what an iterative program does as it goes around a loop.
      The key to proving a property of a loop in a program is selecting a loop invariant, or inductive assertion.
  32. The inner loop of Selection Sort
      1. First, we need to initialize small to i, as we do in line (2).
      2. At the beginning of the for-loop of line (3), we need to initialize j to i + 1.
      3. Then, we need to test whether j < n.
      4. If so, we execute the body of the loop, which consists of lines (4) and (5).
      5. At the end of the body, we need to increment j and go back to the test.
      The inductive assertion is a statement S that is true each time we enter a particular point in the loop. The statement S is then proved by induction on a parameter that in some way measures the number of times we have gone around the loop.
  33. The inner loop of Selection Sort
      We see a point just before the test that is labeled by a loop-invariant statement we have called S(k).
      The first time we reach the test, j has the value i + 1 and small has the value i.
      The second time we reach the test, j has the value i + 2, because j has been incremented once. Because the body (lines 4 and 5) sets small to i + 1 if A[i + 1] is less than A[i], we see that small is the index of whichever of A[i] and A[i + 1] is smaller.
      Similarly, the third time we reach the test, the value of j is i + 3 and small is the index of the smallest of A[i..i+2].
      S(k): if we reach the test for j < n in the for-statement of line (3) with k as the value of the loop index j, then the value of small is the index of the smallest of A[i..k-1].
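The invariant S(k) can also be checked mechanically. This supplementary sketch (not on the slide; the function name is illustrative) instruments the inner loop with an assertion at the test point:

```c
#include <assert.h>

/* Returns the index of the smallest element of A[i..n-1],
   asserting the loop invariant S(j) each time the test j < n
   is reached: small indexes the smallest of A[i..j-1]. */
int select_smallest(const int A[], int i, int n)
{
    int small = i;
    for (int j = i + 1; j < n; j++) {
        /* S(j): small is the index of the smallest of A[i..j-1]. */
        for (int m = i; m < j; m++)
            assert(A[small] <= A[m]);
        if (A[j] < A[small])
            small = j;
    }
    return small;
}
```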
  34. What is Algorithm Analysis?
      How to estimate the time required for an algorithm.
      Techniques that drastically reduce the running time of an algorithm.
      A mathematical framework that more rigorously describes the running time of an algorithm.
  35. Input Size
      Time and space complexity is generally a function of the input size, e.g., in sorting or multiplication.
      How we characterize input size depends on the problem:
      Sorting: number of input items.
      Multiplication: total number of bits.
      Graph algorithms: number of nodes and edges.
      Etc.
  36. Running Time
      Number of primitive steps that are executed.
      Except for the time of executing a function call, most statements roughly require the same amount of time:
      y = m * x + b
      c = 5 / 9 * (t - 32)
      z = f(x) + g(y)
      We can be more exact if need be.
  37. Analysis
      Worst case: provides an upper bound on running time; an absolute guarantee.
      Average case: provides the expected running time. Very useful, but treat with care: what is “average”?
      Random (equally likely) inputs.
      Real-life inputs.
  38. Running time for small inputs
  39. Function of growth rate