# Data Structures - Lecture 8 - Study Notes

*Published Nov 25, 2012*


# CHAPTER 4: Algorithm Analysis

Algorithms are designed to solve problems, but a given problem can have many different solutions. How then are we to determine which solution is the most efficient for a given problem? One approach is to measure the execution time. We can implement the solution by constructing a computer program, using a given programming language. We then execute the program and time it using a wall clock or the computer's internal clock.

The execution time is dependent on several factors. First, the amount of data that must be processed directly affects the execution time. As the data set size increases, so does the execution time. Second, the execution times can vary depending on the type of hardware and the time of day a computer is used. If we use a multi-process, multi-user system to execute the program, the execution of other programs on the same machine can directly affect the execution time of our program. Finally, the choice of programming language and compiler used to implement an algorithm can also influence the execution time. Some compilers are better optimizers than others and some languages produce better optimized code than others. Thus, we need a method to analyze an algorithm's efficiency independent of the implementation details.

## 4.1 Complexity Analysis

To determine the efficiency of an algorithm, we can examine the solution itself and measure those aspects of the algorithm that most critically affect its execution time. For example, we can count the number of logical comparisons, data interchanges, or arithmetic operations. Consider the following algorithm for computing the sum of each row of an n × n matrix and an overall sum of the entire matrix:

```python
totalSum = 0                                 # Version 1
for i in range( n ) :
    rowSum[i] = 0
    for j in range( n ) :
        rowSum[i] = rowSum[i] + matrix[i,j]
        totalSum = totalSum + matrix[i,j]
```
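Later sections quote T₁(n) = 2n² for version one and T₂(n) = n² + n for a second version that accumulates the total from the row sums. Both can be sketched as runnable Python, assuming the matrix is a plain list of lists (an assumption; the text's matrix[i,j] notation implies a dedicated 2-D array type):

```python
def matrix_sums_v1(matrix, n):
    # Version 1: roughly 2n^2 operations, two additions per cell.
    totalSum = 0
    rowSum = [0] * n
    for i in range(n):
        for j in range(n):
            rowSum[i] = rowSum[i] + matrix[i][j]
            totalSum = totalSum + matrix[i][j]
    return rowSum, totalSum

def matrix_sums_v2(matrix, n):
    # Version 2: roughly n^2 + n operations, one addition per cell
    # plus n additions to fold the row sums into the total.
    totalSum = 0
    rowSum = [0] * n
    for i in range(n):
        for j in range(n):
            rowSum[i] = rowSum[i] + matrix[i][j]
        totalSum = totalSum + rowSum[i]
    return rowSum, totalSum
```

Both versions produce the same sums; they differ only in how many operations they perform.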
*Figure 4.1: Graphical comparison of the growth rates from Table 4.1.*

### 4.1.1 Big-O Notation

Instead of counting the precise number of operations or steps, computer scientists are more interested in classifying an algorithm based on the order of magnitude as applied to execution time or space requirements. This classification approximates the actual number of required steps for execution or the actual storage requirements in terms of variable-sized data sets. The term big-O, which is derived from the expression "on the order of," is used to specify an algorithm's classification.

**Defining Big-O.** Assume we have a function T(n) that represents the approximate number of steps required by an algorithm for an input of size n. For the second version of our algorithm in the previous section, this would be written as

T₂(n) = n² + n

Now, suppose there exists a function f(n) defined for the integers n ≥ 0, such that for some constant c, and some constant m, T(n) ≤ c·f(n) for all sufficiently large values of n ≥ m. Then, such an algorithm is said to have a time-complexity of, or executes on the order of, f(n) relative to the number of operations it requires. In other words, there is a positive integer m and a constant c (the constant of proportionality) such that for all n ≥ m, T(n) ≤ c·f(n).
The function f(n) indicates the rate of growth at which the run time of an algorithm increases as the input size, n, increases. To specify the time-complexity of an algorithm that runs on the order of f(n), we use the notation O(f(n)).

Consider the two versions of our algorithm from earlier. For version one, the time was computed to be T₁(n) = 2n². If we let c = 2, then 2n² ≤ 2n² for a result of O(n²). For version two, we computed a time of T₂(n) = n² + n. Again, if we let c = 2, then n² + n ≤ 2n² for a result of O(n²). In this case, the choice of c comes from the observation that when n ≥ 1, we have n ≤ n² and n² + n ≤ n² + n², which satisfies the equation in the definition of big-O. The function f(n) = n² is not the only choice for satisfying the condition T(n) ≤ c·f(n). We could have said the algorithms had a run time of O(n³) or O(n⁴) since 2n² ≤ n³ and 2n² ≤ n⁴ when n > 1. The objective, however, is to find a function f(·) that provides the tightest (lowest) upper bound or limit for the run time of an algorithm. The big-O notation is intended to indicate an algorithm's efficiency for large values of n. There is usually little difference in the execution times of algorithms when n is small.

**Constant of Proportionality.** The constant of proportionality is only crucial when two algorithms have the same f(n). It usually makes no difference when comparing algorithms whose growth rates are of different magnitudes. Suppose we have two algorithms, L1 and L2, with run times equal to n² and 2n respectively. L1 has a time-complexity of O(n²) with c = 1 and L2 has a time of O(n) with c = 2. Even though L1 has a smaller constant of proportionality, L1 is still slower and, in fact, an order of magnitude slower, for large values of n. Thus, f(n) dominates the expression c·f(n) and the run time performance of the algorithm.
The differences between the run times of these two algorithms are shown numerically in Table 4.2 and graphically in Figure 4.2.

| n | n² | 2n |
|--:|---:|---:|
| 10 | 100 | 20 |
| 100 | 10,000 | 200 |
| 1,000 | 1,000,000 | 2,000 |
| 10,000 | 100,000,000 | 20,000 |
| 100,000 | 10,000,000,000 | 200,000 |

*Table 4.2: Numerical comparison of two sample algorithms.*

*Figure 4.2: Graphical comparison of the data from Table 4.2.*

**Constructing T(n).** Instead of counting the number of logical comparisons or arithmetic operations, we evaluate an algorithm by considering every operation. For simplicity, we assume that each basic operation or statement, at the abstract level, takes the same amount of time and, thus, each is assumed to cost constant time. The total number of operations required by an algorithm can be computed as a sum of the times required to perform each step:

T(n) = f₁(n) + f₂(n) + … + fₖ(n)

The steps requiring constant time are generally omitted since they eventually become part of the constant of proportionality. Consider Figure 4.3(a), which shows a markup of version one of the algorithm from earlier. The basic operations are marked with a constant time while the loops are marked with the appropriate total number of iterations. Figure 4.3(b) shows the same algorithm but with the constant steps omitted since these operations are independent of the data set size.
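The growth rates in Table 4.2 can be reproduced with a short loop; a minimal sketch, not from the text:

```python
def growth_row(n):
    # One row of Table 4.2: the input size, n squared, and 2n.
    return (n, n ** 2, 2 * n)

for n in (10, 100, 1000, 10000, 100000):
    print("{:>7,} {:>15,} {:>9,}".format(*growth_row(n)))
```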
```
(a)  1    totalSum = 0
     n    for i in range( n ) :
     1        rowSum[i] = 0
     n        for j in range( n ) :
     1            rowSum[i] = rowSum[i] + matrix[i,j]
     1            totalSum = totalSum + matrix[i,j]

(b)  n    for i in range( n ) :
              ...
     n        for j in range( n ) :
              ...
```

*Figure 4.3: Markup for version one of the matrix summing algorithm: (a) shows all operations marked with the appropriate time and (b) shows only the non-constant time steps.*

**Choosing the Function.** The function f(n) used to categorize a particular algorithm is chosen to be the dominant term within T(n). That is, the term that is so large for big values of n that we can ignore the other terms when computing a big-O value. For example, in the expression

n² + log₂ n + 3n

the term n² dominates the other terms since, for n ≥ 3, we have

n² + log₂ n + 3n ≤ n² + n² + n²
n² + log₂ n + 3n ≤ 3n²

which leads to a time-complexity of O(n²). Now, consider the function T(n) = 2n² + 15n + 500 and assume it is the polynomial that represents the exact number of instructions required to execute some algorithm. For small values of n (less than 16), the constant value 500 dominates the function, but what happens as n gets larger, say 100,000? The term n² becomes the dominant term, with the other two becoming less significant in computing the final result.

**Classes of Algorithms.** We will work with many different algorithms in this text, but most will have a time-complexity selected from among a common set of functions, which are listed in Table 4.3 and illustrated graphically in Figure 4.4. Algorithms can be classified based on their big-O function. The various classes are commonly named based upon the dominant term. A logarithmic algorithm is
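The shift in dominance within T(n) = 2n² + 15n + 500 can be checked directly; a quick sketch, not from the text:

```python
def terms(n):
    # The three terms of T(n) = 2n^2 + 15n + 500, evaluated at n.
    return (2 * n ** 2, 15 * n, 500)

for n in (10, 100_000):
    quad, lin, const = terms(n)
    total = quad + lin + const
    # Print each term's share of the total.
    print(n, quad / total, lin / total, const / total)
```

At n = 10 the constant 500 is the largest of the three terms, while at n = 100,000 the 2n² term accounts for more than 99.99% of the total.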
| f(·) | Common Name |
|------|-------------|
| 1 | constant |
| log n | logarithmic |
| n | linear |
| n log n | log linear |
| n² | quadratic |
| n³ | cubic |
| aⁿ | exponential |

*Table 4.3: Common big-O functions listed from smallest to largest order of magnitude.*

*Figure 4.4: Growth rates of the common time-complexity functions.*

any algorithm whose time-complexity is O(logₐ n). These algorithms are generally very efficient since logₐ n will increase more slowly than n. For many problems encountered in computer science, a will typically equal 2, and thus we use the notation log n to imply log₂ n. Logarithms of other bases will be explicitly stated. Polynomial algorithms, with an efficiency expressed as a polynomial of the form

aₘnᵐ + aₘ₋₁nᵐ⁻¹ + … + a₂n² + a₁n + a₀
are characterized by a time-complexity of O(nᵐ) since the dominant term is the highest power of n. The most common polynomial algorithms are linear (m = 1), quadratic (m = 2), and cubic (m = 3). An algorithm whose efficiency is characterized by a dominant term of the form aⁿ is called exponential. Exponential algorithms are among the worst algorithms in terms of time-complexity.

### 4.1.2 Evaluating Python Code

As indicated earlier, when evaluating the time complexity of an algorithm or code segment, we assume that basic operations only require constant time. But what exactly is a basic operation? The basic operations include statements and function calls whose execution time does not depend on the specific values of the data that is used or manipulated by the given instruction. For example, the assignment statement

```python
x = 5
```

is a basic instruction since the time required to assign a reference to the given variable is independent of the value or type of object specified on the righthand side of the = sign. The evaluations of arithmetic and logical expressions

```python
y = x
z = x + y * 6
done = x > 0 and x < 100
```

are basic instructions, again since they require the same number of steps to perform the given operations regardless of the values of their operands. The subscript operator, when used with Python's sequence types (strings, tuples, and lists), is also a basic instruction.

**Linear Time Examples.** Now, consider the following assignment statement:

```python
y = ex1(n)
```

An assignment statement only requires constant time, but that is the time required to perform the actual assignment and does not include the time required to execute any function calls used on the righthand side of the assignment statement. To determine the run time of the previous statement, we must know the cost of the function call ex1(n). The time required by a function call is the time it takes to execute the given function.
For example, consider the ex1() function, which computes the sum of the integer values in the range [0 … n):

```python
def ex1( n ):
    total = 0
    for i in range( n ) :
        total += i
    return total
```
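As a quick sanity check (not part of the text), ex1(n) sums 0 + 1 + … + (n−1), which has the closed form n(n−1)/2; the function is restated here so the snippet runs on its own:

```python
def ex1(n):
    # Sum the integers in the range [0 .. n), as in the text.
    total = 0
    for i in range(n):
        total += i
    return total

# The loop's result matches the closed form n(n-1)/2.
for n in (0, 1, 10, 100):
    assert ex1(n) == n * (n - 1) // 2

print(ex1(10))  # → 45
```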
> **NOTE: Efficiency of String Operations.** Most of the string operations have a time-complexity that is proportional to the length of the string. For most problems that do not involve string processing, string operations seldom have an impact on the run time of an algorithm. Thus, in the text, we assume the string operations, including the use of the print() function, only require constant time, unless explicitly stated otherwise.

The time required to execute a loop depends on the number of iterations performed and the time needed to execute the loop body during each iteration. In this case, the loop will be executed n times and the loop body only requires constant time since it contains a single basic instruction. (Note that the underlying mechanisms of the for loop and the range() function are both O(1).) We can compute the time required by the loop as T(n) = n · 1 for a result of O(n).

But what about the other statements in the function? The first line of the function and the return statement only require constant time. Remember, it's common to omit the steps that only require constant time and instead focus on the critical operations, those that contribute to the overall time. In most instances, this means we can limit our evaluation to repetition and selection statements and function and method calls since those have the greatest impact on the overall time of an algorithm. Since the loop is the only non-constant step, the function ex1() has a run time of O(n). That means the statement y = ex1(n) from earlier requires linear time.

Next, consider the following function, which includes two for loops:

```python
def ex2( n ):
    count = 0
    for i in range( n ) :
        count += 1
    for j in range( n ) :
        count += 1
    return count
```

To evaluate the function, we have to determine the time required by each loop. The two loops each require O(n) time as they are just like the loop in function ex1() earlier.
If we combine the times, it yields T(n) = n + n for a result of O(n).

**Quadratic Time Examples.** When presented with nested loops, such as in the following, the time required by the inner loop impacts the time of the outer loop.

```python
def ex3( n ):
    count = 0
    for i in range( n ) :
        for j in range( n ) :
            count += 1
    return count
```
Both loops will be executed n times, but since the inner loop is nested inside the outer loop, the total time required by the outer loop will be T(n) = n · n, resulting in a time of O(n²) for the ex3() function.

Not all nested loops result in a quadratic time. Consider the following function:

```python
def ex4( n ):
    count = 0
    for i in range( n ) :
        for j in range( 25 ) :
            count += 1
    return count
```

which has a time-complexity of O(n). The function contains a nested loop, but the inner loop executes independently of the size variable n. Since the inner loop executes a constant number of times, it is a constant time operation. The outer loop executes n times, resulting in a linear run time.

The next example presents a special case of nested loops:

```python
def ex5( n ):
    count = 0
    for i in range( n ) :
        for j in range( i+1 ) :
            count += 1
    return count
```

How many times does the inner loop execute? It depends on the current iteration of the outer loop. On the first iteration of the outer loop, the inner loop will execute one time; on the second iteration, it executes two times; on the third iteration, it executes three times, and so on until the last iteration, when the inner loop will execute n times. The time required to execute the outer loop will be the number of times the increment statement count += 1 is executed. Since the inner loop varies from 1 to n iterations by increments of 1, the total number of times the increment statement will be executed is equal to the sum of the first n positive integers:

T(n) = n(n + 1)/2 = (n² + n)/2

which results in a quadratic time of O(n²).

**Logarithmic Time Examples.** The next example contains a single loop, but notice the change to the modification step. Instead of incrementing (or decrementing) by one, it cuts the loop variable in half each time through the loop.

```python
def ex6( n ):
    count = 0
    i = n
    while i >= 1 :
        count += 1
        i = i // 2
    return count
```

To determine the run time of this function, we have to determine the number of loop iterations just like we did with the earlier examples. Since the loop variable is cut in half each time, this will be less than n. For example, if n equals 16, variable i will contain the following five values during subsequent iterations: (16, 8, 4, 2, 1).

Given a small number, it's easy to determine the number of loop iterations. But how do we compute the number of iterations for any given value of n? When the size of the input is reduced by half in each subsequent iteration, the number of iterations required to reach a size of one will be equal to

⌊log₂ n⌋ + 1

or the largest integer less than or equal to log₂ n, plus 1. In our example of n = 16, there are ⌊log₂ 16⌋ + 1, or five, iterations. The logarithm to base a of a number n, which is normally written as y = logₐ n, is the power to which a must be raised to equal n, n = aʸ. Thus, function ex6() requires O(log n) time. Since many problems in computer science that repeatedly reduce the input size do so by half, it's not uncommon to use log n to imply log₂ n when specifying the run time of an algorithm.

Finally, consider the following definition of function ex7(), which calls ex6() from within a loop. Since the loop is executed n times and function ex6() requires logarithmic time, ex7() will have a run time of O(n log n).

```python
def ex7( n ):
    count = 0
    for i in range( n ) :
        count += ex6( n )
    return count
```

**Different Cases.** Some algorithms can have run times that are different orders of magnitude for different sets of inputs of the same size. These algorithms can be evaluated for their best, worst, and average cases. Algorithms that have different cases can typically be identified by the inclusion of an event-controlled loop or a conditional statement. Consider the following example, which traverses a list containing integer values to find the position of the first negative value.
Note that for this problem, the input is the collection of n values contained in the list.

```python
def findNeg( intList ):
    n = len(intList)
    for i in range( n ) :
        if intList[i] < 0 :
            return i
    return None
```
At first glance, it appears the loop will execute n times, where n is the size of the list. But notice the return statement inside the loop, which can cause it to terminate early. If the list does not contain a negative value,

```python
L = [ 72, 4, 90, 56, 12, 67, 43, 17, 2, 86, 33 ]
p = findNeg( L )
```

the return statement inside the loop will not be executed and the loop will terminate in the normal fashion after having traversed all n elements. In this case, the function requires O(n) time. This is known as the worst case since the function must examine every value in the list, requiring the greatest number of steps.

Now consider the case where the list contains a negative value in the first element:

```python
L = [ -12, 50, 4, 67, 39, 22, 43, 2, 17, 28 ]
p = findNeg( L )
```

There will only be one iteration of the loop since the test of the condition by the if statement will be true the first time through, and the return statement inside the loop will be executed. In this case, the findNeg() function only requires O(1) time. This is known as the best case since the function only has to examine the first value in the list, requiring the least number of steps.

The average case is evaluated for an expected data set, or how we expect the algorithm to perform on average. For the findNeg() function, we would expect the search to iterate halfway through the list before finding the first negative value, which on average requires n/2 iterations. The average case is more difficult to evaluate because it's not always readily apparent what constitutes the average case for a particular problem. In general, we are more interested in the worst case time-complexity of an algorithm as it provides an upper bound over all possible inputs. In addition, we can compare the worst case run times of different implementations of an algorithm to determine which is the most efficient for any input.
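The best and worst cases can be exercised directly; a small sketch (not from the text), with findNeg() restated so the snippet is self-contained:

```python
def findNeg(intList):
    # Return the index of the first negative value, or None.
    for i in range(len(intList)):
        if intList[i] < 0:
            return i
    return None

# Worst case: no negative value, so every element is examined.
assert findNeg([72, 4, 90, 56, 12]) is None

# Best case: the first element is negative, one comparison suffices.
assert findNeg([-12, 50, 4, 67]) == 0
```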
## 4.2 Evaluating the Python List

We defined several abstract data types for storing and using collections of data in the previous chapters. The next logical step is to analyze the operations of the various ADTs to determine their efficiency. The result of this analysis depends on the efficiency of the Python list since it was the primary data structure used to implement many of the earlier abstract data types. The implementation details of the list were discussed in Chapter 2. In this section, we use those details and evaluate the efficiency of some of the more common operations. A summary of the worst case run times is shown in Table 4.4.
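The kind of difference such a table captures can be observed empirically; a rough sketch (not from the text) comparing list.append, which runs in amortized constant time, with list.insert at index 0, which must shift every element and so runs in linear time:

```python
from timeit import timeit

# Grow a list by appending at the end (amortized O(1) per call).
grow_end = timeit("lst.append(0)", setup="lst = []", number=100_000)

# Grow a list by inserting at the front (O(n) per call).
grow_front = timeit("lst.insert(0, 0)", setup="lst = []", number=100_000)

# Appending should be dramatically cheaper.
print(grow_end < grow_front)
```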
# CHAPTER 4: Basic Searching Algorithms

Searching for data is a fundamental computer programming task and one that has been studied for many years. This chapter looks at just one aspect of the search problem: searching for a given value in a list (array).

There are two fundamental ways to search for data in a list: the sequential search and the binary search. Sequential search is used when the items in the list are in random order; binary search is used when the items are sorted in the list.

## Sequential Searching

The most obvious type of search is to begin at the beginning of a set of records and move through each record until you find the record you are looking for or you come to the end of the records. This is called a sequential search.

A sequential search (also called a linear search) is very easy to implement. Start at the beginning of the array and compare each accessed array element to the value you're searching for. If you find a match, the search is over. If you get to the end of the array without generating a match, then the value is not in the array.
Here is a function that performs a sequential search:

```csharp
bool SeqSearch(int[] arr, int sValue) {
    for (int index = 0; index < arr.Length; index++)
        if (arr[index] == sValue)
            return true;
    return false;
}
```

If a match is found, the function immediately returns True and exits. If the end of the array is reached without the function returning True, then the value being searched for is not in the array and the function returns False.

Here is a program to test our implementation of a sequential search:

```csharp
using System;
using System.IO;

public class Chapter4 {

    static void Main() {
        int[] numbers = new int[100];
        StreamReader numFile = File.OpenText("c:numbers.txt");
        for (int i = 0; i < numbers.Length; i++)
            numbers[i] = Convert.ToInt32(numFile.ReadLine(), 10);

        int searchNumber;
        Console.Write("Enter a number to search for: ");
        searchNumber = Convert.ToInt32(Console.ReadLine(), 10);

        bool found;
        found = SeqSearch(numbers, searchNumber);
        if (found)
            Console.WriteLine(searchNumber + " is in the array.");
        else
            Console.WriteLine(searchNumber + " is not in the array.");
    }

    static bool SeqSearch(int[] arr, int sValue) {
        for (int index = 0; index < arr.Length; index++)
            if (arr[index] == sValue)
                return true;
        return false;
    }
}
```

The program works by first reading in a set of data from a text file. The data consists of the first 100 integers, stored in the file in a partially random order. The program then prompts the user to enter a number to search for and calls the SeqSearch function to perform the search.

You can also write the sequential search function so that it returns the position in the array where the searched-for value is found, or −1 if the value cannot be found. First, let's look at the new function:

```csharp
static int SeqSearch(int[] arr, int sValue) {
    for (int index = 0; index < arr.Length; index++)
        if (arr[index] == sValue)
            return index;
    return -1;
}
```

The following program uses this function:

```csharp
using System;
using System.IO;

public class Chapter4 {

    static void Main() {
        int[] numbers = new int[100];
        StreamReader numFile = File.OpenText("c:numbers.txt");
        for (int i = 0; i < numbers.Length; i++)
            numbers[i] = Convert.ToInt32(numFile.ReadLine(), 10);

        int searchNumber;
        Console.Write("Enter a number to search for: ");
        searchNumber = Convert.ToInt32(Console.ReadLine(), 10);

        int foundAt;
        foundAt = SeqSearch(numbers, searchNumber);
        if (foundAt >= 0)
            Console.WriteLine(searchNumber + " is in the array at position " + foundAt);
        else
            Console.WriteLine(searchNumber + " is not in the array.");
    }

    static int SeqSearch(int[] arr, int sValue) {
        for (int index = 0; index < arr.Length; index++)
            if (arr[index] == sValue)
                return index;
        return -1;
    }
}
```

### Searching for Minimum and Maximum Values

Computer programs are often asked to search an array (or other data structure) for minimum and maximum values. In an ordered array, searching for these values is a trivial task. Searching an unordered array, however, is a little more challenging.

Let's start by looking at how to find the minimum value in an array. The algorithm is:

1. Assign the first element of the array to a variable as the minimum value.
2. Begin looping through the array, comparing each successive array element with the minimum value variable.
3. If the currently accessed array element is less than the minimum value, assign this element to the minimum value variable.
4. Continue until the last array element is accessed.
5. The minimum value is stored in the variable.
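The steps above can be sketched in Python (an illustration, not from the text):

```python
def find_min(arr):
    # Step 1: take the first element as the provisional minimum.
    minimum = arr[0]
    # Steps 2-4: compare each later element against the provisional minimum.
    for value in arr[1:]:
        if value < minimum:
            minimum = value
    # Step 5: the variable now holds the smallest value.
    return minimum

print(find_min([14, 3, 27, 9]))  # → 3
```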
Let's look at a function, FindMin, which implements this algorithm:

```csharp
static int FindMin(int[] arr) {
    int min = arr[0];
    for (int i = 1; i < arr.Length; i++)
        if (arr[i] < min)
            min = arr[i];
    return min;
}
```

Notice that the array search starts at position 1 and not at position 0. The 0th position is assigned as the minimum value before the loop starts, so we can start making comparisons at position 1.

The algorithm for finding the maximum value in an array works in the same way. We assign the first array element to a variable that holds the maximum amount. Next we loop through the array, comparing each array element with the value stored in the variable, replacing the current value if the accessed value is greater. Here's the code:

```csharp
static int FindMax(int[] arr) {
    int max = arr[0];
    for (int i = 1; i < arr.Length; i++)
        if (arr[i] > max)
            max = arr[i];
    return max;
}
```

An alternative version of these two functions could return the position of the maximum or minimum value in the array rather than the actual value.

### Making Sequential Search Faster: Self-Organizing Data

The fastest successful sequential searches occur when the data element being searched for is at the beginning of the data set. You can ensure that a successfully located data item is at the beginning of the data set by moving it there after it has been found. The concept behind this strategy is that we can minimize search times by putting frequently searched-for items at the beginning of the data set.
Eventually, all the most frequently searched-for data items will be located at the beginning of the data set. This is an example of self-organization, in that the data set is organized not by the programmer before the program runs, but by the program while the program is running.

It makes sense to allow your data to organize in this way since the data being searched probably follows the "80-20" rule, meaning that 80% of the searches conducted on your data set are searching for 20% of the data in the data set. Self-organization will eventually put that 20% at the beginning of the data set, where a sequential search will find them quickly.

Probability distributions such as this are called Pareto distributions, named for Vilfredo Pareto, who discovered these distributions studying the spread of income and wealth in the late nineteenth century. See Knuth (1998, pp. 399-401) for more on probability distributions in data sets.

We can modify our SeqSearch method quite easily to include self-organization. In this version and those that follow, arr is a class-level array field rather than a parameter. Here's a first stab at the method:

```csharp
static bool SeqSearch(int sValue) {
    for (int index = 0; index < arr.Length; index++)
        if (arr[index] == sValue) {
            if (index > 0)
                swap(index, index - 1);
            return true;
        }
    return false;
}
```

If the search is successful, the found item is swapped with the element that precedes it using a swap function, shown as follows:

```csharp
static void swap(int item1, int item2) {
    int temp = arr[item1];
    arr[item1] = arr[item2];
    arr[item2] = temp;
}
```

The problem with the SeqSearch method as we've modified it is that frequently accessed items might be moved around quite a bit during the course of many searches. We want to keep items that are moved to the first of the data set there and not moved farther back when a subsequent item farther down in the set is successfully located.

There are two ways we can achieve this goal. First, we can only swap found items if they are located away from the beginning of the data set. We only have to determine what is considered to be far enough back in the data set to warrant swapping. Following the "80-20" rule again, we can make a rule that a data item is relocated toward the beginning of the data set only if its location is outside the first 20% of the items in the data set. Here's the code for this first rewrite:

```csharp
static int SeqSearch(int sValue) {
    for (int index = 0; index < arr.Length; index++)
        if (arr[index] == sValue && index > (arr.Length * 0.2)) {
            swap(index, index - 1);
            return index;
        }
        else if (arr[index] == sValue)
            return index;
    return -1;
}
```

The if-then statement is short-circuited: if the item isn't found in the data set, there's no reason to test where the index is in the data set.

The other way we can rewrite the SeqSearch method is to swap a found item with the element that precedes it in the data set. Using this method, which is similar to how data is sorted using the Bubble sort, the most frequently accessed items will eventually work their way up to the front of the data set. This technique also guarantees that if an item is already at the beginning of the data set, it won't move back down. The code for this new version of SeqSearch is shown as follows:

```csharp
static int SeqSearch(int sValue) {
    for (int index = 0; index < arr.Length; index++)
        if (arr[index] == sValue) {
            if (index > 0)
                swap(index, index - 1);
            return index;
        }
    return -1;
}
```
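The bubble-style variant above can be sketched in Python (an illustration, not from the text):

```python
def self_organizing_search(data, value):
    # Sequential search; on a hit, swap the found item with its
    # predecessor so frequently sought items drift toward the front.
    # Returns the item's index after any swap, or -1 if not found.
    for index in range(len(data)):
        if data[index] == value:
            if index > 0:
                data[index], data[index - 1] = data[index - 1], data[index]
                return index - 1
            return index
    return -1

items = [5, 9, 2, 7]
pos = self_organizing_search(items, 2)
print(pos, items)  # → 1 [5, 2, 9, 7]
```

Repeated searches for the same value keep moving it one position closer to the front, until it sits at index 0 and stops moving, just as the text describes.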