# Course introduction, lecture 1

• Welcome to CSE 830, the Design and Theory of Algorithms. My name is Charles, and I will be your instructor for this class. The goal of the class is to teach you the fundamentals of creating and analyzing computer algorithms. Before we get to the actual material, however, I need to go through a few administrative details. While I take attendance, I have some handouts for you to be reading over.
• In this class, most algorithms will be expressed in plain English with pseudo-code used to clear up any ambiguities.
• Any basic operation that takes a small, fixed amount of time is assumed to take just one step. We measure the run time of an algorithm by counting the number of steps it takes. Why does this work? For the same reason that the “Flat Earth” model works: in our day-to-day lives, we assume the Earth is flat! Now, let's look at how we can use this.
• Asymptotic or Big-O notation: O, Omega, Theta.
• [ Do on blackboard!!!!! ] f(n) = 3n² - 100n + 6
• Asymptotic or Big-O notation: O, Omega, Theta.

1. CSE 830: Design and Theory of Algorithms (Dr. Eric Torng)
2. Outline
   - Definitions
     - Algorithms
     - Problems
   - Course Objectives
   - Administrative stuff …
   - Analysis of Algorithms
3. What is an Algorithm? Algorithms are the ideas behind computer programs. An algorithm is the thing that stays the same whether the program is in C++ running on a Cray in New York or in BASIC running on a Macintosh in Katmandu! To be interesting, an algorithm has to solve a general, specified problem.
4. What is a problem?
   - Definition
     - A mapping/relation between a set of input instances (domain) and an output set (range)
   - Problem Specification
     - Specify what a typical input instance is
     - Specify what the output should be in terms of the input instance
   - Example: Sorting
     - Input: a sequence of n numbers a₁, …, aₙ
     - Output: the permutation (reordering) of the input sequence such that a₁ ≤ a₂ ≤ … ≤ aₙ
5. Types of Problems
   - Search: find X in the input satisfying property Y
   - Structuring: transform input X to satisfy property Y
   - Construction: build X satisfying Y
   - Optimization: find the best X satisfying property Y
   - Decision: does X satisfy Y?
   - Adaptive: maintain property Y over time
6. Two desired properties of algorithms
   - Correctness
     - Always provides correct output when presented with legal input
   - Efficiency
     - What does efficiency mean?
7. Example: Odd Number
   - Input: a number n
   - Output: yes if n is odd, no if n is even
   - Which of the following algorithms solves Odd Number best?
     - Count up to that number from one and alternate naming each number as odd or even.
     - Factor the number and see if there are any twos in the factorization.
     - Keep a lookup table of all numbers from 0 to the maximum integer.
     - Look at the last bit (or digit) of the number.
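The last option can be sketched in a couple of lines; the counting option is shown alongside it for contrast (the function names here are mine, not from the slides):

```python
def is_odd(n: int) -> bool:
    # Option 4: look at the last bit -- one step, regardless of how big n is.
    return n & 1 == 1

def is_odd_by_counting(n: int) -> bool:
    # Option 1: count up from one, alternating odd/even labels -- n steps.
    odd = False
    for _ in range(n):
        odd = not odd
    return odd
```

Both give the same answer, but the bit test takes a constant number of steps while the counting version takes a number of steps proportional to n.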
8. Example: TSP
   - Input: a sequence of n cities with the distances d_ij between each pair of cities
   - Output: a permutation (ordering) of the cities <c₁′, …, cₙ′> that minimizes the expression
     Σ_{j=1}^{n-1} d_{j′,(j+1)′} + d_{n′,1′}
9. Possible Algorithm: Nearest neighbor
10. Not Correct!
11. A Correct Algorithm
   - We could try all possible orderings of the points, then select the ordering which minimizes the total length:
     - d = ∞
     - For each of the n! permutations Pᵢ of the n points:
       - if cost(Pᵢ) < d then
         - d = cost(Pᵢ)
         - P_min = Pᵢ
     - return P_min
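The brute-force idea transcribes directly into Python (the slides give only pseudocode; the function name and the distance-matrix input format are my choices):

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Try all n! orderings of the cities; return the cheapest tour.

    dist is an n x n matrix of pairwise distances. This is correct but
    takes on the order of n! steps, so it is only feasible for tiny n.
    """
    n = len(dist)

    def cost(tour):
        # Sum the consecutive legs, then close the cycle back to the start.
        return (sum(dist[tour[j]][tour[j + 1]] for j in range(n - 1))
                + dist[tour[-1]][tour[0]])

    best = float("inf")   # d = infinity
    best_tour = None
    for p in permutations(range(n)):
        if cost(p) < best:
            best, best_tour = cost(p), p
    return best_tour, best
```

Because a tour is a cycle, every rotation of an ordering has the same cost; fixing the starting city would cut the search by a factor of n, but the sketch keeps the slide's structure.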
12. Outline
   - Definitions
     - Algorithms
     - Problems
   - Course Objectives
   - Administrative stuff …
   - Analysis of Algorithms
13. Course Objectives
   - Learning classic algorithms
   - How to devise correct and efficient algorithms for solving a given problem
   - How to express algorithms
   - How to validate/verify algorithms
   - How to analyze algorithms
   - How to prove (or at least indicate) no correct, efficient algorithm exists for solving a given problem
   - Writing clear algorithms and
14. Classic Algorithms
   - Lots of wonderful algorithms have already been developed
   - I expect you to learn most of this from reading, though we will reinforce it in lecture
15. How to devise algorithms
   - Something of an art form
   - Cannot be fully automated
   - We will describe some general techniques and try to illustrate when each is appropriate
16. Expressing Algorithms
   - Implementations
   - Pseudo-code
   - English
   - My main concern here is not the specific language used but the clarity of your expression
17. Verifying algorithm correctness
   - Proving an algorithm generates correct output for all inputs
   - One technique covered in the textbook
     - Loop invariants
   - We will do some of this in the course, but it is not emphasized as much as other objectives
18. Analyzing algorithms
   - The “process” of determining how many resources (time, space) are used by a given algorithm
   - We want to be able to make quantitative assessments about the value (goodness) of one algorithm compared to another
   - We want to do this WITHOUT implementing and running an executable version of an algorithm
     - Question: How can we study the time complexity of an algorithm if we don't run it or even choose a specific machine to measure it on?
19. Proving hardness results
   - We believe that no correct and efficient algorithm exists for many problems, such as TSP
   - We define a formal notion of a problem being hard
   - We develop techniques for proving hardness results
20. Outline
   - Definitions
     - Algorithms
     - Problems
   - Course Objectives
   - Administrative stuff …
   - Analysis of Algorithms
21. Algorithm Analysis Overview
   - RAM model of computation
   - Concept of input size
   - Three complexity measures
     - Best-case, average-case, worst-case
   - Asymptotic analysis
     - Asymptotic notation
22. The RAM Model
   - The RAM model represents a “generic” implementation of the algorithm
   - Each “simple” operation (+, -, =, if, call) takes exactly 1 step.
   - Loops and subroutine calls are not simple operations, but depend upon the size of the data and the contents of the subroutine. We do not want “sort” to be a single-step operation.
   - Each memory access takes exactly 1 step.
23. Input Size
   - In general, larger input instances require more resources to process correctly
   - We standardize by defining a notion of size for an input instance
   - Examples
     - What is the size of a sorting input instance?
     - What is the size of an “Odd Number” input instance?
24. Measuring Complexity
   - The running time of an algorithm is the function defined by the number of steps required to solve input instances of size n
     - F(1) = 3
     - F(2) = 5
     - F(3) = 7
     - …
     - F(n) = 2n + 1
   - What potential problems do we have with the above definition when applied to real algorithms solving real problems?
25. Case study: Insertion Sort
   - Count the number of times each line will be executed:

                                           Num Exec.
         for i = 2 to n                    (n-1) + 1
           key = A[i]                      n-1
           j = i - 1                       n-1
           while j > 0 AND A[j] > key      ?
             A[j+1] = A[j]                 ?
             j = j - 1                     ?
           A[j+1] = key                    n-1
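A runnable version of the pseudocode above, instrumented to count how often the `while` test executes (the `?` entries in the count). It uses 0-indexed Python rather than the slide's 1-indexed pseudocode, and the counter is my addition:

```python
def insertion_sort_counting(A):
    """Insertion sort that also counts executions of the inner while-test.

    The count is the data-dependent quantity marked '?' on the slide:
    n-1 tests in the best case (already-sorted input), about n(n-1)/2
    in the worst case (reverse-sorted input).
    """
    A = list(A)
    tests = 0
    for i in range(1, len(A)):          # for i = 2 to n
        key = A[i]
        j = i - 1
        while j >= 0 and A[j] > key:    # while j > 0 AND A[j] > key
            tests += 1
            A[j + 1] = A[j]             # shift larger elements right
            j -= 1
        if j >= 0:
            tests += 1                  # the final, failing test also costs a step
        A[j + 1] = key
    return A, tests
```

Running it on a sorted list of length 3 yields 2 tests, and on a reverse-sorted list of length 3 it yields 3, matching the best- and worst-case counts above.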
26. Measuring Complexity Again
   - The worst-case running time of an algorithm is the function defined by the maximum number of steps taken on any instance of size n.
   - The best-case running time of an algorithm is the function defined by the minimum number of steps taken on any instance of size n.
   - The average-case running time of an algorithm is the function defined by the average number of steps taken over all instances of size n.
   - Which of these is the best to use?
27. Average case analysis
   - Drawbacks
     - Based on a probability distribution of input instances
     - How do we know if the distribution is correct or not?
   - Usually more complicated to compute than worst-case running time
     - Often the worst-case running time is comparable to the average-case running time (see next graph)
     - Counterexamples to the above:
       - Quicksort
       - the simplex method for linear programming
28. Best, Worst, and Average Case
29. Worst case analysis
   - Typically much simpler to compute, as we do not need to “average” performance over many inputs
     - Instead, we need to find and understand one input that causes worst-case performance
   - Provides a guarantee that is independent of any assumptions about the input
   - Often reasonably close to the average-case running time
   - The standard analysis performed
30. Motivation for Asymptotic Analysis
   - An exact computation of worst-case running time can be difficult
     - The function may have many terms:
       - 4n² - 3n log n + 17.5n - 43n^(2/3) + 75
   - An exact computation of worst-case running time is unnecessary
     - Remember that we are already approximating running time by using the RAM model
31. Simplifications
   - Ignore constants
     - 4n² - 3n log n + 17.5n - 43n^(2/3) + 75 becomes
     - n² - n log n + n - n^(2/3) + 1
   - Asymptotic Efficiency
     - n² - n log n + n - n^(2/3) + 1 becomes n²
   - End Result: Θ(n²)
32. Why ignore constants?
   - The RAM model introduces errors in constants
     - Do all instructions take equal time?
     - A specific implementation (hardware, code optimizations) can speed up an algorithm by constant factors
     - We want to understand how effective an algorithm is independent of these factors
   - Simplification of analysis
     - Much easier to analyze if we focus only on n² rather than worrying about 3.7n² or 3.9n²
33. Asymptotic Analysis
   - We focus on the infinite set of large n, ignoring small values of n
   - Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.
34. “Big Oh” Notation
   - O(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}
   - What are the roles of the two constants?
     - n₀:
     - c:
35. Set Notation Comment
   - O(g(n)) is a set of functions.
   - However, we will use one-way equalities like
     - n = O(n²)
   - This really means that the function n belongs to the set of functions O(n²)
   - Incorrect notation: O(n²) = n
   - Analogy
     - “A dog is an animal” but not “an animal is a dog”
36. Three Common Sets
   - f(n) = O(g(n)) means c·g(n) is an upper bound on f(n)
   - f(n) = Ω(g(n)) means c·g(n) is a lower bound on f(n)
   - f(n) = Θ(g(n)) means c₁·g(n) is an upper bound on f(n) and c₂·g(n) is a lower bound on f(n)
   - These bounds hold for all inputs beyond some threshold n₀.
37. O(g(n))
38. Ω(g(n))
39. Θ(g(n))
40. O(f(n)) and Ω(g(n))
41. Example Function: f(n) = 3n² - 100n + 6
42. Quick Questions (give c and n₀ for each)
   - 3n² - 100n + 6 = O(n²)
   - 3n² - 100n + 6 = O(n³)
   - 3n² - 100n + 6 ≠ O(n)
   - 3n² - 100n + 6 = Ω(n²)
   - 3n² - 100n + 6 ≠ Ω(n³)
   - 3n² - 100n + 6 = Ω(n)
   - 3n² - 100n + 6 = Θ(n²)?
   - 3n² - 100n + 6 = Θ(n³)?
   - 3n² - 100n + 6 = Θ(n)?
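One way to internalize the definitions is to exhibit explicit witnesses c and n₀ and check them numerically. The particular constants below are my choices; any valid pair works:

```python
def f(n):
    return 3 * n * n - 100 * n + 6

# f(n) = O(n^2): take c = 3, n0 = 1, since -100n + 6 <= 0 once n >= 1.
assert all(f(n) <= 3 * n * n for n in range(1, 5000))

# f(n) = Omega(n^2): take c = 2, n0 = 100, since n^2 - 100n + 6 >= 0 for n >= 100.
assert all(f(n) >= 2 * n * n for n in range(100, 5000))

# f(n) != O(n): no constant c works; e.g. with c = 1000, f(n) > c*n already at n = 10**6.
assert f(10**6) > 1000 * 10**6
```

The first two checks together are exactly a witness that f(n) = Θ(n²): one constant bounds f from above, another from below, for all n past a threshold.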
43. “Little Oh” Notation
   - o(g(n)) = {f(n) : for every c > 0 there exists n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀}
   - Intuitively, lim_{n→∞} f(n)/g(n) = 0
   - f(n) < c·g(n) for every choice of c, not just some c
44. Two Other Sets
   - f(n) = o(g(n)) means c·g(n) is a strict upper bound on f(n)
   - f(n) = ω(g(n)) means c·g(n) is a strict lower bound on f(n)
   - These bounds hold for all inputs beyond some threshold n₀, where n₀ now depends on c.
45. Common Complexity Functions

| Complexity | n = 10 | n = 20 | n = 30 | n = 40 | n = 50 | n = 60 |
|---|---|---|---|---|---|---|
| n | 1×10⁻⁵ sec | 2×10⁻⁵ sec | 3×10⁻⁵ sec | 4×10⁻⁵ sec | 5×10⁻⁵ sec | 6×10⁻⁵ sec |
| n² | 0.0001 sec | 0.0004 sec | 0.0009 sec | 0.0016 sec | 0.0025 sec | 0.0036 sec |
| n³ | 0.001 sec | 0.008 sec | 0.027 sec | 0.064 sec | 0.125 sec | 0.216 sec |
| n⁵ | 0.1 sec | 3.2 sec | 24.3 sec | 1.7 min | 5.2 min | 13.0 min |
| 2ⁿ | 0.001 sec | 1.0 sec | 17.9 min | 12.7 days | 35.7 years | 366 cent |
| 3ⁿ | 0.059 sec | 58 min | 6.5 years | 3855 cent | 2×10⁸ cent | 1.3×10¹³ cent |
| log₂ n | 3×10⁻⁶ sec | 4×10⁻⁶ sec | 5×10⁻⁶ sec | 5×10⁻⁶ sec | 6×10⁻⁶ sec | 6×10⁻⁶ sec |
| n log₂ n | 3×10⁻⁵ sec | 9×10⁻⁵ sec | 0.0001 sec | 0.0002 sec | 0.0003 sec | 0.0004 sec |
46. Example Problems
   1. What does it mean if f(n) ∈ O(g(n)) and g(n) ∈ O(f(n))?
   2. Is 2ⁿ⁺¹ = O(2ⁿ)? Is 2²ⁿ = O(2ⁿ)?
   3. Does f(n) = O(f(n))?
   4. If f(n) = O(g(n)) and g(n) = O(h(n)), can we say f(n) = O(h(n))?