Slide 1 of 53.
Lesson E – Introduction to Algorithms
Lecture E
Introduction to Algorithms
Unit E1 – Basic Algorithms
Slide 2 of 53.
Lesson E – Introduction to Algorithms
Lesson or Unit Topic or Objective
Demonstrate the
notion of an
algorithm using two
classic ones
Slide 3 of 53.
Lesson E – Introduction to Algorithms
Computational problems
• A computational problem specifies an input-output
relationship
 What does the input look like?
 What should the output be for each input?
• Example:
 Input: an integer number N
 Output: Is the number prime?
• Example:
 Input: A list of names of people
 Output: The same list sorted alphabetically
• Example:
 Input: A picture in digital format
 Output: An English description of what the picture shows
Slide 4 of 53.
Lesson E – Introduction to Algorithms
Algorithms
• An algorithm is an exact specification of how to solve a
computational problem
• An algorithm must specify every step completely, so a
computer can implement it without any further
“understanding”
• An algorithm must work for all possible inputs of the
problem.
• Algorithms must be:
 Correct: For each input produce an appropriate output
 Efficient: run as quickly as possible, and use as little memory as
possible – more about this later
• There can be many different algorithms for each
computational problem.
Slide 5 of 53.
Lesson E – Introduction to Algorithms
Describing Algorithms
• Algorithms can be implemented in any programming
language
• Usually we use “pseudo-code” to describe algorithms
• In this course we will just describe algorithms in Java
Testing whether input N is prime:
For j = 2 .. N-1
If j|N
Output “N is composite” and halt
Output “N is prime”
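Since the course describes algorithms in Java, a direct translation of the pseudo-code above might look as follows; this is a sketch of my own (the method name isPrime is not from the slide):
public static boolean isPrime(int N) {
    // Mirrors the pseudo-code: try every candidate divisor j from 2 to N-1
    for (int j = 2; j < N; j++) {
        if (N % j == 0) {   // j|N: j divides N with no remainder
            System.out.println(N + " is composite");
            return false;
        }
    }
    System.out.println(N + " is prime");
    return true;
}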
Slide 6 of 53.
Lesson E – Introduction to Algorithms
Greatest Common Divisor
• The first algorithm “invented” in history was Euclid’s
algorithm for finding the greatest common divisor (GCD) of
two natural numbers
• Definition: The GCD of two natural numbers x, y is the
largest integer j that divides both (without remainder). I.e.
j|x, j|y and j is the largest integer with this property.
• The GCD Problem:
 Input: natural numbers x, y
 Output: GCD(x,y) – their GCD
Slide 7 of 53.
Lesson E – Introduction to Algorithms
Euclid’s GCD Algorithm
public static int gcd(int x, int y) {
while (y!=0) {
int temp = x%y;
x = y;
y = temp;
}
return x;
}
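A minimal driver, assuming the gcd method above is in the same class:
public static void main(String[] args) {
    // GCD(72, 120) is 24 (compare the sample run on the next slide)
    System.out.println(gcd(72, 120));
}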
Slide 8 of 53.
Lesson E – Introduction to Algorithms
Euclid’s GCD Algorithm – sample run
Example: Computing GCD(72,120)
temp x y
After 0 rounds -- 72 120
After 1 round 72 120 72
After 2 rounds 48 72 48
After 3 rounds 24 48 24
After 4 rounds 0 24 0
Output: 24
while (y!=0) {
int temp = x%y;
x = y;
y = temp;
}
Slide 9 of 53.
Lesson E – Introduction to Algorithms
Correctness of Euclid’s Algorithm
• Theorem: When Euclid’s GCD algorithm terminates, it
returns the mathematical GCD of x and y.
• Notation: Let g be the GCD of the original values of x and
y.
• Loop Invariant Lemma: For all k ≥ 0, the values of x, y
after k rounds of the loop satisfy GCD(x,y)=g.
• Proof of lemma: next slide.
• Proof of Theorem: The method returns when y=0. By the
loop invariant lemma, at this point GCD(x,y)=g. But
GCD(x,0)=x for every integer x (since x|0 and x|x). Thus
g=x, which is the value returned by the code.
• Still Missing: The algorithm always terminates.
Slide 10 of 53.
Lesson E – Introduction to Algorithms
Proof of Lemma
• Loop Invariant Lemma: For all k ≥ 0, the values of x, y
after k rounds of the loop satisfy GCD(x,y)=g.
• Proof: By induction on k.
 For k=0, x and y are the original values so clearly GCD(x,y)=g.
 Induction step: Let x, y denote the values after k rounds and x’, y’
denote the values after k+1 rounds. We need to show that
GCD(x,y)=GCD(x’,y’). According to the code: x’=y and y’=x%y, so
the lemma follows from the following mathematical lemma.
• Lemma: For all integers x, y: GCD(x, y) = GCD(x%y, y)
• Proof: Let x=ay+b, where y > b ≥ 0. I.e. x%y=b.
 (1) Since g|y, and g|x, we also have g|(x-ay), i.e. g|b. Thus
GCD(b,y) ≥ g = GCD(x,y).
 (2) Let g’=GCD(b,y), then g’|(x-ay) and g’|y, so we also have g’|x.
Thus GCD(x,y) ≥ g’=GCD(b,y).
Slide 11 of 53.
Lesson E – Introduction to Algorithms
Termination of Euclid’s Algorithm
• Why does this algorithm terminate?
 After any iteration we have that x > y since the new value of y is
the remainder of division by the new value of x.
 In further iterations, we replace (x, y) with (y, x%y), and x%y < x,
thus the numbers decrease in each iteration.
 Formally, the value of xy decreases each iteration (except, maybe,
the first one). When it reaches 0, the algorithm must terminate.
public static int gcd(int x, int y) {
while (y!=0) {
int temp = x%y;
x = y;
y = temp;
}
return x;
}
Slide 12 of 53.
Lesson E – Introduction to Algorithms
Square Roots
• The problem we want to address is to compute the square
root of a real number.
• When working with real numbers, we can not have
complete precision.
 The inputs will be given in finite precision
 The outputs should only be computed approximately
• The square root problem:
 Input: a positive real number x, and a precision requirement ε
 Output: a real number r such that |r - √x| ≤ ε
Slide 13 of 53.
Lesson E – Introduction to Algorithms
Square Root Algorithm
public static double sqrt(double x, double epsilon){
double low = 0;
double high = x>1 ? x : 1;
while (high-low > epsilon) {
double mid = (high+low)/2;
if (mid*mid > x)
high = mid;
else
low = mid;
}
return low;
}
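For example, assuming the sqrt method above is in scope, the following call reproduces the sample run on the next slide:
public static void main(String[] args) {
    // Approximates the square root of 2 to within 0.05; prints a value near 1.40
    System.out.println(sqrt(2, 0.05));
}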
Slide 14 of 53.
Lesson E – Introduction to Algorithms
Binary Search Algorithm – sample run
Example: Computing sqrt(2) with precision 0.05:
mid mid*mid low high
After 0 rounds -- -- 0 2
After 1 round 1 1 1 2
After 2 rounds 1.5 2.25 1 1.5
After 3 rounds 1.25 1.56.. 1.25 1.5
After 4 rounds 1.37.. 1.89.. 1.37.. 1.5
After 5 rounds 1.43.. 2.06.. 1.37.. 1.43..
After 6 rounds 1.40.. 1.97.. 1.40.. 1.43..
Output: 1.40…
while (high-low > epsilon) {
double mid = (high+low)/2;
if (mid*mid > x)
high = mid;
else
low = mid;
}
Slide 15 of 53.
Lesson E – Introduction to Algorithms
Correctness of Binary Search Algorithm
• Theorem: When the algorithm terminates it returns a value
r that satisfies |r - √x| ≤ ε.
• Loop invariant lemma: For all k ≥ 0, the values of low,
high after k rounds of the loop satisfy: low ≤ √x ≤ high.
• Proof of Lemma:
 For k=0, clearly low=0 ≤ √x ≤ high=max(x,1).
 Induction step: The code only sets low=mid if mid*mid ≤ x, i.e. mid ≤ √x,
and only sets high=mid if mid*mid > x, i.e. mid > √x.
• Proof of Theorem: The algorithm terminates when
high-low ≤ ε, and returns low. At this point, by the lemma:
low ≤ √x ≤ high ≤ low+ε. Thus |low-√x| ≤ ε.
• Missing Part: Does the algorithm always terminate? How
Fast? We will deal with this later.
Slide 16 of 53.
Lesson E – Introduction to Algorithms
In General…
• This type of binary search can be used to find the roots of
any continuous function f.
• Intermediate Value Theorem: if f(low)<0 and f(high)>0 then for
some low<x<high, f(x)=0.
• In our case, to find √2, we solved f(x) = x² - 2 = 0
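The same bisection idea can be written once for an arbitrary continuous function. The sketch below is my own illustration, not slide code; it takes f as a java.util.function.DoubleUnaryOperator and assumes f(low) ≤ 0 ≤ f(high).
import java.util.function.DoubleUnaryOperator;

public static double findRoot(DoubleUnaryOperator f, double low, double high, double epsilon) {
    // Invariant (as in the lemma above): f(low) <= 0 <= f(high), so by the
    // intermediate value theorem a root of the continuous f lies in [low, high]
    while (high - low > epsilon) {
        double mid = (high + low) / 2;
        if (f.applyAsDouble(mid) > 0)
            high = mid;
        else
            low = mid;
    }
    return low;
}
// For the square root of 2: findRoot(x -> x*x - 2, 0, 2, 0.05)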
Slide 17 of 53.
Lesson E – Introduction to Algorithms
Lecture E
Introduction to Algorithms
Unit E1 – Basic Algorithms
Slide 18 of 53.
Lesson E – Introduction to Algorithms
Lecture E
Introduction to Algorithms
Unit E2 – Running Time Analysis
Slide 19 of 53.
Lesson E – Introduction to Algorithms
Lesson or Unit Topic or Objective
Analysis of Running
Times of Algorithms
Slide 20 of 53.
Lesson E – Introduction to Algorithms
How fast will your program run?
• The running time of your program will depend upon:
 The algorithm
 The input
 Your implementation of the algorithm in a programming language
 The compiler you use
 The OS on your computer
 Your computer hardware
 Maybe other things: temperature outside; other programs on your
computer; …
• Our Motivation: analyze the running time of an algorithm
as a function of only simple parameters of the input.
Slide 21 of 53.
Lesson E – Introduction to Algorithms
Basic idea: counting operations
• Each algorithm performs a sequence of basic operations:
 Arithmetic: (low + high)/2
 Comparison: if ( x > 0 ) …
 Assignment: temp = x
 Branching: while ( true ) { … }
 …
• Idea: count the number of basic operations performed on
the input.
• Difficulties:
 Which operations are basic?
 Not all operations take the same amount of time.
 Operations take different times with different hardware or
compilers
Slide 22 of 53.
Lesson E – Introduction to Algorithms
Testing operation times on your system
import java.util.*;
public class PerformanceEvaluation {
public static void main(String[] args) {
int i=0; double d = 1.618;
SimpleObject o = new SimpleObject();
final int numLoops = 1000000;
long startTime = System.currentTimeMillis();
for (i=0 ; i<numLoops ; i++){
// put here a command to be timed
}
long endTime = System.currentTimeMillis();
long duration = endTime - startTime;
double iterationTime = (double)duration / numLoops;
System.out.println("duration: "+duration);
System.out.println("sec/iter: "+iterationTime);
}}
class SimpleObject {
private int x=0;
public void m() { x++; }
}
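To use the harness, put the statement to be measured in the loop body; for instance, d = 1.0 / d; reproduces the “Double division” row of the table on the next slide. The absolute numbers will, of course, depend on your own machine and JVM.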
Slide 23 of 53.
Lesson E – Introduction to Algorithms
Sample running times of basic Java operations
Operation            Loop Body                 nSec/iteration (Sys1 / Sys2)
Loop Overhead        ;                         196 / 10
Double division      d = 1.0 / d;              400 / 77
Method call          o.m();                    372 / 93
Object Construction  o = new SimpleObject();   1080 / 110
Sys1: PII, 333MHz, jdk1.1.8, -nojit
Sys2: PIII, 500MHz, jdk1.3.1
Slide 24 of 53.
Lesson E – Introduction to Algorithms
Asymptotic running times
• Operation counts are only problematic in terms of constant
factors.
• The general form of the function describing the running
time is invariant over hardware, languages or compilers!
• Running time is “about” N².
• We use “Big-O” notation, and say that the running time is
O(N²)
public static int myMethod(int N){
int sq = 0;
for(int j=0; j<N ; j++)
for(int k=0; k<N ; k++)
sq++;
return sq;
}
Slide 25 of 53.
Lesson E – Introduction to Algorithms
Asymptotic behavior of functions
Slide 26 of 53.
Lesson E – Introduction to Algorithms
Mathematical Formalization
• Definition: Let f and g be functions from the natural
numbers to the natural numbers. We write f=O(g) if there
exists a constant c such that for all n: f(n) ≤ cg(n).
f=O(g) ⇔ ∃c ∀n: f(n) ≤ cg(n)
• This is a mathematically formal way of ignoring constant
factors, and looking only at the “shape” of the function.
• f=O(g) should be considered as saying that “f is at most g,
up to constant factors”.
• We usually will have f be the running time of an algorithm
and g a nicely written function. E.g. The running time of
the previous algorithm was O(N²).
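As a quick check of the definition: f(N) = 3N² + 5N satisfies f = O(N²), since taking c = 8 gives 3N² + 5N ≤ 3N² + 5N² = 8N² for every natural number N.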
Slide 27 of 53.
Lesson E – Introduction to Algorithms
Asymptotic analysis of algorithms
• We usually embark on an asymptotic worst case analysis
of the running time of the algorithm.
• Asymptotic:
 Formal, exact, depends only on the algorithm
 Ignores constants
 Applicable mostly for large input sizes
• Worst Case:
 Bounds on running time must hold for all inputs.
 Thus the analysis considers the worst-case input.
 Sometimes the “average” performance can be much better
 Real-life inputs are rarely “average” in any formal sense
Slide 28 of 53.
Lesson E – Introduction to Algorithms
The running time of Euclid’s GCD Algorithm
• How fast does Euclid’s algorithm terminate?
 After the first iteration we have that x > y. In each iteration, we
replace (x, y) with (y, x%y).
 In an iteration where x>1.5y then x%y < y < 2x/3.
 In an iteration where x  1.5y then x%y  y/2 < 2x/3.
 Thus, the value of xy decreases by a factor of at least 2/3 each
iteration (except, maybe, the first one).
public static int gcd(int x, int y) {
while (y!=0) {
int temp = x%y;
x = y;
y = temp;
}
return x;
}
Slide 29 of 53.
Lesson E – Introduction to Algorithms
The running time of Euclid’s Algorithm
• Theorem: Euclid’s GCD algorithm runs in time O(N), where
N is the input length (N = log₂x + log₂y).
• Proof:
 In every iteration of the loop (except maybe the first) the value of xy
decreases to at most 2/3 of its previous value. Thus after k+1 iterations
the value of xy is at most (2/3)^k times the original value.
 Thus the algorithm must terminate when k satisfies (2/3)^k · xy < 1
(for the original values of x, y).
 Thus the algorithm runs for at most 1 + log_{3/2}(xy) iterations.
 Each iteration has only a constant L number of operations, thus
the total number of operations is at most L(1 + log_{3/2}(xy)).
 Formally, L(1 + log_{3/2}(xy)) ≤ L(1 + 2log₂x + 2log₂y) ≤ 3LN.
 Thus the running time is O(N).
Slide 30 of 53.
Lesson E – Introduction to Algorithms
Running time of Square root algorithm
• The value of (high-low) decreases by a factor of exactly 2
each iteration. It starts at max(x,1), and the algorithm
terminates when it goes below ε.
• Thus the number of iterations is at most log₂(max(x,1)/ε).
• The running time is O(log x + log(1/ε)).
public static double
sqrt(double x, double epsilon){
double low = 0;
double high = x>1 ? x : 1;
while (high-low > epsilon) {
double mid = (high+low)/2;
if (mid*mid > x)
high = mid;
else
low = mid;
}
return low;
}
Slide 31 of 53.
Lesson E – Introduction to Algorithms
Newton-Raphson Algorithm
public static double sqrt(double x, double epsilon){
double r = 1;
while ( Math.abs(r - x/r) > epsilon)
r = (r + x/r)/2;
return r;
}
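For example, assuming this sqrt method is in scope:
public static void main(String[] args) {
    // Approximates the square root of 2 to within 0.01;
    // prints about 1.4167, which is within 0.01 of √2 ≈ 1.4142
    System.out.println(sqrt(2, 0.01));
}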
Slide 32 of 53.
Lesson E – Introduction to Algorithms
Newton-Raphson – sample run
Example: Computing sqrt(2) with precision 0.01:
r x/r
After 0 rounds 1 2
After 1 round 1.5 1.33..
After 2 rounds 1.41.. 1.41..
Output: 1.41…
while ( Math.abs(r - x/r) > epsilon)
r = (r + x/r)/2;
Slide 33 of 53.
Lesson E – Introduction to Algorithms
Analysis of Running Time
• Correctness is clear since for every r, the square root of x lies
between r and x/r.
• Here we will analyze the running time only for 1 < x < 2.
• Denote by r_n the value of r after n loops, and let ε_n = |r_n² - x|.
• One update r' = (r + x/r)/2 gives
r'² - x = ((r - x/r)/2)² = (r² - x)²/(4r²),
so ε_{n+1} = ε_n²/(4r_n²) ≤ ε_n²/4
(since r_0 = 1 and every later r_n = (r + x/r)/2 ≥ √x > 1).
• At the beginning ε_0 = |1 - x| ≤ 1, and thus ε_1 ≤ 1/4.
• In general we have that ε_n ≤ 2^(-2^n) for all n ≥ 1.
• At the end it suffices that ε_n ≤ ε, since |r - x/r| = |r² - x|/r ≤ |r² - x| = ε_n.
• Thus the algorithm terminates after at most log₂log₂(1/ε) + 1 iterations.
Slide 34 of 53.
Lesson E – Introduction to Algorithms
In General…
• The Newton-Raphson method can be used to find the
roots of any differentiable function f.
• In our case, to find √2, we solved f(r) = r² - 2 = 0
• So, r' = r - f(r)/f'(r) = r - (r² - 2)/(2r) = (r + 2/r)/2
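As with bisection, the iteration can be written once for an arbitrary differentiable function. The sketch below is my own illustration, not slide code; it takes f and its derivative as java.util.function.DoubleUnaryOperator arguments, stops when |f(r)| ≤ epsilon (a slightly different stopping rule than the sqrt method above), and assumes the iteration converges from the chosen starting point.
import java.util.function.DoubleUnaryOperator;

public static double newtonRoot(DoubleUnaryOperator f, DoubleUnaryOperator fPrime,
                                double r, double epsilon) {
    // Newton-Raphson step: replace r by r - f(r)/f'(r) until f(r) is small enough
    while (Math.abs(f.applyAsDouble(r)) > epsilon)
        r = r - f.applyAsDouble(r) / fPrime.applyAsDouble(r);
    return r;
}
// For the square root of 2: newtonRoot(r -> r*r - 2, r -> 2*r, 1, 0.01)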
Slide 35 of 53.
Lesson E – Introduction to Algorithms
Lecture E
Introduction to Algorithms
Unit E2 – Running Time Analysis
Slide 36 of 53.
Lesson E – Introduction to Algorithms
Lecture E
Introduction to Algorithms
Unit E3 - Recursion
Slide 37 of 53.
Lesson E – Introduction to Algorithms
Lesson or Unit Topic or Objective
Using and
understanding
Recursion
Slide 38 of 53.
Lesson E – Introduction to Algorithms
Designing Algorithms
• There is no single recipe for inventing algorithms
• There are basic rules:
 Understand your problem well – may require much mathematical
analysis!
 Use existing algorithms (reduction) or algorithmic ideas
• There is a single basic algorithmic technique:
Divide and Conquer
• In its simplest (and most useful) form it is simple induction
 In order to solve a problem, solve a similar problem of smaller size
• The key conceptual idea:
 Think only about how to use the smaller solution to get the larger one
 Do not worry about how to solve the smaller problem (it will be solved using
an even smaller one)
Slide 39 of 53.
Lesson E – Introduction to Algorithms
Recursion
• A recursive method is a method that contains a call to
itself
• Technically:
 All modern computing languages allow writing methods that call
themselves
 We will discuss how this is implemented later
• Conceptually:
 This allows programming in a style that reflects divide-n-conquer
algorithmic thinking
 At the beginning recursive programs are confusing – after a while
they become clearer than non-recursive variants
Slide 40 of 53.
Lesson E – Introduction to Algorithms
Factorial
public class Factorial {
public static void main(String[] args) {
System.out.println("5!=" + factorial(5));
}
public static long factorial(int n){
if (n == 0)
return 1;
else
return n * factorial(n-1);
}
}
Slide 41 of 53.
Lesson E – Introduction to Algorithms
Elements of a recursive program
• Basis: a case that can be answered without using further
recursive calls
 In our case: if (n==0) return 1;
• Creating the smaller problem, and invoking a recursive
call on it
 In our case: factorial(n-1)
• Finishing to solve the original problem
 In our case: return n * /*solution of recursive call*/
Slide 42 of 53.
Lesson E – Introduction to Algorithms
Tracing the factorial method
System.out.println("5!=" + factorial(5))
5 * factorial(4)
4 * factorial(3)
3 * factorial(2)
2 * factorial(1)
1 * factorial(0)
return 1
return 1
return 2
return 6
return 24
return 120
Slide 43 of 53.
Lesson E – Introduction to Algorithms
Correctness of factorial method
• Theorem: For every non-negative integer n, factorial(n)
returns the value n!.
• Proof: By induction on n:
• Basis: for n=0, factorial(0) returns 1=0!.
• Induction step: When called on n>1, factorial calls
factorial(n-1), which by the induction hypothesis
returns (n-1)!. The returned value is thus n*(n-1)!=n!.
Slide 44 of 53.
Lesson E – Introduction to Algorithms
Raising to power – take 1
public static double power(double x, long n) {
if (n == 0) return 1.0;
return x * power(x, n-1);
}
Slide 45 of 53.
Lesson E – Introduction to Algorithms
Running time analysis
• Simplest way to calculate the running time of a recursive
program is to add up the running times of the separate
levels of recursion.
• In the case of the power method:
 There are n+1 levels of recursion
• power(x,n), power(x,n-1), power(x, n-2), … power(x,0)
 Each level takes O(1) steps
 Total time = O(n)
Slide 46 of 53.
Lesson E – Introduction to Algorithms
Raising to power – take 2
public static double power(double x, long n) {
if (n == 0) return 1.0;
if (n%2 == 0) {
double t = power(x, n/2);
return t*t;
}
return x * power(x, n-1);
}
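For example, assuming the method above is in scope:
public static void main(String[] args) {
    // 2^10 = 1024; only about 2*log2(10) recursive calls are made
    System.out.println(power(2.0, 10));   // prints 1024.0
}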
Slide 47 of 53.
Lesson E – Introduction to Algorithms
Analysis
• Theorem: For any x and non-negative integer n, the power
method returns x^n.
• Proof: by complete induction on n.
 Basis: For n=0, we return 1 = x^0.
 If n is even, we return t*t where t = power(x,n/2). By the induction
hypothesis power(x,n/2) returns x^(n/2), so we return (x^(n/2))² = x^n.
 If n is odd, we return x*power(x,n-1). By the induction hypothesis
power(x,n-1) returns x^(n-1), so we return x·x^(n-1) = x^n.
• The running time is now O(log n):
 After 2 levels of recursion n has decreased by a factor of at least
two (since either n or n-1 is even, in which case the recursive call
is with n/2)
 Thus we reach n==0 after at most 2log₂n levels of recursion
 Each level still takes O(1) time.
Slide 48 of 53.
Lesson E – Introduction to Algorithms
Reverse
public class Reverse {
static InputRequestor in = new InputRequestor();
public static void main(String[] args) {
printReverse();
}
public static void printReverse() {
int j = in.requestInt("Enter another Number" +
" (0 for end of list):");
if (j!=0){
printReverse();
System.out.println(j);
}
}
}
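The InputRequestor class used above is course-specific. The same recursive idea can be sketched with the standard java.util.Scanner; this variant is my own, not from the slides:
import java.util.Scanner;

public class ReverseScanner {
    static Scanner in = new Scanner(System.in);

    public static void main(String[] args) {
        printReverse();
    }

    public static void printReverse() {
        System.out.print("Enter another Number (0 for end of list): ");
        int j = in.nextInt();
        if (j != 0) {
            printReverse();          // first print the rest of the list in reverse...
            System.out.println(j);   // ...then this number, so the output is reversed
        }
    }
}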
Slide 49 of 53.
Lesson E – Introduction to Algorithms
Recursive Definitions
• Many things are defined recursively.
• Fibonacci Numbers: 1, 1, 2, 3, 5, 8, 13, 21, …
 fₙ = fₙ₋₁ + fₙ₋₂
• Arithmetic Expressions. E.g. 2+3*(5+(3-4))
 A number is an expression
 For any expression E: (E) is an expression
 For any two expressions E1, E2: E1+E2, E1-E2, E1*E2, E1/E2 are
expressions
• Fractals
• In such cases recursive algorithms are very natural
Slide 50 of 53.
Lesson E – Introduction to Algorithms
Fibonacci Numbers
public class Fibonacci {
public static void main(String[] args) {
for(int j = 1 ; j<20 ; j++)
System.out.println(fib(j));
}
public static int fib (int n) {
if (n <= 1) return 1;
return fib(n-1) + fib(n-2);
}
}
Slide 51 of 53.
Lesson E – Introduction to Algorithms
TurtleFractal
public class TurtleFractal {
static Turtle turtle = new Turtle();
public static void main(String[] args) {
turtle.tailDown();
drawFractal(500,4);
}
public static void drawFractal(int length, int level){
if (level==0)
turtle.moveForward(length);
else {
drawFractal(length/3, level-1) ;
turtle.turnLeft(60);
drawFractal(length/3, level-1) ;
turtle.turnRight(120);
drawFractal(length/3, level-1) ;
turtle.turnLeft(60);
drawFractal(length/3, level-1) ;
}
}}
Slide 52 of 53.
Lesson E – Introduction to Algorithms
TurtleFractal output
Slide 53 of 53.
Lesson E – Introduction to Algorithms
Lecture E
Introduction to Algorithms
Unit E3 - Recursion