Chapter 2: Algorithm Analysis - II
Text: Read Weiss, §2.4.3 – 2.4.6
Solutions for the Maximum Subsequence
Sum Problem: Algorithm 1
• Algorithm 1 exhaustively tries all possibilities: for every combination of starting and ending positions (i and j, respectively), the partial sum (ThisSum) is computed and compared with the largest sum found so far (MaxSum). The running time is O(N^3) and is entirely due to lines 5 and 6.
• A more precise analysis (counting exactly how many times line 6 executes) is given after the code.
int MaxSubSum1( const int A[ ], int N ) {
int ThisSum, MaxSum, i, j, k;
/* 1*/ MaxSum = 0;
/* 2*/ for( i = 0; i < N; i++ )
/* 3*/ for( j = i; j < N; j++ ) {
/* 4*/ ThisSum = 0;
/* 5*/ for( k = i; k <= j; k++ )
/* 6*/ ThisSum += A[ k ];
/* 7*/ if( ThisSum > MaxSum )
/* 8*/ MaxSum = ThisSum;
}
/* 9*/ return MaxSum;
}
The total number of times line 6 is executed is exactly

$$\sum_{i=0}^{N-1}\sum_{j=i}^{N-1}\sum_{k=i}^{j} 1
 = \sum_{i=0}^{N-1}\sum_{j=i}^{N-1}(j-i+1)
 = \sum_{i=0}^{N-1}\frac{(N-i)(N-i+1)}{2}
 = \sum_{m=1}^{N}\frac{m(m+1)}{2}
 = \frac{1}{2}\left(\frac{N(N+1)(2N+1)}{6}+\frac{N(N+1)}{2}\right)
 = \frac{N^3+3N^2+2N}{6},$$

which is O(N^3).
Solutions for the Maximum Subsequence
Sum Problem: Algorithm 2
• We can improve upon Algorithm 1
to avoid the cubic running time by
removing a for loop. Obviously,
this is not always possible, but in
this case there are an awful lot of
unnecessary computations
present in Algorithm 1.
• Notice that the partial sum for the range (i, j) is obtained from the partial sum for (i, j - 1) with a single addition (see the identity after the code), so the recomputation at lines 5 and 6 of Algorithm 1 is unduly expensive. Algorithm 2 is clearly O(N^2); the analysis is even simpler than before.
int MaxSubSum2( const int A[ ], int N ) {
int ThisSum, MaxSum, i, j;
/* 1*/ MaxSum = 0;
/* 2*/ for( i = 0; i < N; i++ ) {
/* 3*/ ThisSum = 0;
/* 4*/ for( j = i; j < N; j++ ) {
/* 5*/ ThisSum += A[ j ];
/* 6*/ if( ThisSum > MaxSum )
/* 7*/ MaxSum = ThisSum;
}
}
/* 8*/ return MaxSum;
}
The identity used above:

$$\sum_{k=i}^{j} A_k = A_j + \sum_{k=i}^{j-1} A_k$$
Solutions for the Maximum Subsequence
Sum Problem: Algorithm 3
• It is a recursive O(N log N) algorithm
using a divide-and-conquer
strategy. Divide part: Split the
problem into two roughly equal
subproblems, each half the size of
the original. The subproblems are
then solved recursively. Conquer
part: Patch together the two
solutions of the subproblems
possibly doing a small amount of
additional work, to arrive at a
solution for the whole problem.
• The maximum subsequence sum can (1) occur entirely in the left half of the input, (2) occur entirely in the right half, or (3) cross the middle and lie partly in both halves.
• Solve (1) and (2) recursively. For (3), add the largest sum in the first half that includes the last element of the first half to the largest sum in the second half that includes the first element of the second half.
• Example (first half: 4, -3, 5, -2; second half: -1, 2, 6, -2):
• (1) Best sum entirely in the first half: 6 (A0 - A2). (2) Best sum entirely in the second half: 8 (A5 - A6). (3) Best sum in the first half ending at its last element: 4 (A0 - A3); best sum in the second half starting at its first element: 7 (A4 - A6). Thus the best sum crossing the middle is 4 + 7 = 11 (A0 - A6), which is the answer. A small sketch of this crossing computation follows.
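The case (3) computation for this example, written out directly (my illustration; the variable names are mine, and the full recursive implementation follows on the next slides). It computes the best sum ending at the last element of the first half and the best sum starting at the first element of the second half, then adds them:

#include <stdio.h>

int main( void )
{
    const int A[ ] = { 4, -3, 5, -2, -1, 2, 6, -2 };
    const int Center = 3;                    /* last index of the first half  */
    const int Right = 7;                     /* last index of the whole array */
    int LeftSum = 0,  MaxLeftBorder = 0;
    int RightSum = 0, MaxRightBorder = 0;
    int i;

    for( i = Center; i >= 0; i-- )           /* best sum ending at A[Center] */
    {
        LeftSum += A[ i ];
        if( LeftSum > MaxLeftBorder )
            MaxLeftBorder = LeftSum;
    }
    for( i = Center + 1; i <= Right; i++ )   /* best sum starting at A[Center+1] */
    {
        RightSum += A[ i ];
        if( RightSum > MaxRightBorder )
            MaxRightBorder = RightSum;
    }
    printf( "left border sum = %d, right border sum = %d, crossing = %d\n",
            MaxLeftBorder, MaxRightBorder, MaxLeftBorder + MaxRightBorder );
    /* prints: left border sum = 4, right border sum = 7, crossing = 11 */
    return 0;
}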
Solutions for the Maximum Subsequence Sum
Problem: Algorithm 3 – Implementation I
/* Implementation */
static int MaxSubSum(const int A[ ], int Left, int Right) {
int MaxLeftSum, MaxRightSum;
int MaxLeftBorderSum, MaxRightBorderSum;
int LeftBorderSum, RightBorderSum;
int Center, i;
/* 1*/ if( Left == Right ) /* Base case */
/* 2*/ if( A[ Left ] > 0 )
/* 3*/ return A[ Left ];
else
/* 4*/ return 0;
/* Initial Call */
int MaxSubSum3( const int A[ ], int N ) {
return MaxSubSum( A, 0, N - 1 );
}
/* Utility Function */
static int Max3( int A, int B, int C ) {
return A > B ? A > C ? A : C : B > C ? B : C;
}
Solutions for the Maximum Subsequence Sum
Problem: Algorithm 3 – Implementation II
/* Implementation */
/* Calculate the center */
/* 5*/ Center = ( Left + Right ) / 2;
/* Make recursive calls */
/* 6*/ MaxLeftSum = MaxSubSum( A, Left, Center );
/* 7*/ MaxRightSum = MaxSubSum( A, Center + 1, Right );
/* Find the max subsequence sum in the left half where the */
/* subsequence spans the last element of the left half */
/* 8*/ MaxLeftBorderSum = 0; LeftBorderSum = 0;
/* 9*/ for( i = Center; i >= Left; i-- )
{
/*10*/ LeftBorderSum += A[ i ];
/*11*/ if( LeftBorderSum > MaxLeftBorderSum )
/*12*/ MaxLeftBorderSum = LeftBorderSum;
}
Solutions for the Maximum Subsequence Sum Problem: Algorithm 3 – Implementation III
/* Implementation */
/*13*/ MaxRightBorderSum = 0; RightBorderSum = 0;
/*14*/ for( i = Center + 1; i <= Right; i++ )
{
/*15*/ RightBorderSum += A[ i ];
/*16*/ if( RightBorderSum > MaxRightBorderSum )
/*17*/ MaxRightBorderSum = RightBorderSum;
}
/* The function Max3 returns the largest of */
/* its three arguments */
/*18*/ return Max3( MaxLeftSum, MaxRightSum,
/*19*/ MaxLeftBorderSum + MaxRightBorderSum );
}
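A minimal driver for Algorithm 3 (my sketch). It assumes the fragments on the three preceding slides are combined into one file, with lines 1-19 forming the single function MaxSubSum, and with MaxSubSum3 and Max3 defined alongside it before main:

#include <stdio.h>

int main( void )
{
    const int A[ ] = { 4, -3, 5, -2, -1, 2, 6, -2 };    /* example from the earlier slide */

    printf( "MaxSubSum3 = %d\n", MaxSubSum3( A, 8 ) );  /* prints 11 */
    return 0;
}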
Solutions for the Maximum Subsequence Sum Problem: Algorithm 3 – Analysis
• T(n) : time to solve a maximum subsequence sum problem of size n.
• T(1) = 1; constant amount of time to execute lines 1 to 4
• Otherwise, the program must perform two recursive calls, the two for
loops between lines 9 and 17, and some small amount of
bookkeeping, such as lines 5 and 18. The two for loops combine to
touch every element from A0 to AN-1, and there is constant work
inside the loops, so the time spent in lines 9 to 17 is O(N). The
remainder of the work is performed in lines 6 and 7 to solve two
subsequence problems of size N/2 (assuming N is even). The total
time for the algorithm then obeys:
• T(1) = 1
• T(N) = 2T(N/2) + O(N)
• We can replace the O(N) term in the recurrence with N; since T(N) will be expressed in Big-Oh notation anyway, this does not affect the answer.
• T(N) = 2(2T(N/4) + N/2) + N = 4T(N/4) + 2N
• = 4(2T(N/8) + N/4) + 2N = 8T(N/8) + 3N = ... = 2^k T(N/2^k) + kN
• If N = 2^k, then T(N) = 2^k T(1) + kN = N + N log N = O(N log N).
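A compact way to see the same closed form (a standard manipulation, not spelled out on the slide): assuming N = 2^k, divide the recurrence by N and telescope:

$$\frac{T(N)}{N} = \frac{T(N/2)}{N/2} + 1 = \frac{T(N/4)}{N/4} + 2 = \cdots = \frac{T(1)}{1} + \log N = 1 + \log N,$$

so T(N) = N log N + N = O(N log N), in agreement with the expansion above.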
Solutions for the Maximum Subsequence
Sum Problem: Algorithm 4
• Algorithm 4 is O(N).
• Why does the algorithm
actually work? It’s an
improvement over Algorithm
2 given the following:
• Observation 1: If A[i] < 0, then it cannot start an optimal subsequence. More generally, no subsequence with a negative sum can be a prefix of an optimal subsequence.
• Observation 2: If the running sum from position i through the current position j is negative, then i can advance to j + 1. The formal statement and proof are given after the code.
int MaxSubSum4(const int A[], int N)
{
int ThisSum, MaxSum, j;
/* 1*/ ThisSum = MaxSum = 0;
/* 2*/ for( j = 0; j < N; j++ )
{
/* 3*/ ThisSum += A[ j ];
/* 4*/ if( ThisSum > MaxSum )
/* 5*/ MaxSum = ThisSum;
/* 6*/ else if( ThisSum < 0 )
/* 7*/ ThisSum = 0;
}
/* 8*/ return MaxSum;
}
Observation 2 (formal statement): if

$$\sum_{k=i}^{j} A_k < 0,$$

then i can advance to j + 1.
Proof: let $p \in [i+1..j]$. Since j is the first index at which the running sum becomes negative, the prefix sum $\sum_{k=i}^{p-1} A_k$ is nonnegative, so any subsequence starting at p is not larger than the corresponding subsequence starting at i. Hence no optimal subsequence starts inside (i, j], and the next candidate starting position is j + 1.
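To see Observations 1 and 2 in action, here is an instrumented copy of the Algorithm 4 loop (my sketch, not Weiss's code), run on the example array from the Algorithm 3 slide; it prints ThisSum and MaxSum after every step:

#include <stdio.h>

int main( void )
{
    const int A[ ] = { 4, -3, 5, -2, -1, 2, 6, -2 };
    const int N = 8;
    int ThisSum = 0, MaxSum = 0, j;

    for( j = 0; j < N; j++ )
    {
        ThisSum += A[ j ];
        if( ThisSum > MaxSum )
            MaxSum = ThisSum;
        else if( ThisSum < 0 )
            ThisSum = 0;          /* running sum went negative: advance i past j */
        printf( "j=%d  A[j]=%2d  ThisSum=%2d  MaxSum=%2d\n",
                j, A[ j ], ThisSum, MaxSum );
    }
    printf( "Answer: %d\n", MaxSum );   /* prints 11 for this array */
    return 0;
}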
Logarithms in the Running Time
• The most frequent appearance of logarithms
centers around the following general rule: An
algorithm is O(log n) if it takes constant (O(1))
time to cut the problem size by a fraction (which
is usually 1/2).
• On the other hand, if constant time is required to
merely reduce the problem by a constant
amount (such as to make the problem smaller by
1), then the algorithm is O(n).
• We usually presume that the input has already been read; otherwise just reading it takes Ω(n) time.
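A tiny illustration of the rule (my example, not from the text): halving the problem size each step takes about log2(n) iterations, while shrinking it by 1 takes n iterations:

#include <stdio.h>

int main( void )
{
    long n = 1000000, halving = 0, decrementing = 0;
    long m;

    for( m = n; m > 1; m /= 2 )      /* problem size cut in half each step: O(log n) */
        halving++;
    for( m = n; m > 0; m-- )         /* problem size reduced by 1 each step: O(n)    */
        decrementing++;

    printf( "n = %ld: halving loop = %ld iterations, decrement loop = %ld iterations\n",
            n, halving, decrementing );
    return 0;
}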
Binary Search - I
• Definition: Given an
integer x and integers
A0, A1, . . . , AN-1, which
are presorted and already
in memory, find i such that
Ai = x, or return i = -1 if x
is not in the input.
• The loop is O(1) per iteration. It starts with High - Low = N - 1 and ends with High - Low ≤ -1. Every time through the loop the value High - Low is at least halved from its previous value; thus the loop is repeated at most ⌊log(N - 1)⌋ + 2 = O(log N) times (the bound is derived on the next slide).
typedef int ElementType;
#define NotFound (-1)
int BinarySearch(const ElementType A[],
ElementType X, int N){
int Low, Mid, High;
/* 1*/ Low = 0; High = N - 1;
/* 2*/ while( Low <= High ){
/* 3*/ Mid = ( Low + High ) / 2;
/* 4*/ if( A[ Mid ] < X )
/* 5*/ Low = Mid + 1;
/* 6*/ else if( A[ Mid ] > X )
/* 7*/ High = Mid - 1;
else
/* 8*/ return Mid;/* Found */
}
/* 9*/ return NotFound;
}
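A minimal usage example (my sketch; it assumes the BinarySearch definition above is in the same file, and that the array is already sorted):

#include <stdio.h>

int main( void )
{
    const ElementType A[ ] = { 1, 3, 5, 7, 9, 11, 13, 15 };

    printf( "index of 7: %d\n", BinarySearch( A, 7, 8 ) );   /* prints 3            */
    printf( "index of 8: %d\n", BinarySearch( A, 8, 8 ) );   /* prints -1 (NotFound) */
    return 0;
}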
Binary Search - II
• Initially, High – Low = N – 1 = d
• Assume 2^k ≤ d < 2^{k+1}.
• After each iteration, the new value of d is either High - Mid - 1 or Mid - 1 - Low, and both are bounded from above as shown below:
$$High - Mid - 1 = High - \left\lfloor\frac{Low + High}{2}\right\rfloor - 1 \le High - \frac{Low + High - 1}{2} - 1 = \frac{High - Low - 1}{2} \le \frac{d}{2}$$

$$Mid - 1 - Low = \left\lfloor\frac{Low + High}{2}\right\rfloor - 1 - Low \le \frac{Low + High}{2} - 1 - Low = \frac{High - Low}{2} - 1 \le \frac{d}{2}$$

So d is at least halved in every iteration:

Initially: 2^k ≤ d < 2^{k+1}
After 1 iteration: d < 2^k
After 2 iterations: d < 2^{k-1}
...
After k iterations: d < 2, i.e., d ≤ 1
• Hence, after at most k iterations d has dropped to 1. The loop then iterates at most 2 more times, with d taking the values 0 and -1 in that order, at which point it terminates. Thus the loop body executes at most k + 2 times.

$$2^k \le d < 2^{k+1} \;\Rightarrow\; k \le \log d < k + 1 \;\Rightarrow\; k = \lfloor \log d \rfloor = \lfloor \log(N - 1) \rfloor,$$

which gives the bound ⌊log(N - 1)⌋ + 2 quoted on the previous slide.
Euclid’s Algorithm
• It computes the greatest common
divisor. The greatest common
divisor (gcd) of two integers is the
largest integer that divides both.
Thus, gcd (50, 15) = 5.
• It computes gcd(M, N), assuming
M≥ N (If N > M, the first iteration of
the loop swaps them).
• Fact: If M > N, then M mod N < M/2.
• Proof: There are two cases:
• If N ≤ M/2, then since the remainder is always smaller than N, the theorem is true for this case.
• If N > M/2, then N goes into M exactly once, leaving a remainder of M - N < M/2, which proves the theorem.
unsigned int gcd(unsigned int M,
unsigned int N)
{
unsigned int Rem;
/* 1*/ while( N > 0 )
{
/* 2*/ Rem = M % N;
/* 3*/ M = N;
/* 4*/ N = Rem;
}
/* 5*/ return M;
}
How the arguments shrink (rem1 = M mod N, rem2 = N mod rem1):

  iteration     M             N
  After 1st     N             rem1 = M mod N < M/2   (also rem1 < N)
  After 2nd     rem1 < M/2    rem2 = N mod rem1 < N/2

• Thus both arguments are at least halved every two iterations, and the algorithm takes O(log N) iterations.
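A trace of the loop on the slide's own example, gcd(50, 15) = 5 (the printf calls are my addition; the loop body is identical to lines 2-4 above):

#include <stdio.h>

int main( void )
{
    unsigned int M = 50, N = 15, Rem;

    printf( "start: M=%u N=%u\n", M, N );
    while( N > 0 )
    {
        Rem = M % N;
        M = N;
        N = Rem;
        printf( "after iteration: M=%u N=%u\n", M, N );   /* (15,5) then (5,0) */
    }
    printf( "gcd = %u\n", M );    /* prints 5 */
    return 0;
}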
Exponentiation - I
• Algorithm pow(X, N) raises an
integer to an integer power.
• Count the number of
multiplications as the
measurement of running time.
• The obvious way to compute X^N uses N - 1 multiplications.
• Lines 1 to 4 handle the base cases of the recursion.
• X^N = X^{N/2} * X^{N/2} if N is even
• X^N = X^{(N-1)/2} * X^{(N-1)/2} * X if N is odd
• The number of multiplications required is clearly at most 2 log N, because at most two multiplications are needed to halve the problem size.
#define IsEven( N ) (( N )%2==0)
long int pow( long int X,
unsigned int N )
{
/* 1*/ if( N == 0 )
/* 2*/ return 1;
/* 3*/ if( N == 1 )
/* 4*/ return X;
/* 5*/ if( IsEven( N ) )
/* 6*/ return pow(X*X, N/2);
else
/* 7*/ return pow(X*X, N/2)*X;
}
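To check the "at most 2 log N multiplications" claim concretely, here is a copy of the routine with a multiplication counter added (CountingPow and MulCount are my names, not Weiss's):

#include <stdio.h>

static unsigned long MulCount;        /* number of multiplications performed */

static long int CountingPow( long int X, unsigned int N )
{
    if( N == 0 )
        return 1;
    if( N == 1 )
        return X;
    if( N % 2 == 0 )
    {
        MulCount += 1;                /* the X * X below */
        return CountingPow( X * X, N / 2 );
    }
    else
    {
        MulCount += 2;                /* X * X and the final * X */
        return CountingPow( X * X, N / 2 ) * X;
    }
}

int main( void )
{
    unsigned int N;

    for( N = 3; N <= 24; N += 7 )     /* a few odd and even exponents */
    {
        MulCount = 0;
        printf( "2^%-2u = %8ld  multiplications = %lu\n",
                N, CountingPow( 2, N ), MulCount );
    }
    return 0;
}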
Exponentiation - II
• It is interesting to note how much the code can be tweaked.
• Lines 3 and 4 are unnecessary (Line 7 does the right thing).
• Line 7
/* 7*/ return pow(X*X, N/2)*X;
can be rewritten as
/* 7*/ return pow(X, N-1)*X;
• Line 6, on the other hand,
/* 6*/ return pow(X*X, N/2);
cannot be substituted by any of the following:
/*6a*/ return( pow( pow( X, 2 ), N/2 ) );
/*6b*/ return( pow( pow( X, N/2 ), 2 ) );
/*6c*/ return( pow( X, N/2 ) * pow( X, N/2 ) );
Both lines 6a and 6b are incorrect because, when N is 2, the recursive call pow(X, 2) makes no progress and an infinite loop results. Line 6c is correct but hurts the efficiency, because there are now two recursive calls of size N/2 instead of only one; an analysis shows that the running time is no longer O(log N) (see below).
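For completeness (my addition, using the same style of analysis as for Algorithm 3): with line 6c the number of multiplications T(N) satisfies roughly

$$T(N) = 2\,T(N/2) + 1 \;\Longrightarrow\; T(N) = \Theta(N),$$

so the routine performs a linear number of multiplications, no better than the naive method.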