2. INSTRUCTIONS
All students are instructed to maintain discipline throughout the lecture.
If any student has a query, raise your hand.
At the end of the session, give your attendance on cymsys.
Rizvi College of Start-up & Incubation
3. CONTENTS
● Performance analysis
● Space and time complexity
● Growth of functions:
○ Big-Oh,
○ Omega,
○ Theta notation
● Mathematical background for algorithm analysis
● Complexity classes:
○ Definitions of P, NP, NP-Hard, NP-Complete
● Analysis of selection sort
● Analysis of insertion sort
● Recurrences:
○ The substitution method
○ The recursion tree method
○ The master method
4. What is an algorithm?
● An algorithm is a step-by-step procedure for solving a problem.
● In everyday language, an algorithm is defined as a sequence of statements used to perform a task.
● In computer science, an algorithm can be defined as follows:
○ An algorithm is a sequence of unambiguous instructions for solving a problem, which can be implemented (as a program) on a computer.
5. What is an algorithm?
● Algorithms are used to convert our solution to a problem into step-by-step statements.
● These statements can be converted into computer programming instructions, which form a program.
● This program is executed by a computer to produce a solution.
● The program takes the required data as input, processes it according to the program instructions, and finally produces a result, as shown in the following picture.
6. Specifications of Algorithms
Every algorithm must satisfy the following specifications...
1. Input - Every algorithm must take zero or more input values from an external source.
2. Output - Every algorithm must produce an output as its result.
3. Definiteness - Every statement/instruction in an algorithm must be clear and unambiguous (have only one interpretation).
4. Finiteness - For all cases, the algorithm must produce its result within a finite number of steps.
5. Effectiveness - Every instruction must be basic enough to be carried out, and it must be feasible.
7. Example for an Algorithm
Let us consider the following problem for finding the largest value in a given list of values.
Problem Statement : Find the largest number in a given list of numbers.
Input : A list of positive integer numbers. (List must contain at least one number).
Output : The largest number in the given list of positive integer numbers.
Consider the given list of numbers as 'L' (input), and the largest number as 'max' (Output).
Algorithm
1. Step 1: Define a variable 'max' and initialize it with '0'.
2. Step 2: Compare the first number (say 'x') in the list 'L' with 'max'; if 'x' is larger than 'max', set 'max' to 'x'.
3. Step 3: Repeat Step 2 for all numbers in the list 'L'.
4. Step 4: Display the value of 'max' as the result.
8. What is Performance Analysis of an algorithm?
● If we want to go from city "A" to city "B", there can be many ways of doing
this.
● We can go by flight, by bus, by train and also by bicycle.
● Depending on the availability and convenience, we choose the one which suits
us.
● Similarly, in computer science, there are multiple algorithms to solve a
problem.
● When we have more than one algorithm to solve a problem, we need to select
the best one.
● Performance analysis helps us to select the best algorithm from multiple
algorithms to solve a problem.
● When there are multiple alternative algorithms to solve a problem, we analyze
them and pick the one which is best suitable for our requirements.
9. What is Performance Analysis of an algorithm?
● The formal definition is as follows:
○ Performance analysis of an algorithm is the process of making an evaluative judgement about algorithms.
● It can also be defined as follows:
○ Performance analysis of an algorithm means predicting the resources required by the algorithm to perform its task.
○ That means when we have multiple algorithms to solve a problem, we need to select a suitable algorithm to solve that problem.
○ We compare algorithms that solve the same problem in order to select the best one.
○ To compare algorithms, we use a set of parameters such as the memory required by the algorithm, its execution speed, how easy it is to understand, how easy it is to implement, etc.
10. What is Performance Analysis of an algorithm?
Generally, the performance of an algorithm depends on the following elements...
1. Whether the algorithm provides an exact solution to the problem.
2. Whether it is easy to understand.
3. Whether it is easy to implement.
4. How much space (memory) it requires to solve the problem.
5. How much time it takes to solve the problem, etc.
When we want to analyse an algorithm, we consider only the space and time required by that particular
algorithm and we ignore all the remaining elements.
Based on this information, performance analysis of an algorithm can also be defined as follows...
Performance analysis of an algorithm is the process of calculating space and time required by that
algorithm.
11. What is Performance Analysis of an algorithm?
Performance analysis of an algorithm is performed by using
the following measures...
1. Space required to complete the task of that algorithm
(Space Complexity). It includes program space and data
space
2. Time required to complete the task of that algorithm
(Time Complexity)
12. What is Space complexity?
When we design an algorithm to solve a problem, it needs some computer memory to
complete its execution. For any algorithm, memory is required for the following
purposes...
1. To store program instructions.
2. To store constant values.
3. To store variable values.
4. And for a few other things like function calls, jump statements, etc.
Space complexity of an algorithm can be defined as follows...
The total amount of computer memory required by an algorithm to complete its execution
is called the space complexity of that algorithm.
13. What is Space complexity?
Generally, when a program is under execution it uses the computer memory for THREE reasons.
They are as follows...
1. Instruction Space: It is the amount of memory used to store the compiled version of the
instructions.
2. Environmental Stack: It is the amount of memory used to store information of partially
executed functions at the time of function call.
3. Data Space: It is the amount of memory used to store all the variables and constants.
Note - When we want to perform analysis of an algorithm based on its Space complexity, we
consider only Data Space and ignore Instruction Space as well as Environmental Stack.
That means we calculate only the memory required to store Variables, Constants, Structures, etc.,
14. What is Space complexity?
To calculate the space complexity, we must know the memory required to store values of different data types (this depends
on the compiler). For example, a 16-bit C compiler requires the following...
1. 2 bytes to store an integer value.
2. 4 bytes to store a floating-point value.
3. 1 byte to store a character value.
4. 6 or 8 bytes to store a double value.
Consider the following piece of code..
int square(int a)
{
    return a * a;
}
In the above piece of code, 2 bytes of memory are required to store the variable 'a', and another 2 bytes are used for the
return value. That means it requires a total of 4 bytes of memory to complete its execution, and these 4 bytes
are fixed for any input value of 'a'. This space complexity is said to be Constant Space
Complexity.
15. What is Space complexity?
Consider the following piece of code...
int sum(int A[ ], int n)
{
    int sum = 0, i;
    for (i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}
16. What is Space complexity?
In the above piece of code, it requires:
'n*2' bytes of memory to store the array variable 'A[ ]'
2 bytes of memory for the integer parameter 'n'
4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
2 bytes of memory for the return value.
That means it requires a total of '2n + 8' bytes of memory to complete its execution.
Here, the total amount of memory required depends on the value of 'n'. As 'n'
increases, the space required also increases proportionately. This type of space
complexity is said to be Linear Space Complexity.
If the amount of space required by an algorithm increases as the input value increases, then
that space complexity is said to be Linear Space Complexity.
17. What is Time complexity?
● Every algorithm requires some amount of computer time to execute its
instructions and perform the task.
● This computer time required is called time complexity.
● The time complexity of an algorithm can be defined as follows...
● The time complexity of an algorithm is the total amount of time required by
an algorithm to complete its execution.
18. What is Time complexity?
Generally, the running time of an algorithm depends upon the following...
1. Whether it is running on Single processor machine or Multi processor machine.
2. Whether it is a 32 bit machine or 64 bit machine.
3. Read and Write speed of the machine.
4. The amount of time required by an algorithm to perform Arithmetic operations,
logical operations, return value and assignment operations etc.,
5. Input data
Note - When we calculate time complexity of an algorithm, we consider only input
data and ignore the remaining things, as they are machine dependent. We check only,
how our program is behaving for the different input values to perform all the
operations like Arithmetic, Logical, Return value and Assignment etc.,
19. What is Time complexity?
Calculating the time complexity of an algorithm based on the system configuration is a very difficult task, because the
configuration changes from one system to another. To solve this problem, we assume a model machine with a
specific configuration, so that we can calculate a generalized time complexity according to that model machine.
To calculate the time complexity of an algorithm, we need to define a model machine. Let us assume a machine with following
configuration...
1. It is a Single processor machine
2. It is a 32 bit Operating System machine
3. It performs sequential execution
4. It requires 1 unit of time for Arithmetic and Logical operations
5. It requires 1 unit of time for Assignment and Return value
6. It requires 1 unit of time for Read and Write operations
Now, we calculate the time complexity of following example code by using the above-defined model machine...
20. What is Time complexity?
Consider the following piece of code...
int sum(int a, int b)
{
    return a + b;
}
In the above sample code, it requires 1 unit of time to calculate a + b and 1 unit of time to return the value.
That means it takes a total of 2 units of time to complete its execution, and this does not change with the
input values of a and b. For all input values, it requires the same amount of time, i.e., 2 units.
If a program requires the same fixed amount of time for all input values, then its time complexity is said to be
Constant Time Complexity.
21. What is Time complexity?
Consider the following piece of code...
int sum(int A[], int n)
{
    int sum = 0, i;
    for (i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}
For the above code, time complexity can be calculated as follows...
23. What is Time complexity?
● In the above calculation:
○ Cost is the amount of computer time required for a single execution of each line.
○ Repetition is the number of times each line is executed.
○ Total is the amount of computer time required by each line over all its repetitions.
● So the above code requires '4n + 4' units of computer time to complete the task.
● Here the exact time is not fixed; it changes with the value of n. If we increase the
value of n, the time required also increases linearly.
● In total it takes '4n + 4' units of time to complete its execution, and this is
Linear Time Complexity.
● If the amount of time required by an algorithm increases as the input value increases,
then that time complexity is said to be Linear Time Complexity.
29. What is Asymptotic Notation?
● Whenever we want to perform analysis of an algorithm, we need to calculate the
complexity of that algorithm.
● But when we calculate the complexity of an algorithm it does not provide the exact
amount of resource required.
● So instead of taking the exact amount of resources, we represent the complexity in a
general form (notation) that captures the basic nature of the algorithm.
● We use that general form (notation) for the analysis process.
● Asymptotic notation of an algorithm is a mathematical representation of its complexity.
● Note - In asymptotic notation, when we want to represent the complexity of an
algorithm, we use only the most significant terms in the complexity of that algorithm
and ignore least significant terms in the complexity of that algorithm (Here complexity
can be Space Complexity or Time Complexity).
30. What is Asymptotic Notation?
For example, consider the following time complexities of two algorithms...
● Algorithm 1 : 5n² + 2n + 1
● Algorithm 2 : 10n² + 8n + 3
● Generally, when we analyze an algorithm, we consider the time complexity for larger values of
input data (i.e., larger values of 'n').
● In the above two time complexities, for larger values of 'n' the term '2n + 1' in Algorithm 1 is
less significant than the term '5n²', and the term '8n + 3' in Algorithm 2 is less
significant than the term '10n²'.
● Here, for larger values of 'n', the value of the most significant terms (5n² and 10n²) is much larger
than the value of the least significant terms (2n + 1 and 8n + 3).
● So for larger values of 'n' we ignore the least significant terms when representing the overall time
required by an algorithm.
● In asymptotic notation, we use only the most significant term to represent the time
complexity of an algorithm.
31. What is Asymptotic Notation?
Majorly, we use THREE types of Asymptotic Notations and those are as follows...
1. Big - Oh (O)
2. Big - Omega (Ω)
3. Big - Theta (Θ)
32. Big - Oh Notation (O)
● Big - Oh notation is used to define the upper bound of an algorithm in terms of Time
Complexity.
● That means Big - Oh notation always indicates the maximum time required by an algorithm
for all input values.
● That means Big - Oh notation describes the worst case of an algorithm's time complexity.
● Big - Oh Notation can be defined as follows...
Consider a function f(n) as the time complexity of an algorithm, and g(n) as its most significant term.
If there exist constants C > 0 and n0 >= 1 such that f(n) <= C g(n) for all n >= n0,
then we can represent f(n) as O(g(n)).
f(n) = O(g(n))
Consider the following graph drawn for the values of f(n) and C g(n), with the input size (n)
on the X-axis and the time required on the Y-axis.
33. Big - Oh Notation (O)
In the graph, after a particular input value n0, C g(n) is always greater than f(n), which indicates the
algorithm's upper bound.
34. Big - Oh Notation (O)
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
To represent f(n) as O(g(n)), we must find constants C > 0 and n0 >= 1 such that
f(n) <= C g(n) for all n >= n0.
f(n) <= C g(n)
⇒ 3n + 2 <= C n
The above condition is TRUE for C = 4 and all n >= 2.
So, using Big - Oh notation, we can represent the time complexity as follows...
3n + 2 = O(n)
35. Big - Omega Notation (Ω)
● Big - Omega notation is used to define the lower bound of an algorithm in terms of Time Complexity.
● That means Big-Omega notation always indicates the minimum time required by an algorithm for all input
values.
● That means Big-Omega notation describes the best case of an algorithm time complexity.
● Big - Omega Notation can be defined as follows...
Consider a function f(n) as the time complexity of an algorithm, and g(n) as its most significant
term. If there exist constants C > 0 and n0 >= 1 such that f(n) >= C g(n) for all n >= n0,
then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
Consider the following graph drawn for the values of f(n) and C g(n), with the input size (n) on the
X-axis and the time required on the Y-axis.
36. Big - Omega Notation (Ω)
In the graph, after a particular input value n0, C g(n) is always less than f(n), which
indicates the algorithm's lower bound.
37. Big - Omega Notation (Ω)
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
To represent f(n) as Ω(g(n)), we must find constants C > 0 and n0 >= 1 such that
f(n) >= C g(n) for all n >= n0.
f(n) >= C g(n)
⇒ 3n + 2 >= C n
The above condition is TRUE for C = 1 and all n >= 1.
So, using Big - Omega notation, we can represent the time complexity as follows...
3n + 2 = Ω(n)
38. Big - Theta Notation (Θ)
● Big - Theta notation is used to define a tight bound of an algorithm in terms of Time
Complexity.
● That means Big - Theta notation bounds the time required by an algorithm both from above
and from below for all sufficiently large input values.
● It is often loosely associated with the average case of an algorithm's time complexity.
● Big - Theta Notation can be defined as follows...
○ Consider a function f(n) as the time complexity of an algorithm, and g(n) as its most
significant term. If there exist constants C1 > 0, C2 > 0 and n0 >= 1 such that
C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, then we can represent f(n) as Θ(g(n)).
○ f(n) = Θ(g(n))
● Consider the following graph drawn for the values of f(n), C1 g(n) and C2 g(n), with the input size (n)
on the X-axis and the time required on the Y-axis.
39. Big - Theta Notation (Θ)
In the graph, after a particular input value n0, C1 g(n) is always less than f(n) and C2 g(n) is always greater than
f(n), which indicates the algorithm's tight bound.
40. Big - Theta Notation (Θ)
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
To represent f(n) as Θ(g(n)), we must find constants C1 > 0, C2 > 0 and n0 >= 1 such that
C1 g(n) <= f(n) <= C2 g(n) for all n >= n0.
C1 g(n) <= f(n) <= C2 g(n)
⇒ C1 n <= 3n + 2 <= C2 n
The above condition is TRUE for C1 = 1, C2 = 4 and all n >= 2.
So, using Big - Theta notation, we can represent the time complexity as follows...
3n + 2 = Θ(n)
42. Insertion sort
The insertion sort algorithm arranges a list of elements in a particular
order. In every iteration, insertion sort moves one element
from the unsorted portion to the sorted portion, until all the elements
in the list are sorted.
Step by Step Process
The insertion sort algorithm is performed using the following steps...
● Step 1: Assume that the first element in the list is in the sorted portion
and all the remaining elements are in the unsorted portion.
● Step 2: Take the first element from the unsorted portion and insert
it into the sorted portion at its correct position in the specified order.
● Step 3: Repeat the above process until all the elements from the
unsorted portion are moved into the sorted portion.
//Insertion sort logic
for i = 1 to size-1 {
    temp = list[i];
    j = i - 1;
    while ((j >= 0) && (list[j] > temp))
    {
        list[j+1] = list[j];
        j = j - 1;
    }
    list[j+1] = temp;
}
46. Insertion sort
Complexity of the Insertion Sort Algorithm
To sort an unsorted list with 'n' elements, we need to make
(1 + 2 + 3 + ... + (n-1)) = n(n-1)/2 comparisons in the worst case. If
the list is already sorted, then it requires only 'n - 1' comparisons.
Worst Case : O(n²)
Best Case : Ω(n)
Average Case : Θ(n²)
47. Running time
• The running time depends on the input: an already
sorted sequence is easier to sort.
• Major Simplifying Convention: Parameterize the
running time by the size of the input, since short
sequences are easier to sort than long ones.
⮚TA(n) = running time of A on length-n inputs
• Generally, we seek upper bounds on the running
time, to have a guarantee of performance.
48. Kinds of analyses
Worst-case: (usually)
• T(n) = maximum time of algorithm on any
input of size n.
Average-case: (sometimes)
• T(n) = expected time of algorithm over all
inputs of size n.
• Need assumption of statistical distribution of
inputs.
Best-case: (NEVER)
• Cheat with a slow algorithm that works fast
on some input.
49. Machine-independent time
What is insertion sort worst-case time?
BIG IDEAS:
• Ignore machine dependent constants,
otherwise impossible to verify and to compare algorithms
• Look at growth of T(n) as n → ∞ .
“Asymptotic Analysis”
50. Θ-notation
• Drop low-order terms; ignore leading constants.
• Example: 3n³ + 90n² – 5n + 6046 = Θ(n³)
DEF:
Θ(g(n)) = { f (n) : there exist positive constants c1, c2, and
n0 such that 0 ≤ c1 g(n) ≤ f (n) ≤ c2 g(n)
for all n ≥ n0 }
Basic manipulations:
51. Asymptotic performance
[Graph: running time T(n) versus input size n, with crossover point n0.]
• Asymptotic analysis is a useful tool
to help structure our thinking
toward better algorithms.
• We shouldn't ignore
asymptotically slower algorithms,
however.
• Real-world design situations often
call for a careful balancing of concerns.
When n gets large enough, a Θ(n²) algorithm always beats a Θ(n³) algorithm.
52. Insertion sort analysis
Worst case: Input reverse sorted.
Average case: All permutations equally likely.
Is insertion sort a fast sorting algorithm?
• Moderately so, for small n.
• Not at all, for large n.
[arithmetic series]
53. Selection Sort Algorithm
The Selection Sort algorithm is used to arrange a list of elements in a particular order
(ascending or descending). In selection sort, the first element in the list is
selected and compared repeatedly with all the remaining elements in the list. If
any element is smaller than the selected element (for ascending order), the two
are swapped, so that the first position ends up holding the smallest element of the
list. Next, we select the element at the second position in the list and compare it
with all the remaining elements, swapping whenever a smaller element is found.
This procedure is repeated until the entire list is sorted.
54. Step by Step Process
The selection sort algorithm is performed using the following steps...
● Step 1: Select the first element of the list (i.e., the element at the first position in the list).
● Step 2: Compare the selected element with all the other elements in the list.
● Step 3: In every comparison, if any element is found to be smaller than the selected element (for ascending order), the two
are swapped.
● Step 4: Repeat the same procedure with the element at the next position in the list, until the entire list is sorted.
58. Complexity of the Selection Sort Algorithm
To sort an unsorted list with 'n' elements, we need to make ((n-1) + (n-2) + (n-3) + ... + 1) = n(n-1)/2 comparisons
in the worst case. Selection sort makes the same number of comparisons even if the list is already sorted,
which is why its best case is also quadratic.
Worst Case : O(n²)
Best Case : Ω(n²)
Average Case : Θ(n²)
60. Recurrence Relation
● A recurrence relation is an equation which represents a sequence
based on some rule.
● It helps in finding the subsequent term (next term) dependent upon the
preceding term (previous term).
● If we know the previous term in a given series, then we can easily
determine the next term.
61. Recurrence Relation Definition
● When we speak about a standard pattern, all the terms in the relation or
equation have the same characteristics.
● That means that, given a value of 'n', the other values can be determined by
simply substituting 'n' into the relation.
● The relation should be organised and accurate; this is known as the
simplest form.
● In the simplest form of such a relation, the next term depends
only on the previous term.
● The sequence or series generated by a recurrence relation is called a
recurrence sequence.
62. Recurrence Relation Definition
A recurrence is an equation or inequality that describes a function in
terms of its values on smaller inputs. To solve a recurrence relation
means to obtain a function defined on the natural numbers that satisfies
the recurrence.
For example, the worst-case running time T(n) of the MERGE
SORT procedure is described by the recurrence
T(n) = 2T(n/2) + Θ(n), with T(1) = Θ(1).
63. Solving Recurrence Equations
● A recurrence is an equation or inequality that describes a function in terms of its value on
smaller inputs. Recurrences arise naturally in the divide-and-conquer paradigm.
● Let us consider T(n) to be the running time on a problem of size n.
● If the problem size is small enough, say n < c where c is a constant, the straightforward
solution takes constant time, which is written as θ(1). Otherwise, suppose the division of the
problem yields 'a' sub-problems, each of size n/b.
● To solve the sub-problems, the required time is a·T(n/b). If the time required for
dividing the problem is D(n) and the time required for combining the results of the sub-problems is C(n), the
recurrence relation can be represented as:
T(n) = θ(1) if n < c, and T(n) = a·T(n/b) + D(n) + C(n) otherwise.
64. Solving Recurrence Equations
A recurrence relation can be solved using the following methods −
● Substitution Method − In this method, we guess a bound and
using mathematical induction we prove that our assumption was
correct.
● Recursion Tree Method − In this method, a recurrence tree is
formed where each node represents the cost.
● Master’s Theorem − This is another important technique to find
the complexity of a recurrence relation.
65. Solving Recurrence Equations
There are four methods for solving Recurrence:
1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method
66. Substitution Method
The Substitution Method Consists of two main steps:
1. Guess the Solution.
2. Use mathematical induction to find the boundary
condition and show that the guess is correct.
67. Substitution Method
In the substitution method, we make a guess for the solution, and then we use mathematical induction to prove
whether the guessed answer is right or wrong.
For example, let us take the recurrence T(n) = 2T(n/2) + n.
We guess the solution T(n) = O(n log n). Now we use induction to prove our guess.
We need to prove that T(n) <= c·n·log n for some constant c. Assume that this holds for all values smaller than n,
in particular for n/2 (logs are base 2, so log 2 = 1).
T(n) = 2T(n/2) + n
<= 2·(c·(n/2)·log(n/2)) + n
= c·n·log(n/2) + n
= c·n·log n - c·n·log 2 + n
= c·n·log n - c·n + n
<= c·n·log n (for every c >= 1)
68. Recurrence Tree Method
● In the recursion tree method, a recursion tree is drawn, and we calculate the
time taken by every level of the tree.
● After analyzing each level, we sum up the work done at all levels.
● To draw a recursion tree, start from the given recurrence and keep
expanding until a pattern emerges across the levels.
● The pattern typically follows an arithmetic or geometric series.
● For example, consider the recurrence relation
T(n) = T(n/4) + T(n/2) + c·n²
70. Steps to Solve Recurrence Relations Using Recursion Tree
In the recursion tree method, the time required to solve a subproblem is referred to as the
cost of the subproblem. So wherever the word "cost" appears in connection with the recursion tree,
it means nothing but the time required to solve a subproblem.
To solve a recurrence relation using the recursion tree method, a few steps must be followed.
They are,
1. Draw the recursion tree for the given recurrence relation.
2. Calculate the height of the recursion tree formed.
3. Calculate the cost(time required to solve all the subproblems at a level) at each level.
4. Calculate the total number of nodes at each level in the recursion tree.
5. Sum up the cost of all the levels in the recursion tree.
76. Solve the following recurrence relation using recursion tree method-
T(n) = 2T(n/2) + n
77. Step-01:
Draw a recursion tree based on the given recurrence relation.
The given recurrence relation shows-
● A problem of size n will get divided into 2 sub-problems of size n/2.
● Then, each sub-problem of size n/2 will get divided into 2 sub-problems of
size n/4 and so on.
● At the bottom most layer, the size of sub-problems will reduce to 1.
The given recurrence relation shows-
● The cost of dividing a problem of size n into its 2 sub-problems and then
combining its solution is n.
● The cost of dividing a problem of size n/2 into its 2 sub-problems and then
combining its solution is n/2 and so on.
79. Step-02:
Determine the cost of each level:
● Cost of level-0 = n
● Cost of level-1 = n/2 + n/2 = n
● Cost of level-2 = n/4 + n/4 + n/4 + n/4 = n, and so on.
Step-03:
Determine the total number of levels in the recursion tree:
● Size of sub-problem at level-0 = n/2⁰
● Size of sub-problem at level-1 = n/2¹
● Size of sub-problem at level-2 = n/2²
Continuing in a similar manner, the size of the sub-problem at level-i = n/2ⁱ.
80. Suppose that at level-x (the last level) the size of the sub-problem becomes 1. Then:
n / 2ˣ = 1
2ˣ = n
Taking log on both sides, we get:
x·log 2 = log n
x = log₂n
∴ Total number of levels in the recursion tree = log₂n + 1
81. Step-04:
Determine the number of nodes in the last level:
● Level-0 has 2⁰ nodes, i.e., 1 node
● Level-1 has 2¹ nodes, i.e., 2 nodes
● Level-2 has 2² nodes, i.e., 4 nodes
Continuing in a similar manner, level-log₂n has 2^(log₂n) = n nodes.
Step-05:
Determine the cost of the last level:
Cost of last level = n x T(1) = θ(n)
82. Step-06:
Add costs of all the levels of the recursion tree and simplify
the expression so obtained in terms of asymptotic notation-
83. = n x log₂n + θ(n)
= n·log₂n + θ(n)
= θ(n·log₂n)
87. Recursion tree
Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.
[Figure, built up over slides 87-94: the root costs cn; its two children cost cn/2 each; the four grandchildren cost cn/4 each; the leaves cost Θ(1).]
● Every level of the tree sums to cn (cn/2 + cn/2 = cn; cn/4 taken four times = cn; and so on).
● Height of the tree: h = lg n.
● #leaves = n, contributing Θ(n) in total.
● Total = Θ(n lg n)
95. Conclusions
• Θ(n lg n) grows more slowly than Θ(n²).
• Therefore, merge sort asymptotically
beats insertion sort in the worst case.
• In practice, merge sort beats insertion
sort for n > 30 or so.
97. Types of Problems
Tractable : Problems that can be solved in a
reasonable (polynomial) amount of time.
Intractable : Some problems are intractable; as they
grow large, we are unable to solve them in a
reasonable amount of time.
98. Types of Problems
Optimization Problems – An optimization problem is one which asks,
"What is the optimal solution to problem X?"
Examples:
• 0-1 Knapsack
• Fractional Knapsack
• Minimum Spanning Tree
Decision Problems – A decision problem is one with a yes/no answer.
Examples:
• Does a graph G have an MST of weight W?
99. Optimization/Decision Problems
● An optimization problem tries to find an optimal solution.
● A decision problem tries to answer a yes/no question.
● Many problems have both decision and optimization
versions.
E.g.: the Traveling Salesman Problem
○ optimization: find a Hamiltonian cycle of minimum weight
○ decision: is there a Hamiltonian cycle of weight <= k?
100. Decision problems
The most commonly analyzed problems in theoretical computer science
are decision problems—the kinds of problems that can be posed as yes–
no questions. The primality example above, for instance, is an example
of a decision problem as it can be represented by the yes–no question "is
the natural number n prime". In terms of the theory of computation, a
decision problem is represented as the set of input strings that a
computer running a correct algorithm would answer "yes" to.
101. Decision problems
A decision problem has only two possible outputs, yes or no (alternatively, 1 or 0) on any input.
102. Complexity Classes
In computer science, there exist problems whose efficient solutions
have not yet been found. To reason about them, problems are divided into classes known as
Complexity Classes. In complexity theory, a Complexity Class is
a set of problems with related complexity. These classes help
scientists group problems based on how much time and space
is required to solve them and to verify their solutions.
103. Types of Complexity Classes
1. P Class
2. NP Class
3. NP-hard
4. NP-complete
104. P Class
P: the class of problems that have polynomial-time deterministic algorithms.
● That is, they are solvable in O(p(n)), where p(n) is a polynomial on n
● A deterministic algorithm is (essentially) an ordinary algorithm: every step
is uniquely determined, so it always computes the correct answer along a
single computation path
105. P class
The P in the P class stands for Polynomial Time. It is the collection of decision
problems (problems with a "yes" or "no" answer) that can be solved by a
deterministic machine in polynomial time.
Features:
● The solution to P problems is easy to find.
● P is the class of computational problems that are solvable and tractable.
Tractable means that the problems can be solved in theory as well as in
practice. Problems that can be solved in theory but not in practice
are known as intractable.
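A classic member of P is s–t reachability: "is there a path from s to t in graph G?" A breadth-first search answers it in O(V + E) time, which is polynomial in the input size. A minimal sketch (illustrative, not from the slides):

```python
from collections import deque

# Decision problem in P: "does graph G contain a path from s to t?"
# BFS visits each vertex and edge at most once -- O(V + E), polynomial time.
def reachable(graph, s, t):
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True            # reached the target: answer "yes"
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False                   # exhausted the component: answer "no"
```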
106. P class
The advantage of considering the class of polynomial-time
algorithms is that all reasonable deterministic single-
processor models of computation can simulate each other
with at most polynomial overhead.
107. NP Class
NP: the class of decision problems that are solvable in polynomial time on a
nondeterministic machine (or with a nondeterministic algorithm).
● (A deterministic computer is what we know.)
● A nondeterministic computer is one that can "guess" the right answer or
solution.
● Think of a nondeterministic computer as a parallel machine that can freely
spawn an infinite number of processes.
● Thus NP can also be thought of as the class of problems "whose solutions
can be verified in polynomial time".
● Note that NP stands for "Nondeterministic Polynomial-time".
108. NP Class
The NP in NP class stands for Non-deterministic Polynomial Time. It is the
collection of decision problems that can be solved by a non-deterministic machine
in polynomial time.
Features:
● The solutions of NP-class problems can be hard to find, since they are found by
a non-deterministic machine, but the solutions are easy to verify.
● Problems of NP can be verified by a Turing machine in polynomial time.
109. NP Class
The class NP consists of those problems that are verifiable in polynomial time. NP
is the class of decision problems for which it is easy to check the correctness of a
claimed answer, with the aid of a little extra information. Hence, we aren't asking
for a way to find a solution, but only to verify that an alleged solution really is
correct.
P is the class of problems that are solvable by a deterministic Turing machine in
polynomial time and NP is the class of problems that are solvable by a
nondeterministic Turing machine in polynomial time.
110. NP Class
This class contains many problems that one would like to be able to solve effectively:
1. Boolean Satisfiability Problem (SAT).
2. Hamiltonian Path Problem.
3. Graph coloring.
111. NP-hard class
● What does NP-hard mean?
A lot of times you can solve a problem by reducing it to a
different problem. I can reduce Problem B to Problem A if,
given a solution to Problem A, I can easily construct a
solution to Problem B. (In this case, "easily" means "in
polynomial time".)
● A problem is NP-hard if all problems in NP are polynomial-
time reducible to it.
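A classic small example of such a polynomial-time reduction (sketched in Python for illustration; the brute-force solver is hypothetical and exponential, used only on tiny graphs): a graph with n vertices has a vertex cover of size k exactly when it has an independent set of size n − k, so a solver for one decision problem immediately answers the other.

```python
from itertools import combinations

# Brute-force decision solver (exponential; for tiny illustration only):
# "does G = (vertices, edges) have an independent set of size >= k?"
def has_independent_set(vertices, edges, k):
    for subset in combinations(vertices, k):
        s = set(subset)
        # Independent set: no edge has both endpoints inside the set.
        if all(not (u in s and v in s) for u, v in edges):
            return True
    return False

# Polynomial-time reduction: S is a vertex cover of size k
# iff its complement (size n - k) is an independent set.
def has_vertex_cover(vertices, edges, k):
    return has_independent_set(vertices, edges, len(vertices) - k)
```

The reduction itself only rewrites the question (replace k by n − k); all the computational hardness stays inside the solver being called.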
112. NP-hard class
An NP-hard problem is at least as hard as the hardest problem in NP: NP-hard is the class of problems such that
every problem in NP reduces to them in polynomial time.
Features:
● Not all NP-hard problems are in NP.
● They can take a long time to check. This means that if a solution for an NP-hard problem is given, it may
take a long time to check whether it is right or not.
● A problem A is NP-hard if, for every problem L in NP, there exists a polynomial-time reduction from
L to A.
Some examples of NP-hard problems are:
1. The halting problem.
2. Quantified Boolean formulas (QBF).
3. The no-Hamiltonian-cycle problem.
114. NP-complete class
A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems
are the hardest problems in NP.
Features:
● NP-complete problems are special as any problem in NP class can be
transformed or reduced into NP-complete problems in polynomial time.
● If one could solve an NP-complete problem in polynomial time, then one
could also solve any NP problem in polynomial time.