1. DISCOVER . LEARN . EMPOWER
UNIT-1
UNIVERSITY INSTITUTE OF COMPUTING
MASTER OF COMPUTER APPLICATIONS
DESIGN AND ANALYSIS OF ALGORITHMS
23CAH-511
1
2. • Introduction:
Algorithm Specification, Analysis
Framework, Performance Analysis:
Space complexity, Time Complexity.
DESIGN AND ANALYSIS
OF
ALGORITHMS
Course Outcome

CO Number | Title                                                                           | Level
CO1       | Analyze the asymptotic performance of algorithms                                | Remember
CO2       | Implement major data structure algorithms                                       | Understand
CO3       | Apply and analyze important algorithmic design paradigms and their applications | Understand
2
3. Outline
• Algorithm Specification
• Performance Analysis
• Asymptotic Notations
• Recursive and non-recursive algorithms
• Sorting and Searching algorithms
• Fundamentals of Data Structures
3
4. Algorithm
• An algorithm is a well-defined computational procedure that takes some value, or
set of values, as input and produces some value, or set of values, as output.
4
Fig. 1.1: Process of an algorithm (Input → Algorithm → Output)
5. Algorithm Design and Process
Algorithm design is the process of creating an efficient algorithm to solve a given problem.
An Algorithm Development Process
The development of an algorithm (a plan) is a key step in solving a problem. Once
we have an algorithm, we can translate it into a computer program in some
programming language.
5
6. Contd…
Algorithm development process consists of five major steps.
• Step 1: Obtain a description of the problem.
• Step 2: Analyze the problem.
• Step 3: Develop a high-level algorithm.
• Step 4: Refine the algorithm by adding more detail.
• Step 5: Review the algorithm.
6
Fig. 1.2: Process of Algorithm [1]
7. Characteristics of Algorithms
The main characteristics of algorithms are as follows −
• Input specified
• Output specified
• Definiteness
• Effectiveness
• Finiteness
• Independent
7
8. Algorithm Specification
An algorithm is a finite set of instructions that, if followed, accomplishes a
particular task. In addition, all algorithms must satisfy the following criteria:
• Input.
• Output.
• Definiteness.
• Finiteness.
• Effectiveness.
8
9. Characteristics of Algorithms
The main characteristics of algorithms are as follows:
i. Input:
An algorithm requires zero or more inputs; the minimum number of inputs is zero.
ii. Output:
An algorithm must produce at least one output.
iii. Finiteness:
An algorithm should terminate after a finite number of steps.
9
10. Characteristics of Algorithms
iv. Effectiveness:
Each step should be basic enough to be easy to implement in any programming language.
An algorithm is language independent.
v. Definiteness:
Each step of an algorithm should be unambiguous.
Example: 2 + 3 * 4 = 14; the answer is always the same because operator precedence is well defined.
10
11. Algorithms Design Techniques
The algorithms can be classified in various ways. They are:
• Implementation Method
• Design Method
• Design Approaches
• Other Classifications
11
12. Algorithms Design Techniques
Classification by Implementation Method: There are primarily three main categories into which an
algorithm can be named in this type of classification. They are:
• Recursion or Iteration: A recursive algorithm calls itself repeatedly until a
base condition is reached, whereas an iterative algorithm uses loops and/or data
structures such as stacks and queues to solve the problem. Every recursive solution can be implemented as an
iterative solution and vice versa.
Example: The Tower of Hanoi is usually implemented recursively, while the Stock Span problem is
implemented iteratively.
• Exact or Approximate: Algorithms that always find an optimal solution for a problem are
known as exact algorithms. For problems where finding the optimal solution is not
feasible, an approximation algorithm is used; it produces a solution that is
close to optimal.
Example: Approximation algorithms are used for NP-Hard problems, while sorting algorithms are exact
algorithms.
• Serial or Parallel or Distributed Algorithms: In serial algorithms, one instruction is executed at a time
while parallel algorithms are those in which we divide the problem into subproblems and execute
them on different processors. If parallel algorithms are distributed on different machines, then they
are known as distributed algorithms.
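The recursion-versus-iteration distinction above can be sketched with factorial; this is a minimal illustration (the function names are ours, not from the slides):

```python
def factorial_recursive(n):
    """Recursive: the function calls itself until the base case n <= 1."""
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Iterative: a loop replaces the chain of recursive calls."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Both compute the same value; the recursive version mirrors the mathematical definition, while the iterative one avoids function-call overhead.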
12
13. Algorithms Design Techniques
Classification by Design Method: There are primarily three main categories into which an
algorithm can be named in this type of classification. They are:
• Greedy Method: In the greedy method, at each step, a decision is made to choose the local
optimum, without thinking about the future consequences.
Example: Fractional Knapsack, Activity Selection.
• Divide and Conquer: The Divide and Conquer strategy involves dividing the problem into sub-
problems, recursively solving them, and then combining their solutions for the final answer.
Example: Merge sort, Quicksort.
• Dynamic Programming: The approach of Dynamic Programming is similar to divide and
conquer. The difference is that whenever we encounter recursive calls with the same
result, instead of recomputing them we store the result in a data structure in the form
of a table and retrieve it from the table. Thus, the overall time complexity is
reduced. "Dynamic" means we decide at run time whether to call a function or retrieve
the value from the table.
Example: 0-1 Knapsack, subset-sum problem.
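The table-lookup idea described above can be sketched with Fibonacci numbers (Fibonacci is our illustrative choice here; the slide's own examples are 0-1 Knapsack and subset-sum):

```python
def fib_naive(n):
    # Plain recursion: recomputes the same subproblems many times,
    # giving exponential running time.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, table=None):
    # Dynamic programming: store each result in a table and retrieve it
    # instead of recomputing, reducing the running time to O(n).
    if table is None:
        table = {}
    if n < 2:
        return n
    if n not in table:
        table[n] = fib_memo(n - 1, table) + fib_memo(n - 2, table)
    return table[n]
```

`fib_memo(50)` finishes instantly, while `fib_naive(50)` would make billions of redundant calls.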
13
14. Algorithms Design Techniques
• Linear Programming: In linear programming, the constraints are inequalities in the inputs, and
we maximize or minimize some linear function of the inputs.
Example: Maximum flow in a directed graph.
• Reduction(Transform and Conquer): In this method, we solve a difficult problem by
transforming it into a known problem for which we have an optimal solution. Basically, the
goal is to find a reducing algorithm whose complexity is not dominated by the resulting
reduced algorithms.
Example: Selection algorithm for finding the median in a list involves first sorting the list and
then finding out the middle element in the sorted list. These techniques are also
called transform and conquer.
• Backtracking: This technique is useful for solving combinatorial problems in which we
have to find a correct combination of steps that leads to fulfillment of the task. Such
problems have multiple stages, with multiple options at each stage. This approach explores
each available option at every stage one by one. While exploring an option, if a point is
reached that does not seem to lead to the solution, the program control backtracks one
step and starts exploring the next option. In this way, the program explores all possible
courses of action and finds the route that leads to the solution.
Example: N-queens problem, maze problem.
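The explore-and-backtrack loop described above can be sketched with the N-queens problem (a minimal version; the helper names are ours):

```python
def solve_n_queens(n):
    """Backtracking sketch: place one queen per row, trying each column
    option; on a dead end, control backtracks to the previous row."""
    solutions = []

    def safe(cols, col):
        # A new queen at (len(cols), col) must not share a column or
        # a diagonal with any queen already placed.
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(cols[:])   # a complete valid placement
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)
                place(cols)             # explore this option
                cols.pop()              # backtrack, try the next column

    place([])
    return solutions
```

Each recursive call is one stage; popping the last choice is exactly the "backtrack one step" described in the text.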
14
15. Algorithms Design Techniques
Branch and Bound: This technique is useful for solving combinatorial
optimization problems that have multiple solutions, where we are interested in
finding the optimal one. In this approach, the entire solution
space is represented in the form of a state space tree. As the program
progresses, each state combination is explored, and the previous best solution is
replaced by a new one whenever the new one is better than the current best.
Example: Job sequencing, Travelling Salesman Problem.
15
16. Algorithms Design Techniques
Classification by Design Approaches: There are two approaches for
designing an algorithm. These approaches are:
• Top-Down Approach
• Bottom-Up Approach
• Top-Down Approach: In the top-down approach, a large problem is
divided into small sub-problems, and the process of decomposition is
repeated until the sub-problems are simple enough to solve.
• Bottom-Up Approach: The bottom-up approach is the
reverse of the top-down approach.
In this approach, different parts of a complex program are solved
separately using a programming language and then combined into a complete
program.
16
17. Algorithms Design Techniques
Other Classifications: Apart from classifying the algorithms into the above broad
categories, the algorithm can be classified into other broad categories like:
• Randomized Algorithms: Algorithms that make random choices for faster
solutions are known as randomized algorithms.
Example: Randomized Quicksort Algorithm.
• Classification by complexity: Algorithms can be classified on the basis of the time
taken to solve a problem as a function of the input size. This analysis is known
as time complexity analysis.
Example: Some algorithms take O(n) time, while others take exponential time.
• Classification by Research Area: In CS each field has its own problems and
needs efficient algorithms.
Example: Sorting Algorithm, Searching Algorithm, Machine Learning etc.
• Branch and Bound Enumeration and Backtracking: These are mostly used in
Artificial Intelligence.
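The randomized quicksort mentioned above can be sketched as follows (a minimal list-building version, not an in-place implementation):

```python
import random

def randomized_quicksort(arr):
    # Choosing the pivot uniformly at random makes the expected running
    # time O(n log n) regardless of the input ordering.
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

The random pivot is the "random choice" the slide refers to: it removes the dependence on any particular adversarial input order.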
17
18. Recursive Vs Non-recursive algos
• Recursive algorithms: These algorithms solve a problem by calling themselves with smaller and
simpler versions of the original problem. They repeat this process until a base case is reached, at which
point the solution is returned.
• Non-recursive algorithms: These algorithms solve a problem without calling themselves. They
typically use loops or iterators to repeatedly execute steps until the desired outcome is achieved.
18
19. Recursive Vs Non-recursive algos
Key Differences
• Problem Decomposition: Recursive algorithms decompose the problem into smaller, similar subproblems, while non-
recursive algorithms use different iterative constructs like loops or while statements to break down the problem into
smaller steps.
• Control Flow: Recursive algorithms exhibit nested function calls, leading to a stack-based execution, while non-recursive
algorithms have a linear flow controlled by loops or conditional statements.
• Memory Usage: Recursive algorithms generally require more memory due to function call stack overhead, whereas non-
recursive algorithms typically have lower memory usage as they don't involve nested function calls.
• Code Complexity: Recursive algorithms can sometimes be more concise and elegant, though their flow of
control can be harder to follow; non-recursive algorithms are often easier to understand and debug due to their linear structure.
19
20. Recursive Vs Non-recursive algos
Examples
Recursive: Factorial calculation, quicksort algorithm, binary search.
Non-recursive: Bubble sort algorithm, printing all elements of a list, calculating the sum of an array.
Applications and Trade-offs
Recursive algorithms: Useful for problems with naturally self-similar structures, can be more concise and
expressive, but may have higher memory usage and slower execution due to function call overhead.
Non-recursive algorithms: More efficient in terms of memory and speed, easier to understand and
debug, but may require more complex code for problems with a naturally recursive structure.
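Binary search, listed above as a recursive example, also has a natural non-recursive form; a minimal sketch of both:

```python
def binary_search_recursive(arr, target, lo=0, hi=None):
    """Recursive binary search: each call halves the search range."""
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                      # base case: not found
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search_recursive(arr, target, mid + 1, hi)
    return binary_search_recursive(arr, target, lo, mid - 1)

def binary_search_iterative(arr, target):
    """Iterative version: a while loop replaces the recursive calls,
    so no function-call stack builds up."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

The iterative form illustrates the memory trade-off discussed earlier: it uses constant extra space instead of O(log n) stack frames.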
20
21. FAQ’s on Algorithm Basics
• Define algorithm
• Write steps for designing an algorithm.
• Distinguish between Algorithm and Pseudocode.
• Explain the properties / characteristics of an algorithm with an example.
• What is the need to analyze an algorithm?
• What do you mean by correctness of an algorithm?
• Elaborate different algorithm design techniques.
• How do recursive algorithms differ from non-recursive algorithms?
21
22. Analysis Framework
Analysis of an algorithm is the process of evaluating the problem-solving capability of
the algorithm in terms of the time and space required (the amount of memory needed
during execution).
However, the main concern regarding analysis of algorithms is the required time or
performance.
22
23. Analysis Framework
Resource usage:
Here, time is considered the primary measure of efficiency.
We are also concerned with how much computer memory the algorithm
uses, but time is the resource most often analyzed. The actual running
time depends on a variety of factors: the speed of the computer,
the language in which the algorithm is implemented, the
compiler/interpreter, the skill of the programmer, etc.
• So the resource usage can be divided into: 1. Memory (space)
2. Time
23
25. Analysis Framework
Generally, we perform the following types of analysis −
• Worst-case − The maximum number of steps taken on any instance of size n.
• Best-case − The minimum number of steps taken on any instance of size n.
• Average case − The average number of steps taken over all instances of size n.
25
26. Performance Analysis
• Performance analysis of an algorithm is the process of making an evaluative judgment about
the algorithm.
• It can also be defined as follows: performance analysis means predicting the
resources that an algorithm requires to perform its task.
• Performance of an algorithm depends on the following elements...
Whether the algorithm provides the exact solution for the problem?
Whether it is easy to understand?
26
27. Contd…
Whether it is easy to implement?
How much space (memory) it requires to solve the problem?
How much time it takes to solve the problem? Etc.,
Performance analysis of an algorithm is performed by using the following
measures...
• Space required to complete the task of that algorithm (Space Complexity). It
includes program space and data space
• Time required to complete the task of that algorithm (Time Complexity)
27
28. Time Complexity
• When we calculate the time complexity of an algorithm, we consider only the input data and
ignore machine-dependent factors.
• We check only how the program behaves for different input values while performing
operations such as arithmetic, logical, return, and assignment operations.
The time complexity of an algorithm is the total amount of time required by the
algorithm to complete its execution.
28
29. Time Complexity
Constant time part: Any instruction that is executed just once comes in this part.
For example, input, output, if-else, switch, etc.
Variable Time Part: Any instruction that is executed more than once, say n times,
comes in this part. For example, loops, recursion, etc.
Therefore, the time complexity T(P) of any algorithm P is
T(P) = C + T_P(I),
where C is the constant time part and T_P(I) is the variable part of the algorithm,
which depends on the instance characteristic I.
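The constant part versus the variable part can be illustrated with a simple sum (the step counts in the comments are our own simplified accounting, not a formal cost model):

```python
def sum_of_list(values):
    # Constant part C: executed once regardless of input size.
    total = 0                 # 1 step
    # Variable part T_P(I): executed once per element, so n steps
    # for an input of size n.
    for v in values:
        total += v            # n steps in total
    return total              # 1 step
    # Overall: T(P) = C + T_P(I) ~ 2 + n steps, i.e. linear in n.
```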
29
30. Space Complexity
• When we design an algorithm to solve a problem, it needs some computer
memory to complete its execution. For any algorithm, memory is required for the
following purposes...
• Memory required to store program instructions
• Memory required to store constant values
• Memory required to store variable values
• And for few other things
• Space complexity of an algorithm can be defined as follows...
Total amount of computer memory required by an algorithm to complete its execution is called as space
complexity of that algorithm.
30
31. Space Complexity
• Algorithm: SUM(A, B)
• Step 1 – START
• Step 2 – C ← A + B + 10
• Step 3 – Stop
• Here we have three variables A, B, and C and one constant (10). Hence
S(P) = 3 + 1 = 4. The actual space depends on the data types of the variables and
constants, and is multiplied by the corresponding type sizes.
31
34. Difference between Space and Time Complexity

Space Complexity: the space (memory) needed for an algorithm to solve the problem. An efficient algorithm takes as little space as possible.
Time Complexity: the time required for an algorithm to complete its process. It allows comparing algorithms to check which one is more efficient.

34
35. Analysis Framework
How is the time taken by an algorithm measured?
• Performance measurement, or a posteriori analysis: implementing the algorithm
on a machine and then measuring the time taken by the system to execute the
program successfully.
• Performance evaluation, or a priori analysis: done before implementing the algorithm
on a system, as follows:
1. How long the algorithm takes is represented as a function of the size of
the input:
f(n) → how long it takes if n is the size of the input.
2. How fast the function that characterizes the running time grows with the input
size:
the "rate of growth of the running time".
The algorithm with the lower rate of growth of running time is considered better.
35
37. Asymptotic notations
Asymptotic notations are used to describe the running time of an algorithm: they describe
the growth rate of the running time as the input size increases. They are usually
expressed using big O notation, big Omega notation, and big Theta notation.
Big O notation describes an upper bound on the running time: it provides an
upper limit on how fast the running time can grow.
Big Omega notation describes a lower bound on the running time: it provides a
lower limit on how fast the running time can grow.
Big Theta notation describes a tight bound on the running time: the running time
grows at the same rate as the given function, up to constant factors.
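To make the growth-rate idea concrete, here is a small sketch that counts basic operations for a linear and a quadratic algorithm (the counting scheme is our illustrative assumption):

```python
def count_linear(n):
    """One pass over n items: Theta(n) operations."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_quadratic(n):
    """Nested passes over n items: Theta(n^2) operations,
    which dominates Theta(n) as n grows."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops
```

At n = 100 the quadratic count is already 100 times the linear one; asymptotic notation captures exactly this widening gap.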
37
55. Types of recurrence equations used to analyze
algorithms
Here are two common types of recurrence equations used to analyze
algorithms:
• Divide and Conquer Recurrences:
• Linear Recurrences:
55
56. Divide and Conquer Recurrences
Divide and Conquer Recurrences:
• Characterized by breaking a problem into smaller subproblems of the same
type, solving them recursively, and combining their solutions.
• General form: T(n) = aT(n/b) + f(n), where:
• T(n) represents the time complexity for input size n.
• a is the number of subproblems (usually a constant).
• b is the factor by which the problem is divided (usually a constant greater than 1).
• f(n) represents the work done outside of recursive calls (usually a polynomial function of n).
• Examples: Merge Sort, Quick Sort, Binary Search Tree operations.
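Merge sort is the textbook instance of the form above, with a = 2, b = 2, and f(n) = Θ(n) for the merge step; a minimal sketch:

```python
def merge_sort(arr):
    # T(n) = 2T(n/2) + Theta(n): two recursive halves plus a linear merge.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # T(n/2)
    right = merge_sort(arr[mid:])   # T(n/2)
    # Merge step: f(n) = Theta(n) work outside the recursive calls.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```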
56
57. Linear Recurrences
Characterized by representing problems where the time complexity depends
linearly on previous terms.
• General form: T(n) = aT(n-1) + f(n), where:
• T(n) represents the time complexity for input size n.
• a is a constant coefficient.
• f(n) represents a non-recursive term (usually a polynomial function of n).
• Examples: Factorial calculation, Fibonacci sequence, Tower of Hanoi.
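Tower of Hanoi, listed above, fits the linear form with a = 2 and f(n) = 1, i.e. T(n) = 2T(n-1) + 1, which solves to 2^n - 1 moves; a sketch counting the moves directly from the recurrence:

```python
def hanoi_moves(n):
    # T(n) = 2T(n-1) + 1: move n-1 disks aside, move the largest disk,
    # then move the n-1 disks back on top of it.
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1
```

For example, 3 disks require 7 moves and 10 disks require 1023, matching 2^n - 1.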
57
76. Master Method
• The Master Method is used for solving recurrences of the form T(n) = aT(n/b) + f(n), where:
• n is the size of the problem.
• a is the number of subproblems in the recursion.
• n/b is the size of each subproblem. (Here it is assumed that all subproblems are
essentially the same size.)
• f(n) is the work done outside the recursive calls, which includes the
cost of dividing the problem and the cost of combining the solutions to the
subproblems.
• It is not always possible to bound the function as required, so we
define three cases which tell us what kind of bound we can apply to the
function.
76
84. Solution to the recurrence relation T(n) = 2T(n/2) + n
with base case T(1) = 2:
1. Master Theorem applicability:
The given recurrence fits the form T(n) = aT(n/b) + f(n) with a = 2, b = 2,
and f(n) = n, so the Master Theorem can be applied.
2. Master Theorem (Case 2):
If T(n) = aT(n/b) + f(n) and f(n) = Θ(n^log_b(a)), then T(n) = Θ(n^log_b(a) log n).
Here n^log_b(a) = n^log_2(2) = n^1 = n, and f(n) = n = Θ(n), so Case 2 applies.
Therefore, T(n) = Θ(n^log_b(a) log n) = Θ(n log n), the familiar merge sort bound.
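A direct numeric check of this recurrence (our own verification sketch; for n a power of two the exact solution is T(n) = n·log2(n) + 2n, which is Θ(n log n)):

```python
def T(n):
    # Evaluate the recurrence T(n) = 2*T(n/2) + n with T(1) = 2,
    # for n a power of two.
    if n == 1:
        return 2
    return 2 * T(n // 2) + n

# Closed form for powers of two: T(n) = n*log2(n) + 2*n.
# e.g. T(8) = 8*3 + 16 = 40, and indeed 2*T(4) + 8 = 2*16 + 8 = 40.
```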
84