MODULE 1
INTRODUCTION AND ALGORITHM ANALYSIS
• Introduction to Data Structures
• Algorithm Analysis
• Mathematical Background
• Model
• What to Analyze
• Running Time Calculations
• General Rules
Introduction to Data Structure
Data Structure:
A data structure is a way to organize, store, and manage data in a computer so that it can be used efficiently.
Data structures are broadly classified into:
Linear Data Structures (e.g., arrays, linked lists, stacks, queues)
Non-Linear Data Structures (e.g., trees, graphs)
MODULE 2: LISTS, STACKS AND QUEUES
MODULE 3: TREES
MODULE 4: SEARCHING AND SORTING
MODULE 5: GRAPHS
Data Structure Operations
Creating: Allocating memory and setting up a new data structure.
Example: Creating an empty array arr = [] or a linked list.
Inserting: Adding a new element to the data structure.
Example: Adding 10 to an array arr = [10].
Updating: Modifying the value of an existing element.
Example: Changing 10 to 20 in arr = [20].
Deleting: Removing an element from the data structure.
Example: Removing 20 from arr = [].
Traversing: Accessing each element of the data structure one by one.
Example: Printing all elements of arr = [10, 20, 30].
Searching: Finding the location of a specific element.
Example: Searching for 20 in arr = [10, 20, 30] (found at index 1).
Sorting: Arranging elements in a specific order (ascending or descending).
Example: Sorting arr = [30, 10, 20] to arr = [10, 20, 30] in ascending order.
Data Structures applications
Algorithm Analysis
• An algorithm is a step-by-step procedure or a set of rules designed to perform a specific task or solve a particular problem.
• It is a well-defined sequence of operations that takes some input, processes it, and produces an output.
Algorithms are the foundation of computer science and programming, helping to solve problems in an efficient and systematic way.
Example:
A simple algorithm for adding two numbers:
Input: Two numbers, a and b.
Step 1: Add the two numbers: sum = a + b.
Step 2: Output the result sum.
Characteristics of an algorithm
All algorithms must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied (an algorithm may accept zero or more inputs).
e.g., "Display welcome message" takes no input.
2. Output: At least one quantity is produced (an algorithm must produce at least one output).
3. Definiteness: Each instruction is clear and unambiguous (every instruction has exactly one meaning).
e.g., statements such as "add 6 or 7 to x" or "compute 5/0" are not permitted.
4. Finiteness: The algorithm must terminate after a finite number of steps.
5. Effectiveness: Each instruction is basic enough to be carried out (a simple instruction).
Algorithm vs. Program
1. Algorithms are written at the design phase; programs are written at the implementation phase.
2. Algorithms are written in natural language; programs are written in a programming language.
3. Algorithms do not depend on the OS and software; programs depend on the OS and software.
4. Writing an algorithm requires domain knowledge; writing a program requires knowledge of a programming language.
A program is an implementation of an algorithm to be run on a specific computer
In order to analyze the performance of an algorithm we use
1. Time Complexity
2. Space Complexity
Time complexity
The amount of time that an algorithm requires for execution is known as its time complexity.
PERFORMANCE ANALYSIS
1. Best case
If an algorithm requires the minimum amount of time for its execution, it is known as the best case.
2. Worst case
If an algorithm requires the maximum amount of time for its execution, it is known as the worst case.
3. Average case
If an algorithm requires an average amount of time for its execution, it is known as the average case.
e.g., linear search
To calculate time complexity, there are two approaches:
1. Frequency count (step count)
2. Asymptotic notations
Components of Time Complexity
1. System Capacity:
 If the speed of computer is fast then the output will be generated fast
 If the speed of computer is slow then the output will be generated slow
2. Based on Processor
 If the computer contains only a single processor then output is generated slowly
 If the computer contains multiple processor then output is generated fastly
Asymptotic Notations
Asymptotic Notations are mathematical tools used to analyze the performance of
algorithms by understanding how their efficiency changes as the input size grows.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
Big-O Notation (O-notation)
• Big-O notation represents the upper bound of the running time of an algorithm.
• It is the most widely used notation for asymptotic analysis.
It specifies the upper bound of a function: the maximum time required by an algorithm, so it provides worst-case time complexity.
It gives the largest possible growth rate of the running time for a given input size.
Big-O (worst case) is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.
If f(n) describes the running time of an algorithm, then
f(n) = O(g(n))
if there exist positive constants c and n0 such that
f(n) ≤ c · g(n) for all n ≥ n0.
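As a worked example (not from the slides), the definition can be checked for f(n) = 3n + 2:

```latex
f(n) = 3n + 2,\quad g(n) = n,\quad c = 4,\quad n_0 = 2.\\
3n + 2 \le 4n \iff 2 \le n,\ \text{so } f(n) \le c\,g(n)\ \text{for all } n \ge n_0,\ \text{hence } f(n) = O(n).
```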
Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an algorithm.
Thus, it provides the Best case complexity of an algorithm.
It is defined as the condition that allows an algorithm to complete statement execution in the
shortest amount of time.
If f(n) describes the running time of an algorithm, then
f(n) = Ω(g(n))
if there exist positive constants c and n0 such that
f(n) ≥ c · g(n) for all n ≥ n0.
Theta Notation (Θ-Notation):
It represents both the upper and the lower bound of the running time of an algorithm; it is used for analyzing the average-case complexity of an algorithm.
The function f is said to satisfy
f(n) = Θ(g(n))
if there are constants c1, c2 > 0 and a natural number n0 such that
c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0.
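A worked example of the Θ definition (again for f(n) = 3n + 2, chosen for illustration):

```latex
f(n) = 3n + 2,\quad g(n) = n,\quad c_1 = 3,\quad c_2 = 4,\quad n_0 = 2.\\
3n \le 3n + 2 \le 4n\ \text{for all } n \ge 2,\ \text{hence } f(n) = \Theta(n).
```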
Time Complexity Analysis: Linear Search
Best Case (Ω):
In the best case, the target element is found at the very first position.
The loop runs only once.
Time complexity: Ω(1)
Worst Case (O):
In the worst case, the target is either not in the array or is the last element.
The loop iterates over all n elements.
Time complexity: O(n)
Average Case (Θ):
On average, the target is located somewhere in the middle of the array.
The loop would, on average, check n/2 elements.
Time complexity: Θ(n)
General Rules
1. For Loops:
The runtime of a for loop is the runtime of the statements inside it multiplied by the number of iterations.
2. Nested For Loops:
Analyze from the innermost loop outward.
The total runtime is the runtime of the innermost statement multiplied by the product of all loop sizes.
3. Consecutive Statements:
The overall runtime is determined by the maximum runtime among the consecutive statements.
4. If/Else Statements:
The runtime is the test condition's runtime plus the maximum runtime of either the if or else branch.
5. Recursive Calls:
Recursive calls should always progress toward a base case.
Avoid duplicating work by solving the same subproblem multiple times (use techniques like memoization or dynamic programming).
6. General Guidelines:
Focus on the "biggest" terms when considering complexity (Big-O notation).
Ignore constants and lower-order terms, as they do not affect the growth rate.
Use well-defined mathematical models for time and space complexity.
General Rules for Analyzing Running Time Example
For Loops
•The time complexity of a for loop is at most the time taken by the statements inside it multiplied by the number of iterations.
for (i = 0; i < n; i++) {
    k++;
}
Running time: O(n)
Nested For Loops
•Analyze them inside-out. Multiply the sizes of all loops to get the total running time.
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        k++;
Running time: O(n²)
Consecutive Statements
for (i = 0; i < n; i++)      // O(n)
    a[i] = 0;
for (i = 0; i < n; i++)      // O(n²)
    for (j = 0; j < n; j++)
        a[i] += a[j] + i + j;
If/Else Statements
Rule: The runtime is the runtime of the condition check plus the larger runtime of the two branches.
if (condition)               // O(1)
    a[i] = 0;                // O(1)
else
    for (j = 0; j < n; j++)  // O(n)
        b[j] = 1;
if branch: O(1), else branch: O(n), total runtime: O(n)
Recurrence Relation: T(n) = T(n−1) + T(n−2) + O(1)
This mirrors the Fibonacci sequence itself, where each call generates two additional calls.
Growth: exponential, T(n) = O(2ⁿ).
Each Fibonacci number is recalculated multiple times. For example, fib(5) calls fib(4) and fib(3), but fib(4) will again calculate fib(3), leading to redundant computations.
Recursive Approach
unsigned int fib(unsigned int n) {
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}
Recursive Relation
void test(int a) {
    if (a > 0) {
        printf("%d", a);
        test(a - 1);
    }
}
Each call does O(1) work and makes one recursive call, so T(a) = T(a−1) + O(1), giving a time complexity of O(a).
