2. Course Objectives
• To impart a thorough understanding of linear data structures such as stacks, queues and their applications.
• To impart a thorough understanding of non-linear data structures such as trees, graphs and their applications.
• To impart familiarity with various sorting, searching and hashing techniques and their performance comparison.
• To impart a basic understanding of memory management.
3. • Design an algorithm for a computational task and calculate the time/space complexities of that algorithm.
• Identify the suitable data structure (array or linked list) to represent a data item required to be processed to solve a given computational problem, and write an algorithm to find the solution of the computational problem.
• Write an algorithm to find the solution of a computational problem by selecting an appropriate data structure (binary tree/graph) to represent a data item to be processed.
4. • Store a given dataset using an appropriate hash function to enable efficient access of data in the given set.
• Select appropriate sorting algorithms to be used in specific circumstances.
• Design and implement data structures for solving real-world problems efficiently.
5. Definition
• Data: a collection of raw facts.
• A data structure is a particular way of organizing data in a computer so that it can be used efficiently.
• A data structure is a representation of the logical relationship existing between individual elements of data.
• A data structure is a specialized format for organizing and storing data in memory that considers not only the elements stored but also their relationship to each other.
6. ALGORITHMS
• An algorithm is a finite set of instructions that accomplishes a particular task. It must satisfy the following criteria:
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction is clear and unambiguous.
4. Finiteness: if we trace out the instructions of an algorithm, then, for all cases, the algorithm terminates after a finite number of steps.
5. Effectiveness: every instruction must be basic enough to be carried out.
7. HOW TO EXPRESS AN ALGORITHM?
• Natural language: must be well defined and unambiguous (what about portability?)
• Pseudocode: a mixture of natural language and programming constructs.
• Graphic representations: flowcharts (only for small and simple algorithms).
8. PSEUDO CODE
A mixture of natural language and high level
programming concepts that describes the main
ideas behind a generic data structure or an
algorithm.
E.g., Algorithm ArrayMax(A, n)
  Input: an array A with n numbers
  Output: the maximum element in A
  Max ← A[0]
  for i ← 1 to n-1 do
    if A[i] > Max then Max ← A[i]
  return Max
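The pseudocode translates directly into Python; below is a minimal sketch (the function name is mine, and it assumes the array is non-empty, as the pseudocode does):

```python
def array_max(a):
    """Return the maximum element of a non-empty list a."""
    max_val = a[0]                 # Max = A[0]
    for i in range(1, len(a)):     # for i = 1 to n-1
        if a[i] > max_val:         # if A[i] > Max then Max = A[i]
            max_val = a[i]
    return max_val

print(array_max([12, 35, 1, 10, 34, 1]))  # 35
```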
10. CHARACTERISTICS OF A GOOD ALGORITHM
• Correctness
• Amount of work done
• Amount of space used
• Simplicity
• Optimality
11. CORRECTNESS OF AN ALGORITHM
An algorithm is correct with respect to a specification if, for each possible input, it produces the expected output. Functional correctness refers to the input-output behavior of the algorithm.
Optimality: an algorithm for a problem is said to be optimal if no other known algorithm can solve the same problem in fewer computations.
13. SPACE COMPLEXITY
The space needed by a program consists of the following components:
• Instruction space: space needed to store the executable version of the program; this space is fixed.
• Data space: space needed to store all constants and variable values. It has the following components:
(a) Space needed by constants and simple variables. This space is fixed.
(b) Space needed by fixed-sized structural variables, such as arrays and structures.
(c) Dynamically allocated space. This space usually varies.
14. • Environment stack space: space needed to store the information required to resume the suspended (partially completed) functions. Each time a function is invoked, the following data is saved on the environment stack:
(a) Return address: i.e., the point from which execution resumes after completion of the called function.
(b) Values of all local variables and the values of formal parameters in the function being invoked.
15. TIME COMPLEXITY
• The time complexity of an algorithm or a program is the amount of time it needs to run to completion.
• The exact time depends on the implementation of the algorithm, the programming language, the optimizing capabilities of the compiler used, the CPU speed, other hardware characteristics/specifications, and so on.
• The time complexity also depends on the amount of data input to an algorithm.
• To measure the time complexity accurately, we have to count all sorts of operations performed in an algorithm.
16. FREQUENCY COUNT
• The frequency count method can be used to analyze a program. Here we assume that every statement takes the same constant amount of time for its execution. Hence, determining the time complexity of a given program is just a matter of summing the frequency counts of all the statements of that program.
(a) x++;                           /* frequency count: 1 */
(b) for (i = 0; i < n; i++)
      x++;                         /* frequency count: n */
(c) for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
        x++;                       /* frequency count: n*n */
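The three counts can be checked empirically; the sketch below (function names are mine) instruments each segment and returns how many times x++ executes:

```python
def count_a(n):
    return 1                      # (a) x++ executes once, independent of n

def count_b(n):
    c = 0
    for i in range(n):            # (b) single loop: n executions
        c += 1
    return c

def count_c(n):
    c = 0
    for i in range(n):            # (c) nested loops: n*n executions
        for j in range(n):
            c += 1
    return c

n = 10
print(count_a(n), count_b(n), count_c(n))  # 1 10 100
```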
18. CALCULATE THE FREQUENCY COUNT OF THE STATEMENT x = x + 1; IN THE FOLLOWING CODE SEGMENTS
1. for (i = 0; i < n; i++)          /* n times */
     for (j = 1; j < n; j *= 2)     /* for each i, the inner loop executes log n times */
       x = x + 1;                   /* hence n * log n */
2. for (i = 0; i < n; i = i + 2)    /* n/2 times */
     for (j = 0; j < n; j++)        /* for each i, the inner loop executes n times */
       x = x + 1;                   /* hence n * n/2 */
3. for (i = 0; i < n; i++)
     for (j = 0; j < i; j++)        /* 0 + 1 + 2 + ... + (n-1) */
       x = x + 1;                   /* hence (n-1) * n/2 */
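These answers can be verified by instrumenting each segment; the Python sketch below (function names are mine) counts the executions of x = x + 1 directly:

```python
def count_1(n):
    c, i = 0, 0
    while i < n:                  # outer loop: n times
        j = 1
        while j < n:              # j doubles: about log2(n) iterations
            c += 1
            j *= 2
        i += 1
    return c

def count_2(n):
    c, i = 0, 0
    while i < n:                  # i advances by 2: n/2 outer iterations
        for j in range(n):        # inner loop: n times
            c += 1
        i += 2
    return c

def count_3(n):
    c = 0
    for i in range(n):
        for j in range(i):        # 0 + 1 + ... + (n-1)
            c += 1
    return c

n = 16
print(count_1(n), count_2(n), count_3(n))  # 64 128 120
```

For n = 16 these match n·log2(n) = 64, n²/2 = 128, and n(n-1)/2 = 120.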
21. • When we analyze an algorithm, the time taken depends on the input data; there are three cases:
• Best case: among all possible inputs, the minimum time taken by the algorithm is referred to as the best-case running time.
• Average case: the average running time can be computed by taking the average of the running times taken by the algorithm over all possible inputs.
• Worst case: the worst-case running time of an algorithm gives us an upper bound on the running time for any input.
22. LINEAR SEARCH
procedure linear_search(list, value)
  for each item in the list
    if item == value then
      return the item's location
    end if
  end for
end procedure
• Best case: O(1), when the value matches the first item.
• Worst case: O(n), when the value is the last item or is not present.
• Average case: O(n), about (n+1)/2 comparisons when the value is present.
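A minimal Python rendering of the pseudocode above; returning -1 when the value is absent is my convention, since the pseudocode leaves the not-found case unspecified:

```python
def linear_search(lst, value):
    """Return the index of value in lst, or -1 if it is not present."""
    for i, item in enumerate(lst):  # for each item in the list
        if item == value:           # if item == value
            return i                # return the item's location
    return -1                       # not found

print(linear_search([4, 2, 7, 9], 7))  # 2
```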
27. ASYMPTOTIC NOTATIONS
The asymptotic growth rate is a way to compare or classify algorithms that ignores constant factors and small inputs:
• if the functions describing the behaviour of two algorithms differ by a constant factor, it is pointless to try to distinguish between them.
Asymptotic notation classifies functions not too precisely but by putting them into classes, giving more importance to the behavior as n → ∞.
28. THE SET BIG O
Let g be a function from the non-negative integers to the positive real numbers. Then O(g) is the set of functions f, also from the non-negative integers to the positive real numbers, such that for some real constant c > 0 and some non-negative integer constant n0, f(n) ≤ c·g(n) for all n ≥ n0.
In limit form: lim n→∞ f(n)/g(n) = c < ∞ (when the limit exists).
Big O notation provides an asymptotic upper bound on a function.
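The definition can be illustrated numerically; the sketch below checks (not proves) that f(n) = 3n + 2 lies in O(n), using c = 4 and n0 = 2 as witnesses (the example function and constants are mine):

```python
def f(n):
    return 3 * n + 2      # example function to bound

def g(n):
    return n              # candidate growth rate

# Witnesses for the definition: f(n) <= c*g(n) for all n >= n0.
c, n0 = 4, 2
ok = all(f(n) <= c * g(n) for n in range(n0, 10_000))
print(ok)  # True
```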
29. THE SET BIG Θ
Let g be a function from the non-negative integers to the positive real numbers. Then Θ(g) is the set of functions f, also from the non-negative integers to the positive real numbers, such that for some real constants c1 > 0, c2 > 0 and some non-negative integer constant n0, c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
In limit form: lim n→∞ f(n)/g(n) = c such that 0 < c < ∞ (when the limit exists).
30. THE SET BIG Ω
Let g be a function from the non-negative integers to the positive real numbers. Then Ω(g) is the set of functions f, also from the non-negative integers to the positive real numbers, such that for some real constant c > 0 and some non-negative integer constant n0, f(n) ≥ c·g(n) for all n ≥ n0.
In limit form: lim n→∞ f(n)/g(n) > 0 (when the limit exists).
Big Ω notation provides an asymptotic lower bound on a function.
31. LITTLE o NOTATION, LITTLE ω NOTATION
• The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. Little o notation is used to denote an upper bound that is not asymptotically tight.
o(g(n)) = { f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that f(n) < c·g(n) for all n ≥ n0 }
E.g., 2n = o(n²) but 2n² ≠ o(n²).
In limit form: lim n→∞ f(n)/g(n) = 0.
• Little ω notation is used to denote a lower bound that is not asymptotically tight.
ω(g(n)) = { f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that c·g(n) < f(n) for all n ≥ n0 }
E.g., n²/2 = ω(n) but n²/2 ≠ ω(n²).
In limit form: lim n→∞ f(n)/g(n) = ∞.
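The limit forms can be illustrated numerically; for f(n) = 2n and g(n) = n², the ratio f(n)/g(n) = 2/n shrinks toward 0 as n grows, consistent with 2n = o(n²) (the sample values below are mine and are an illustration, not a proof):

```python
# Ratio f(n)/g(n) for f(n) = 2n, g(n) = n^2 at increasing n.
# Each ratio simplifies to 2/n, so the sequence tends to 0.
ratios = [2 * n / n**2 for n in (10, 100, 1000, 10_000)]
print(ratios)  # strictly decreasing toward 0
```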