What is an Algorithm?
 The word algorithm comes from the name of the Persian mathematician al-Khwarizmi.
 In computer science, the word refers to a precise method usable by a computer for the
solution of a problem. The statement of the problem specifies in general terms the desired
input/output relationship.
 Algorithm is a step by step procedure, which defines a set of instructions to be executed in
certain order to get the desired output. Algorithms are generally created independent of
underlying languages, i.e. an algorithm can be implemented in more than one programming
language.
Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following
characteristics −
 Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or
phases), and their input/outputs should be clear and must lead to only one meaning.
 Input − An algorithm should have 0 or more well defined inputs.
 Output − An algorithm should have 1 or more well defined outputs, and should match
the desired output.
 Finiteness − Algorithms must terminate after a finite number of steps.
 Feasibility − Should be feasible with the available resources.
 Independent − An algorithm should have step-by-step directions which should be
independent of any programming code.
Algorithm Analysis
Algorithm analysis measures resource requirements: how do the amount of
time and space an algorithm uses scale with increasing input size?
Efficiency of an algorithm can be analyzed at two different stages, before implementation and
after implementation, as mentioned below −
 A priori analysis − This is theoretical analysis of an algorithm. Efficiency of algorithm
is measured by assuming that all other factors e.g. processor speed, are constant and
have no effect on implementation.
 A posteriori analysis − This is empirical analysis of an algorithm. The selected algorithm
is implemented using programming language. This is then executed on target computer
machine. In this analysis, actual statistics like running time and space required are
collected.
We shall learn here a priori algorithm analysis. Algorithm analysis deals with the execution
or running time of various operations involved. Running time of an operation can be defined
as the number of computer instructions executed per operation.
Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the
Algorithm X are the two main factors which decide the efficiency of X.
 Time Factor − The time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
 Space Factor − The space is measured by counting the maximum memory space required by the
algorithm.
The complexity of an algorithm f(n) gives the running time and / or storage space required by
the algorithm in terms of n as the size of input data.
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. Space required by an algorithm is equal to the sum of the following
two components −
 A fixed part, that is, the space required to store certain data and variables that are
independent of the size of the problem. For example, simple variables and constants used,
program size, etc.
 A variable part, that is, the space required by variables whose size depends on the size of the
problem. For example, dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part
and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.
Following is a simple example that tries to explain the concept −
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables A, B and C and one constant. Hence S(P) = 1+3. Now space
depends on data types of given variables and constant types and it will be multiplied
accordingly.
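To make the two components concrete, here is a small Python sketch (illustrative, not part of the original notes): an iterative sum needs only a fixed number of variables, while a recursive version also consumes stack space that grows with the input size.

    def sum_iterative(values):
        # Fixed part: a constant number of variables (total, v),
        # regardless of how long the input list is.
        total = 0
        for v in values:
            total += v
        return total

    def sum_recursive(values):
        # Variable part: each recursive call adds a stack frame,
        # so the space used grows linearly with len(values).
        if not values:
            return 0
        return values[0] + sum_recursive(values[1:])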
Time Complexity
Time Complexity of an algorithm represents the amount of time required by the algorithm to
run to completion. Time requirements can be defined as a numerical function T(n),
where T(n) can be measured as the number of steps, provided each step consumes constant
time. For example, addition of two n-bit integers takes n steps. Consequently, the total
computational time is T(n) = c*n, where c is the time taken for addition of two bits. Here, we
observe that T(n) grows linearly as input size increases.
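As an illustrative sketch (the function and variable names are hypothetical, not from the notes), instrumenting a simple loop with a step counter shows this linear growth directly:

    def linear_sum(values):
        steps = 0              # counter standing in for "computer instructions"
        total = 0
        for v in values:
            total += v         # one key operation per element
            steps += 1
        return total, steps

    for n in (10, 100, 1000):
        _, steps = linear_sum(list(range(n)))
        print(n, steps)        # steps == n, so T(n) = c*n grows linearly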
Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining the mathematical bound/framing of its
run-time performance. Using asymptotic analysis, we can draw conclusions about the best-case,
average-case and worst-case behaviour of an algorithm. Asymptotic analysis is input bound, i.e.,
if there is no input to the algorithm, it is concluded to work in constant time. Other than the
"input", all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical
units of computation. For example, the running time of one operation may be computed as f(n)
while that of another is computed as g(n²). This means the running time of the first operation
will increase linearly with the increase in n, whereas the running time of the second operation
will increase quadratically as n increases. Similarly, the running times of both operations will be
nearly the same if n is significantly small.
Usually, time required by an algorithm falls under three types −
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.
Asymptotic Notations
Following are the commonly used asymptotic notations for calculating the running time
complexity of an algorithm.
 Ο Notation ("Big Oh" notation)
 Ω Notation ("Omega" notation)
 θ Notation ("Theta" notation)
Big Oh Notation, Ο
The Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity or longest amount of time an algorithm can possibly
take to complete.
For example, for a function f(n)
Ο(f(n)) = { g(n) : there exist constants c > 0 and n0 such that g(n) ≤ c·f(n) for all n > n0 }
Omega Notation, Ω
The Ω(n) is the formal way to express the lower bound of an algorithm's running time. It
measures the best-case time complexity, i.e. the minimum amount of time an algorithm can
possibly take to complete.
For example, for a function f(n)
Ω(f(n)) = { g(n) : there exist constants c > 0 and n0 such that g(n) ≥ c·f(n) for all n > n0 }
Theta Notation, θ
The θ(n) is the formal way to express both the lower bound and upper bound of an algorithm's
running time. It is represented as follows −
θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }
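As a worked example (not from the original notes): for f(n) = 3n + 2 we have 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so with c1 = 3, c2 = 4 and n0 = 2 the function 3n + 2 is both Ο(n) and Ω(n), and therefore θ(n).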
Asymptotic Notation Summary
Properties of Asymptotic Growth Rate
Growth rate of Asymptotic Notation
The growth rate of the asymptotic notation is given from smallest to the biggest as follows:
1) Constant − Ο(1)
2) Logarithmic − Ο(log n)
3) Linear − Ο(n)
4) n log n − Ο(n log n)
5) Quadratic − Ο(n²)
6) Cubic − Ο(n³)
7) Polynomial − n^Ο(1)
8) Exponential − 2^Ο(n)
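A quick way to see this ordering is to evaluate each function at a few input sizes. The following Python sketch (illustrative only, not part of the original notes) prints the values side by side:

    import math

    # Evaluate each growth function at a few input sizes to see the ordering above.
    growth = [
        ("1",       lambda n: 1),
        ("log n",   lambda n: math.log2(n)),
        ("n",       lambda n: n),
        ("n log n", lambda n: n * math.log2(n)),
        ("n^2",     lambda n: n ** 2),
        ("n^3",     lambda n: n ** 3),
        ("2^n",     lambda n: 2 ** n),
    ]

    for n in (10, 20, 30):
        print(f"n = {n}")
        for name, f in growth:
            print(f"  {name:8} -> {f(n):,.0f}")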
EXAMPLE 1
EXAMPLE 2
EXAMPLE 3
MORE EXAMPLES …
STANDARD NOTATIONS & COMMON FUNCTIONS
For all real a>0, b>0, c>0, and n
3n² + 100n + 6 = Ο(n²), since 3n² + 100n + 6 ≤ 4n² for all n ≥ 101 (take c = 4 and n0 = 101).
Data Structure Review
I) LINKED LIST
A linked list is a sequence of data structures that are connected together via links. A linked list
is a sequence of links, each of which contains an item and a connection to another link.
The linked list is the second most-used data structure after the array. Following are the important
terms needed to understand the concept of a linked list.
 Link − Each link of a linked list can store a data called an element.
 Next − Each link of a linked list contains a link to the next link called Next.
 Linked List − A Linked List contains the connection link to the first link called First.
Types of Linked List
Following are the various types of linked list.
 Simple Linked List − Item navigation is forward only.
 Doubly Linked List − Items can be navigated forward and backward.
 Circular Linked List − Last item contains link of the first element as next and the first
element has a link to the last element as previous.
Basic Operations
Following are the basic operations supported by a list.
o Insertion − Adds an element at the beginning of the list.
o Deletion − Deletes an element at the beginning of the list.
o Display − Displays the complete list.
o Search − Searches an element using the given key.
o Delete − Deletes an element using the given key.
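A minimal Python sketch of these terms and operations (illustrative; the class and method names are assumptions, not from the notes), covering insertion at the beginning, deletion at the beginning, and display:

    class Node:
        def __init__(self, data):
            self.data = data   # the element stored in this link
            self.next = None   # reference to the next link

    class LinkedList:
        def __init__(self):
            self.first = None  # connection to the first link

        def insert(self, data):
            # Insertion - add an element at the beginning of the list.
            node = Node(data)
            node.next = self.first
            self.first = node

        def delete(self):
            # Deletion - remove the element at the beginning of the list.
            if self.first is not None:
                self.first = self.first.next

        def display(self):
            # Display - walk the links until Next is None.
            current = self.first
            while current is not None:
                print(current.data, end=" ")
                current = current.next
            print()

    lst = LinkedList()
    for x in (3, 2, 1):
        lst.insert(x)
    lst.display()   # 1 2 3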
Doubly Linked List
Doubly Linked List is a variation of Linked list in which navigation is possible in both ways,
either forward or backward easily as compared to Single Linked List. Following are the
important terms to understand the concept of doubly linked list.
 Link − Each link of a linked list can store a data called an element.
 Next − Each link of a linked list contains a link to the next link called Next.
 Prev − Each link of a linked list contains a link to the previous link called Prev.
 Linked List − A Linked List contains the connection link to the first link called First and
to the last link called Last.
Circularly Linked List
Circular Linked List is a variation of Linked list in which the first element points to the last
element and the last element points to the first element. Both Singly Linked List and Doubly
Linked List can be made into a circular linked list.
II) STACK
A stack is an Abstract Data Type (ADT), commonly used in most programming languages. It is
named stack as it behaves like a real-world stack, for example – a deck of cards or a pile of
plates, etc.
A real-world stack allows operations at one end only. For example, we can place or remove a
card or plate from the top of the stack only. Likewise, Stack ADT allows all data operations at
one end only. At any given time, we can only access the top element of a stack. This feature
makes it LIFO data structure. LIFO stands for Last-in-first-out. Here, the element which is
placed (inserted or added) last is accessed first. In stack terminology, insertion operation is called
PUSH operation and removal operation is called POP operation.
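A minimal Python sketch of the Stack ADT (illustrative; the original notes' figures and listings are not reproduced in this copy):

    class Stack:
        def __init__(self, capacity):
            self.items = []
            self.capacity = capacity

        def push(self, item):
            # PUSH - insert at the top; fails if the stack is full.
            if len(self.items) >= self.capacity:
                raise OverflowError("stack overflow")
            self.items.append(item)

        def pop(self):
            # POP - remove and return the top (last-in) element.
            if not self.items:
                raise IndexError("stack underflow")
            return self.items.pop()

        def peek(self):
            # At any given time, only the top element is accessible.
            return self.items[-1]

    s = Stack(capacity=10)
    s.push(1); s.push(2); s.push(3)
    print(s.pop(), s.pop(), s.pop())   # 3 2 1  (LIFO)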
III) QUEUE
Queue is an abstract data structure, somewhat similar to Stacks. Unlike stacks, a queue is open at
both its ends. One end is always used to insert data (enqueue) and the other is used to remove
data (dequeue). Queue follows First-In-First-Out methodology, i.e., the data item stored first
will be accessed first.
A real-world example of queue can be a single-lane one-way road, where the vehicle enters first,
exits first. More real-world examples can be seen as queues at the ticket windows and bus-stops.
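A minimal Python sketch of enqueue and dequeue using the standard library's deque (illustrative, not from the original notes):

    from collections import deque

    queue = deque()

    # enqueue - data always enters at one end...
    queue.append("car 1")
    queue.append("car 2")
    queue.append("car 3")

    # ...and dequeue removes from the other end (FIFO).
    print(queue.popleft())   # car 1 - the item stored first is accessed first
    print(queue.popleft())   # car 2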
IV) TREE
The data structures that we have discussed in previous lectures are linear data structures. The
linked list and stack are linear data structures. In these structures, the elements are in a line. We
put and get elements in and from a stack in linear order. Queue is also a linear data structure as a
line is developed in it. There are a number of applications where linear data structures are not
appropriate. In such cases, there is a need for some non-linear data structure. Some examples will
show us why non-linear data structures are important. The tree is one of the non-linear data
structures. The figure below shows a genealogy tree of a family.
A tree is a widely used abstract data type (ADT)—or data structure implementing this ADT—
that simulates a hierarchical tree structure, with a root value and sub-trees of children with a
parent node, represented as a set of linked nodes.
A tree data structure can be defined recursively (locally) as a collection of nodes (starting at a
root node), where each node is a data structure consisting of a value, together with a list of
references to nodes (the "children").
A tree is a data structure made up of nodes or vertices and edges without having any cycle.
The tree with no nodes is called the null or empty tree. A tree that is not empty consists of a root
node and potentially many levels of additional nodes that form a hierarchy. Terminologies used
in tree include root, children, sibling, parent, descendant, leaf & ancestor.
Degree − The number of sub-trees of a node.
Depth − The number of edges from the tree's root node to the node.
Leaf − A node which does not have any child node is called a leaf node.
Sub-tree − A sub-tree represents the descendants of a node.
Levels − The level of a node represents the generation of the node. If the root node is at level
0, then its next child node is at level 1, its grandchild is at level 2, and so on.
Binary Tree
The mathematical definition of a binary tree is “A binary tree is a finite set of elements that is
either empty or is partitioned into three disjoint subsets. The first subset contains a single
element called the root of the tree. The other two subsets are themselves binary trees called the
left and right sub-trees”. Each element of a binary tree is called a node of the tree. Following
figure shows a binary tree.
A binary tree has a special condition: each node can have a maximum of two children. A
binary tree has the benefits of both an ordered array and a linked list, as search is as quick as in a
sorted array and insertion or deletion operations are as fast as in a linked list.
Binary Search Tree Representation
Binary Search tree exhibits a special behavior. A node's left child must have a value less than its
parent's value and the node's right child must have a value greater than its parent value.
A Binary Search Tree (BST) is a tree in which all the nodes follow the below-mentioned
properties −
 The left sub-tree of a node has a key less than or equal to its parent node's key.
 The right sub-tree of a node has a key greater than or equal to its parent node's key.
Thus, a BST divides all its sub-trees into two segments, the left sub-tree and the right sub-tree,
and can be defined as −
left_subtree (keys) ≤ node (key) ≤ right_subtree (keys)
Binary Search Tree Basic Operations
The basic operations that can be performed on a binary search tree data structure are the
following −
 Insert − Inserts an element in a tree/creates a tree.
 Search − Searches for an element in a tree.
In-order Traversal: In this traversal method, the left sub tree is visited first, then the root and
later the right sub-tree.
Pre-order Traversal: In this traversal method, the root node is visited first, then the left sub
tree and finally the right sub tree.
Post-order Traversal: In this traversal method, the root node is visited last, hence the name.
First we traverse the left sub-tree, then the right sub-tree, and finally the root node.
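A minimal Python sketch of a BST with insert, search and in-order traversal (illustrative; the names and sample keys are assumptions, not from the notes):

    class BSTNode:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        # Keys smaller than the node go left, larger (or equal) go right.
        if root is None:
            return BSTNode(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    def search(root, key):
        if root is None or root.key == key:
            return root
        return search(root.left, key) if key < root.key else search(root.right, key)

    def in_order(root):
        # Left sub-tree, then root, then right sub-tree: yields sorted keys.
        if root is not None:
            yield from in_order(root.left)
            yield root.key
            yield from in_order(root.right)

    root = None
    for k in (27, 14, 35, 10, 19, 31, 42):
        root = insert(root, k)
    print(list(in_order(root)))          # [10, 14, 19, 27, 31, 35, 42]
    print(search(root, 31) is not None)  # True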
Heaps
A heap is a special case of a balanced binary tree data structure where the root-node key is compared
with its children and arranged accordingly: if α has a child node β, then key(α) ≥ key(β).
As the value of the parent is greater than or equal to that of its children, this property generates a Max Heap.
Based on these criteria, a heap can be of two types −
Min-Heap − Where the value of the root node is less than or equal to either of its children
Max-Heap − Where the value of the root node is greater than or equal to either of its
children.
Max Heap Construction Algorithm
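The construction steps shown in the original notes' figure are not reproduced in this copy. As a minimal sketch, one common way to build a max heap is to insert elements one by one, sifting each new value up past smaller parents:

    def max_heap_insert(heap, value):
        # Append at the end, then sift up while the parent is smaller,
        # restoring key(parent) >= key(child).
        heap.append(value)
        i = len(heap) - 1
        while i > 0:
            parent = (i - 1) // 2
            if heap[parent] >= heap[i]:
                break
            heap[parent], heap[i] = heap[i], heap[parent]
            i = parent

    heap = []
    for v in (3, 9, 2, 1, 4, 5):
        max_heap_insert(heap, v)
    print(heap[0])   # 9 - the root holds the maximum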
Graphs
A graph is a pictorial representation of a set of objects where some pairs of objects are
connected by links. The interconnected objects are represented by points termed as vertices, and
the links that connect the vertices are called edges.
Formally, a graph is a pair of sets (V, E), where V is the set of vertices and E is the set of
edges, connecting the pairs of vertices. Take a look at the following graph:
In the above graph,
V = {a, b, c, d, e}
E = {ab, ac, bd, cd, de}
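A common way to store such a graph in code is an adjacency list. A minimal Python sketch for the graph above (illustrative, not from the notes):

    # Adjacency-list representation of the graph above.
    graph = {
        "a": ["b", "c"],
        "b": ["a", "d"],
        "c": ["a", "d"],
        "d": ["b", "c", "e"],
        "e": ["d"],
    }

    V = set(graph)                                            # vertices
    E = {frozenset((u, v)) for u in graph for v in graph[u]}  # undirected edges
    print(len(V), len(E))   # 5 vertices, 5 edges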
Graph Data Structure
Vertex − Each node of the graph is represented as a vertex
Edge − Edge represents a path between two vertices or a line between two vertices.
Adjacency − Two nodes or vertices are adjacent if they are connected to each other through
an edge. In the following example, B is adjacent to A, C is adjacent to B, and so on.
Path − Path represents a sequence of edges between the two vertices. In the following
example, ABCD represents a path from A to D.
Basic Operations
Following are the basic primary operations that can be performed on a Graph:
Add Vertex − Adds a vertex to the graph.
Add Edge − Adds an edge between the two vertices of the graph.
Display Vertex − Displays a vertex of the graph.
Graph Traversal Techniques
Tree traversal & applications
Traversal is a process to visit all the nodes of a tree, and it may print their values too. Because all
nodes are connected via edges (links), we always start from the root (head) node; that is, we
cannot randomly access a node in a tree. There are three ways to traverse a tree −
 In-order Traversal
 Pre-order Traversal
 Post-order Traversal
In-order Traversal
In this traversal method, the left sub tree is visited first, then the root and later the right sub-tree.
We should always remember that every node may represent a sub tree itself. If a binary tree is
traversed in-order, the output will produce sorted key values in an ascending order.
We start from A, and following in-order traversal, we move to its left sub tree B. B is also
traversed in-order. The process goes on until all the nodes are visited. The output of in order
traversal of this tree will be −
D → B → E → A → F → C → G
Pre-order Traversal
In this traversal method, the root node is visited first, then the left sub tree and finally the right
sub tree.
We start from A, and following pre-order traversal, we first visit A itself and then move to its left
sub tree B. B is also traversed pre-order. The process goes on until all the nodes are visited. The
output of pre-order traversal of this tree will be −
A → B → D → E → C → F → G
Post-order Traversal
In this traversal method, the root node is visited last, hence the name. First we traverse the left
sub tree, then the right sub tree and finally the root node.
We start from A, and following post-order traversal, we first visit the left sub-tree B. B is also
traversed post-order. The process goes on until all the nodes are visited. The output of post-order
traversal of this tree will be −
D → E → B → F → G → C → A
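A minimal Python sketch of all three traversals (illustrative; the tree is rebuilt from the outputs stated above):

    class TreeNode:
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    # The tree used in the traversal examples above.
    root = TreeNode("A",
                    TreeNode("B", TreeNode("D"), TreeNode("E")),
                    TreeNode("C", TreeNode("F"), TreeNode("G")))

    def in_order(node):
        return in_order(node.left) + [node.value] + in_order(node.right) if node else []

    def pre_order(node):
        return [node.value] + pre_order(node.left) + pre_order(node.right) if node else []

    def post_order(node):
        return post_order(node.left) + post_order(node.right) + [node.value] if node else []

    print(in_order(root))    # ['D', 'B', 'E', 'A', 'F', 'C', 'G']
    print(pre_order(root))   # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
    print(post_order(root))  # ['D', 'E', 'B', 'F', 'G', 'C', 'A']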
Depth First Traversal
Depth First Search (DFS) algorithm traverses a graph in a depth-ward motion and uses a stack
to remember to get the next vertex to start a search, when a dead end occurs in any iteration.
As in the example given above, DFS algorithm traverses from A to B to C to D first then to E,
then to F and lastly to G. It employs the following rules.
Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a
stack.
Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all
the vertices from the stack, which do not have adjacent vertices.)
Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.
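A minimal Python sketch of DFS with an explicit stack (illustrative; the figure's graph is not reproduced in this copy, so the small graph from the Graphs section is reused):

    def dfs(graph, start):
        visited, stack, order = set(), [start], []
        while stack:
            vertex = stack.pop()             # Rule 2: fall back via the stack
            if vertex in visited:
                continue
            visited.add(vertex)              # Rule 1: visit, mark, display
            order.append(vertex)
            # Push neighbours; unvisited ones will be explored depth-first.
            for neighbour in reversed(graph[vertex]):
                if neighbour not in visited:
                    stack.append(neighbour)
        return order

    graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
             "d": ["b", "c", "e"], "e": ["d"]}
    print(dfs(graph, "a"))   # ['a', 'b', 'd', 'c', 'e']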
Breadth First Traversal
Breadth First Search (BFS) algorithm traverses a graph in a breadth-ward motion and uses a
queue to remember to get the next vertex to start a search, when a dead end occurs in any
iteration.
As in the example given above, BFS algorithm traverses from A to B to E to F first then to C and
G lastly to D. It employs the following rules.
 Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert it in a
queue.
 Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.
 Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty.
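A matching BFS sketch using a queue (illustrative; same small graph as the DFS sketch):

    from collections import deque

    def bfs(graph, start):
        visited, order = {start}, []
        queue = deque([start])               # Rule 1: insert visited vertices in a queue
        while queue:
            vertex = queue.popleft()         # Rule 2: remove the first vertex
            order.append(vertex)
            for neighbour in graph[vertex]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
        return order

    graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
             "d": ["b", "c", "e"], "e": ["d"]}
    print(bfs(graph, "a"))   # ['a', 'b', 'c', 'd', 'e']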
Searching Techniques
1) Linear Search
Linear search is a very simple search algorithm. In this type of search, a sequential search is
made over all items one by one. Every item is checked and if a match is found then that
particular item is returned, otherwise the search continues till the end of the data collection.
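A minimal Python sketch (illustrative, not from the original notes):

    def linear_search(items, target):
        # Check every item one by one until a match is found.
        for index, item in enumerate(items):
            if item == target:
                return index       # position of the matching item
        return -1                  # search reached the end: not found

    print(linear_search([10, 14, 19, 26, 27, 31, 33, 35, 42, 44], 33))  # 6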
2) Binary search
Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search
algorithm works on the principle of divide and conquers. For this algorithm to work properly, the
data collection should be in the sorted form.
Binary search looks for a particular item by comparing the middle-most item of the collection. If
a match occurs, then the index of the item is returned. If the middle item is greater than the
sought item, the item is searched for in the sub-array to the left of the middle item; otherwise, it
is searched for in the sub-array to the right of the middle item. This process continues on the
sub-array until the size of the sub-array reduces to zero.
How Binary Search Works?
For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the
process of binary search with a pictorial example. The following is our sorted array and let us
assume that we need to search the location of value 31 using binary search.
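The pictorial walk-through from the original notes is not reproduced in this copy; a minimal Python sketch of the same procedure (the array values are illustrative) is:

    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid                 # match: return the index
            if sorted_items[mid] > target:
                hi = mid - 1               # middle too large: search left half
            else:
                lo = mid + 1               # middle too small: search right half
        return -1                          # sub-array shrank to zero: not found

    data = [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]
    print(binary_search(data, 31))   # 5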
3) Interpolation Search
Interpolation search is an improved variant of binary search. This search algorithm probes a
position based on the value of the key being searched. For this algorithm to work properly, the
data collection should be sorted and uniformly distributed.
Binary search has a huge advantage of time complexity over linear search. Linear search has
worst-case complexity of Ο(n) whereas binary search has Ο(log n).
There are cases where the location of target data may be known in advance. For example, in the
case of a telephone directory, if we want to search for the telephone number of Morphius, linear
search and even binary search will seem slow, as we could jump directly to the memory space
where the names starting with 'M' are stored.
Positioning in Binary Search
In binary search, if the desired data is not found then the rest of the list is divided in two parts,
lower and higher. The search is carried out in either of them.
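Interpolation search instead estimates the probe position from the value being sought. A minimal Python sketch (illustrative, not from the notes):

    def interpolation_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi and sorted_items[lo] <= target <= sorted_items[hi]:
            if sorted_items[hi] == sorted_items[lo]:
                pos = lo                      # all remaining values are equal
            else:
                # Probe position estimated from the target's value, not the midpoint.
                pos = lo + (target - sorted_items[lo]) * (hi - lo) // (sorted_items[hi] - sorted_items[lo])
            if sorted_items[pos] == target:
                return pos
            if sorted_items[pos] < target:
                lo = pos + 1
            else:
                hi = pos - 1
        return -1

    print(interpolation_search([10, 14, 19, 26, 27, 31, 33, 35, 42, 44], 33))  # 6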
4) Hashing
Hash Table is a data structure which stores data in an associative manner. In a hash table, data is
stored in an array format, where each data value has its own unique index value. Access of data
becomes very fast if we know the index of the desired data. Thus, it becomes a data structure in
which insertion and search operations are very fast irrespective of the size of the data. A hash
table uses an array as a storage medium and uses a hashing technique to generate an index where
an element is to be inserted or located.
Hashing
Hashing is a technique to convert a range of key values into a range of indexes of an array. We
are going to use the modulo operator to get a range of key values. Consider an example of a hash
table of size 20, where the following items are to be stored. Items are in the (key, value) format.
 (1,20)
 (2,70)
 (42,80)
 (4,25)
 (12,44)
 (14,32)
 (17,11)
 (13,78)
 (37,98)
Sorting Techniques
Sorting algorithm
Sorting refers to arranging data in a particular format. Sorting algorithm specifies the way
to arrange data in a particular order. Most common orders are in numerical or lexicographical
order. The importance of sorting lies in the fact that data searching can be optimized to a very
high level, if data is stored in a sorted manner. Sorting is also used to represent data in more
readable formats. Following are some of the examples of sorting in real-life scenarios:
 Telephone Directory – The telephone directory stores the telephone numbers of people
sorted by their names, so that the names can be searched easily.
 Dictionary – The dictionary stores words in an alphabetical order so that searching of
any word becomes easy.
In-place Sorting and Not-in-place Sorting
Sorting algorithms may require some extra space for comparisons and temporary storage
of a few data elements. Algorithms that do not require any extra space, and whose sorting
happens in place (for example, within the array itself), are called in-place sorting algorithms.
Bubble sort is an example of in-place sorting.
However, in some sorting algorithms, the program requires space which is more than or
equal to the elements being sorted. Sorting which uses equal or more extra space is called
not-in-place sorting. Merge sort is an example of not-in-place sorting.
Stable and Not Stable Sorting
If a sorting algorithm, after sorting the contents, does not change the sequence of similar content
in which they appear, it is called stable sorting.
If a sorting algorithm, after sorting the contents, changes the sequence of similar content in
which they appear, it is called unstable sorting.
Adaptive and Non-Adaptive Sorting Algorithm
A sorting algorithm is said to be adaptive, if it takes advantage of already 'sorted'
elements in the list that is to be sorted. That is, while sorting if the source list has some element
already sorted, adaptive algorithms will take this into account and will try not to re-order them.
A non-adaptive algorithm is one which does not take into account the elements which are
already sorted; it tries to force every single element to be re-ordered to confirm that the list
is sorted.
Important Terms
Some terms are generally coined while discussing sorting techniques, here is a brief introduction
to them –
Increasing Order
A sequence of values is said to be in increasing order, if the successive element is greater than
the previous one. For example, 1, 3, 4, 6, 8, 9 are in increasing order, as every next element is
greater than the previous element.
Decreasing Order
A sequence of values is said to be in decreasing order, if the successive element is less than the
current one. For example, 9, 8, 6, 4, 3, 1 are in decreasing order, as every next element is less
than the previous element.
Non-Increasing Order
A sequence of values is said to be in non-increasing order, if the successive element is less than
or equal to its previous element in the sequence. This order occurs when the sequence contains
duplicate values. For example, 9, 8, 6, 3, 3, 1 are in non-increasing order, as every next element
is less than or equal to (in the case of 3) but not greater than the previous element.
Non-Decreasing Order
A sequence of values is said to be in non-decreasing order, if the successive element is greater
than or equal to its previous element in the sequence. This order occurs when the sequence
contains duplicate values. For example, 1, 3, 3, 6, 8, 9 are in non-decreasing order, as every next
element is greater than or equal to (in case of 3) but not less than the previous one.
2) Bubble Sort Algorithm
Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each pair
of adjacent elements is compared, and the elements are swapped if they are not in order. This
algorithm is not suitable for large data sets, as its average and worst-case complexity are Ο(n²),
where n is the number of items.
How Bubble Sort Works?
We take an unsorted array for our example. Bubble sort takes Ο(n²) time, so we're keeping the
example short and precise.
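A minimal Python sketch (illustrative; the sample values follow the kind of walk-through figures used with this example):

    def bubble_sort(items):
        n = len(items)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                # Compare each pair of adjacent elements...
                if items[j] > items[j + 1]:
                    # ...and swap them if they are out of order.
                    items[j], items[j + 1] = items[j + 1], items[j]
                    swapped = True
            if not swapped:          # no swaps means already sorted: stop early
                break
        return items

    print(bubble_sort([14, 33, 27, 35, 10]))   # [10, 14, 27, 33, 35]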
3) Insertion Sort
This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which
is always sorted. For example, the lower part of an array is maintained to be sorted. An
element which is to be inserted into this sorted sub-list has to find its appropriate place and
then be inserted there, hence the name insertion sort. The array is searched sequentially and
unsorted items are moved and inserted into the sorted sub-list (in the same array). This
algorithm is not suitable for large data sets, as its average and worst-case complexity are
Ο(n²), where n is the number of items.
How Insertion Sort Works?
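The pictorial walk-through from the original notes is not reproduced in this copy; a minimal Python sketch of the algorithm is:

    def insertion_sort(items):
        for i in range(1, len(items)):
            key = items[i]           # next unsorted element
            j = i - 1
            # Shift larger sorted elements right to open the insertion point.
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = key       # insert into the sorted sub-list
        return items

    print(insertion_sort([14, 33, 27, 10, 35, 19, 42, 44]))
    # [10, 14, 19, 27, 33, 35, 42, 44]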
4) Selection Sort
Selection sort is a simple sorting algorithm. This sorting algorithm is an in-place
comparison-based algorithm in which the list is divided into two parts, the sorted part at
the left end and the unsorted part at the right end. Initially, the sorted part is empty and the
unsorted part is the entire list. The smallest element is selected from the unsorted array
and swapped with the leftmost element, and that element becomes a part of the sorted
array. This process continues moving unsorted array boundary by one element to the right.
This algorithm is not suitable for large data sets, as its average and worst-case complexities
are Ο(n²), where n is the number of items.
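A minimal Python sketch (illustrative, not from the original notes):

    def selection_sort(items):
        n = len(items)
        for i in range(n - 1):
            # Find the smallest element in the unsorted part items[i:].
            smallest = i
            for j in range(i + 1, n):
                if items[j] < items[smallest]:
                    smallest = j
            # Swap it with the leftmost unsorted element, growing the sorted part.
            items[i], items[smallest] = items[smallest], items[i]
        return items

    print(selection_sort([14, 33, 27, 10, 35, 19, 42, 44]))
    # [10, 14, 19, 27, 33, 35, 42, 44]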
5) Merge sort
Merge sort is a sorting technique based on divide and conquer technique. With worst-case time
complexity being Ο(n log n), it is one of the most respected algorithms. Merge sort first divides
the array into equal halves and then combines them in a sorted manner.
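A minimal Python sketch of the divide and combine steps (illustrative, not from the original notes):

    def merge_sort(items):
        if len(items) <= 1:
            return items                       # an atomic list is already sorted
        mid = len(items) // 2
        left = merge_sort(items[:mid])         # divide into equal halves...
        right = merge_sort(items[mid:])
        return merge(left, right)              # ...then combine in sorted order

    def merge(left, right):
        result, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i]); i += 1
            else:
                result.append(right[j]); j += 1
        return result + left[i:] + right[j:]   # append whichever half remains

    print(merge_sort([14, 33, 27, 10, 35, 19, 42, 44]))
    # [10, 14, 19, 27, 33, 35, 42, 44]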
6) Shell Sort
Shell sort is a highly efficient sorting algorithm based on insertion sort. It avoids the large
shifts insertion sort makes when a small value far to the right has to be moved to the far left.
The algorithm first uses insertion sort on widely spaced elements, then sorts the less widely
spaced elements. This spacing is termed the interval, which is calculated based on Knuth's
formula as −
h = h * 3 + 1
where −
h is the interval with initial value 1
This algorithm is quite efficient for medium-sized data sets; with Knuth's gap sequence its
worst-case complexity is Ο(n^(3/2)), where n is the number of items.
How Shell Sort Works?
Let us consider the following example to have an idea of how shell sort works. We take the same
array we have used in our previous examples. For our example and ease of understanding, we
take the interval of 4. Make a virtual sub-list of all values located at the interval of 4 positions.
Here these values are {35, 14}, {33, 19}, {42, 27} and {10, 44}.
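A minimal Python sketch that derives the interval from Knuth's formula and then gap-sorts (illustrative, not from the original notes):

    def shell_sort(items):
        # Knuth's sequence: h = h * 3 + 1, largest interval below len(items).
        h = 1
        while h * 3 + 1 < len(items):
            h = h * 3 + 1
        while h >= 1:
            # Insertion-sort each virtual sub-list of elements h apart.
            for i in range(h, len(items)):
                key = items[i]
                j = i
                while j >= h and items[j - h] > key:
                    items[j] = items[j - h]
                    j -= h
                items[j] = key
            h //= 3                 # shrink the interval and repeat
        return items

    print(shell_sort([35, 33, 42, 10, 14, 19, 27, 44]))
    # [10, 14, 19, 27, 33, 35, 42, 44]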
7) Quick Sort
Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into
smaller arrays. A large array is partitioned into two arrays, one of which holds values smaller
than a specified value, called the pivot, on which the partition is made, and another of which
holds values greater than the pivot. Quick sort partitions the array and then calls itself
recursively twice to sort the two resulting sub-arrays. This algorithm is quite efficient for
large-sized data sets, as its average-case complexity is Ο(n log n); its worst case, however, is
Ο(n²), where n is the number of items.
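A minimal Python sketch (illustrative; this simple variant is not in-place, unlike typical partition-exchange implementations):

    def quick_sort(items):
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        # Partition into values smaller than, equal to, and greater than the pivot.
        smaller = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        # Recursively sort the two sub-arrays, then concatenate.
        return quick_sort(smaller) + equal + quick_sort(greater)

    print(quick_sort([14, 33, 27, 10, 35, 19, 42, 44]))
    # [10, 14, 19, 27, 33, 35, 42, 44]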
The divide & conquer method
The divide-and-conquer strategy solves a problem by:
1. Breaking it into sub problems that are themselves smaller instances of the same type of
problem
2. Recursively solving these sub problems
3. Appropriately combining their answers
In the divide-and-conquer approach, the problem at hand is divided into smaller sub-problems,
and each sub-problem is then solved independently. When we keep dividing the sub-problems
into even smaller sub-problems, we eventually reach a stage where no more division is possible.
Those "atomic", smallest-possible sub-problems are solved, and the solutions of all the
sub-problems are finally merged to obtain the solution of the original problem.
Broadly, we can understand divide-and-conquer approach in a three-step process.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should
represent a part of the original problem. This step generally takes a recursive approach to divide
the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic
in nature but still represent some part of the actual problem.
Conquer/Solve
This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the
problems are considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines their solutions until
they form the solution of the original problem. This algorithmic approach works recursively,
and the conquer and merge steps work so closely together that they appear as one.
Examples
The following computer algorithms are based on divide-and-conquer programming approach −
o Merge Sort
o Quick Sort
o Binary Search
o Strassen's Matrix Multiplication
o Closest Pair (points)
There are various ways to solve any computer problem, but those mentioned above are good
examples of the divide-and-conquer approach.
Divide and conquer (D&C) is an algorithm design paradigm based on multi-branched
recursion. A divide and conquer algorithm works by recursively breaking down a problem into
two or more sub-problems of the same or related type, until these become simple enough to be
solved directly. The solutions to the sub-problems are then combined to give a solution to the
original problem.
This divide and conquer technique is the basis of efficient algorithms for all kinds of problems,
such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g. the Karatsuba
algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and
computing the discrete Fourier transform (FFTs).
Understanding and designing D&C algorithms is a complex skill that requires a good
understanding of the nature of the underlying problem to be solved. As when proving a theorem
by induction, it is often necessary to replace the original problem with a more general or
complicated problem in order to initialize the recursion, and there is no systematic method for
finding the proper generalization. These D&C complications are seen when optimizing the
calculation of a Fibonacci number with efficient double recursion.
The Greedy Method
An algorithm is designed to achieve an optimal solution for a given problem. In the greedy
approach, decisions are made from the given solution domain: being greedy, the closest solution
that seems to provide an optimum is chosen. Greedy algorithms try to find a localized optimum
solution, which may eventually lead to a globally optimized solution. In general, however,
greedy algorithms do not provide globally optimized solutions.
A greedy algorithm is an algorithmic paradigm that follows the problem solving heuristic
of making the locally optimal choice at each stage with the hope of finding a global optimum. In
many problems, a greedy strategy does not in general produce an optimal solution, but
nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global
optimal solution in a reasonable time.
In general, greedy algorithms have five components:
i. A candidate set, from which a solution is created
ii. A selection function, which chooses the best candidate to be added to the solution
iii. A feasibility function, that is used to determine if a candidate can be used to contribute to
a solution
iv. An objective function, which assigns a value to a solution, or a partial solution, and
v. A solution function, which will indicate when we have discovered a complete solution
A game like chess can be won only by thinking ahead: a player who is focused entirely on
immediate advantage is easy to defeat. But in many other games, such as Scrabble, it is possible
to do quite well by simply making whichever move seems best at the moment and not worrying
too much about future consequences. This sort of myopic behavior is easy and convenient,
making it an attractive algorithmic strategy.
Greedy algorithms build up a solution piece by piece, always choosing the next piece that
offers the most obvious and immediate benefit. Although such an approach can be disastrous for
some computational tasks, there are many for which it is optimal. Our first example is that of
minimum spanning trees.
If a greedy algorithm can be proven to yield the global optimum for a given problem class, it
typically becomes the method of choice because it is faster than other optimization methods like
dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's
algorithm for finding minimum spanning trees, and the algorithm for finding optimum Huffman
trees. Greedy algorithms appear in network routing as well.
Counting Coins
This problem is to count to a desired value by choosing the least possible coins and the greedy
approach forces the algorithm to pick the largest possible coin. If we are provided coins of € 1, 2,
5 and 10 and we are asked to count € 18 then the greedy procedure will be −
o 1 − Select one € 10 coin; the remaining count is 8
o 2 − Then select one € 5 coin; the remaining count is 3
o 3 − Then select one € 2 coin; the remaining count is 1
o 4 − Finally, the selection of one € 1 coin solves the problem
This seems to work fine: for this count we need to pick only 4 coins. But if we slightly change
the problem, the same approach may not produce the same optimum result. For a currency
system where we have coins of value 1, 7 and 10, counting coins for the value 18 will be
absolutely optimum, but for a count like 15 the greedy approach may use more coins than
necessary. For example, it will use 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same
problem could be solved by using only 3 coins (7 + 7 + 1). Hence, we may conclude that the
greedy approach picks an immediately optimized solution and may fail where global
optimization is a major concern.
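A minimal Python sketch of the greedy procedure, reproducing both outcomes (illustrative, not from the original notes):

    def greedy_coin_count(amount, denominations):
        # Always pick the largest coin that still fits.
        coins = []
        for coin in sorted(denominations, reverse=True):
            while amount >= coin:
                amount -= coin
                coins.append(coin)
        return coins

    print(greedy_coin_count(18, [1, 2, 5, 10]))   # [10, 5, 2, 1] - optimal here
    print(greedy_coin_count(15, [1, 7, 10]))      # [10, 1, 1, 1, 1, 1] - 6 coins,
                                                  # although 7 + 7 + 1 uses only 3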
Examples
Many networking algorithms use the greedy approach. Here is a list of a few of them −
 Travelling Salesman Problem
 Prim's Minimal Spanning Tree Algorithm
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
 Graph - Map Coloring
 Graph - Vertex Cover
 Knapsack Problem
 Job Scheduling Problem
There are lots of similar problems that use the greedy approach to find an optimum solution.
A minimum spanning tree (MST) or minimum weight spanning tree is a subset of the edges of a
connected, edge-weighted undirected graph that connects all the vertices together, without any
cycles and with the minimum possible total edge weight. That is, it is a spanning tree whose sum
of edge weights is as small as possible. More generally, any undirected graph (not necessarily
connected) has a minimum spanning forest, which is a union of the minimum spanning trees for
its connected components.
There are quite a few use cases for minimum spanning trees. One example would be a
telecommunications company which is trying to lay out cables in a new neighborhood. If it is
constrained to bury the cable only along certain paths (e.g. along roads), then there would be a
graph representing which points are connected by those paths. Some of those paths might be
more expensive, because they are longer, or require the cable to be buried deeper; these paths
would be represented by edges with larger weights. Currency is an acceptable unit for edge
weight. A spanning tree for that graph would be a subset of those paths that has no cycles but
still connects to every house; there might be several spanning trees possible. A minimum
spanning tree would be one with the lowest total cost, thus would represent the least expensive
path for laying the cable.
Applications of MST
Minimum spanning trees have direct applications in the design of networks, including
computer networks, telecommunications networks, transportation networks, water supply
networks, and electrical grids. They are invoked as subroutines in algorithms for other problems,
including the Christofides algorithm for approximating the traveling salesman problem,
approximating the multi-terminal minimum cut problem (which is equivalent in the single-
terminal case to the maximum flow problem), and approximating the minimum-cost weighted
perfect matching.
Other practical applications based on minimal spanning trees include:
o Civil Network Planning
o Computer Network Routing Protocol
o Cluster Analysis
o Taxonomy
o Cluster analysis: clustering points in the plane, single-linkage clustering (a method of hierarchical
clustering), graph-theoretic clustering, and clustering gene expression data.
o Constructing trees for broadcasting in computer networks; on Ethernet networks this is
accomplished by means of the Spanning Tree Protocol.
o Image registration and segmentation – see minimum spanning tree-based segmentation.
o Curvilinear feature extraction in computer vision.
o Handwriting recognition of mathematical expressions.
o Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse
response filters.
o Regionalisation of socio-geographic areas, the grouping of areas into homogeneous, contiguous
regions.
o Comparing ecotoxicology data.
o Topological observability in power systems.
o Measuring homogeneity of two-dimensional materials.
o Minimax process control.
o Minimum spanning trees can also be used to describe financial markets. A correlation matrix can
be created by calculating a coefficient of correlation between any two stocks. This matrix can be
represented topologically as a complex network and a minimum spanning tree can be
constructed to visualize relationships.
Minimum Spanning Trees
A spanning tree is a subset of Graph G, which has all the vertices covered with minimum
possible number of edges. Hence, a spanning tree does not have cycles and it cannot be
disconnected. By this definition, we can draw a conclusion that every connected and undirected
Graph G has at least one spanning tree. A disconnected graph does not have any spanning tree,
as it cannot be spanned to all its vertices.
General Properties of Spanning Tree
We now understand that one graph can have more than one spanning tree. Following are a few
properties of the spanning tree connected to graph G -
 A connected graph G can have more than one spanning tree.
 All possible spanning trees of graph G, have the same number of edges and vertices.
 The spanning tree does not have any cycle (loops).
Removing one edge from the spanning tree will make the graph disconnected, i.e. the
spanning tree is minimally connected. Adding one edge to the spanning tree will create a circuit
or loop, i.e. the spanning tree is maximally acyclic.
Suppose you are asked to network a collection of computers by linking selected pairs of
them. This translates into a graph problem in which nodes are computers, undirected edges are
potential links, and the goal is to pick enough of these edges that the nodes are connected. But
this is not all; each link also has a maintenance cost, reflected in that edge's weight. What is the
cheapest possible network?
Property 1 : Removing a cycle edge cannot disconnect a graph.
So the solution must be connected and acyclic: undirected graphs of this kind are called trees.
The particular tree we want is the one with minimum total weight, known as the minimum
spanning tree. Here is its formal definition.
Input: An undirected graph G = (V, E); edge weights w_e.
Output: A tree T = (V, E′), with E′ ⊆ E, that minimizes weight(T) = Σ_{e ∈ E′} w_e.
This figure shows there may be more than one minimum spanning tree in a graph. In the figure,
the two trees below the graph are two possibilities of minimum spanning tree of the given graph
Kruskal's algorithm and Prim's algorithm, described next, are both greedy algorithms.
Kruskal's algorithm finds the minimum-cost spanning tree using the greedy approach.
It treats the graph as a forest and every node it has as an individual tree. A tree
connects to another if and only if it has the least cost among all available options and does
not violate the MST properties. To understand Kruskal's algorithm, let us consider the following
example –
Step 1 - Remove all loops and parallel edges
Remove all loops and parallel edges from the given graph.
In case of parallel edges, keep the one which has the least cost associated and remove all
others.
Step 2 - Arrange all edges in their increasing order of weight
The next step is to create a set of edges and weight, and arrange them in an ascending order of
weightage (cost).
Step 3 - Add the edge which has the least weightage
Now we start adding edges to the graph, beginning with the one which has the least weight.
Throughout, we keep checking that the spanning-tree properties remain intact. If adding an
edge would break the spanning-tree property, we do not include that edge in the graph.
The least cost is 2 and edges involved are B,D and D,T. We add them. Adding them does not
violate spanning tree properties, so we continue to our next edge selection. Next cost is 3, and
associated edges are A,C and C,D. We add them again –
Next cost in the table is 4, and we observe that adding it will create a circuit in the graph.
We ignore it. In the process we shall ignore/avoid all edges that create a circuit.
We observe that edges with cost 5 and 6 also create circuits. We ignore them and move on.
Now we are left with only one node to be added. Between the two least-cost edges available,
7 and 8, we shall add the edge with cost 7.
By adding edge S,A we have included all the nodes of the graph, and we now have the
minimum-cost spanning tree.
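A minimal Python sketch of Kruskal's algorithm with a union-find forest (illustrative; the edge list is reconstructed from the walk-through above):

    def kruskal(vertices, edges):
        # Union-find: every node starts as its own tree in the forest.
        parent = {v: v for v in vertices}

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path compression
                v = parent[v]
            return v

        mst = []
        # Consider edges in increasing order of weight...
        for weight, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:                        # ...skipping any edge that closes a circuit
                parent[ru] = rv
                mst.append((u, v, weight))
        return mst

    edges = [(7, "S", "A"), (8, "S", "C"), (3, "A", "C"), (6, "A", "B"),
             (4, "B", "C"), (3, "C", "D"), (2, "B", "D"), (2, "D", "T"), (5, "B", "T")]
    print(kruskal("SABCDT", edges))
    # [('B', 'D', 2), ('D', 'T', 2), ('A', 'C', 3), ('C', 'D', 3), ('S', 'A', 7)]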
Prim's algorithm to find minimum cost spanning tree (as Kruskal's algorithm) uses the
greedy approach. Prim's algorithm shares a similarity with the shortest path first algorithms.
Prim's algorithm, in contrast with Kruskal's algorithm, treats the nodes as a single tree and keeps
on adding new nodes to the spanning tree from the given graph.
To contrast with Kruskal's algorithm and to understand Prim's algorithm better, we shall
use the same example –
Step 1 - Remove all loops and parallel edges
Remove all loops and parallel edges from the given graph. In case of parallel edges, keep the one
which has the least cost associated and remove all others.
Step 2 - Choose any arbitrary node as root node
In this case, we choose node S as the root of Prim's spanning tree. This node is chosen
arbitrarily, so any node can be the root. One may wonder why any node can be a root node.
The answer is that the spanning tree includes all the nodes of the graph, and because the graph
is connected, there must be at least one edge joining any node to the rest of the tree.
Step 3 - Check outgoing edges and select the one with less cost
After choosing the root node S, we see that S,A and S,C are two edges with weights 7 and 8,
respectively. We choose the edge S,A as its weight is smaller.
Now, the tree S-7-A is treated as one node and we check for all edges going out from it. We
select the one which has the lowest cost and include it in the tree.
After this step, S-7-A-3-C tree is formed. Now we'll again treat it as a node and will check all the
edges again. However, we will choose only the least cost edge. In this case, C-3-D is the new
edge, which is less than other edges' cost 8, 6, 4, etc
After adding node D to the spanning tree, we now have two edges going out of it having the
same cost, i.e. D-2-T and D-2-B. Thus, we can add either one. But the next step will again yield
edge 2 as the least cost. Hence, we are showing a spanning tree with both edges included.
We may find that the output spanning tree of the same graph produced by the two different
algorithms is the same.
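A minimal Python sketch of Prim's algorithm with a priority queue (illustrative; the adjacency list is reconstructed from the walk-through above):

    import heapq

    def prim(graph, root):
        # Grow a single tree from the root, always taking the cheapest edge
        # that reaches a node not yet in the tree.
        visited = {root}
        frontier = [(w, root, v) for v, w in graph[root]]
        heapq.heapify(frontier)
        mst = []
        while frontier:
            w, u, v = heapq.heappop(frontier)
            if v in visited:
                continue
            visited.add(v)
            mst.append((u, v, w))
            for nxt, nw in graph[v]:
                if nxt not in visited:
                    heapq.heappush(frontier, (nw, v, nxt))
        return mst

    graph = {
        "S": [("A", 7), ("C", 8)],
        "A": [("S", 7), ("C", 3), ("B", 6)],
        "B": [("A", 6), ("C", 4), ("D", 2), ("T", 5)],
        "C": [("S", 8), ("A", 3), ("B", 4), ("D", 3)],
        "D": [("B", 2), ("C", 3), ("T", 2)],
        "T": [("B", 5), ("D", 2)],
    }
    print(prim(graph, "S"))   # same total weight as Kruskal's tree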
Dijkstra's algorithm
Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a
graph, which may represent, for example, road networks. It was conceived by computer scientist
Edsger W. Dijkstra in 1956 and published three years later.
The algorithm exists in many variants; Dijkstra's original variant found the shortest path between
two nodes, but a more common variant fixes a single node as the "source" node and finds
shortest paths from the source to all other nodes in the graph, producing a shortest-path tree.
For a given source node in the graph, the algorithm finds the shortest path between that
node and every other. It can also be used for finding the shortest paths from a single node to a
single destination node by stopping the algorithm once the shortest path to the destination node
has been determined. For example, if the nodes of the graph represent cities and edge path costs
represent driving distances between pairs of cities connected by a direct road, Dijkstra's
algorithm can be used to find the shortest route between one city and all other cities. As a result,
the shortest path algorithm is widely used in network routing protocols, most notably IS-IS
(Intermediate System to Intermediate System) and Open Shortest Path First (OSPF). It is also
employed as a subroutine in other algorithms such as Johnson's. Dijkstra's original algorithm
does not use a min-priority queue and runs in time Ο(|V|²). In some fields, artificial
intelligence in particular, Dijkstra's algorithm or a variant of it is known as uniform-cost search
and formulated as an instance of the more general idea of best-first search.
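A minimal Python sketch of the common min-priority-queue variant (illustrative; the sample graph is hypothetical, not from the notes):

    import heapq

    def dijkstra(graph, source):
        # Distance estimates; the source is 0 away from itself.
        dist = {v: float("inf") for v in graph}
        dist[source] = 0
        pq = [(0, source)]                      # min-priority queue of (distance, node)
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue                        # stale entry, already improved
            for v, w in graph[u]:
                if d + w < dist[v]:             # relax the edge (u, v)
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    graph = {
        "A": [("B", 4), ("C", 2)],
        "B": [("C", 5), ("D", 10)],
        "C": [("E", 3)],
        "D": [("F", 11)],
        "E": [("D", 4)],
        "F": [],
    }
    print(dijkstra(graph, "A"))   # shortest distance from A to every other node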
Bellman pioneered the systematic study of dynamic programming in the 1950s. Dynamic
programming is “planning over time." Dynamic programming is a very powerful algorithmic
paradigm in which a problem is solved by identifying a collection of sub problems and tackling
them one by one, smallest first, using the answers to small problems to help figure out larger
ones, until the whole lot of them is solved. Dynamic programming (also known as dynamic
optimization) is a method for solving a complex problem by breaking it down into a collection of
simpler sub problems, solving each of those sub problems just once, and storing their solutions.
The next time the same sub problem occurs, instead of re-computing its solution, one simply
looks up the previously computed solution, thereby saving computation time at the expense of a
(hopefully) modest expenditure in storage space. (Each of the sub problem solutions is indexed
in some way, typically based on the values of its input parameters, so as to facilitate its lookup.)
The technique of storing solutions to sub problems instead of re-computing them is called
"memoization".
Dynamic programming algorithms are often used for optimization. A dynamic
programming algorithm will examine the previously solved sub problems and will combine their
solutions to give the best solution for the given problem. In comparison, a greedy algorithm
treats the solution as some sequence of steps and picks the locally optimal choice at each step.
Using a greedy algorithm does not guarantee an optimal solution, because picking locally
optimal choices may result in a bad global solution, but it is often faster to calculate. Some
greedy algorithms (such as Kruskal's or Prim's for minimum spanning trees) are however proven
to lead to the optimal solution.
There are two key attributes that a problem must have in order for dynamic programming
to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved
by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide
and conquer" instead. This is why merge sort and quick sort are not classified as dynamic
programming problems. Optimal substructure means that the solution to a given optimization
problem can be obtained by the combination of optimal solutions to its sub-problems. Such
optimal substructures are usually described by means of recursion.
Overlapping sub-problems means that the space of sub-problems must be small, that is,
any recursive algorithm solving the problem should solve the same sub-problems over and over,
rather than generating new sub-problems. For example, consider the recursive formulation for
generating the Fibonacci series: F(i) = F(i−1) + F(i−2), with base case F(1) = F(2) = 1. Then
F(43) = F(42) + F(41), and F(42) = F(41) + F(40): F(41) is being solved in the recursive
sub-trees of both F(43) and F(42). Even though the total number of sub-problems is actually
small (only 43 of them), we end up solving the same problems over and over if we adopt a naive
recursive solution such as this. Dynamic programming takes account of this fact and solves each
sub-problem only once.
Applying DP to Matrix chain multiplication
Matrix chain multiplication is a well-known example that demonstrates the utility of dynamic
programming. For example, engineering applications often have to multiply a chain of matrices,
and it is not surprising to find matrices of large dimensions, for example 100×100. Therefore,
our task is to multiply matrices A1, A2, ..., An. As we know from basic linear algebra, matrix
multiplication is not commutative, but it is associative, and we can multiply only two matrices at
a time. So we can multiply this chain of matrices in many different ways, for example:
(((A1 × A2) × A3) × ... ) × An
A1 × (((A2 × A3) × ... ) × An)
(A1 × A2) × (A3 × ... × An) and so on.
Each parenthesization can require a very different number of scalar multiplications, and the goal
is to find the cheapest one.
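A compact dynamic-programming sketch in Python; the function name matrix_chain_cost and
the example dimensions are ours, not from the notes. For matrices A1, ..., An, where Ai has
dimensions dims[i−1] × dims[i], the entry m[i][j] stores the cheapest way to compute the
product Ai × ... × Aj:

def matrix_chain_cost(dims):
    # dims has length n+1; matrix A_i has shape dims[i-1] x dims[i].
    n = len(dims) - 1  # number of matrices in the chain
    # m[i][j] = minimum scalar multiplications to compute A_i ... A_j
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # length of the sub-chain
        for i in range(1, n - length + 2):  # start of the sub-chain
            j = i + length - 1              # end of the sub-chain
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)        # split point between A_k and A_{k+1}
            )
    return m[1][n]

# Example: A1 is 10x30, A2 is 30x5, A3 is 5x60.
# (A1 x A2) x A3 costs 1500 + 3000 = 4500; A1 x (A2 x A3) costs 27000.
print(matrix_chain_cost([10, 30, 5, 60]))   # 4500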
Application of dynamic programming to Sequence alignment
In computational genetics, sequence alignment is an important application where dynamic
programming is essential. Typically, the problem consists of transforming one sequence into
another using edit operations that replace, insert, or remove an element. Each operation has an
associated cost, and the goal is to find the sequence of edits with the lowest total cost. The
problem can be stated naturally as a recursion: a sequence A is optimally edited into a sequence
B by either
1. inserting the first character of B, and performing an optimal alignment of A and the tail of B,
2. deleting the first character of A, and performing an optimal alignment of the tail of A and B, or
3. replacing the first character of A with the first character of B, and performing an optimal
alignment of the tails of A and B.
The partial alignments can be tabulated in a matrix, where cell (i, j) contains the cost of the
optimal alignment of A[1..i] to B[1..j]. The cost in cell (i, j) can be calculated by adding the cost
of the relevant operation to the cost of its neighboring cells, and selecting the optimum.
Different variants exist, such as the Smith–Waterman algorithm and the Needleman–Wunsch
algorithm.
Fibonacci sequence
Here is a naïve implementation of a function finding the nth member of the Fibonacci sequence,
based directly on the mathematical definition:
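A minimal Python sketch of this naive recursion, followed by a memoized version that stores
each answer the first time it is computed (the function names are ours):

def fib(n):
    # Naive recursion, directly from the definition F(n) = F(n-1) + F(n-2).
    # Exponential time: the same subproblems are recomputed over and over.
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

def fib_memo(n, memo=None):
    # Memoized version: each subproblem is solved only once, so the
    # running time is linear in n.
    if memo is None:
        memo = {}
    if n <= 2:
        return 1
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib(20), fib_memo(43))   # 6765 433494437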
Other well-known applications of dynamic programming include:
 Many string algorithms, including longest common subsequence, longest increasing
subsequence, longest common substring, and Levenshtein distance (edit distance)
 The use of transposition tables and refutation tables in computer chess
 Recurrent solutions to lattice models for protein–DNA binding in bioinformatics
 The Viterbi algorithm (used for hidden Markov models)
 Optimizing the order for chain matrix multiplication
 The recursive least squares method
 The Bellman–Ford algorithm for finding the shortest distance in a graph
 Floyd's all-pairs shortest path algorithm
 Some methods for solving the travelling salesman problem, either exactly (in exponential
time) or approximately
Edit distance
When a spell checker encounters a possible misspelling, it looks in its dictionary for other words
that are close by. What is the appropriate notion of closeness in this case? A natural measure of
the distance between two strings is the extent to which they can be aligned, or matched up.
Technically, an alignment is simply a way of writing the strings one above the other. For
instance, here are two possible alignments of SNOWY and SUNNY:
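S _ N O W Y            _ S N O W _ Y
S U N N _ Y            S U N _ _ N Y
The left alignment has cost 3 and the right one cost 5 (a reconstruction consistent with the
discussion below; the original figure is not reproduced here).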
The _ indicates a gap; any number of these can be placed in either string. The cost of an
alignment is the number of columns in which the letters differ, and the edit distance between
two strings is the cost of their best possible alignment. Do you see that there is no better
alignment of SNOWY and SUNNY than the one shown here with a cost of 3?
Edit distance is so named because it can also be thought of as the minimum number of edits
(insertions, deletions, and substitutions of characters) needed to transform the first string into
the second. For instance, the alignment shown on the left corresponds to three edits: insert U,
substitute O → N, and delete W. In general, there are so many possible alignments between two
strings that it would be terribly inefficient to search through all of them for the best one. Instead
we turn to dynamic programming.
A dynamic programming solution to edit distance
When solving a problem by dynamic programming, the most crucial question is: what are the
subproblems? For the edit distance between strings A[1..m] and B[1..n], a natural choice is
E(i, j), the edit distance between the prefixes A[1..i] and B[1..j]; the final answer is then
E(m, n). Once the subproblems are identified, it is an easy matter to write down the algorithm:
iteratively solve one subproblem after the other, in order of increasing size.
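A compact Python sketch of this table-filling algorithm (the function name edit_distance is
ours), with unit cost for each insertion, deletion, and substitution:

def edit_distance(a, b):
    m, n = len(a), len(b)
    # E[i][j] = edit distance between the prefixes a[:i] and b[:j]
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i          # delete all i characters of a[:i]
    for j in range(n + 1):
        E[0][j] = j          # insert all j characters of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            E[i][j] = min(E[i - 1][j] + 1,         # deletion
                          E[i][j - 1] + 1,         # insertion
                          E[i - 1][j - 1] + diff)  # match / substitution
    return E[m][n]

print(edit_distance("SNOWY", "SUNNY"))   # 3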
Knapsack
In the knapsack problem we are given item weights and values and a capacity W, and we want
to select items of maximum total value whose total weight does not exceed W; the two standard
versions allow either unlimited repetition of each item or at most one copy of each. Neither
version of this problem is likely to have a polynomial-time algorithm. However, using dynamic
programming both can be solved in O(nW) time, which is reasonable when W is small, but is
not polynomial, since the input size is proportional to log W rather than W.
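A minimal sketch of the dynamic program for the version without repetition (the 0/1 knapsack),
in Python; the function name and the example data are ours:

def knapsack(weights, values, W):
    # best[w] = maximum value achievable with total weight at most w
    best = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # Iterate weights downward so each item is used at most once.
        for w in range(W, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + val)
    return best[W]

# Example with capacity 10: the best choice is the items of weight 6 and 4.
print(knapsack([6, 3, 4, 2], [30, 14, 16, 9], 10))   # 46

The table has n·W entries overall, which is where the O(nW) bound comes from; allowing
repetition only changes the inner loop to run upward instead of downward.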
Bellman–Ford algorithm
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source
vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for
the same problem, but more versatile, as it is capable of handling graphs in which some of the
edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel in
1955, but is instead named after Richard Bellman and Lester Ford, Jr., who published it in 1958
and 1956, respectively. Edward F. Moore also published the same algorithm in 1957, and for this
reason it is also sometimes called the Bellman–Ford–Moore algorithm.
Negative edge weights are found in various applications of graphs, hence the usefulness of this
algorithm. If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative
value) that is reachable from the source, then there is no cheapest path: any path that has a point
on the negative cycle can be made cheaper by one more walk around the negative cycle. In such
a case, the Bellman–Ford algorithm can detect negative cycles and report their existence.
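A short Python sketch of the algorithm, including the final negative-cycle check; the edge-list
representation and names are ours:

def bellman_ford(vertices, edges, source):
    # edges: list of (u, v, weight) triples. Returns distances from source,
    # or None if a negative cycle is reachable from the source.
    INF = float("inf")
    dist = {v: INF for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):   # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one extra pass: any improvement
        if dist[u] + w < dist[v]:        # means a reachable negative cycle
            return None
    return dist

print(bellman_ford(["s", "a", "b"],
                   [("s", "a", 4), ("s", "b", 5), ("a", "b", -3)], "s"))
# {'s': 0, 'a': 4, 'b': 1}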
Travelling salesman problem
The travelling salesman problem (TSP), or in recent years, the travelling salesperson
problem, asks the following question: "Given a list of cities and the distances between each pair
of cities, what is the shortest possible route that visits each city exactly once and returns to the
origin city?" It is an NP-hard problem in combinatorial optimization, important in operations
research and theoretical computer science. The travelling purchaser problem and the vehicle
routing problem are both generalizations of TSP.
In the theory of computational complexity, the decision version of the TSP (where, given a
length L, the task is to decide whether the graph has any tour shorter than L) belongs to the class
of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm
for the TSP increases superpolynomially (but no more than exponentially) with the number of
cities. The problem was first formulated in 1930 and is one of the most intensively studied
problems in optimization. It is used as a benchmark for many optimization methods. Even
though the problem is computationally difficult, a large number of heuristics and exact
algorithms are known, so that some instances with tens of thousands of cities can be solved
completely and even problems with millions of cities can be approximated within a small
fraction of 1%.
The TSP has several applications even in its purest formulation, such as planning, logistics, and
the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas,
such as DNA sequencing. In these applications, the concept "city" represents, for example,
customers, soldering points, or DNA fragments, and the concept "distance" represents travelling
times or cost, or a similarity measure between DNA fragments. The TSP also appears in
astronomy, as astronomers observing many sources will want to minimize the time spent moving
the telescope between the sources. In many applications, additional constraints such as limited
resources or time windows may be imposed.
A traveling salesman is getting ready for a big sales tour. Starting at his hometown,
suitcase in hand, he will conduct a journey in which each of his target cities is visited exactly
once before he returns home. Given the pairwise distances between cities, what is the best order
in which to visit them, so as to minimize the overall distance traveled?
Denote the cities by 1, ..., n, the salesman's hometown being 1, and let D = (d_ij) be the
matrix of intercity distances. The goal is to design a tour that starts and ends at 1, includes all
other cities exactly once, and has minimum total length. Even in a tiny example with only five
cities, it is tricky for a human to find the optimal tour by inspection; imagine what happens when
hundreds of cities are involved.
It turns out this problem is also difficult for computers. In fact, the traveling salesman
problem (TSP) is one of the most notorious computational tasks. There is a long history of
attempts at solving it, a long saga of failures and partial successes, and along the way, major
advances in algorithms and complexity theory. The most basic piece of bad news about the TSP
is that it is highly unlikely to be solvable in polynomial time. How long does it take, then?
Well, the brute-force approach is to evaluate every possible tour and return the best one. Since
there are (n − 1)! possibilities, this strategy takes O(n!) time.
We will now see that dynamic programming yields a much faster solution, though not a
polynomial one. What is the appropriate subproblem for the TSP? Subproblems refer to partial
solutions, and in this case the most obvious partial solution is the initial portion of a tour.
Suppose we have started at city 1 as required, have visited a few cities, and are now in city j.
What information do we need in order to extend this partial tour? We certainly need to know j,
since this will determine which cities are most convenient to visit next. And we also need to
know all the cities visited so far, so that we don't repeat any of them. Here, then, is an
appropriate subproblem, spelled out below.
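For a subset of cities S ⊆ {1, 2, ..., n} that includes 1, and a city j ∈ S, let C(S, j) be the length
of the shortest path that visits each node in S exactly once, starting at 1 and ending at j. This is
the standard Held–Karp formulation, and it satisfies the recurrence
C(S, j) = min over i ∈ S, i ≠ j, of C(S − {j}, i) + d_ij,
with base case C({1}, 1) = 0; the answer is the minimum over j of C({1, ..., n}, j) + d_j1. There
are at most n · 2^n subproblems, each solvable in linear time, so the total running time is
O(n² · 2^n), far better than O(n!). A compact Python sketch, using bitmasks to represent S and
city 0 as the hometown (all names are ours):

from itertools import combinations

def held_karp(d):
    # d: n x n distance matrix; city 0 is the hometown.
    n = len(d)
    # C[(S, j)] = shortest path starting at 0, visiting exactly the cities
    # in bitmask S (which always contains city 0), and ending at city j.
    C = {(1, 0): 0}
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = 1
            for c in subset:
                S |= 1 << c
            for j in subset:
                prev = S ^ (1 << j)      # the set S - {j}
                C[(S, j)] = min(
                    C[(prev, i)] + d[i][j]
                    # if only city 0 remains, the path must start there
                    for i in ([0] if prev == 1 else subset)
                    if i != j
                )
    full = (1 << n) - 1
    return min(C[(full, j)] + d[j][0] for j in range(1, n))

d = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(held_karp(d))   # 21, via the tour 0 -> 2 -> 3 -> 1 -> 0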
Backtracking

More Related Content

What's hot

Data Structures and Algorithm Analysis
Data Structures  and  Algorithm AnalysisData Structures  and  Algorithm Analysis
Data Structures and Algorithm Analysis
Mary Margarat
 
Complexity of Algorithm
Complexity of AlgorithmComplexity of Algorithm
Complexity of Algorithm
Muhammad Muzammal
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of Algorithms
Swapnil Agrawal
 
Daa notes 2
Daa notes 2Daa notes 2
Daa notes 2
smruti sarangi
 
Design & Analysis Of Algorithm
Design & Analysis Of AlgorithmDesign & Analysis Of Algorithm
Design & Analysis Of Algorithm
Computer Hardware & Trouble shooting
 
Algorithm chapter 2
Algorithm chapter 2Algorithm chapter 2
Algorithm chapter 2chidabdu
 
Daa notes 1
Daa notes 1Daa notes 1
Daa notes 1
smruti sarangi
 
asymptotic notation
asymptotic notationasymptotic notation
asymptotic notation
SangeethaSasi1
 
Algorithms.
Algorithms. Algorithms.
Data Structures- Part2 analysis tools
Data Structures- Part2 analysis toolsData Structures- Part2 analysis tools
Data Structures- Part2 analysis tools
Abdullah Al-hazmy
 
Ch 1 intriductions
Ch 1 intriductionsCh 1 intriductions
Ch 1 intriductionsirshad17
 
chapter 1
chapter 1chapter 1
chapter 1
yatheesha
 
Data Structure and Algorithms
Data Structure and Algorithms Data Structure and Algorithms
Data Structure and Algorithms
ManishPrajapati78
 
Algorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms IAlgorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms I
Mohamed Loey
 
DATA STRUCTURE AND ALGORITHM FULL NOTES
DATA STRUCTURE AND ALGORITHM FULL NOTESDATA STRUCTURE AND ALGORITHM FULL NOTES
DATA STRUCTURE AND ALGORITHM FULL NOTES
Aniruddha Paul
 
CS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsCS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of Algorithms
Krishnan MuthuManickam
 
01 Analysis of Algorithms: Introduction
01 Analysis of Algorithms: Introduction01 Analysis of Algorithms: Introduction
01 Analysis of Algorithms: Introduction
Andres Mendez-Vazquez
 
Analysis of algorithms
Analysis of algorithmsAnalysis of algorithms
Analysis of algorithms
Mallikarjun Biradar
 
Searching techniques with progrms
Searching techniques with progrmsSearching techniques with progrms
Searching techniques with progrms
Misssaxena
 

What's hot (20)

Data Structures and Algorithm Analysis
Data Structures  and  Algorithm AnalysisData Structures  and  Algorithm Analysis
Data Structures and Algorithm Analysis
 
Complexity of Algorithm
Complexity of AlgorithmComplexity of Algorithm
Complexity of Algorithm
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of Algorithms
 
Daa notes 2
Daa notes 2Daa notes 2
Daa notes 2
 
Design & Analysis Of Algorithm
Design & Analysis Of AlgorithmDesign & Analysis Of Algorithm
Design & Analysis Of Algorithm
 
Algorithm chapter 2
Algorithm chapter 2Algorithm chapter 2
Algorithm chapter 2
 
Daa notes 1
Daa notes 1Daa notes 1
Daa notes 1
 
asymptotic notation
asymptotic notationasymptotic notation
asymptotic notation
 
Algorithms.
Algorithms. Algorithms.
Algorithms.
 
Data Structures- Part2 analysis tools
Data Structures- Part2 analysis toolsData Structures- Part2 analysis tools
Data Structures- Part2 analysis tools
 
Ch 1 intriductions
Ch 1 intriductionsCh 1 intriductions
Ch 1 intriductions
 
chapter 1
chapter 1chapter 1
chapter 1
 
Data Structure and Algorithms
Data Structure and Algorithms Data Structure and Algorithms
Data Structure and Algorithms
 
Algorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms IAlgorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms I
 
DATA STRUCTURE AND ALGORITHM FULL NOTES
DATA STRUCTURE AND ALGORITHM FULL NOTESDATA STRUCTURE AND ALGORITHM FULL NOTES
DATA STRUCTURE AND ALGORITHM FULL NOTES
 
CS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsCS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of Algorithms
 
01 Analysis of Algorithms: Introduction
01 Analysis of Algorithms: Introduction01 Analysis of Algorithms: Introduction
01 Analysis of Algorithms: Introduction
 
Analysis of algorithms
Analysis of algorithmsAnalysis of algorithms
Analysis of algorithms
 
Cis435 week04
Cis435 week04Cis435 week04
Cis435 week04
 
Searching techniques with progrms
Searching techniques with progrmsSearching techniques with progrms
Searching techniques with progrms
 

Similar to Theory of algorithms final

Unit i basic concepts of algorithms
Unit i basic concepts of algorithmsUnit i basic concepts of algorithms
Unit i basic concepts of algorithms
sangeetha s
 
Introduction to algorithms
Introduction to algorithmsIntroduction to algorithms
Introduction to algorithms
Madishetty Prathibha
 
Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdf
MemMem25
 
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
TIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMSTIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMS
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
Tanya Makkar
 
9 big o-notation
9 big o-notation9 big o-notation
9 big o-notationirdginfo
 
DATA STRUCTURE.pdf
DATA STRUCTURE.pdfDATA STRUCTURE.pdf
DATA STRUCTURE.pdf
ibrahim386946
 
DATA STRUCTURE
DATA STRUCTUREDATA STRUCTURE
DATA STRUCTURE
RobinRohit2
 
Data structures algorithms basics
Data structures   algorithms basicsData structures   algorithms basics
Data structures algorithms basics
ayeshasafdar8
 
VCE Unit 01 (2).pptx
VCE Unit 01 (2).pptxVCE Unit 01 (2).pptx
VCE Unit 01 (2).pptx
skilljiolms
 
Performance analysis and randamized agoritham
Performance analysis and randamized agorithamPerformance analysis and randamized agoritham
Performance analysis and randamized agoritham
lilyMalar1
 
Discrete structure ch 3 short question's
Discrete structure ch 3 short question'sDiscrete structure ch 3 short question's
Discrete structure ch 3 short question's
hammad463061
 
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
AntareepMajumder
 
2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx
RahikAhmed1
 
Data Structures in C
Data Structures in CData Structures in C
Data Structures in C
Jabs6
 
Aad introduction
Aad introductionAad introduction
Aad introductionMr SMAK
 
L1803016468
L1803016468L1803016468
L1803016468
IOSR Journals
 

Similar to Theory of algorithms final (20)

Unit i basic concepts of algorithms
Unit i basic concepts of algorithmsUnit i basic concepts of algorithms
Unit i basic concepts of algorithms
 
Introduction to algorithms
Introduction to algorithmsIntroduction to algorithms
Introduction to algorithms
 
Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdf
 
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
TIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMSTIME EXECUTION   OF  DIFFERENT SORTED ALGORITHMS
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
 
9 big o-notation
9 big o-notation9 big o-notation
9 big o-notation
 
DATA STRUCTURE.pdf
DATA STRUCTURE.pdfDATA STRUCTURE.pdf
DATA STRUCTURE.pdf
 
DATA STRUCTURE
DATA STRUCTUREDATA STRUCTURE
DATA STRUCTURE
 
Algorithm.pptx
Algorithm.pptxAlgorithm.pptx
Algorithm.pptx
 
Algorithm.pptx
Algorithm.pptxAlgorithm.pptx
Algorithm.pptx
 
U nit i data structure-converted
U nit   i data structure-convertedU nit   i data structure-converted
U nit i data structure-converted
 
Data structures algorithms basics
Data structures   algorithms basicsData structures   algorithms basics
Data structures algorithms basics
 
VCE Unit 01 (2).pptx
VCE Unit 01 (2).pptxVCE Unit 01 (2).pptx
VCE Unit 01 (2).pptx
 
Performance analysis and randamized agoritham
Performance analysis and randamized agorithamPerformance analysis and randamized agoritham
Performance analysis and randamized agoritham
 
Discrete structure ch 3 short question's
Discrete structure ch 3 short question'sDiscrete structure ch 3 short question's
Discrete structure ch 3 short question's
 
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
 
2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx
 
Data Structures in C
Data Structures in CData Structures in C
Data Structures in C
 
Aad introduction
Aad introductionAad introduction
Aad introduction
 
Cs2251 daa
Cs2251 daaCs2251 daa
Cs2251 daa
 
L1803016468
L1803016468L1803016468
L1803016468
 

Recently uploaded

2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
heathfieldcps1
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
Mohd Adib Abd Muin, Senior Lecturer at Universiti Utara Malaysia
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
beazzy04
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
Jisc
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
MysoreMuleSoftMeetup
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
Levi Shapiro
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
Pavel ( NSTU)
 
Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
Anna Sz.
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
CarlosHernanMontoyab2
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
Nguyen Thanh Tu Collection
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
Vivekanand Anglo Vedic Academy
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
timhan337
 
Acetabularia Information For Class 9 .docx
Acetabularia Information For Class 9  .docxAcetabularia Information For Class 9  .docx
Acetabularia Information For Class 9 .docx
vaibhavrinwa19
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
SACHIN R KONDAGURI
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
Jisc
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and Research
Vikramjit Singh
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
EverAndrsGuerraGuerr
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Thiyagu K
 

Recently uploaded (20)

2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
 
Acetabularia Information For Class 9 .docx
Acetabularia Information For Class 9  .docxAcetabularia Information For Class 9  .docx
Acetabularia Information For Class 9 .docx
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and Research
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
 

Theory of algorithms final

  • 1. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 1 What is Algorithm?  The word algorithm comes from the name of a Persian mathematician Khowarizmi.  In computer science, this word refers to a special method useable by a computer for solution of a problem. The statement of the problem specifies in general terms the desired input/output relationship.  Algorithm is a step by step procedure, which defines a set of instructions to be executed in certain order to get the desired output. Algorithms are generally created independent of underlying languages, i.e. an algorithm can be implemented in more than one programming language. Characteristics of anAlgorithm Not all procedures can be called an algorithm. An algorithm should have the below mentioned characteristics −  Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their input/outputs should be clear and must lead to only one meaning.  Input − An algorithm should have 0 or more well defined inputs.  Output − An algorithm should have 1 or more well defined outputs, and should match the desired output.  Finiteness − Algorithms must terminate after a finite number of steps.  Feasibility − Should be feasible with the available resources.  Independent − An algorithm should have step-by-step directions which should be independent of any programming code. Algorithm Analysis Algorithm Analysis-Measure resource requirements; how does the amount of time and space an algorithm uses scale with increasing input size? Efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation, as mentioned below −  A priori analysis − This is theoretical analysis of an algorithm. Efficiency of algorithm is measured by assuming that all other factors e.g. processor speed, are constant and have no effect on implementation.  A posterior analysis − This is empirical analysis of an algorithm. The selected algorithm is implemented using programming language. This is then executed on target computer machine. In this analysis, actual statistics like running time and space required are collected.
  • 2. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 2 We shall learn here a priori algorithm analysis. Algorithm analysis deals with the execution or running time of various operations involved. Running time of an operation can be defined as no. of computer instructions executed per operation. Algorithm Complexity Suppose X is an algorithm and n is the size of input data, the time and space used by the Algorithm X are the two main factors which decide the efficiency of X.  Time Factor − The time is measured by counting the number of key operations such as comparisons in sorting algorithm  Space Factor − The space is measured by counting the maximum memory space required by the algorithm. The complexity of an algorithm f(n) gives the running time and / or storage space required by the algorithm in terms of n as the size of input data. Space Complexity Space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. Space required by an algorithm is equal to the sum of the following two components −  A fixed part that is a space required to store certain data and variables that are independent of the size of the problem. For example simple variables & constant used, program size etc.  A variable part is a space required by variables, whose size depends on the size of the problem. For example dynamic memory allocation, recursion stacks space etc. Space complexity S(P) of any algorithm P is S(P) = C + SP(I) Where C is the fixed part and S(I) is the variable part of the algorithm which depends on instance characteristic I. Following is a simple example that tries to explain the concept − Algorithm: SUM(A, B) Step 1 - START Step 2 - C ← A + B + 10 Step 3 - Stop Here we have three variables A, B and C and one constant. Hence S(P) = 1+3. Now space depends on data types of given variables and constant types and it will be multiplied accordingly. Time Complexity Time Complexity of an algorithm represents the amount of time required by the algorithm to run to completion. Time requirements can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, provided each step consumes constant time.For example, addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c*n, where c is the time taken for addition of two bits. Here, we observe that T(n) grows linearly as input size increases.
  • 3. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 3 Asymptotic Analysis Asymptotic analysis of an algorithm refers to defining the mathematical bound/framing of its run-time performance. Using asymptotic analysis, we can very well conclude the best case, average case and worst case scenario of an algorithm.Asymptotic analysis are input bound i.e., if there's no input to the algorithm it is concluded to work in a constant time. Other than the "input" all other factors are considered constant. Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation. For example, running time of one operation is computed as f(n) and may be for another operation it is computed as g(n2 ). Which means first operation running time will increase linearly with the increase in n and running time of second operation will increase exponentially when n increases. Similarly the running time of both operations will be nearly same if n is significantly small. Usually, time required by an algorithm falls under three types −  Best Case − Minimum time required for program execution.  Average Case − Average time required for program execution.  Worst Case − Maximum time required for program execution. AsymptoticNotations Following are commonly used asymptotic notations used in calculating running time complexity of an algorithm.  Ο Notation (“Big Oh Notation “)  Ω Notation(“Omega Notation”)  θ Notation(“Theta Notation”) Big Oh Notation, Ο The Ο(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst case time complexity or longest amount of time an algorithm can possibly take to complete.
  • 4. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 4 For example, for a function f(n) Ο(f(n)) = { g(n) : there exists c > 0 and n0 such that g(n) ≤ c.f(n) for all n > n0. } Omega Notation, Ω The Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or best amount of time an algorithm can possibly take to complete. For example, for a function f(n) Ω(f(n)) ≥ { g(n) : there exists c > 0 and n0 such that g(n) ≤ c.f(n) for all n > n0. } Theta Notation, θ The θ(n) is the formal way to express both the lower bound and upper bound of an algorithm's running time. It is represented as following − θ(f(n)) = { g(n) if and only if g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0. } Asymptotic Notation Summary
  • 5. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 5 Properties of Asymptotic Growth Rate Growth rate of Asymptotic Notation The growth rate of the asymptotic notation is given from smallest to the biggest as follows: 1) Constant Ο(1) 2) Logarithmic Ο(log n) 3) Linear Ο(n) 4) n log n Ο(n log n) 5) quadratic Ο(n2 ) 6) cubic Ο(n3 ) 7) polynomial nΟ(1) 8) exponential 2Ο(n) EXAMPLE 1
  • 6. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 6 EXAMPLE 2 EXAMPLE -3 MORE EXAMPLES … STANDARD NOTATIONS & COMMON FUNCTIONS For all real a>0, b>0, c>0, and n 610033,3forsince)(61003 2222  nnncnOnn
  • 7. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 7 Data Structure Review I) LINKED LIST A linked list is a sequence of data structures, which are connected together via links. Linked List is a sequence of links which contains items. Each link contains a connection to another link. Linked list is the second most-used data structure after array. Following are the important terms to understand the concept of Linked List.  Link − Each link of a linked list can store a data called an element.  Next − Each link of a linked list contains a link to the next link called Next.  Linked List − A Linked List contains the connection link to the first link called First.
  • 8. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 8 Types of Linked List Following are the various types of linked list.  Simple Linked List − Item navigation is forward only.  Doubly Linked List − Items can be navigated forward and backward.  Circular Linked List − Last item contains link of the first element as next and the first element has a link to the last element as previous. Basic Operations Following are the basic operations supported by a list. o Insertion − Adds an element at the beginning of the list. o Deletion − Deletes an element at the beginning of the list. o Display − Displays the complete list. o Search − Searches an element using the given key. o Delete − Deletes an element using the given key. Doubly Linked List Doubly Linked List is a variation of Linked list in which navigation is possible in both ways, either forward or backward easily as compared to Single Linked List. Following are the important terms to understand the concept of doubly linked list.  Link − Each link of a linked list can store a data called an element.  Next − Each link of a linked list contains a link to the next link called Next.  Prev − Each link of a linked list contains a link to the previous link called Prev.  Linked List − A Linked List contains the connection link to the first link called First and to the last link called Last.
  • 9. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 9 Circularly Linked List Circular Linked List is a variation of Linked list in which the first element points to the last element and the last element points to the first element. Both Singly Linked List and Doubly Linked List can be made into a circular linked list.
  • 10. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 10 II) STACK A stack is an Abstract Data Type (ADT), commonly used in most programming languages. It is named stack as it behaves like a real-world stack, for example – a deck of cards or a pile of plates, etc. A real-world stack allows operations at one end only. For example, we can place or remove a card or plate from the top of the stack only. Likewise, Stack ADT allows all data operations at one end only. At any given time, we can only access the top element of a stack. This feature makes it LIFO data structure. LIFO stands for Last-in-first-out. Here, the element which is placed (inserted or added) last is accessed first. In stack terminology, insertion operation is called PUSH operation and removal operation is called POP operation.
  • 11. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 11
  • 12. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 12
  • 13. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 13
  • 14. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 14 III) queue Queue is an abstract data structure, somewhat similar to Stacks. Unlike stacks, a queue is open at both its ends. One end is always used to insert data (enqueue) and the other is used to remove data (dequeue). Queue follows First-In-First-Out methodology, i.e., the data item stored first will be accessed first. A real-world example of queue can be a single-lane one-way road, where the vehicle enters first, exits first. More real-world examples can be seen as queues at the ticket windows and bus-stops.
  • 15. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 15 IV) TREE The data structures that we have discussed in previous lectures are linear data structures. The linked list and stack are linear data structures. In these structures, the elements are in a line. We put and get elements in and from a stack in linear order. Queue is also a linear data structure as a line is developed in it. There are a number of applications where linear data structures are not appropriate. In such cases, there is need of some non-linear data structure. Some examples will show us that why nonlinear data structures are important. Tree is one of the non-linear data structures. Look at the following figure. This figure below showing a genealogy tree of a family. A tree is a widely used abstract data type (ADT)—or data structure implementing this ADT— that simulates a hierarchical tree structure, with a root value and sub-trees of children with a parent node, represented as a set of linked nodes. A tree data structure can be defined recursively (locally) as a collection of nodes (starting at a root node), where each node is a data structure consisting of a value, together with a list of references to nodes (the "children"). A tree is a data structure made up of nodes or vertices and edges without having any cycle. The tree with no nodes is called the null or empty tree. A tree that is not empty consists of a root
  • 16. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 16 node and potentially many levels of additional nodes that form a hierarchy. Terminologies used in tree include root, children, sibling, parent, descendant, leaf & ancestor. Degree: The number of sub trees of a node & Depth: The depth of a node is the number of edges from the tree's root node to the node. Leaf – The node which does not have any child node is called the leaf node. Sub tree – Sub-tree represents the descendants of a node Levels − Level of a node represents the generation of a node. If the root node is at level 0, then its next child node is at level 1, its grandchild is at level 2, and so on. Binary Tree The mathematical definition of a binary tree is “A binary tree is a finite set of elements that is either empty or is partitioned into three disjoint subsets. The first subset contains a single element called the root of the tree. The other two subsets are themselves binary trees called the left and right sub-trees”. Each element of a binary tree is called a node of the tree. Following figure shows a binary tree. A binary tree has a special condition that each node can have a maximum of two children. A binary tree has the benefits of both an ordered array and a linked list as search is as quick as in a sorted array and insertion or deletion operation are as fast as in linked list. Binary Search Tree Representation Binary Search tree exhibits a special behavior. A node's left child must have a value less than its parent's value and the node's right child must have a value greater than its parent value.
  • 17. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 17 A Binary Search Tree (BST) is a tree in which all the nodes follow the below-mentioned properties −  The left sub-tree of a node has a key less than or equal to its parent node's key.  The right sub-tree of a node has a key greater than or equal to its parent node's key. Thus, BST divides all its sub-trees into two segments; the left sub-tree and the right sub tree and can be defined as – Binary Search Tree Basic Operations The basic operations that can be performed on a binary search tree data structure are the following −  Insert − Inserts an element in a tree/create a tree. Search − Searches an element in a tree. In-order Traversal: In this traversal method, the left sub tree is visited first, then the root and later the right sub-tree. Pre-order Traversal: In this traversal method, the root node is visited first, then the left sub tree and finally the right sub tree. Post-order Traversal: In this traversal method, the root node is visited last, hence the name. First we traverse the left sub tree, then the right sub tree and finally the root node Heaps Heap is a special case of balanced binary tree data structure where the root-node key is compared with its children and arranged accordingly. If α has child node β then −key(α) ≥ key(β) As the value of parent is greater than that of child, this property generates Max Heap. Based on these criteria, a heap can be of two types − Min-Heap − Where the value of the root node is less than or equal to either of its children Max-Heap − Where the value of the root node is greater than or equal to either of its children.
  • 18. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 18 Max Heap Construction Algorithm Graphs A graph is a pictorial representation of a set of objects where some pairs of objects are connected by links. The interconnected objects are represented by points termed as vertices, and the links that connect the vertices are called edges. Formally, a graph is a pair of sets (V, E), where V is the set of vertices and E is the set of edges, connecting the pairs of vertices. Take a look at the following graph: In the above graph, V = {a, b, c, d, e} E = {ab, ac, bd, cd, de} Graph Data Structure Vertex − Each node of the graph is represented as a vertex Edge − Edge represents a path between two vertices or a line between two vertices. Adjacency − Two node or vertices are adjacent if they are connected to each other through an edge. In the following example, B is adjacent to A, C is adjacent to B, and so on. Path − Path represents a sequence of edges between the two vertices. In the following example, ABCD represents a path from A to D. Basic Operations Following are the basic primary operations that can be performed on a Graph: Add Vertex − Adds a vertex to the graph. Add Edge − Adds an edge between the two vertices of the graph. Display Vertex − Displays a vertex of the graph. Graph Traversal Techniques Tree traversal & applications Traversal is a process to visit all the nodes of a tree and may print their values too. Because, all nodes are connected via edges (links) we always start from the root (head) node. That is, we cannot randomly access a node in a tree. There are three ways which we use to traverse a tree −
  • 19. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 19  In-order Traversal  Pre-order Traversal  Post-order Traversal In-order Traversal In this traversal method, the left sub tree is visited first, then the root and later the right sub-tree. We should always remember that every node may represent a sub tree itself. If a binary tree is traversed in-order, the output will produce sorted key values in an ascending order. We start from A, and following in-order traversal, we move to its left sub tree B. B is also traversed in-order. The process goes on until all the nodes are visited. The output of in order traversal of this tree will be − D → B → E → A → F → C → G Pre-order Traversal In this traversal method, the root node is visited first, then the left sub tree and finally the right sub tree. We start from A, and following pre-order traversal, we first visit A itself and then move to its left sub tree B. B is also traversed pre-order. The process goes on until all the nodes are visited. The output of pre-order traversal of this tree will be − A → B → D → E → C → F → G
  • 20. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 20 Post-order Traversal In this traversal method, the root node is visited last, hence the name. First we traverse the left sub tree, then the right sub tree and finally the root node. We start from A, and following pre-order traversal, we first visit the left sub tree B. B is also traversed post-order. The process goes on until all the nodes are visited. The output of post-order traversal of this tree will be − D → E → B → F → G → C → A Depth First Traversal Depth First Search (DFS) algorithm traverses a graph in a depth-ward motion and uses a stack to remember to get the next vertex to start a search, when a dead end occurs in any iteration.
  • 21. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 21 As in the example given above, DFS algorithm traverses from A to B to C to D first then to E, then to F and lastly to G. It employs the following rules. Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a stack. Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all the vertices from the stack, which do not have adjacent vertices.) Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty. Breadth First Traversal Breadth First Search (BFS) algorithm traverses a graph in a breadth-ward motion and uses a queue to remember to get the next vertex to start a search, when a dead end occurs in any iteration. As in the example given above, BFS algorithm traverses from A to B to E to F first then to C and G lastly to D. It employs the following rules.  Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert it in a queue.  Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.  Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty Searching Techniques 1) Linear Search Linear search is a very simple search algorithm. In this type of search, a sequential search is made over all items one by one. Every item is checked and if a match is found then that particular item is returned, otherwise the search continues till the end of the data collection.
  • 22. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 22 2) Binary search Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search algorithm works on the principle of divide and conquers. For this algorithm to work properly, the data collection should be in the sorted form. Binary search looks for a particular item by comparing the middle most item of the collection. If a match occurs, then the index of item is returned. If the middle item is greater than the item, then the item is searched in the sub-array to the right of the middle item. Otherwise, the item is searched for in the sub-array to the left of the middle item. This process continues on the sub- array as well until the size of the sub array reduces to zero. How Binary Search Works? For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the process of binary search with a pictorial example. The following is our sorted array and let us assume that we need to search the location of value 31 using binary search.
  • 23. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 23
  • 24. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 24 3) InterpolatIon search Interpolation search is an improved variant of binary search. This search algorithm works on the probing position of the required value. For this algorithm to work properly, the data collection should be in a sorted form and equally distributed. Binary search has a huge advantage of time complexity over linear search. Linear search has worst-case complexity of Ο(n) whereas binary search has Ο(log n). There are cases where the location of target data may be known in advance. For example, in case of a telephone directory, if we want to search the telephone number of Morphius. Here, linear search and even binary search will seem slow as we can directly jump to memory space where the names start from 'M' are stored. Positioning in Binary Search In binary search, if the desired data is not found then the rest of the list is divided in two parts, lower and higher. The search is carried out in either of them.
  • 25. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 25 4) HasHing Hash Table is a data structure which stores data in an associative manner. In a hash table, data is stored in an array format, where each data value has its own unique index value. Access of data becomes very fast if we know the index of the desired data. Thus, it becomes a data structure in which insertion and search operations are very fast irrespective of the size of the data. Hash Table uses an array as a storage medium and uses hash technique to generate an index where an element is to be inserted or is to be located from. Hashing Hashing is a technique to convert a range of key values into a range of indexes of an array. We're going to use modulo operator to get a range of key values. Consider an example of hash table of size 20, and the following items are to be stored. Item are in the (key, value) format.
  • 26. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 26  (1,20)  (2,70)  (42,80)  (4,25)  (12,44)  (14,32)  (17,11)  (13,78)  (37,98) Sorting Techniques Sorting algorithm Sorting refers to arranging data in a particular format. Sorting algorithm specifies the way to arrange data in a particular order. Most common orders are in numerical or lexicographical order. The importance of sorting lies in the fact that data searching can be optimized to a very high level, if data is stored in a sorted manner. Sorting is also used to represent data in more readable formats. Following are some of the examples of sorting in real-life scenarios:  Telephone Directory – The telephone directory stores the telephone numbers of people sorted by their names, so that the names can be searched easily.  Dictionary – The dictionary stores words in an alphabetical order so that searching of any word becomes easy. In-place Sorting and Not-in-place Sorting Sorting algorithms may require some extra space for comparison and temporary storage of few data elements. These algorithms do not require any extra space and sorting is said to
  • 27. A d m a s U n i v e r s i t y , T h e o r y o f A l g o r i t h m s N o t e P a g e | 27 happen in-place, or for example, within the array itself. This is called in-place sorting. Bubble sort is an example of in-place sorting. However, in some sorting algorithms, the program requires space which is more than or equal to the elements being sorted. Sorting which uses equal or more space is called notin- place sorting. Merge-sort is an example of not-in-place sorting. Stable and Not Stable Sorting If a sorting algorithm, after sorting the contents, does not change the sequence of similar content in which they appear, it is called stable sorting. If a sorting algorithm, after sorting the contents, changes the sequence of similar content in which they appear, it is called unstable sorting. Adaptive and Non-Adaptive Sorting Algorithm A sorting algorithm is said to be adaptive, if it takes advantage of already 'sorted' elements in the list that is to be sorted. That is, while sorting if the source list has some element already sorted, adaptive algorithms will take this into account and will try not to re-order them. A non-adaptive algorithm is one which does not take into account the elements which are already sorted. They try to force every single element to be re-ordered to confirm their sortedness. Important Terms Some terms are generally coined while discussing sorting techniques, here is a brief introduction to them – Increasing Order A sequence of values is said to be in increasing order, if the successive element is greater than the previous one. For example, 1, 3, 4, 6, 8, 9 are in increasing order, as every next element is greater than the previous element. Decreasing Order A sequence of values is said to be in decreasing order, if the successive element is less than the current one. For example, 9, 8, 6, 4, 3, 1 are in decreasing order, as every next element is less than the previous element. Non-Increasing Order A sequence of values is said to be in non-increasing order, if the successive element is less than or equal to its previous element in the sequence. This order occurs when the sequence contains
For example, 9, 8, 6, 3, 3, 1 is in non-increasing order: every next element is less than or equal to (in the case of 3) its previous element, but never greater.

Non-Decreasing Order
A sequence of values is in non-decreasing order if each successive element is greater than or equal to its previous element in the sequence; this order also occurs when the sequence contains duplicate values. For example, 1, 3, 3, 6, 8, 9 is in non-decreasing order: every next element is greater than or equal to (in the case of 3) its previous element, but never less.

2) Bubble Sort Algorithm
Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each pair of adjacent elements is compared, and the elements are swapped if they are out of order. This algorithm is not suitable for large data sets, as its average and worst-case complexity are O(n²), where n is the number of items.

How Bubble Sort Works?
We take an unsorted array for our example; since bubble sort takes O(n²) time, we keep the example short and precise.
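A minimal Python sketch of bubble sort (the sample array is an assumption chosen for illustration; the function name is our own):

def bubble_sort(arr):
    """Repeatedly compare adjacent pairs and swap them when out of order."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # the last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                     # no swaps means the array is sorted
            break
    return arr

print(bubble_sort([14, 33, 27, 35, 10]))    # [10, 14, 27, 33, 35]

The swapped flag gives an early exit on an already-sorted input, which is what makes this version of bubble sort adaptive.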
3) Insertion Sort
This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which is always sorted; for example, the lower part of the array is kept sorted. An element that is to be inserted into this sorted sub-list has to find its appropriate place, and it is then inserted there. Hence the name, insertion sort. The array is scanned sequentially, and unsorted items are moved and inserted into the sorted sub-list (in the same array). This algorithm is not suitable for large data sets, as its average and worst-case complexity are O(n²), where n is the number of items.

How Insertion Sort Works?
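A minimal Python sketch of insertion sort (the sample array is an assumption for illustration):

def insertion_sort(arr):
    """Grow a sorted sub-list at the front of the array, one element at a time."""
    for i in range(1, len(arr)):
        key = arr[i]                 # next element to insert into the sorted part
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]      # shift larger elements one slot to the right
            j -= 1
        arr[j + 1] = key             # drop the element into its correct place
    return arr

print(insertion_sort([14, 33, 27, 10, 35, 19, 42, 44]))
# [10, 14, 19, 27, 33, 35, 42, 44]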
4) Selection Sort
Selection sort is a simple sorting algorithm. It is an in-place comparison-based algorithm in which the list is divided into two parts: the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is empty and the unsorted part is the entire list. The smallest element is selected from the unsorted part and swapped with its leftmost element, and that element becomes part of the sorted part. This process continues, moving the boundary of the unsorted part one element to the right on each pass. This algorithm is not suitable for large data sets, as its average and worst-case complexity are O(n²), where n is the number of items.
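A minimal Python sketch of selection sort (the sample array is an assumption for illustration):

def selection_sort(arr):
    """Repeatedly select the minimum of the unsorted part and swap it to the front."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):        # scan the unsorted part for the minimum
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]   # grow the sorted part by one
    return arr

print(selection_sort([14, 33, 27, 10, 35, 19, 42, 44]))
# [10, 14, 19, 27, 33, 35, 42, 44]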
5) Merge Sort
Merge sort is a sorting technique based on the divide-and-conquer technique. With a worst-case time complexity of O(n log n), it is one of the most respected algorithms. Merge sort first divides the array into (roughly) equal halves, recursively sorts them, and then combines them in a sorted manner.
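A minimal Python sketch of merge sort (the sample array is an assumption for illustration). Using <= in the merge keeps equal elements in their original order, which is what makes merge sort stable:

def merge_sort(arr):
    """Divide the array into halves, sort each recursively, then merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])              # append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([14, 33, 27, 10, 35, 19, 42, 44]))
# [10, 14, 19, 27, 33, 35, 42, 44]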
6) Shell Sort
Shell sort is a highly efficient sorting algorithm based on the insertion sort algorithm. It avoids the large shifts that insertion sort performs when a small value sits far to the right and has to be moved far to the left. The algorithm first applies insertion sort to widely spaced elements, then sorts the less widely spaced elements. This spacing is termed the interval, and here it is calculated using Knuth's formula:

h = 3 * h + 1, where h is the interval, starting from the initial value h = 1 (giving the sequence 1, 4, 13, 40, ...).

The running time depends on the chosen gap sequence; with Knuth's increments the worst case is O(n^(3/2)), where n is the number of items, which makes the algorithm quite efficient for medium-sized data sets.

How Shell Sort Works?
Let us consider the following example to get an idea of how shell sort works. We take the same array we have used in our previous examples. For our example and ease of understanding, we take an interval of 4 and make a virtual sub-list of all values located at an interval of 4 positions. Here these values are {35, 14}, {33, 19}, {42, 27} and {10, 44}.
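A minimal Python sketch of shell sort with Knuth's increments (the sample array matches the sub-list example above; the function name is our own):

def shell_sort(arr):
    """Insertion-sort elements h positions apart, shrinking h toward 1."""
    n = len(arr)
    h = 1
    while h < n // 3:          # build Knuth's sequence 1, 4, 13, 40, ...
        h = 3 * h + 1
    while h >= 1:
        for i in range(h, n):  # gapped insertion sort for this interval
            key = arr[i]
            j = i
            while j >= h and arr[j - h] > key:
                arr[j] = arr[j - h]
                j -= h
            arr[j] = key
        h //= 3                # move to the next smaller interval
    return arr

print(shell_sort([35, 33, 42, 10, 14, 19, 27, 44]))
# [10, 14, 19, 27, 33, 35, 42, 44]  (for n = 8 the intervals used are 4, then 1)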
7) Quick Sort
Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is based, and the other holds values greater than the pivot. Quick sort partitions the array and then calls itself recursively twice to sort the two resulting sub-arrays. This algorithm is quite efficient for large data sets: its average complexity is O(n log n), where n is the number of items, although its worst case (for example, a poor pivot choice on already-sorted input) is O(n²).
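A minimal Python sketch of quick sort using the Lomuto partition scheme (the last-element pivot choice and the sample array are assumptions for illustration):

def quick_sort(arr, lo=0, hi=None):
    """Partition around a pivot, then recursively sort the two sub-arrays."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)   # sort the values smaller than the pivot
        quick_sort(arr, p + 1, hi)   # sort the values greater than the pivot
    return arr

def partition(arr, lo, hi):
    """Lomuto partition: place the pivot (last element) in its final position."""
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    return i

print(quick_sort([14, 33, 27, 10, 35, 19, 42, 44]))
# [10, 14, 19, 27, 33, 35, 42, 44]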
The Divide & Conquer Method
The divide-and-conquer strategy solves a problem by:
1. Breaking it into sub-problems that are themselves smaller instances of the same type of problem
2. Recursively solving these sub-problems
3. Appropriately combining their answers

In the divide-and-conquer approach, the problem at hand is divided into smaller sub-problems, and each sub-problem is solved independently. As we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible. Those "atomic", smallest possible sub-problems are solved directly, and the solutions of all the sub-problems are finally merged to obtain the solution of the original problem. Broadly, we can understand the divide-and-conquer approach as a three-step process.

Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach, dividing the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the actual problem.

Conquer/Solve
This step receives a large number of smaller sub-problems to be solved. Generally, at this level, the problems are small enough to be considered 'solved' on their own.

Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until they form the solution of the original problem. The approach works recursively, and the conquer and merge steps work so closely together that they often appear as one.
Examples
The following computer algorithms are based on the divide-and-conquer programming approach:
o Merge Sort
o Quick Sort
o Binary Search
o Strassen's Matrix Multiplication
o Closest Pair (points)

There are various ways to solve any computational problem, but those mentioned above are good examples of the divide-and-conquer approach.

Divide and conquer (D&C) is an algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. This divide-and-conquer technique is the basis of efficient algorithms for all kinds of problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g., the Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFTs).

Understanding and designing D&C algorithms is a complex skill that requires a good understanding of the nature of the underlying problem to be solved. As when proving a theorem by induction, it is often necessary to replace the original problem with a more general or complicated one in order to initialize the recursion, and there is no systematic method for finding the proper generalization. These D&C subtleties are seen, for example, when optimizing the calculation of a Fibonacci number with efficient double recursion, as sketched below.
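As a small illustration of that double-recursion idea, here is a minimal Python sketch of the "fast doubling" method, which computes F(n) in O(log n) steps by recursing on the pair (F(k), F(k+1)); the identities used are standard, and the function names are our own:

def fib_pair(n):
    """Return (F(n), F(n+1)) by divide and conquer on the binary form of n."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)          # a = F(k), b = F(k+1), where k = n // 2
    c = a * (2 * b - a)              # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = a * a + b * b                # F(2k+1) = F(k)^2 + F(k+1)^2
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

def fib(n):
    return fib_pair(n)[0]

print(fib(10))   # 55
print(fib(43))   # 433494437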
The Greedy Method
An algorithm is designed to achieve the optimum solution for a given problem. In the greedy algorithm approach, decisions are made from the given solution domain. Being greedy, the closest solution that seems to provide the optimum is chosen. Greedy algorithms try to find a localized optimum solution, which may eventually lead to a globally optimized solution; in general, however, greedy algorithms do not provide globally optimized solutions.

A greedy algorithm is an algorithmic paradigm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a globally optimal solution in a reasonable time.

In general, greedy algorithms have five components:
i. A candidate set, from which a solution is created
ii. A selection function, which chooses the best candidate to be added to the solution
iii. A feasibility function, which is used to determine whether a candidate can contribute to a solution
iv. An objective function, which assigns a value to a solution, or a partial solution, and
v. A solution function, which indicates when we have discovered a complete solution

A game like chess can be won only by thinking ahead: a player who is focused entirely on immediate advantage is easy to defeat. But in many other games, such as Scrabble, it is possible to do quite well by simply making whichever move seems best at the moment and not worrying too much about future consequences. This sort of myopic behavior is easy and convenient, making it an attractive algorithmic strategy. Greedy algorithms build up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Although such an approach can be disastrous for some computational tasks, there are many for which it is optimal. Our first example is that of minimum spanning trees.

If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice, because it is faster than other optimization methods such as dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees, and the algorithm for finding optimum Huffman trees. Greedy algorithms appear in network routing as well.

Counting Coins
This problem is to count up to a desired value by choosing the fewest possible coins, and the greedy approach makes the algorithm pick the largest possible coin. If we are provided coins of € 1, 2, 5 and 10 and we are asked to count € 18, the greedy procedure will be:
o 1 − Select one € 10 coin; the remaining count is 8
o 2 − Then select one € 5 coin; the remaining count is 3
o 3 − Then select one € 2 coin; the remaining count is 1
o 4 − Finally, the selection of one € 1 coin solves the problem

This seems to work fine: for this count we need to pick only 4 coins. But if we change the problem slightly, the same approach may not produce an optimum result. For a currency system with coins of value 1, 7 and 10, counting coins for the value 18 will still be absolutely optimum, but for a count like 15 it may use more coins than necessary.
For example, the greedy approach will use 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same problem could be solved using only 3 coins (7 + 7 + 1). Hence, we may conclude that the greedy approach picks an immediately optimal-looking choice and may fail where global optimization is a major concern.
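The behavior on both currency systems can be checked with a minimal Python sketch (the function name is our own):

def greedy_coin_count(denominations, amount):
    """Greedily pick the largest coin that fits until the amount is covered."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_coin_count([1, 2, 5, 10], 18))   # [10, 5, 2, 1] -- optimal, 4 coins
print(greedy_coin_count([1, 7, 10], 15))      # [10, 1, 1, 1, 1, 1] -- 6 coins,
                                              # although 7 + 7 + 1 uses only 3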
Examples
Many networking algorithms use the greedy approach. Here is a list of a few of them:
 Travelling Salesman Problem (greedy heuristics)
 Prim's Minimal Spanning Tree Algorithm
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
 Graph - Map Coloring
 Graph - Vertex Cover
 Knapsack Problem
 Job Scheduling Problem

There are many similar problems that use the greedy approach to find an optimum solution.

A minimum spanning tree (MST), or minimum weight spanning tree, is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight. That is, it is a spanning tree whose sum of edge weights is as small as possible. More generally, any undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of the minimum spanning trees of its connected components.

There are quite a few use cases for minimum spanning trees. One example would be a telecommunications company trying to lay out cables in a new neighborhood. If it is constrained to bury the cable only along certain paths (e.g., along roads), then there is a graph representing which points are connected by those paths. Some of those paths might be more expensive, because they are longer or require the cable to be buried deeper; these paths would be represented by edges with larger weights. Currency is an acceptable unit for edge weight. A spanning tree for that graph would be a subset of those paths that has no cycles but still connects every house; there might be several spanning trees possible. A minimum spanning tree would be the one with the lowest total cost, and would thus represent the least expensive way to lay the cable.

Applications of MST
Minimum spanning trees have direct applications in the design of networks, including computer networks, telecommunications networks, transportation networks, water supply networks, and electrical grids. They are invoked as subroutines in algorithms for other problems, including the Christofides algorithm for approximating the travelling salesman problem, approximating the multi-terminal minimum cut problem (which is equivalent in the single-terminal case to the maximum flow problem), and approximating the minimum-cost weighted perfect matching.
Other practical applications based on minimum spanning trees include:
o Civil network planning
o Computer network routing protocols
o Taxonomy
o Cluster analysis: clustering points in the plane, single-linkage clustering (a method of hierarchical clustering), graph-theoretic clustering, and clustering gene expression data
o Constructing trees for broadcasting in computer networks; on Ethernet networks this is accomplished by means of the Spanning Tree Protocol
o Image registration and segmentation – see minimum spanning tree-based segmentation
o Curvilinear feature extraction in computer vision
o Handwriting recognition of mathematical expressions
o Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse response filters
o Regionalisation of socio-geographic areas, the grouping of areas into homogeneous, contiguous regions
o Comparing ecotoxicology data
o Topological observability in power systems
o Measuring homogeneity of two-dimensional materials
o Minimax process control
o Describing financial markets: a correlation matrix can be created by calculating a coefficient of correlation between any two stocks; this matrix can be represented topologically as a complex network, and a minimum spanning tree can be constructed to visualize relationships

Minimum Spanning Trees
A spanning tree is a subset of a graph G which has all the vertices covered with the minimum possible number of edges. Hence, a spanning tree does not have cycles and cannot be disconnected. From this definition we can conclude that every connected and undirected graph G has at least one spanning tree. A disconnected graph does not have any spanning tree, as a tree cannot span all of its vertices.
General Properties of Spanning Trees
We now understand that one graph can have more than one spanning tree. Following are a few properties of spanning trees of a connected graph G:
 A connected graph G can have more than one spanning tree.
 All possible spanning trees of graph G have the same number of edges and vertices.
 A spanning tree does not have any cycles (loops).

Removing one edge from a spanning tree will make the graph disconnected, i.e. the spanning tree is minimally connected. Adding one edge to a spanning tree will create a circuit or loop, i.e. the spanning tree is maximally acyclic.

Suppose you are asked to network a collection of computers by linking selected pairs of them. This translates into a graph problem in which nodes are computers, undirected edges are potential links, and the goal is to pick enough of these edges that the nodes are connected. But this is not all; each link also has a maintenance cost, reflected in that edge's weight. What is the cheapest possible network?

Property 1: Removing a cycle edge cannot disconnect a graph.

So the solution must be connected and acyclic: undirected graphs of this kind are called trees. The particular tree we want is the one with minimum total weight, known as the minimum spanning tree. Here is its formal definition.

Input: an undirected graph G = (V, E) with edge weights w_e.
Output: a tree T = (V, E'), with E' ⊆ E, that minimizes weight(T) = Σ_{e ∈ E'} w_e.
There may be more than one minimum spanning tree in a graph; in the figure, the two trees below the graph are two possible minimum spanning trees of the given graph. Both of the following algorithms are greedy algorithms.

Kruskal's algorithm finds the minimum cost spanning tree using the greedy approach. This algorithm treats the graph as a forest and every node in it as an individual tree. A tree connects to another if and only if it has the least cost among all available options and does not violate the MST properties. To understand Kruskal's algorithm, let us consider the following example.

Step 1 - Remove all loops and parallel edges
Remove all loops and parallel edges from the given graph.
In the case of parallel edges, keep the one which has the least associated cost and remove all others.

Step 2 - Arrange all edges in increasing order of weight
The next step is to create a set of edges with their weights, and arrange them in ascending order of weight (cost).

Step 3 - Add the edge which has the least weight
Now we start adding edges to the tree, beginning with the one which has the least weight. Throughout, we keep checking that the spanning-tree properties remain intact. If adding an edge would break the spanning-tree property (by creating a circuit), we do not include that edge.
The least cost is 2, and the edges involved are B,D and D,T. We add them. Adding them does not violate the spanning-tree properties, so we continue to our next edge selection. The next cost is 3, and the associated edges are A,C and C,D. We add them as well. The next cost in the table is 4, but we observe that adding it would create a circuit in the graph, so we ignore it. In the same way, we ignore or avoid all edges that create a circuit; we observe that the edges with cost 5 and 6 also create circuits, so we ignore them and move on. Now we are left with only one node to be added. Between the two least-cost edges available, 7 and 8, we add the edge with cost 7. By adding edge S,A we have included all the nodes of the graph, and we now have a minimum cost spanning tree.
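A minimal Python sketch of Kruskal's algorithm. The edge list below is an assumption, reconstructed to be consistent with the walkthrough above; a simple union-find structure detects the circuits we must avoid:

def kruskal(vertices, edges):
    """edges: list of (weight, u, v). Returns the MST as a list of edges."""
    parent = {v: v for v in vertices}

    def find(v):                            # find the root of v's tree
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving keeps trees shallow
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):           # Step 2: edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # Step 3: skip edges that would form a circuit
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(7, 'S', 'A'), (8, 'S', 'C'), (3, 'A', 'C'), (6, 'A', 'B'),
         (4, 'B', 'C'), (2, 'B', 'D'), (3, 'C', 'D'), (2, 'D', 'T'),
         (5, 'B', 'T')]
print(kruskal(['S', 'A', 'B', 'C', 'D', 'T'], edges))
# [('B', 'D', 2), ('D', 'T', 2), ('A', 'C', 3), ('C', 'D', 3), ('S', 'A', 7)]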
Prim's algorithm, like Kruskal's algorithm, finds the minimum cost spanning tree using the greedy approach, and it shares a similarity with the shortest-path-first algorithms. In contrast with Kruskal's algorithm, Prim's algorithm treats the nodes as a single tree and keeps adding new nodes to the spanning tree from the given graph. To contrast it with Kruskal's algorithm and to understand Prim's algorithm better, we shall use the same example.

Step 1 - Remove all loops and parallel edges
Remove all loops and parallel edges from the given graph. In the case of parallel edges, keep the one which has the least associated cost and remove all others.
Step 2 - Choose any arbitrary node as the root node
In this case, we choose node S as the root node of Prim's spanning tree. This node is chosen arbitrarily, so any node can be the root node. One may wonder why any node can serve as the root; the answer is that a spanning tree includes all the nodes of the graph, and because the graph is connected, every node has at least one edge joining it to the rest of the tree.

Step 3 - Check outgoing edges and select the one with the least cost
After choosing the root node S, we see that S,A and S,C are two edges with weights 7 and 8, respectively. We choose the edge S,A, as its weight is lower than the other. Now the tree S-7-A is treated as one node, and we check all the edges going out from it. We select the one which has the lowest cost and include it in the tree. After this step, the tree S-7-A-3-C is formed. Again we treat it as one node and check all the outgoing edges, choosing only the least-cost edge. In this case, C-3-D is the new edge, which costs less than the other candidate edges of cost 8, 6, 4, etc.
After adding node D to the spanning tree, we have two outgoing edges with the same cost, i.e. D-2-T and D-2-B. We can add either one; the next step will again yield an edge of cost 2 as the least-cost choice, so the spanning tree is shown with both edges included. Note that the output spanning tree of the same graph produced by the two different algorithms is the same.
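A minimal Python sketch of Prim's algorithm on the same assumed graph used in the Kruskal sketch above, with a min-priority queue holding the candidate edges leaving the growing tree:

import heapq

def prim(graph, root):
    """graph: {node: [(weight, neighbour), ...]}. Returns the MST edge list."""
    visited = {root}
    heap = [(w, root, v) for w, v in graph[root]]
    heapq.heapify(heap)
    mst = []
    while heap:
        w, u, v = heapq.heappop(heap)     # cheapest edge leaving the tree
        if v in visited:
            continue                      # both endpoints already in the tree
        visited.add(v)
        mst.append((u, v, w))
        for weight, nxt in graph[v]:      # new outgoing edges from node v
            if nxt not in visited:
                heapq.heappush(heap, (weight, v, nxt))
    return mst

graph = {
    'S': [(7, 'A'), (8, 'C')],
    'A': [(7, 'S'), (3, 'C'), (6, 'B')],
    'B': [(6, 'A'), (4, 'C'), (2, 'D'), (5, 'T')],
    'C': [(8, 'S'), (3, 'A'), (4, 'B'), (3, 'D')],
    'D': [(2, 'B'), (3, 'C'), (2, 'T')],
    'T': [(2, 'D'), (5, 'B')],
}
print(prim(graph, 'S'))
# [('S', 'A', 7), ('A', 'C', 3), ('C', 'D', 3), ('D', 'B', 2), ('D', 'T', 2)]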
Dijkstra's Algorithm
Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.
The algorithm exists in many variants. Dijkstra's original variant found the shortest path between two nodes, but a more common variant fixes a single node as the "source" node and finds shortest paths from the source to all other nodes in the graph, producing a shortest-path tree. For a given source node in the graph, the algorithm finds the shortest path between that node and every other. It can also be used to find the shortest path from a single node to a single destination node by stopping the algorithm once the shortest path to the destination has been determined. For example, if the nodes of the graph represent cities and edge path costs represent driving distances between pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest route between one city and all other cities. As a result, the shortest-path algorithm is widely used in network routing protocols, most notably IS-IS (Intermediate System to Intermediate System) and Open Shortest Path First (OSPF). It is also employed as a subroutine in other algorithms such as Johnson's.

Dijkstra's original algorithm does not use a min-priority queue and runs in time O(|V|²). In some fields, artificial intelligence in particular, Dijkstra's algorithm or a variant of it is known as uniform-cost search and is formulated as an instance of the more general idea of best-first search.
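A minimal Python sketch of the min-priority-queue variant, run on the same assumed weighted graph used in the spanning-tree sketches above:

import heapq

def dijkstra(graph, source):
    """graph: {node: [(weight, neighbour), ...]}. Returns {node: distance}."""
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale entry; a shorter path was found
        for w, v in graph[u]:
            if d + w < dist[v]:           # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    'S': [(7, 'A'), (8, 'C')],
    'A': [(7, 'S'), (3, 'C'), (6, 'B')],
    'B': [(6, 'A'), (4, 'C'), (2, 'D'), (5, 'T')],
    'C': [(8, 'S'), (3, 'A'), (4, 'B'), (3, 'D')],
    'D': [(2, 'B'), (3, 'C'), (2, 'T')],
    'T': [(2, 'D'), (5, 'B')],
}
print(dijkstra(graph, 'S'))
# {'S': 0, 'A': 7, 'B': 12, 'C': 8, 'D': 11, 'T': 13}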
Dynamic Programming
Bellman pioneered the systematic study of dynamic programming in the 1950s; dynamic programming is "planning over time." Dynamic programming is a very powerful algorithmic paradigm in which a problem is solved by identifying a collection of sub-problems and tackling them one by one, smallest first, using the answers to small problems to help figure out larger ones, until the whole lot of them is solved.

Dynamic programming (also known as dynamic optimization) is a method for solving a complex problem by breaking it down into a collection of simpler sub-problems, solving each of those sub-problems just once, and storing their solutions. The next time the same sub-problem occurs, instead of re-computing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the sub-problem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to sub-problems instead of re-computing them is called "memoization".

Dynamic programming algorithms are often used for optimization. A dynamic programming algorithm examines the previously solved sub-problems and combines their solutions to give the best solution for the given problem. In comparison, a greedy algorithm treats the solution as some sequence of steps and picks the locally optimal choice at each step. Using a greedy algorithm does not guarantee an optimal solution, because picking locally optimal choices may result in a bad global solution, but it is often faster to compute. Some greedy algorithms (such as Kruskal's or Prim's for minimum spanning trees) are, however, proven to lead to the optimal solution.

There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide and conquer" instead; this is why merge sort and quick sort are not classified as dynamic programming problems. Optimal substructure means that the solution to a given optimization problem can be obtained by combining optimal solutions to its sub-problems. Such optimal substructures are usually described by means of recursion.
Overlapping sub-problems means that the space of sub-problems must be small; that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci series: Fi = Fi−1 + Fi−2, with base case F1 = F2 = 1. Then F43 = F42 + F41, and F42 = F41 + F40, so F41 is solved in the recursive sub-trees of both F43 and F42. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once.

Applying DP to Matrix Chain Multiplication
Matrix chain multiplication is a well-known example that demonstrates the utility of dynamic programming. Engineering applications often have to multiply a chain of matrices, and matrices of large dimensions, for example 100×100, are not unusual. The task, therefore, is to multiply matrices A1, A2, ..., An. As we know from basic linear algebra, matrix multiplication is not commutative but is associative, and we can multiply only two matrices at a time. So we can multiply this chain of matrices in many different ways, for example:

((A1 × A2) × A3) × ... × An
A1 × (((A2 × A3) × ...) × An)
(A1 × A2) × (A3 × ... × An)

and so on. The number of scalar multiplications performed depends heavily on which parenthesization is chosen, and dynamic programming finds the cheapest one by combining the optimal costs of smaller chains, as sketched below.
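A minimal Python sketch of the classic O(n³) dynamic program for matrix chain multiplication. The dims list is an assumed example: dims[i-1] × dims[i] is the shape of matrix Ai, so [10, 100, 5, 50] describes A1 (10×100), A2 (100×5) and A3 (5×50):

def matrix_chain_cost(dims):
    """Return the minimum number of scalar multiplications for the chain."""
    n = len(dims) - 1                       # number of matrices
    # cost[i][j] = cheapest way to multiply the sub-chain A_(i+1) .. A_(j+1)
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length, smallest first
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # k = position of the outermost split
            )
    return cost[0][n - 1]

print(matrix_chain_cost([10, 100, 5, 50]))
# 7500: (A1 x A2) first costs 10*100*5 = 5000, then times A3 costs 10*5*50 = 2500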
Application of Dynamic Programming to Sequence Alignment
In computational genetics, sequence alignment is an important application where dynamic programming is essential. Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost. The problem can be stated naturally as a recursion: a sequence A is optimally edited into a sequence B by either:
1. inserting the first character of B, and performing an optimal alignment of A and the tail of B;
2. deleting the first character of A, and performing an optimal alignment of the tail of A and B; or
3. replacing the first character of A with the first character of B, and performing optimal alignments of the tails of A and B.

The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the costs of its neighboring cells and selecting the optimum. Different variants exist, such as the Smith–Waterman algorithm and the Needleman–Wunsch algorithm.

Fibonacci Sequence
Here is a naïve implementation of a function finding the nth member of the Fibonacci sequence, based directly on the mathematical definition:
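A minimal Python sketch of such a naïve implementation, shown together with the memoized version discussed above (function names are our own):

from functools import lru_cache

def fib_naive(n):
    """Direct translation of the definition; recomputes sub-problems repeatedly."""
    if n <= 2:
        return 1                     # base case: F1 = F2 = 1
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same recursion, but each sub-problem is solved only once (memoization)."""
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(43))   # 433494437, computed with only 43 distinct sub-problems;
                      # fib_naive(43) gives the same answer, but only after
                      # hundreds of millions of recursive calls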
 Many string algorithms, including longest common subsequence, longest increasing subsequence, longest common substring, and Levenshtein distance (edit distance)
 The use of transposition tables and refutation tables in computer chess
 Recurrent solutions to lattice models for protein–DNA binding in bioinformatics
 The Viterbi algorithm (used for hidden Markov models)
 Optimizing the order for chain matrix multiplication
 The recursive least squares method
 The Bellman–Ford algorithm for finding the shortest distance in a graph
 Floyd's all-pairs shortest path algorithm
 Some methods for solving the travelling salesman problem, either exactly (in exponential time) or approximately

Edit Distance
When a spell checker encounters a possible misspelling, it looks in its dictionary for other words that are close by. What is the appropriate notion of closeness in this case? A natural measure of the distance between two strings is the extent to which they can be aligned, or matched up. Technically, an alignment is simply a way of writing the strings one above the other. For instance, here are two possible alignments of SNOWY and SUNNY:

S _ N O W Y        _ S N O W _ Y
S U N N _ Y        S U N _ _ N Y

The _ indicates a gap; any number of these can be placed in either string. The cost of an alignment is the number of columns in which the letters differ. And the edit distance between two strings is the cost of their best possible alignment. Do you see that there is no better alignment of SNOWY and SUNNY than the one shown on the left, with a cost of 3?

Edit distance is so named because it can also be thought of as the minimum number of edits (insertions, deletions, and substitutions of characters) needed to transform the first string into the second. For instance, the alignment shown on the left corresponds to three edits: insert U, substitute O → N, and delete W. In general, there are so many possible alignments between two strings that it would be terribly inefficient to search through all of them for the best one. Instead we turn to dynamic programming.

A dynamic programming solution to edit distance
When solving a problem by dynamic programming, the most crucial question is: what are the sub-problems? Here they are the edit distances between every prefix of the first string and every prefix of the second, exactly the table of cells (i,j) described above. Once the sub-problems are chosen, it is an easy matter to write down the algorithm: iteratively solve one sub-problem after the other, in order of increasing size.
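A minimal Python sketch of that table-filling algorithm; cell (i, j) holds the cost of optimally aligning the first i characters of a with the first j characters of b:

def edit_distance(a, b):
    """Fill the table of prefix-vs-prefix costs in order of increasing size."""
    m, n = len(a), len(b)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i                    # align a[1..i] with the empty string: i deletions
    for j in range(n + 1):
        E[0][j] = j                    # align the empty string with b[1..j]: j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            E[i][j] = min(E[i - 1][j] + 1,         # delete a[i]
                          E[i][j - 1] + 1,         # insert b[j]
                          E[i - 1][j - 1] + diff)  # match or substitute
    return E[m][n]

print(edit_distance("SNOWY", "SUNNY"))   # 3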
Knapsack
Neither version of the knapsack problem (with or without repetition of items) is likely to have a polynomial-time algorithm. However, using dynamic programming both can be solved in O(nW) time, where n is the number of items and W is the knapsack capacity. This is reasonable when W is small, but it is not polynomial in the input size, since the input size is proportional to log W rather than W.

Bellman–Ford Algorithm
The Bellman–Ford algorithm computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers. The algorithm was first proposed by Alfonso Shimbel in 1955, but is instead named after Richard Bellman and Lester Ford, Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published the same algorithm in 1957, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm.

Negative edge weights are found in various applications of graphs, hence the usefulness of this algorithm. If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from the source, then there is no cheapest path: any path that touches the negative cycle can be made cheaper by one more walk around it. In such a case, the Bellman–Ford algorithm can detect negative cycles and report their existence.
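A minimal Python sketch of the Bellman–Ford algorithm. The edge list is an assumed example; note the negative edge weight, which Dijkstra's algorithm cannot handle:

def bellman_ford(vertices, edges, source):
    """edges: list of (u, v, weight). Returns distances, or None on a negative cycle."""
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):       # relax every edge |V| - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                    # one more pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None                      # a reachable negative cycle exists
    return dist

edges = [('S', 'A', 4), ('S', 'B', 5), ('A', 'C', -3), ('B', 'C', 2), ('C', 'D', 4)]
print(bellman_ford(['S', 'A', 'B', 'C', 'D'], edges, 'S'))
# {'S': 0, 'A': 4, 'B': 5, 'C': 1, 'D': 5}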
Travelling Salesman Problem
The travelling salesman problem (TSP), or in recent years the travelling salesperson problem, asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. The travelling purchaser problem and the vehicle routing problem are both generalizations of the TSP.

In the theory of computational complexity, the decision version of the TSP (where, given a length L, the task is to decide whether the graph has any tour shorter than L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time of any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, a large number of heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely, and even problems with millions of cities can be approximated within a small fraction of 1%.

The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept "city" represents, for example, customers, soldering points, or DNA fragments, and the concept "distance" represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources want to minimize the time spent moving the telescope between the sources. In many applications, additional constraints such as limited resources or time windows may be imposed.
A travelling salesman is getting ready for a big sales tour. Starting at his hometown, suitcase in hand, he will conduct a journey in which each of his target cities is visited exactly once before he returns home. Given the pairwise distances between cities, what is the best order in which to visit them, so as to minimize the overall distance travelled? Denote the cities by 1, ..., n, the salesman's hometown being 1, and let D = (dij) be the matrix of intercity distances. The goal is to design a tour that starts and ends at 1, includes all the other cities exactly once, and has minimum total length. Even in a tiny example involving just five cities, it is tricky for a human to find the optimal tour; imagine what happens when hundreds of cities are involved.

It turns out this problem is also difficult for computers. In fact, the travelling salesman problem (TSP) is one of the most notorious computational tasks. There is a long history of attempts at solving it, a long saga of failures and partial successes, and along the way, major advances in algorithms and complexity theory. The most basic piece of bad news about the TSP is that it is highly unlikely to be solvable in polynomial time. How long does it take, then? The brute-force approach is to evaluate every possible tour and return the best one. Since there are (n − 1)! possibilities, this strategy takes O(n!) time. We will now see that dynamic programming yields a much faster solution, though not a polynomial one.

What is the appropriate sub-problem for the TSP? Sub-problems refer to partial solutions, and in this case the most obvious partial solution is the initial portion of a tour. Suppose we have started at city 1 as required, have visited a few cities, and are now in city j. What information do we need in order to extend this partial tour? We certainly need to know j, since this determines which cities are most convenient to visit next. And we also need to know all the cities visited so far, so that we do not repeat any of them. Here, then, is an appropriate sub-problem: for a subset of cities S containing 1 and a city j in S, let C(S, j) be the length of the shortest path that starts at 1, visits every city in S exactly once, and ends at j.
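A minimal Python sketch of the Held–Karp dynamic program built on this sub-problem (cities are numbered from 0 rather than 1, and the five-city distance matrix is an assumption for illustration):

from itertools import combinations

def held_karp(dist):
    """C[(S, j)] = shortest path starting at city 0, visiting all of S, ending at j."""
    n = len(dist)
    C = {(frozenset([0, j]), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):                     # grow subsets, smallest first
        for subset in combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}
            for j in subset:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in subset if k != j)
    full = frozenset(range(n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))   # close the tour

dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
print(held_karp(dist))   # 26, the length of the shortest tour

This runs in O(n² · 2ⁿ) time: exponential, but far better than the O(n!) brute force.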
A) Backtracking