Chapter Six
Advanced Sorting and
Searching Algorithms
Types of Advanced Sorting Algorithms
 Shell sort
 Quick sort
 Heap sort
 Merge sort
Shell sort
 Shell sort is the oldest fast sorting algorithm;
it is an improvement of insertion sort.
 It was developed by Donald Shell in 1959.
 It is fast, easy to understand and easy to
implement.
 It sorts the given data by viewing the list as a
two-dimensional array:
 first the list is divided into a number of columns,
and the items in each column are sorted.
Cont'd ….
The idea of shell sort is the following:
 Arrange the data sequence in a two-dimensional
array.
 Then sort the columns of the array. The effect is
that the data sequence is partially sorted.
 The process above is repeated, but each time with
a narrower array, i.e. with a smaller number of
columns.
 In the last step, the array consists of only one
column. In each step, the sortedness of the
sequence increases, until in the last step it is
completely sorted.
Cont'd…
 Shell sorting ends when the list is 1-sorted, i.e.
when it is finally sorted using an ordinary insertion
sort. Its time complexity is O(n^(3/2)).
 If the number of columns is too large, the sorting
algorithm may prove inefficient.
 If the number of columns is kept small, the sorting
algorithm is efficient.
 E.g.
Sort the following data using shell sort.
3 7 9 0 5 1 6 8 4 2 0 6 1 5 7 3 4 9 8 2
Solution:
Cont'd …
 First, the data is arranged in an array with 7 columns (left),
then the columns are sorted (right):

3 7 9 0 5 1 6                    3 3 2 0 5 1 5
8 4 2 0 6 1 5   sort by column   7 4 4 0 6 1 6
7 3 4 9 8 2                      8 7 9 9 8 2

 Data elements 8 and 9 have now already come to
the end of the sequence, but a small element (2)
is also still there.
 In the next step, the sequence is arranged in 3
columns, which are again sorted:
Cont'd …
 Now divide into three columns:

3 3 2                      0 0 1
0 5 1                      1 2 2
5 7 4                      3 3 4
4 0 6    sort by column    4 5 6
1 6 8                      5 6 8
7 9 9                      7 7 9
8 2                        8 9

 Now the sequence is almost completely sorted. When
arranging it in one column in the last step, only a
6, an 8 and a 9 are slightly away from their correct
positions.
Example
Now arrange in one row:
0 0 1 1 2 2 3 3 4 4 5 6 5 6 8 7 7 9 8 9
 From the above, almost all of the data are in
their proper positions. The only out-of-place items are 6,
8 and 9.
 Finally, sort these remaining items using insertion sort.
At the end:
0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9
are the sorted elements.
 N.B. The time complexity of shell sort is O(n^(3/2)).
Cont'd…
 Knuth experimented with several values and
recommends that the spacing h for an array of size N be
based on the following formula:
 h1 = 1
 h(i+1) = 3*h(i) + 1, and stop with h(t) when h(t+2) >= N
E.g. To sort 100 items we first find the smallest h(t) such that h(t) >= 100;
for 100 items this is h5.
 So for 100 items, selecting the values of h using the
above formula:
h1 = 1, h2 = 4, h3 = 13, h4 = 40, h5 = 121
 Since 121 >= 100, h(t) is taken two steps lower, i.e. h3. Therefore
our sequence of h values will be 13-4-1.
 Once the initial h value has been determined, the subsequent values
may be calculated using the above formula.
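The increments can be generated directly from this recurrence. Below is a minimal C++ sketch (the function name and the step of dropping the two largest values are spelled out from the rule above; the slides give no code) that produces the 13-4-1 sequence for N = 100:

#include <iostream>
#include <vector>

// Knuth's increments: h1 = 1, h(i+1) = 3*h(i) + 1.
// Grow until h >= n, then drop the two largest values, so the
// largest gap actually used, h(t), satisfies h(t+2) >= n.
std::vector<int> knuthGaps(int n) {
    std::vector<int> h{1};
    while (h.back() < n)
        h.push_back(3 * h.back() + 1);   // 1, 4, 13, 40, 121, ...
    if (h.size() > 2)
        h.resize(h.size() - 2);          // keep 1, 4, 13 for n = 100
    return h;                            // ascending order
}

int main() {
    for (int g : knuthGaps(100))
        std::cout << g << ' ';           // prints: 1 4 13
}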
An important property of Shell sort
 A sequence which is hk-sorted and is then h(k-1)-sorted
remains hk-sorted. This means that the work done by
early phases is not undone by later phases.
 The action of an hk-sort is to perform an insertion sort
on hk independent sub-arrays.
 Shell sort is a non-stable, in-place sort.
E.g. Sort the following list using the shell sort algorithm.
5 8 2 4 1 3 9 7 6 0
 Choose h3 = 5 (n/2 = 10/2 = 5)

5 8 2 4 1   sort by column   3 8 2 4 0
3 9 7 6 0                    5 9 7 6 1

 Choose h2 = 3

3 8 2                        1 0 2
4 0 5    sort by column      3 7 5
9 7 6                        4 8 6
1                            9

 Choose h1 = 1
1 0 2 3 7 5 4 8 6 9 (the 1-sorted list)
Then, using insertion sort: 0 1 2 3 4 5 6 7 8 9
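In code, the "columns" are simply elements that are a fixed gap apart, and each pass is an insertion sort over those columns. A minimal C++ sketch, assuming the 5-3-1 gap sequence used in the example above (the function name is illustrative, not from the slides):

#include <vector>

// Shell sort with a fixed gap sequence. For each gap, elements that
// are gap positions apart form one "column", and a gap-insertion
// sort orders every column; gap = 1 is an ordinary insertion sort.
void shellSort(std::vector<int>& a) {
    for (int gap : {5, 3, 1}) {
        for (int i = gap; i < (int)a.size(); ++i) {
            int key = a[i];
            int j = i;
            while (j >= gap && a[j - gap] > key) {
                a[j] = a[j - gap];       // shift larger column elements up
                j -= gap;
            }
            a[j] = key;
        }
    }
}

Running shellSort on the list 5 8 2 4 1 3 9 7 6 0 reproduces the intermediate arrangements shown above (3 8 2 4 0 5 9 7 6 1 after the gap-5 pass, 1 0 2 3 7 5 4 8 6 9 after the gap-3 pass) and finishes with the sorted list.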
2. Quick Sort
 Quick sort was invented by C. A. R. Hoare.
 It is a fast sorting algorithm which is widely
used in practice.
 In the average case, it has O(n log n) time
complexity,
 which makes quick sort suitable for sorting large
data volumes. The idea of the algorithm is
quite simple, and once you understand it, you can
write quick sort almost as quickly as bubble sort.
Cont'd…
 The main idea of this algorithm is to split the array into
left and right parts in such a way that all the elements in
the right part are larger than all the elements in the left
part, then sort each part (recursively) to produce a
sorted array.
Algorithm:
 Choose a pivot value (often the first element is taken as
the pivot value).
 Partition the list around the pivot so that:
 the left part has items less than or equal to the pivot value
 the right part has items greater than or equal to the pivot
value
 Sort both parts: apply the quick sort algorithm recursively to
the left and the right parts.
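A compact recursive version of this outline, taking the first element of each part as the pivot, might look like the following C++ sketch (function names are illustrative; the slides give no code):

#include <vector>
#include <algorithm>   // std::swap

// Partition a[lo..hi] around the pivot a[lo] and return the pivot's
// final position: smaller items end up to its left, larger to its right.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[lo];
    int i = lo;                          // boundary of the "less than pivot" part
    for (int j = lo + 1; j <= hi; ++j)
        if (a[j] < pivot)
            std::swap(a[++i], a[j]);
    std::swap(a[lo], a[i]);              // put the pivot between the two parts
    return i;
}

void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;                // 0 or 1 element: already sorted
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);             // sort the left part recursively
    quickSort(a, p + 1, hi);             // sort the right part recursively
}

A whole array a is sorted with quickSort(a, 0, (int)a.size() - 1).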
Example: sort the array [3, 2, 7, 10, 6, 1, 4, 9, 11] using quick sort.
Figure: Execution of quick sort on the array [3 2 7 10 6 1 4 9 11]
Cont'd …
 What is the complexity of quick sort? The worst case occurs
when the largest (or smallest) element is always chosen as
the pivot.
 In this case, the algorithm operates on parts of size n, n-1,
n-2, ..., 2, and performing the partitions requires n-1, n-2, n-3, ...,
1 comparisons,
 which means that the time complexity of quick sort
is O(n^2) in the worst case.
 It is O(n log n) in the average case.
3. Heap Sort
 The heap sort algorithm uses the data structure called
the heap.
 A heap (heap tree) is a complete binary tree in which
each node has a value greater than both its children (if
any).
 Heap sort uses a process called "adjust" to accomplish its task
(building a heap tree) whenever a value is larger than its
parent.
 Each node in the heap corresponds to an element of the
array, with the root node corresponding to the
element with index 0 in the array.
Cont'd …
 For a node corresponding to index i, its left child has
index (2*i + 1) and its right child has index (2*i + 2).
 If either or both of these elements do not exist in the array,
then the corresponding child node does not exist either.
 N.B: In a heap the largest element is located at the root
node.
 The time complexity of heap sort is O(n log n).
 It has the extremely important advantage of a worst-case
O(n log n) running time, whereas quick sort requires O(n^2)
running time in the worst case.
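The index arithmetic is small enough to write out once; a short sketch (the parent formula is the usual inverse of the child formulas and is not stated on the slide):

// 0-based array indices of a node's relatives in a heap.
inline int leftChild(int i)  { return 2 * i + 1; }
inline int rightChild(int i) { return 2 * i + 2; }
inline int parent(int i)     { return (i - 1) / 2; }
// A child exists only if its index is still inside the array,
// i.e. leftChild(i) < n for a heap of n elements.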
Algorithm:
1. Construct a binary tree:
 The root node corresponds to Data[0]. If we consider the
index associated with a particular node to be i, then the left
child of this node corresponds to the element with index
2*i+1 and the right child corresponds to the element with
index 2*i+2. If either or both of these elements do not exist in
the array, then the corresponding child node does not exist
either.
 Construct the heap tree from the initial binary tree using the
"adjust" process.
 Sort by swapping the root value with the lowest, right-most
value, deleting the lowest, right-most value, and
inserting the deleted value in the array in its proper position.
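A complete heap sort built from the "adjust" (sift-down) step described above could look like this C++ sketch (function names are illustrative; the slides show the steps only as diagrams):

#include <vector>
#include <algorithm>   // std::swap

// "Adjust": sift the value at index i down until both of its
// children (if any) are smaller, restoring the heap property
// within the first n elements of the array.
void adjust(std::vector<int>& a, int n, int i) {
    while (true) {
        int largest = i;
        int l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;         // heap property holds here
        std::swap(a[i], a[largest]);
        i = largest;                     // continue one level down
    }
}

void heapSort(std::vector<int>& a) {
    int n = (int)a.size();
    // Build the heap tree from the initial binary tree.
    for (int i = n / 2 - 1; i >= 0; --i)
        adjust(a, n, i);
    // Repeatedly swap the root (largest value) with the last node,
    // shrink the heap by one, and re-adjust, as in the steps below.
    for (int last = n - 1; last > 0; --last) {
        std::swap(a[0], a[last]);
        adjust(a, last, 0);
    }
}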
Example: Sort the following list using heap sort
algorithm.
5 8 2 4 1 3 9 7 6 0
Construct the initial binary tree, then construct the heap tree (Step 1):
[Figure: the complete binary tree built from the list (root = 5), and the heap tree obtained from it by the "adjust" process (root = 9).]
Cont'd … Step 2:
 Swap the root node with the lowest, right-most node and
delete the lowest, right-most value; insert the deleted value in
the array in its proper position; adjust the heap tree; and repeat
this process until the tree is empty.
[Figure: the heap tree before and after this first swap; array so far: 9]
Cont'd … Step 3:
[Figure: the heap tree before and after swapping the new root (8) with the lowest, right-most node; array so far: 8 9]
Cont'd … Step 4:
[Figure: the heap tree before and after the swap; array so far: 7 8 9]
Cont'd … Step 5:
[Figure: the heap tree before and after the swap; array so far: 6 7 8 9]
Cont'd … Step 6:
[Figure: the heap tree before and after the swap; array so far: 5 6 7 8 9]
Cont'd … Step 7:
[Figure: the heap tree before and after the swap; array so far: 4 5 6 7 8 9]
Cont'd … Steps 8 and 9:
[Figure: the shrinking heap trees; array after Step 8: 3 4 5 6 7 8 9, and after Step 9: 2 3 4 5 6 7 8 9]
Cont'd … Steps 10 and 11:
[Figure: the last two nodes are removed; array after Step 10: 1 2 3 4 5 6 7 8 9, and the final sorted list: 0 1 2 3 4 5 6 7 8 9]
4. Merge Sort:
 Like quick sort, merge sort uses the divide and conquer strategy,
and its time complexity is O(n log n).
 It begins by dividing a list into two sublists, and then
recursively divides each of these sublists until the
sublists contain only one element each.
 These sublists are then combined using a simple merging
technique.
 To combine two lists, the first value of each is
compared, and the smaller value is added to the output list.
This process continues until one of the lists is
exhausted, at which point the remainder of the other list is
simply appended to the output list.
 Adjacent lists are combined at each step, until all
elements are merged back into a single list.
Algorithm of Merge sort
 Divide the array into two halves.
 Recursively sort the first n/2 items.
 Recursively sort the last n/2 items.
 Merge sorted items (using an auxiliary
array).
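A direct translation of this outline into C++, using one auxiliary array for the merge step (names are illustrative; the slides give no code):

#include <vector>

// Merge the sorted halves a[lo..mid] and a[mid+1..hi] through tmp.
void merge(std::vector<int>& a, std::vector<int>& tmp,
           int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)                  // take the smaller front value
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];          // append leftover left half
    while (j <= hi)  tmp[k++] = a[j++];          // append leftover right half
    for (int x = lo; x <= hi; ++x) a[x] = tmp[x];
}

void mergeSort(std::vector<int>& a, std::vector<int>& tmp, int lo, int hi) {
    if (lo >= hi) return;                        // one element: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, tmp, lo, mid);                  // sort the first half
    mergeSort(a, tmp, mid + 1, hi);              // sort the second half
    merge(a, tmp, lo, mid, hi);                  // merge the sorted halves
}

To sort a whole array a, call mergeSort(a, tmp, 0, (int)a.size() - 1) with tmp created the same size as a.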
Example: Sort the following list using
merge sort algorithm:
5 8 2 4 1 3 9 7 6 0
Solution:
Division phase:
5 8 2 4 1 3 9 7 6 0
5 8 2 4 1 | 3 9 7 6 0
5 8 | 2 4 1 | 3 9 | 7 6 0
5 | 8 | 2 | 4 1 | 3 | 9 | 7 | 6 0
4 | 1 | 6 | 0

Sorting and merging phase:
1 4 | 0 6
5 8 | 1 2 4 | 3 9 | 0 6 7
1 2 4 5 8 | 0 3 6 7 9
0 1 2 3 4 5 6 7 8 9
Example 2: sort the following list using merge sort:
1 8 6 4 10 5 3 2 22
Solution:

Division phase:
1 8 6 4 10 5 3 2 22
1 8 6 4 10 | 5 3 2 22
1 8 6 | 4 10 | 5 3 | 2 22

Sorting and merging phase:
1 6 8 | 4 10 | 3 5 | 2 22
1 4 6 8 10 | 2 3 5 22
1 2 3 4 5 6 8 10 22
Disadvantage of merge sort
 It needs additional storage for merging, which,
for a large amount of data, could be an
insurmountable obstacle.
 One solution to this drawback is to use a linked list.
End of Chapter 6
Questions ?
Chapter Seven
Graphs
Graph
 A graph is a mathematical abstraction used to
represent "connectivity of information".
 It is used to represent arbitrary relationships among data
objects. It is a nonlinear data structure.
 A graph G consists of a set V of vertices (nodes) and
a set E of edges (arcs), and is written as G = (V, E),
where V is a finite non-empty set of vertices and
E is a set of pairs of vertices, called edges (arcs).
 An edge e = (v, w) is a pair of vertices v and w, and
is said to be incident with v and w.
Cont'd …
[Figure: an undirected graph on the vertices 1-5.]
V (G) = { 1, 2, 3, 4, 5}
E(G)= { (1,2), (2,3), (3,4), (4,5), (1,5), (1,3), (3,5) }
 You may notice that the edge incident with node 1 and
node 5 is written as (1, 5);
 we could also have written (5,1) instead of (1,5). The
same applies to all the other edges. This is true for an
undirected graph.
Types of Graph
 There are two types of graphs:
Undirected graph
Directed graph
 In an undirected graph, the pair of vertices representing
any edge is unordered. Thus (v, w) and (w, v)
represent the same edge.
 In a directed graph each edge is an ordered pair of
vertices, i.e. each edge is represented by a directed
pair. E.g. if e = (v, w) then v is the tail (initial vertex)
and w is the head (final vertex).
 Consequently, (v, w) and (w, v) represent two
different edges.
Cont'd ...
[Figure: a directed graph on the vertices 1-5; the direction of each
edge is indicated by an arrow.]
V (G) = {1, 2, 3, 4, 5}
E (G) = { (1,2), (2,3), (3,4), (5,3), (5,4), (5,1) }
A directed graph is also referred to as a digraph, and an
undirected graph simply as a graph.
Adjacent Vertices
 Vertex v1 is said to be adjacent to vertex v2 if there is
an edge (v1, v2) or (v2, v1).
Let us consider the following graph:
[Figure: an undirected graph on the vertices 1-7.]
Example:
Vertices adjacent to node 3 are 1, 5, 6 and 4.
Vertices adjacent to node 2 are 1 and 7.
 Finite Graph: a graph with a finite number of nodes and
a finite number of edges.
 Loop: an edge with identical end points is called a
loop.
 Multiple edges: edges connecting the same end
points.
 Multi-Graph: a graph with multiple edges. In a
multi-graph, even though there is a finite number
of nodes, the number of edges may not be finite.
 Path: a sequence of edges between two vertices.
 A path from vertex v to vertex w is a sequence of vertices,
each adjacent to the next. Consider the above example
again.
Cont'd ….
Consider the following graph again:
[Figure: the undirected graph on the vertices 1-7 from above.]
 1, 3, 4 is a path
 1, 3, 6 is a path
 1, 2, 7 is a path
 1, 3, 4, 7 is a path
 We may notice that there is a path which starts at vertex 1
and finishes at vertex 1, e.g. the path 1, 3, 4, 7, 2, 1. Such a path is
called a cycle.
 Cycle: - A cycle is a path in which first and
last vertices are the same.
 Connected Graph: A graph is called 'connected'
if there exists a path between any two of its nodes.
The above graph is a connected graph.
[Figure: a directed graph on the vertices 1-5 that is only weakly connected.]
This is a weakly connected graph, since there does not exist a
directed path from vertex 1 to vertex 4, nor from vertex 5 to the
other vertices, and so on.
[Figure: a directed graph on the vertices 1-5 that is strongly connected.]
This is a strongly connected graph, since there is a path from any
vertex to any other vertex.
Cont'd …
 Unilaterally Connected graph: for any pair of
nodes (u, v), there is a path from either 'u' to 'v'
or from 'v' to 'u'.
 Simple Graph: a simple graph is a graph with the
following properties:
 No parallel edges.
 It cannot have more than one loop at a given node.
 Degree
 There is no limit on the number of edges incident on one
vertex: it could be none, one or more. The number of edges
incident on a vertex determines its degree. (or)
 The number of edges containing a node is called the
degree of that node.
Example:
[Figure: an undirected graph on the vertices 1-7.]
The degree of vertex 3 is 4.
[Figure: a directed graph on the vertices 1-5.]
In a digraph we attach an in-degree and an
out-degree to each of the vertices.
In-degree:
The in-degree of a node 'u' is the
number of edges ending at 'u'.
Out-degree:
The out-degree of a node 'u' is the
number of edges starting from 'u'.
Example:
The in-degree of vertex 5 is 2.
The out-degree of vertex 5 is 1.
The in-degree of vertex v is the number of
edges for which vertex v is a head, and the
out-degree is the number of edges for which
vertex v is a tail.
Source: A node n is called source if out degree (n) > 0 and in
degree (n) = 0.
Sink: A node n is called a sink if in degree (n) > 0 and out
degree (n) = 0.
Tree Graph: It is a special type of graph.
A graph is said to be a tree graph, if it has the following two
properties.
 It is connected and
 There are no cycles in the graph.
[Figure: two example tree graphs.]
Application of Graphs
 Applications:
 In project planning by means of a directed graph, also
called an activity network.
 Finding the shortest path in a directed or undirected
graph.
Graph Representation
 A graph is a mathematical structure that finds application
in many areas of interest in which problems need to be
solved using computers.
 This mathematical structure must be represented by some
kind of data structure. Two such representations are
commonly used. They are:
 Adjacency matrix and
 Adjacency list representation
Adjacency Matrix
[Figure: the undirected graph on the vertices 1-5 from the earlier example.]
 The adjacency matrix A for a graph G =
(V, E) with n vertices is an n x n
matrix of bits, such that
 Aij = 1 if there is an edge
from vi to vj, and
 Aij = 0 if there is no such
edge.
Vertex 1 2 3 4 5
1 0 1 1 0 1
2 1 0 1 0 0
3 1 1 0 1 1
4 0 0 1 0 1
5 1 0 1 1 0
The adjacency matrix for an undirected graph is
symmetric, as the lower and upper triangles are the same.
Also, all the diagonal elements are zero.
[Figure: a directed graph on the vertices 1-7, whose adjacency
matrix is shown below.]
Vertex 1 2 3 4 5 6 7
1 0 1 1 1 0 0 0
2 0 0 0 0 0 0 0
3 0 0 0 0 1 0 0
4 0 0 0 0 0 1 0
5 0 0 0 1 0 0 1
6 0 0 0 0 0 0 0
7 0 0 0 0 0 1 0
Cont'd …
 The total number of 1's accounts for the number of edges
in the digraph.
 The number of 1's in each row gives the out-degree of the
corresponding vertex.
 The number of 1's in each column gives the in-degree
of the corresponding vertex.
Disadvantages of the adjacency matrix:
 It takes O(n^2) space to represent a graph with n
vertices, even if the graph has far fewer edges.
 Many operations take O(n^2) time, since the whole matrix
may have to be examined.
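As a sketch of how this representation is built and used (the 0-based vertex numbering and function names are assumptions for illustration, not from the slides):

#include <vector>
#include <utility>

// Build an n x n adjacency matrix from a directed edge list.
std::vector<std::vector<int>> adjacencyMatrix(
        int n, const std::vector<std::pair<int,int>>& edges) {
    std::vector<std::vector<int>> A(n, std::vector<int>(n, 0));
    for (const auto& e : edges)
        A[e.first][e.second] = 1;        // edge from e.first to e.second
    return A;
}

// Out-degree of v = number of 1's in row v.
int outDegree(const std::vector<std::vector<int>>& A, int v) {
    int d = 0;
    for (int x : A[v]) d += x;
    return d;
}

// In-degree of v = number of 1's in column v.
int inDegree(const std::vector<std::vector<int>>& A, int v) {
    int d = 0;
    for (const auto& row : A) d += row[v];
    return d;
}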
Adjacency list representation
 Specify all vertices adjacent to each vertex of the
graph.
 It represents the graph using linear space: for
each vertex, we keep a list of all adjacent vertices.
The implementation is sometimes called a star
representation, or it can be built with linked lists.
 E.g.
 Star representation:
[Figure: an undirected graph on the vertices a-g.]
a: c d f
b: d e
c: a f
d: a b e f
e: b d
f: a c d
g: (no adjacent vertices)
Cont'd …
 In the adjacency list representation, we store the graph as a
linked-list structure.
E.g. representing the above graph using adjacency lists:
[Figure: for each vertex a-g, a linked list of its adjacent vertices,
e.g. a -> c -> d -> f, b -> d -> e, and so on.]
Cont'd …
 The adjacency list representation needs a list of all of the graph's
nodes and, for each node, a linked list of its adjacent
nodes.
 Note that adjacent vertices may appear in an adjacency
list in arbitrary order. Also, an arrow from v2 to v3 in the
list linked to v1 does not mean that v2 and v3 are adjacent.
 In general we use two basic data structures to represent the
graph: the vertex list and the edge list.
 Therefore, we can use the following type to store a vertex:

struct edgeTag;                  // the edge type, defined below

struct vertexTag {
    DataType element;            // the data stored at this vertex (DataType is a placeholder)
    int visited;                 // flag used during traversal
    struct edgeTag *edges;       // linked list of edges leaving this vertex
    struct vertexTag *next;      // next vertex in the vertex list
};
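The slides do not show the matching edge type; a plausible companion declaration, written here only so the vertex structure above is complete, might be:

// Assumed companion type (not given on the slides): one edge record
// per adjacent vertex, linked into the owning vertex's edge list.
struct edgeTag {
    struct vertexTag *destination;   // vertex this edge leads to
    struct edgeTag   *next;          // next edge of the same vertex
};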
Graph Traversal
 A graph traversal means visiting all the nodes of the graph
exactly once. Two graph traversal methods are commonly
used. These are,
 Depth First Search (DFS)
 Breadth First Search (BFS)
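As a preview of the reading assignment that follows, a minimal recursive depth-first search over an adjacency-list graph might look like this sketch (the vector-of-vectors layout and the names are assumptions, not the struct-based representation above):

#include <vector>

// Recursive DFS: visit v, then recurse into each unvisited neighbour,
// so every node reachable from the start is visited exactly once.
void dfs(const std::vector<std::vector<int>>& adj,
         std::vector<bool>& visited, int v) {
    visited[v] = true;
    for (int w : adj[v])
        if (!visited[w])
            dfs(adj, visited, w);
}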
Reading Assignment
 About graph traversal