Algorithm Design and Analysis
Sayed Chhattan Shah
Associate Professor of Computer Science
Mobile Grid and Cloud Computing Lab
Department of Information Communication Engineering
Hankuk University of Foreign Studies Korea
www.mgclab.com
Acknowledgements
▪ The material in these slides is taken from different sources including:
o Algorithm Design, First Edition
o Introduction to Algorithms, Third Edition
o The Stony Brook Algorithm Repository
o Algorithms and Complexity Course by Atri Rudra
o NPTEL Design and Analysis of Algorithms by Madhavan Mukund
o AlgoDaily
Course Information
▪ eClass
Introduction to Algorithms
Check whether a given number N is positive or negative
Read N
IF (N==0) Print N is neither positive nor negative
IF (N>0) Print N is a positive number
IF (N<0) Print N is a negative number
Sorting by Colors
Sorting cards with colors on them into piles of the same color
1) Pick up all of the cards.
2) Pick a card from your hand and look at the color of the card.
3) If there is already a pile of cards of that color, put this card on that pile.
4) If there is no pile of cards of that color, make a new pile of just this card color.
5) If there is still a card in your hand, go back to the second step.
6) If there is not still a card in your hand, then the cards are sorted.
▪ Each of these examples is an algorithm, a set of
instructions for solving a problem
▪ Algorithm is named after the 9th century Persian
mathematician Al-Khwarizmi
▪ Algorithms are especially important to computers because
computers are general purpose machines for solving problems
▪ In order for a computer to be useful, we must give it a problem
to solve and a technique for solving the problem
▪ Through the use of algorithms, we can make computers
intelligent by programming them with various algorithms to
solve problems
▪ Algorithms typically have the following characteristics
o Name
o Description
o Input Algorithm receives input
o Output Produces output
o Generality The algorithm applies to a set of inputs
o Order of Operations Exact order of steps to perform
o Precision The steps are precisely stated
o Finiteness The algorithm terminates
o Correctness The output produced by algorithm is correct
▪ The algorithm receives three values a, b, and c as an input and
produces a value large as an output
▪ Steps are stated sufficiently precisely
▪ The algorithm terminates after finitely many steps, correctly answering the
given question
▪ The algorithm is general
o It can find a largest value of any three numbers
PROBLEM Given a list of positive numbers, return the largest number on the list
INPUTS A list L of positive numbers
OUTPUTS A number max, which will be the largest number on the list
ALGORITHM
1) Set max to 0
2) For each number x in the list L, compare it to max. If x is larger, set max to x
3) Output max
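The three steps above can be sketched in C (a minimal sketch; the function name `largest` and its signature are my own). Note that initializing `max` to 0 relies on the stated assumption that the list contains only positive numbers.

```c
#include <stddef.h>

/* Return the largest value in a list L of n positive numbers.
   Mirrors the algorithm above: start max at 0, raise it on each larger x. */
int largest(const int L[], size_t n) {
    int max = 0;                 /* step 1: set max to 0 */
    for (size_t i = 0; i < n; i++)
        if (L[i] > max)          /* step 2: if x is larger, set max to x */
            max = L[i];
    return max;                  /* step 3: output max */
}
```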
An Example Algorithm
How to specify the operations in algorithm?
▪ Write the algorithm using plain English
o Plain English is too wordy and ambiguous
▪ Often an English sentence can be interpreted in many different ways
▪ Write the algorithm using programming languages
o These languages are collections of basic operations that a computer understands
o Without knowledge of the programming language, it would be difficult for you to know what this
algorithm does
Sums the numbers from 1 to 10 and
displays the answer on the computer
screen
▪ Combine the familiarity of plain English with the structure and order of
programming languages
▪ A good compromise is structured English
An Example Algorithm
Algorithms for solving the problem of sorting
A problem of sorting a list of numbers
Input
Output
https://courses.cs.vt.edu/~csonline/Algorithms/Lessons/SimpleSort/index.html
Computer must perform six comparisons
(7 < 8), (7 > 5), (5 > 2), (2 < 4), (2 < 6), and finally (2 < 3)
Simple sort algorithm
Six more comparisons are required to determine that 3 is smallest
(7 < 8), (7 > 5), (5 < MAX), (5 > 4), (4 < 6), and finally (4 > 3)
The Selection Sort
▪ The array is virtually split into a sorted and an unsorted part.
▪ The smallest element is selected from the unsorted array and swapped
with the leftmost element, and that element becomes a part of the
sorted array.
▪ Animation
The Selection Sort
array[] = 64 25 12 22 11
Find the minimum element in array[0...4] and place it at beginning
11 25 12 22 64
Find the minimum element in array[1...4] and place it at beginning of array[1...4]
11 12 25 22 64
Find the minimum element in array[2...4] and place it at beginning of array[2...4]
11 12 22 25 64
Find the minimum element in array[3...4] and place it at beginning of array[3...4]
11 12 22 25 64
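The trace above can be implemented as a short C routine (a sketch; the function name and signature are my own):

```c
#include <stddef.h>

/* Selection sort: grow a sorted prefix by repeatedly swapping the
   minimum of the unsorted suffix a[i..n-1] into position i. */
void selection_sort(int a[], size_t n) {
    for (size_t i = 0; i + 1 < n; i++) {
        size_t min = i;                 /* index of smallest in a[i..n-1] */
        for (size_t j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;  /* swap into place */
    }
}
```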
https://en.wikipedia.org/wiki/Selection_sort
The Insertion Sort
▪ Insertion sort is a simple sorting algorithm that works similarly to the way
you sort playing cards in your hands
The Insertion Sort
▪ The array is virtually split into a sorted and an unsorted part. Values
from the unsorted part are picked and placed at the correct position in
the sorted part
▪ Animation
The Insertion Sort
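A minimal C sketch of insertion sort (the function name and signature are my own): the sorted prefix grows one element at a time, with larger sorted values shifted right to make room for the picked value.

```c
#include <stddef.h>

/* Insertion sort: a[0..i-1] is the sorted part; a[i] is the next
   "card" picked from the unsorted part and placed at its position. */
void insertion_sort(int a[], size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];                  /* value picked from unsorted part */
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {  /* shift larger values right */
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;                      /* drop key into its position */
    }
}
```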
Bubble Sort
▪ Bubble sort works by repeatedly comparing each pair of adjacent
elements and swapping them if they are in the wrong order
▪ Animation
Bubble Sort
https://en.wikipedia.org/wiki/Bubble_sort
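The repeated compare-and-swap of adjacent pairs can be sketched in C as follows (names are my own); after each pass the largest remaining element has "bubbled" to the end.

```c
#include <stddef.h>

/* Bubble sort: repeatedly compare adjacent pairs and swap those that
   are out of order; each pass shrinks the unsorted region by one. */
void bubble_sort(int a[], size_t n) {
    for (size_t pass = 0; n > 1 && pass < n - 1; pass++)
        for (size_t i = 0; i + 1 < n - pass; i++)
            if (a[i] > a[i + 1]) {       /* adjacent pair out of order */
                int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
            }
}
```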
Linear and binary search algorithms
Linear search algorithm visualization
Binary search algorithm visualization
https://cs50.harvard.edu/
▪ Math Preliminaries in Algorithm Analysis
https://tutorial.math.lamar.edu/Classes/CalcI/SummationNotation.aspx
Algorithm Analysis
▪ Computational complexity or simply complexity of an algorithm is the
amount of resources required to run it
▪ Time complexity
▪ A measure of the amount of time required to execute an algorithm
▪ Space complexity
▪ Amount of memory space required to execute an algorithm
https://en.wikipedia.org/wiki/Computational_complexity
Algorithm Analysis
▪ The analysis of algorithms is the process of finding the computational
complexity of algorithms
▪ The amount of time, storage, or other resources needed to execute them
https://en.wikipedia.org/wiki/Analysis_of_algorithms
Algorithm Analysis
▪ An algorithm that is space-efficient uses the least amount of
computer memory to solve the problem
▪ An algorithm that is time-efficient uses the least amount of time
to solve the problem
▪ How do we compare the time efficiency of two algorithms that solve
the same problem?
Algorithm Analysis
▪ How do we compare the time efficiency of two algorithms that solve
the same problem?
▪ Manual
Algorithm Analysis
▪ How do we compare the time efficiency of two algorithms that solve
the same problem?
▪ Experimental Approach
o Implement algorithms in a programming language, and run them to
compare their time requirements
Algorithm Analysis
▪ Experimental Approach
o Comparing the programs instead of algorithms has difficulties because the
results would depend on
▪ How are the algorithms coded?
• We should not compare implementations, because they are sensitive to
programming style that may cloud the issue of which algorithm is inherently
more efficient
▪ What computer should we use?
• We should compare the efficiency of the algorithms independently of a
particular computer
Algorithm Analysis
▪ When we analyze algorithms, we should employ mathematical
techniques that analyze algorithms independently of specific
implementations and computers
▪ To analyze algorithms
o Count the number of primitive operations
▪ Evaluating an expression (x + y)
▪ Assigning a value to a variable (x ←5)
▪ Comparing two numbers (x < y)
▪ Returning from a method
o Express the efficiency of algorithms using growth functions
Algorithm Analysis
Each operation in an algorithm has a cost
Each operation takes a certain amount of time but it is constant
count = count + 1
A sequence of operations
count = count + 1 Cost c1
sum = sum + count Cost c2
Total Cost = c1 + c2
Cost of basic operations
https://algs4.cs.princeton.edu/lectures
Most primitive operations take constant time
If Statement
Cost Times
if (n < 0) c1 1
absval = -n c2 1
else
absval = n c3 1
Total Cost <= c1 + max(c2, c3)
Simple Loop
Cost Times
i = 1 c1 1
sum = 0 c2 1
while (i <= n) { c3 n+1
i = i + 1 c4 n
sum = sum + i c5 n
}
Total Cost = c1 + c2 + (n + 1)*c3 + n*c4 + n*c5
Nested Loop
Cost Times
i=1 c1 1
sum = 0 c2 1
while (i <= n) { c3 n+1
j=1 c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i c6 n*n
j = j + 1 c7 n*n
}
i = i +1 c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 +
n*(n+1)*c5 +n*n*c6 + n*n*c7 + n*c8
for (int i = 1; i <= n; i++) {
perform 100 operations A
for (int j = 1; j <= n; j++) {
perform 2 operations B
}
}
for (int i = 1; i <= n; i++) {
perform 100 operations A
for (int j = 1; j <= n; j++) {
perform 2 operations B
}
}
Total Operations = 100n of A + 2n² of B
General Rules for Estimation
▪ Consecutive Statements Just add the running times of those consecutive
statements
▪ If Else Never more than the running time of the test plus the larger of running
times of S1 and S2
▪ Loops The running time of a loop is at most the running time of the
statements inside of that loop times the number of iterations
▪ Nested Loops Running time of a nested loop containing a statement in the
innermost loop is the running time of the statement multiplied by the product of
the sizes of all the loops
Linear Search
int linearSearch(int array[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
        if (array[i] == x)
            return i;
    return -1;   /* not found; returning 0 would be mistaken for index 0 */
}
▪ Worst Case Analysis The maximum amount of time that an algorithm
requires to solve a problem of size n
o This gives an upper bound on the time complexity of an algorithm
o Normally, we try to find the worst-case behavior of an algorithm
▪ Best Case Analysis The minimum amount of time that an algorithm
requires to solve a problem of size n
o The best-case behavior of an algorithm is NOT so useful
Asymptotic Analysis
▪ Average Case Analysis The average amount of time that an algorithm
requires to solve a problem of size n
o Sometimes, it is difficult to find the average-case behavior of an algorithm
o We have to look at all possible data organizations of a given size n and
the distribution probabilities of these organizations
o Worst-case analysis is more common than average-case analysis
Algorithmic Runtime
Linear Search
int linearSearch(int array[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
        if (array[i] == x)
            return i;
    return -1;   /* not found */
}
Worst case performance x does not exist
Best case performance x matches with the first element
Linear Search
int linearSearch(int array[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
        if (array[i] == x)
            return i;
    return -1;   /* not found */
}
Worst case performance x does not exist T(n) = n
Best case performance x matches with the first element T(n) = 1
▪ Growth rate of algorithm
o How fast the time of an algorithm grows as a function of problem size
o Problem or input size depends on the particular problem:
▪ For a search problem, the problem size is the number of elements in the
search space
▪ For a sorting problem, the problem size is the number of elements in the
given list
Asymptotic Analysis
Asymptotic Analysis
▪ Asymptotic analysis is an analysis of algorithms that focuses on
o Analyzing problems of large input size
o Considering only the leading term of the formula
o Ignoring the coefficient of the leading term
Example
T(n) = 10n³ + n² + 40n + 800
If n = 1,000, then T(n) = 10,001,040,800
The error is only about 0.01% if we drop all but the dominating n³ term
Asymptotic Analysis
Basic function that often appear in algorithm analysis
1. Constant function f(n) = c
2. Linear function f(n) = n
3. Quadratic function f(n) = n²
4. Cubic function f(n) = n³
5. Log function f(n) = log n
6. Log linear function f(n) = n log n
7. Exponential function f(n) = bⁿ
Constant Function
▪ An algorithm is said to run in constant time if it requires the same amount
of time regardless of the input size
▪ Array: accessing any element
Linear Function
▪ An algorithm is said to run in linear time if its execution time is directly
proportional to the input size
▪ Time grows linearly as input size increases
Examples
Linear search, traversing, find minimum or maximum
Computing the maximum
max  a1
for i = 2 to n {
if (ai > max)
max  ai
}
Quadratic function
▪ An algorithm is said to run in quadratic time if its execution time is
proportional to the square of the input size
▪ This function arises in algorithm analysis any time we use nested loops
▪ The outer loop performs primitive operations in linear time; for each iteration, the inner
loop also perform primitive operations in linear time
▪ Many simple algorithms have quadratic time complexity
Logarithmic function
▪ An algorithm is said to run in logarithmic time if its execution time is
proportional to the logarithm of the input size
▪ Example
▪ Binary Search
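Binary search can be sketched in C as follows (a sketch; the name and signature mirror the deck's `linearSearch` style but are my own). Each comparison halves the remaining search range, giving roughly log₂(n) steps.

```c
/* Binary search on a sorted array: halve the search range each step.
   Returns the index of x, or -1 if x is absent. */
int binary_search(const int a[], int n, int x) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo+hi)/2 */
        if (a[mid] == x) return mid;
        if (a[mid] < x)  lo = mid + 1;  /* discard left half */
        else             hi = mid - 1;  /* discard right half */
    }
    return -1;
}
```

The array must already be sorted; on unsorted data the halving argument breaks down and the result is meaningless.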
Exponential Function
▪ For a given variable n, the function always returns bⁿ, where b is the base
and n is the power
▪ This function is also common in algorithm analysis
▪ Growth rate of exponential function is faster than all other functions
Which algorithm is the most efficient?
Assume that we have a computer which can operate at a speed of 1 million instructions per second
https://cathyatseneca.gitbooks.io/data-structures-and-algorithms/content/analysis/notations.html
Asymptotic notations are mathematical tools to represent the time
complexity of algorithms for asymptotic analysis
Asymptotic Notations
Informally
OR
Asymptotic Notations
Let T(n) be a function—the worst-case running time of a certain algorithm on
an input of size n.
Given another function f(n), we say that T(n) is O(f (n)) if, for sufficiently large n,
the function T(n) is bounded above by a constant multiple of f (n).
Asymptotic Upper Bounds
Note that O(·) expresses only an upper bound, not the exact growth rate
of the function
Asymptotic Lower Bounds
Asymptotically Tight Bounds
Big O Examples
https://www3.cs.stonybrook.edu/~skiena
Big Omega Examples
Big Theta Examples
Big-O: Functions Ranking
▪ O(1) constant time
▪ O(log n) log time
▪ O(n) linear time
▪ O(n log n) log linear time
▪ O(n²) quadratic time
▪ O(n³) cubic time
▪ O(2ⁿ) exponential time
BETTER
WORSE
Big O Complexity Chart https://www.bigocheatsheet.com
(1) for (i=1; i<=n; i++)
(2) for (j=1; j<=n; j++)
(3) print(i,j)
(1) for (i=1; i<=n; i++)
(2) for (j=1; j<=n; j++)
(3) print(i,j)
(1) for (i=1; i<=n; i++)
(2) for (j=1; j<=i; j++)
(3) print(i,j)
(1) for (i=1; i<=n; i++)
(2) for (j=1; j<=i; j++)
(3) print(i,j)
Line (1) is obviously executed n+1 times
Line (2) loop j depends not on n but on i
Frequency count of line 2 in summation form
Frequency count of line 3 in summation form
(1) for (i=1; i<=n; i++)
(2) for (j=1; j<=i; j++)
(3) print(i,j)
(1) for (i=1; i<=n; i++)
(2) for (j=1; j<=i; j++)
(3) print(i,j)
Total frequency count in terms of n
(1) for (i=1; i<=100; i++)
(2) for (j=1; j<=50; j++)
(3) print(i,j)
(1) for (i=1; i<=100; i++)
(2) for (j=1; j<=50; j++)
(3) print(i,j)
Line (1): 100+1 times
Line (2): loop j 50+1 times, and repeated 100 times by the outer loop i
(50+1)(100) = 5100
Line (3): 1 time, repeated 50 times by loop j and 100 times by loop i
(1)(50)(100) = 5000
f(n) = 10201
O(1)
https://en.wikipedia.org/wiki/Selection_sort
Time Complexity of Selection Sort
▪ To find the minimum element from the array of n elements, n−1 comparisons
are required.
▪ After moving the minimum element to its proper position, the size of the
unsorted array reduces to n−1, and then n−2 comparisons are required to find
the minimum in the unsorted array.
Let tj denote the number of times the while loop test in line 5 is
executed for that value of j
The worst case when the array is in reverse sorted order
Introduction to Basic Data Structures
Arrays
▪ Array is a linear data structure consisting of a collection of
elements
▪ Elements are accessed via index
https://lucasmagnum.medium.com
In the worst case it takes O(n) time
Add an element at index i
In the worst case it takes O(n) time
Remove the element at index i
Arrays
▪ Analysis
o Add and remove run in O(N)
o Index based lookup O(1)
o Lookup O(N)
Linked List
▪ A linked list is a linear data structure in which nodes are
arranged in a linear order
o The order in a linked list is determined by a pointer in each node
▪ Each node stores
o Data
o Link to the next node
The last node is linked to a terminator used to signify the end of the list
Linked List
▪ Search for a node in the List
▪ The worst-case time complexity for retrieving a node from
anywhere in the list is O(n)
https://www.educative.io/edpresso/what-is-a-singly-linked-list
Linked List
▪ Add a node to the List
▪ The worst case Time Complexity
o Front of the list O(1)
o End of the list O(n)
o Anywhere in the list O(n)
https://www.educative.io/edpresso/what-is-a-singly-linked-list
Linked List
▪ Remove a node from the list
▪ The worst case Time Complexity
o Front of the list O(1)
o End of the list O(n)
o Anywhere in the list O(n)
https://www.educative.io/edpresso/what-is-a-singly-linked-list
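The operations above can be sketched in C (a sketch; the node layout follows the slides, but the function names are my own). Inserting at the front only rewires the head, hence O(1); searching must walk the links, hence O(n) in the worst case.

```c
#include <stdlib.h>

/* Singly linked list node: data plus a link to the next node;
   the last node's link is NULL, the list terminator. */
struct node { int data; struct node *next; };

/* O(1): insert at the front, returning the new head.
   (Error handling for malloc failure is omitted in this sketch.) */
struct node *push_front(struct node *head, int data) {
    struct node *n = malloc(sizeof *n);
    n->data = data;
    n->next = head;
    return n;
}

/* O(n) worst case: follow links until the key is found or the list ends. */
struct node *find(struct node *head, int key) {
    for (struct node *p = head; p != NULL; p = p->next)
        if (p->data == key)
            return p;
    return NULL;
}
```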
Stacks
▪ A stack is a linear data structure
▪ Insertion and deletion of items takes place at one end called top
of the stack
o INSERT operation on a stack is often called PUSH
o DELETE operation is often called POP
▪ Insertion and deletion follow LIFO principle
o LIFO Last In First Out
https://visualgo.net
https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
Stacks
▪ An array implementation of a stack S
Each of the three stack operations takes O(1) time
The space used is O(n)
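An array-backed stack can be sketched as follows (a sketch; the fixed capacity, struct layout, and return-code convention are my own choices). PUSH and POP each touch a single array cell, which is why both are O(1).

```c
/* Fixed-capacity array stack; top indexes the next free slot,
   so top == 0 means the stack is empty. */
#define CAP 100
struct stack { int a[CAP]; int top; };

/* PUSH: O(1). Returns 0 on overflow, 1 on success. */
int push(struct stack *s, int x) {
    if (s->top == CAP) return 0;
    s->a[s->top++] = x;
    return 1;
}

/* POP: O(1). Returns 0 on underflow; otherwise stores the popped
   value (the last one pushed, per LIFO) into *x. */
int pop(struct stack *s, int *x) {
    if (s->top == 0) return 0;
    *x = s->a[--s->top];
    return 1;
}
```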
Stacks
▪ Applications
o Page visited history in a Web browser
o Undo sequence in a text editor
https://visualgo.net
Stacks
http://www3.nccu.edu.tw/~yuf
Queue
▪ Queue also stores objects
▪ INSERT operation on a queue is called ENQUEUE
▪ DELETE operation is called DEQUEUE
▪ It has two ends
o Elements are inserted at one end
o Elements are deleted from the other end
▪ Insert and delete operations follow FIFO principle
o FIFO First In First Out
A queue implemented using an array Q [1..12]
The queue has 5 elements, in locations Q [7..11]
The configuration of the queue after the calls
ENQUEUE(Q, 17), ENQUEUE(Q, 3) and ENQUEUE(Q, 5)
The configuration of the queue after the call DEQUEUE(Q)
returns the key value 15 formerly at the head of the queue.
The new head has key 6.
The pseudocode assumes that n = Q.length
Each operation
takes O(1) time
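A circular-array queue in the spirit of the Q[1..12] example can be sketched in C (a sketch; the struct layout and names are my own, and a capacity of 12 is chosen only to mirror the example). Indices wrap modulo the array length, so both operations stay O(1).

```c
/* Circular-array queue: head is the next element to DEQUEUE, tail the
   next free slot; both indices wrap around the array. */
#define QCAP 12
struct queue { int a[QCAP]; int head, tail, size; };

/* ENQUEUE: O(1). Returns 0 if the queue is full. */
int enqueue(struct queue *q, int x) {
    if (q->size == QCAP) return 0;
    q->a[q->tail] = x;
    q->tail = (q->tail + 1) % QCAP;   /* wrap around */
    q->size++;
    return 1;
}

/* DEQUEUE: O(1). Returns 0 if empty; otherwise stores the oldest
   element (FIFO) into *x. */
int dequeue(struct queue *q, int *x) {
    if (q->size == 0) return 0;
    *x = q->a[q->head];
    q->head = (q->head + 1) % QCAP;
    q->size--;
    return 1;
}
```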
Queue
▪ Applications
o Round Robin Schedulers
https://visualgo.net
Algorithm Design and Analysis
 Graphs were being studied long before computers were invented
 Graphs describe
o Road maps
o Airline routes
o Course prerequisites
 Graph algorithms run
o Large communication networks
o The software that makes the Internet function
o Programs to determine optimal placement of components on a silicon chip
Graph Theory
 A graph consists of a set of vertices or nodes and a set of edges or relations
between pairs of vertices
 Edges represent paths or connections between vertices
Graph Theory
 Undirected graph G = (V, E)
o V = nodes
o E = edges between pairs of nodes
o Graph size parameters: n = |V| and m = |E|
Graph Theory
V = { 1, 2, 3, 4, 5, 6, 7, 8 }
E = { 1-2, 1-3, 2-3, 2-4, 2-5, 3-5, 3-7, 3-8, 4-5, 5-6, 7-8 }
n = 8
m = 11
 The edges of a graph are directed if the existence of an edge from A to B
does not necessarily guarantee that there is a path in both directions
 A graph with directed edges is called a directed graph or digraph
 A graph with undirected edges is an undirected graph or simply a graph
Graph Theory
 The edges in a graph may have associated values known as their weights
 A graph with weighted edges is known as a weighted graph
Graph Theory
Graph Theory
 A path in an undirected graph G = (V, E) is a sequence P of nodes v1,
v2, …, vk-1, vk with the property that each consecutive pair vi, vi+1 is
joined by an edge in E
 A path is simple if all nodes are distinct
Two paths from U to V
Graph Theory
 A cycle is a path v1, v2, …, vk-1, vk in which v1 = vk, k > 2, and the first
k-1 nodes are all distinct.
cycle C = 1-2-4-5-3-1
A cycle is a path that begins and ends on
the same vertex
 Degree
o Number of edges incident on a node
The degree of 5 is 3
Graph Theory
Degree = in-degree + out-degree
in-degree = number of edges entering
out-degree = number of edges leaving
out-degree(1)=2
in-degree(1)=0
out-degree(2)=2
in-degree(2)=2
out-degree(3)=1
in-degree(3)=4
Graph Theory
Graph Theory
 An undirected graph is connected if for every pair of nodes u and v,
there is a path between u and v
 Subgraph
 Vertex and edge sets are subsets of those of G
 A supergraph of a graph G is a graph that contains G as a subgraph
Graph Theory
 Complete graph
 Every pair of distinct vertices is connected by a unique edge
Graph Theory
 Planar graphs can be drawn on a plane such that no two edges intersect
Graph Theory
 Non-linear data structure
o a data structure in which data items are not arranged in a sequence
Graph Theory
Computer Networks
Transportation Networks
Internet
Graphs and Networks
Graph
(Network)
Vertexes
(Nodes)
Edges
(Arcs) Flow
Communications
Telephones exchanges,
computers, satellites
Cables, fiber optics,
microwave relays
Voice, video,
packets
Circuits
Gates
Registers
Processors
Wires Current
Mechanical Joints Rods, beams, springs Heat, energy
Hydraulic
Reservoirs, pumping
stations, lakes Pipelines Fluid, oil
Financial Stocks, currency Transactions Money
Transportation
Airports, rail yards, street
intersections
Highways, railbeds,
airway routes
Freight, vehicles,
passengers
Trees
 An undirected graph is a tree if it is connected and does not contain a cycle
 Theorem
o Let G be an undirected graph on n nodes. Any two of the following statements
imply the third
 G is connected
 G does not contain a cycle
 G has n-1 edges
Theorem
An undirected graph is a tree if and only if there is a unique simple
path between any two of its vertices
Pizza Shop Tree
Owner Jake
Manager Brad Chef Carol
Waitress Waiter Cook Helper
Joyce Chris Max Len
File System
Forest
A forest is a graph that contains no simple circuits and need not be
connected; each of its connected components is a tree
Forest
Tree Forest
A graph
Not a tree or forest
Rooted Trees
a tree the same tree, rooted at 1
v
parent of v
child of v
root r
A rooted tree is a tree in which one vertex has been designated
as the root and every edge is directed away from the root
a
b
c
d
e f
g
a
b
c
d
e
f
g
root node
a
b c
d e f g
h i
parent of g
siblings
leaf
internal vertex
A vertex that has
children is called
an internal vertex
How many internal vertices?
Owner Jake
Manager Brad Chef Carol
Waitress Waiter Cook Helper
Joyce Chris Max Len
a
b c
d e f g
h i ancestors of h and i
a
b c
d e f g
h i
subtree with b as its root
subtree with c as its root
a is the parent of b, b is the child of a,
c, d, e are siblings,
a, b, d are ancestors of f
c, d, e, f, g are descendants of b
c, e, f, g are leaves of the tree
a, b, d are internal vertices of the tree
(at least one child)
subtree with d as its root
a
b
f
c e
d
g
f
d
g
Trees
 A rooted tree is called an m-ary tree if every internal vertex has no more than m
children
 The tree is called a full m-ary tree if every internal vertex has exactly m
children
 An m-ary tree with m=2 is called a binary tree
Full binary tree Full 3-ary tree Full 5-ary tree Not full 3-ary tree
A tree with n vertices has n-1 edges
Trees
The level of a vertex v in a rooted tree is the length of the unique path from
the root to this vertex
The level of the root is defined to be zero
The height of a rooted tree is the maximum of the levels of vertices
Height = 4
Level
0
1
2
3
4
Trees
Binary Tree
 Every vertex in a binary tree has at most 2 children
 Each child is designated as either left child or right child
 A full binary tree is a binary tree in which each vertex has either 2
children or zero children
a
e
b
c
f
Binary Tree
d
a
e
b
c
f
b left child of a
f right child of c
right subtree of a
d
left subtree of a
Binary Search Tree
 Data are associated with vertices
 The data are arranged so that, for each vertex v in T, each data item
in the left subtree of v is less than the data item in v, and each data
item in the right subtree of v is greater than the data item in v
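The ordering property above makes search a walk down one root-to-leaf path: smaller keys go left, larger keys go right. A minimal C sketch (node layout and names are my own):

```c
#include <stddef.h>

/* Binary search tree node. */
struct tnode { int key; struct tnode *left, *right; };

/* BST search: at each vertex go left for smaller keys, right for
   larger, so the number of steps is bounded by the tree's height.
   Returns the matching node, or NULL if the key is absent. */
struct tnode *bst_search(struct tnode *root, int key) {
    while (root != NULL && root->key != key)
        root = (key < root->key) ? root->left : root->right;
    return root;
}
```

On a balanced tree the height is about log₂(n), so search is logarithmic; a degenerate (chain-shaped) tree degrades to O(n).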
Graph Representation: Adjacency Matrix
 An n × n matrix where A[u, v] = 1 if the graph contains the edge (u, v) and
0 otherwise
o Space proportional to n²
o Presence of a particular edge can be checked in Θ(1) time
o Identifying all edges takes Θ(n²) time
1 2 3 4 5 6 7 8
1 0 1 1 0 0 0 0 0
2 1 0 1 1 1 0 0 0
3 1 1 0 0 1 0 1 1
4 0 1 0 0 1 0 0 0
5 0 1 1 1 0 1 0 0
6 0 0 0 0 1 0 0 0
7 0 0 1 0 0 0 0 1
8 0 0 1 0 0 0 1 0
Graph Representation: Adjacency List
 Node indexed array of lists
o Two representations of each edge
o Space proportional to m + n
o Checking if (u, v) is an edge takes O(degree(u)) time
o Identifying all edges takes (m + n) time
degree = number of neighbors of u
1: 2 3
2: 1 3 4 5
3: 1 2 5 7 8
4: 2 5
5: 2 3 4 6
6: 5
7: 3 8
8: 3 7
Comparison of the Two Representations
 Adjacency Matrix
o Space proportional to n²
o Checking if (u, v) is an edge takes Θ(1) time
o Identifying all edges takes Θ(n²) time
o Suitable for densely connected graphs
 Adjacency list
o Space proportional to m + n
o Checking if (u, v) is an edge takes O(degree(u)) time
o Identifying all edges takes Θ(m + n) time
o Needs linked lists for programming
o Suitable for large-scale, sparsely connected graphs
degree = number of neighbors of u
 Operations on a Graph
 Traversing a graph
 Searching a graph
 Adding a node
 Adding an edge
 Deleting a node
 Deleting an edge
 Updating weight of an edge
 Determining whether there is an edge between two nodes
 Find all neighbors of a node
 Union
 Intersection
Graph Theory
 Traversals of graphs
o Most graph algorithms involve visiting each vertex in a systematic order
o The two most common traversal algorithms
 Breadth-first search
 Depth-first search
Graph Theory
BFS and Shortest Path Problem
 Given any source vertex s BFS visits the other vertices at increasing
distances away from s
o Distance is number of edges on a path from s
[Graph figure: vertices 0–9]
Consider s = vertex 1
Nodes at distance 1 = 2, 3, 7, 9
Example
Nodes at distance 2 = 8, 6, 5, 4
Nodes at distance 3 = 0
[Graph figure with adjacency list; vertices 0–9]
Visited Table: all F
Q = { }
Initialize visited
table (all False)
Initialize Q to be empty
Visited Table: 2 = T; all others F
Q = { 2 }
Flag that 2 has
been visited
Place source 2 on the queue
Visited Table: 1, 2, 4, 8 = T; others F
Q = {2} → { 8, 1, 4 }
Mark neighbors
1, 4 and 8 as
visited
Dequeue 2
Place all unvisited neighbors of 2 on the queue
Visited Table: 0, 1, 2, 4, 8, 9 = T; others F
Q = { 8, 1, 4 } → { 1, 4, 0, 9 }
Mark new visited
Neighbors 0, 9
Dequeue 8
Place all unvisited neighbors of 8 on the queue
Notice that 2 is not placed on the queue again, it has been visited
Visited Table: 0, 1, 2, 3, 4, 7, 8, 9 = T; 5, 6 = F
Q = { 1, 4, 0, 9 } → { 4, 0, 9, 3, 7 }
Mark new visited
Neighbors 3, 7
Dequeue 1
Place all unvisited neighbors of 1 on the queue
Only nodes 3 and 7 haven’t been visited yet.
Q = { 4, 0, 9, 3, 7 } → { 0, 9, 3, 7 }
Dequeue 4
4 has no unvisited neighbors
Q = { 0, 9, 3, 7 } → { 9, 3, 7 }
Dequeue 0
0 has no unvisited neighbors
Q = { 9, 3, 7 } → { 3, 7 }
Dequeue 9
9 has no unvisited neighbors
Visited Table: all T except 6 = F
Q = { 3, 7 } → { 7, 5 }
Dequeue 3
place neighbor 5 on the queue
Mark new
visited Vertex 5
Visited Table: all T
Q = { 7, 5 } → { 5, 6 }
Dequeue 7
place neighbor 6 on the queue
Mark new visited
Vertex 6
Q = { 5, 6} → { 6 }
Dequeue 5
no unvisited neighbors of 5
Q = { 6 } → { }
Dequeue 6
no unvisited neighbors of 6
Q = { }
STOP
Q is empty
What did we discover?
Look at visited table
There exists a path from source
vertex 2 to all vertices in the graph
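The traversal just traced can be sketched in C over an adjacency matrix (a sketch; the matrix representation, fixed bound, and names are my own choices, and the slides' trace used an adjacency list instead). The `visited` array plays the role of the Visited Table.

```c
#include <string.h>

/* BFS from source s over adjacency matrix g[n][n]: mark s visited,
   enqueue it, then repeatedly dequeue a vertex and enqueue its
   unvisited neighbors, exactly as in the trace above. */
#define MAXV 10
void bfs(int g[MAXV][MAXV], int n, int s, int visited[MAXV]) {
    int queue[MAXV], head = 0, tail = 0;   /* each vertex enqueued at most once */
    memset(visited, 0, n * sizeof visited[0]);
    visited[s] = 1;
    queue[tail++] = s;
    while (head < tail) {
        int u = queue[head++];             /* dequeue */
        for (int v = 0; v < n; v++)
            if (g[u][v] && !visited[v]) {  /* unvisited neighbor */
                visited[v] = 1;
                queue[tail++] = v;         /* enqueue */
            }
    }
}
```

After the call, `visited[v]` is the "flag" the next slides use: it is true exactly when a path exists from s to v.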
Applications of BFS
 What can we do with the BFS code we discussed?
o Is there a path from source s to a vertex v?
o Is an undirected graph connected?
Applications of BFS
 What can we do with the BFS code we discussed?
o Is there a path from source s to a vertex v?
 Check flag[v]
o Is an undirected graph connected?
 Scan array flag[ ]
 If there exists flag[u] = false then …
Example
 Apply BFS algorithm on the following graph.
 Source vertex is 1.
Time Complexity
Each vertex is put on the queue exactly
once, when it is first encountered, so
there are 2|V| queue operations
Over the course of execution,
the inner loop looks at each
edge once in directed
graphs or twice in undirected
graphs, and therefore takes
O(|E|) time.
The overall running time of this
algorithm is linear
O(|V|+|E|)
The for loop over the neighbors of each vertex takes time
proportional to Σv degree(v) = 2|E|
Shortest Path Recording
 BFS we saw only tells us whether a path exists from source s to other
vertices v
o It doesn’t tell us the path
o We need to modify the algorithm to record the path
Record where you came from
Visited Table: 1, 2, 4, 8 = T; others F
Q = {2} → { 8, 1, 4 }
Mark neighbors
as visited and
Record in Pred
that we came from 2
Dequeue 2
Place all unvisited neighbors of 2 on the queue
Pred: Pred[1] = 2, Pred[4] = 2, Pred[8] = 2; all others -
BFS Finished
Visited Table: all T
Q = { }
Pred now can be traced backward
to report the path
Pred: Pred[0]=8, Pred[1]=2, Pred[2]=-, Pred[3]=1, Pred[4]=2,
Pred[5]=3, Pred[6]=7, Pred[7]=1, Pred[8]=2, Pred[9]=8
Report path from s to v:
Path(2-0) ⇒
Path(2-6) ⇒
Path(2-1) ⇒
BFS application: Connected Component
 Find all nodes reachable from starting node s
Connected component containing node 1 = { 1, 2, 3, 4, 5, 6, 7, 8 }
BFS application: Connected Component
 We can re-use the previous BFS to compute the connected components of a
graph G
BFS application: Connected Component
 We can re-use the previous BFS to compute the connected components of a
graph G
A graph with 3 components
BFS_connectedComponents ( G ) {
    // Component number
    i = 1;
    for every vertex v
        flag[v] = false;
    for every vertex v
        if ( flag[v] == false ) {
            print ( "Component " + i++ );
            BFS( v );
        }
}
Depth-First Search
 In a depth-first search,
o start at a vertex,
o visit it,
o choose one adjacent vertex to visit;
o then, choose a vertex adjacent to that vertex to visit,
o and so on until you go no further;
o then back up and see whether a new vertex can be found
Example
Each vertex is in one of three states: unvisited, being visited, or visited
[Figure: a graph on vertices 0-6; the search starts at vertex 0]
1. Mark 0 as being visited. Discovery or visit order: 0
2. Choose an adjacent vertex that is not being visited: 1. Discovery order: 0, 1
3. Recursively choose an adjacent vertex that is not being visited: 3. Discovery order: 0, 1, 3
4. Recursively choose an adjacent vertex that is not being visited: 4. Discovery order: 0, 1, 3, 4
5. There are no vertices adjacent to 4 that are not being visited; mark 4 as visited. Finish order: 4
6. Return from the recursion to 3; all vertices adjacent to 3 are being visited; mark 3 as visited. Finish order: 4, 3
7. Return from the recursion to 1; all vertices adjacent to 1 are being visited; mark 1 as visited. Finish order: 4, 3, 1
8. Return from the recursion to 0; 2 is adjacent to 0 and is not being visited. Discovery order: 0, 1, 3, 4, 2
9. 5 is adjacent to 2 and is not being visited. Discovery order: 0, 1, 3, 4, 2, 5
10. 6 is adjacent to 5 and is not being visited. Discovery order: 0, 1, 3, 4, 2, 5, 6
11. There are no vertices adjacent to 6 not being visited; mark 6 as visited. Finish order: 4, 3, 1, 6
12. Return from the recursion to 5; mark 5 as visited. Finish order: 4, 3, 1, 6, 5
13. Return from the recursion to 2; mark 2 as visited. Finish order: 4, 3, 1, 6, 5, 2
14. Return from the recursion to 0; there are no vertices adjacent to 0 not being visited; mark 0 as visited. Finish order: 4, 3, 1, 6, 5, 2, 0
Discovery or visit order: 0, 1, 3, 4, 2, 5, 6
Finish order: 4, 3, 1, 6, 5, 2, 0
DFS(G, v) v is the vertex where the search starts
Stack S start with an empty stack
For each vertex u
set visited[u] = false
PUSH S, v
while (S is not empty) do
u = POP S
if (not visited[u]) then
visited[u] = true
For each unvisited neighbor w of u
PUSH S, w
end if
end while
END DFS()
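The stack-based pseudocode above translates almost line for line into Python; a minimal sketch, assuming an adjacency-list representation (names are mine):

```python
def dfs(adj, v):
    """Stack-based DFS from v; returns vertices in the order first visited."""
    visited = [False] * len(adj)
    order = []
    stack = [v]
    while stack:
        u = stack.pop()
        if not visited[u]:
            visited[u] = True
            order.append(u)
            # push neighbors in reverse so the lowest-numbered one
            # is popped (explored) first
            for w in reversed(adj[u]):
                if not visited[w]:
                    stack.append(w)
    return order
```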
Connected Component
 Can we re-use the previous DFS to compute the connected components of a
graph G?
Cycle detection
 Given a graph G = (V, E) cycle detection problem is to determine if there is
a cycle in the graph
Cycle detection
 Can we use DFS to detect a cycle in an undirected graph?
 During DFS, if for the current vertex x there is an adjacent vertex y that is already
visited and y is not the direct parent of x, then there is a cycle in the graph
[Figure: an undirected graph on vertices 1-5 containing a cycle]
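The parent-check rule above can be sketched with a recursive DFS (the adjacency-list representation and names are my own):

```python
def has_cycle(adj):
    """DFS from every unvisited vertex; a visited neighbor that is
    not the direct parent closes a cycle (undirected graph)."""
    n = len(adj)
    visited = [False] * n

    def dfs(u, parent):
        visited[u] = True
        for w in adj[u]:
            if not visited[w]:
                if dfs(w, u):
                    return True
            elif w != parent:        # visited and not the direct parent
                return True
        return False

    return any(not visited[u] and dfs(u, -1) for u in range(n))
```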
A coloring of a graph is an assignment of a color to each vertex such that no
neighboring vertices have the same color
Graph Coloring
Bipartite Graph
A bipartite graph is an undirected graph
G = (V, E) in which V can be partitioned into 2 sets V1 and
V2 such that (u, v) ∈ E implies either u ∈ V1 and v ∈ V2
OR v ∈ V1 and u ∈ V2
[Figure: a bipartite graph with V1 = {u1, u2, u3, u4} and V2 = {v1, v2, v3}]
Bipartite Graphs
 Applications
o Stable marriage: men = red, women = blue
o Scheduling: machines = red, jobs = blue
Bipartite Graph
[Figure: a bipartite graph G on v1-v7, and another drawing of G with the two sides
{v2, v4, v5, v7} and {v1, v3, v6} separated]
How can we know if a given graph is bipartite?
A graph is bipartite exactly when it can be colored with two colors, with all
vertices in one set receiving one color and all vertices in the other set the other.
Bipartite Graph
1. Assign RED color to the source vertex
o put into set U
2. Color all the neighbors with BLUE color
o put into set V
3. Color all neighbor’s neighbor with RED color
o put into set U
4. This way, assign color to all vertices such that it satisfies all the constraints of m
way coloring problem where m = 2.
5. While assigning colors, if we find a neighbor colored with the same color as the
current vertex, then the graph cannot be colored with 2 colors, i.e., the graph is
not bipartite
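Steps 1-5 can be sketched with BFS two-coloring (RED = 0, BLUE = 1; the list-based representation and the function name are my own):

```python
from collections import deque

def is_bipartite(adj):
    """2-color the graph with BFS; RED = 0, BLUE = 1."""
    n = len(adj)
    color = [None] * n
    for s in range(n):                # handle disconnected graphs
        if color[s] is not None:
            continue
        color[s] = 0                  # RED for the source
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if color[w] is None:
                    color[w] = 1 - color[u]   # opposite color
                    q.append(w)
                elif color[w] == color[u]:
                    return False      # neighbor has the same color
    return True
```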
Strong Connectivity
 Node u and v are mutually reachable if there is a path from u to v and also
a path from v to u.
 A graph is strongly connected if every pair of nodes is mutually reachable.
 Lemma. Let s be any node. G is strongly connected iff every node is
reachable from s, and s is reachable from every node.
 Proof (⇒) Follows from the definition.
 Proof (⇐) Path from u to v: concatenate the u-s path with the s-v path.
Path from v to u: concatenate the v-s path with the s-u path.
[Figure: node s with paths to and from u and v]
Strong Connectivity
 Test if a graph is strongly connected?
strongly connected not strongly connected
Algorithm
 Perform DFS or BFS starting from every vertex in the graph
 If every such DFS or BFS visits every vertex in the graph then the graph is strongly
connected
 This takes O(|V|(|V|+|E|)) time
Algorithm
 Can determine if G is strongly connected in O(m + n) time
o Pick any node s
o Run BFS from s in G
o Run BFS from s in Grev, obtained by reversing the orientation of every edge in G
o Return true iff all nodes are reached in both BFS executions
Reversing all edges of the graph does not change whether it is strongly
connected: a strongly connected graph remains strongly connected.
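The four steps can be sketched as follows (the adjacency-list representation and names are mine; a stack-based search plays the role of BFS):

```python
def strongly_connected(adj):
    """O(n + m) test: every node reachable from s in G and in G reversed."""
    n = len(adj)

    def reaches_all(g, s):
        seen = [False] * n
        seen[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            for w in g[u]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        return all(seen)

    # build Grev by reversing the orientation of every edge
    rev = [[] for _ in range(n)]
    for u in range(n):
        for w in adj[u]:
            rev[w].append(u)
    return reaches_all(adj, 0) and reaches_all(rev, 0)
```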
Directed Acyclic Graphs
 A DAG is a directed graph that contains no directed cycles
 Directed acyclic graphs can be used to encode precedence relations or
dependencies in a natural way
 Precedence constraints
o Edge (vi, vj) means task vi must occur before vj
 Applications
o Course prerequisite graph course vi must be taken before vj
o Compilation module vi must be compiled before vj
o Pipeline of computing jobs output of job vi needed to determine input of job vj
Directed Acyclic Graphs
Example
Directed Acyclic Graphs
 A topological order of a directed graph G = (V, E) is an ordering of its
nodes as v1, v2, …, vn so that for every edge (vi, vj) we have i < j
DAG Topological ordering
[Figure: a DAG on v1-v7 drawn next to the linear order v1, v2, v3, v4, v5, v6, v7]
A linear ordering of its vertices such that for
every directed edge uv from vertex u to
vertex v, u comes before v in the ordering
The graph has many valid topological sorts
5 7 3 11 8 2 9 10
3 5 7 8 11 2 9 10
5 7 3 8 11 10 9 2
7 5 11 3 10 8 9 2
5 7 11 2 3 8 9 10
3 7 8 5 11 10 2 9
Topological sort or order
If the graph has a cycle, all courses in the cycle become impossible to take
Directed Acyclic Graphs
 If G has a topological order v1, …, vn, then G cannot contain a directed cycle C:
some edge of C would have to go from a later vertex back to an earlier one
Directed Acyclic Graphs
 Every DAG has a topological ordering, so how do we find one efficiently?
Directed Acyclic Graphs
[Figure: a DAG on nodes w, x, u, v]
Algorithm to compute a topological ordering of G:
repeatedly find a node with no incoming edges, place it next in the order,
and delete it from G
Example
[Figure: the DAG on v1-v7; at each step the node with no incoming edges is removed]
Remove v1. Topological order: v1
Remove v2. Topological order: v1, v2
Remove v3. Topological order: v1, v2, v3
Remove v4. Topological order: v1, v2, v3, v4
Remove v5. Topological order: v1, v2, v3, v4, v5
Remove v6. Topological order: v1, v2, v3, v4, v5, v6
Remove v7. Topological order: v1, v2, v3, v4, v5, v6, v7
Running Time
 Identifying a node v with no incoming edges, and deleting it
from G, can be done in O(n) time
 Since the algorithm runs for n iterations, the total running
time is O(n²)
 Can we achieve a running time of O(m + n) using the same
high level algorithm?
Topological Sort Algorithm
1) Store each vertex’s indegree in an array
2) Initialize a queue with all indegree zero vertices
3) While there are vertices remaining in the queue:
o Dequeue and output a vertex
o Reduce indegree of all vertices adjacent to it by 1
o Enqueue any of these vertices whose indegree became zero
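The queue-based steps above (this is Kahn's algorithm) can be sketched as (adjacency-list representation and the cycle-signalling return value are my choices):

```python
from collections import deque

def topological_sort(adj):
    """Kahn's algorithm; returns a topological order,
    or None if the graph has a cycle."""
    n = len(adj)
    indegree = [0] * n
    for u in range(n):                # 1) store each vertex's indegree
        for w in adj[u]:
            indegree[w] += 1
    q = deque(v for v in range(n) if indegree[v] == 0)   # 2) indegree-0 queue
    order = []
    while q:                          # 3) dequeue, output, reduce indegrees
        u = q.popleft()
        order.append(u)
        for w in adj[u]:
            indegree[w] -= 1
            if indegree[w] == 0:
                q.append(w)
    return order if len(order) == n else None
```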
Running Time
 Initialize indegree array O(|E|)
 Initialize Queue with indegree 0 vertices O(|V|)
 Dequeue and output vertex O(|V|)
o |V| vertices, each takes only O(1) to dequeue and output
 Reduce indegree of all vertices adjacent to a vertex and Enqueue any
indegree 0 vertices O(|E|)
 Total time = O(|V| + |E|)
Running Time
 Topological sort using DFS
o Order nodes in the reverse of the order in which DFS finishes visiting them
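The reverse-finish-order idea can be sketched with a recursive DFS (assumes the input is a DAG; the names are mine):

```python
def topo_sort_dfs(adj):
    """Topological order = reverse of the DFS finish order (G must be a DAG)."""
    n = len(adj)
    visited = [False] * n
    finish = []

    def dfs(u):
        visited[u] = True
        for w in adj[u]:
            if not visited[w]:
                dfs(w)
        finish.append(u)              # u finishes after all its descendants

    for v in range(n):
        if not visited[v]:
            dfs(v)
    return finish[::-1]
```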
Optimization problems
 An optimization problem is the problem of finding the best solution from
all feasible solutions
 Two general techniques for such problems:
o Greedy Method
o Dynamic Programming
Greedy Algorithms
 A greedy algorithm always makes the choice that looks best at the moment
o It makes a locally optimal choice in the hope that this choice will lead to a
globally optimal solution
 Greedy algorithms do not always yield optimal solutions
Interval Scheduling
 Suppose there are n meeting requests, and meeting i takes the time interval (si, fi)
o Meeting i starts at si and ends at fi
 The constraint is that no two meetings can be scheduled together if their
intervals overlap
 The goal is to schedule as many meetings as possible
Interval Scheduling
 Suppose we have a set of n proposed activities that wish to use a resource,
such as a lecture hall, which can serve only one activity at a time.
 The goal is to schedule as many activities as possible
 We have a set of jobs, tasks, or requests
 Job j starts at sj and finishes at fj
 Two jobs compatible if they don't overlap
 The goal is to find maximum subset of mutually compatible jobs
[Figure: jobs a-h drawn as intervals on a time-line from 0 to 11]
Interval Scheduling
Interval Scheduling
 Sample Input and Output
Output
[Task 2, Task 3, Task 4] is an optimal solution because these tasks
have no conflicts with each other and any set with 4 tasks will
have at least two intervals in conflict.
Input
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
Interval Scheduling
 Example 1 Assume that the input intervals do not overlap each other
Output
Set is the same as the input solution set since neither Tasks
conflict with each other
Input
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
R: set of requests
Initialize S to be the empty set
While R is not empty
Choose i in R
Add i to S
Remove i from R
Return S
Interval Scheduling
 Assume that the maximum number of conflicts that any Task can have is 1
Output
In this case it doesn’t matter which Task we choose. We can have a final set of
[Task 1, Task 2] or [Task 2, Task 3] and both solutions have a total of two tasks in them
so both of them are considered optimal.
Input
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
R: set of requests
Initialize S to be the empty set
While R is not empty
Choose i in R
Add i to S
Remove all requests that conflict with i from R
Return S
Interval Scheduling
 What if a Task can have an arbitrary number of conflicts?
In this example, blindly choosing a Task to add to the Solution set will not work by
observing the example. If we choose Task 1, we can choose Task 2 or Task 5 to add to
the solution set since those two are the only Tasks which do not conflict with Task 1. If
we choose to add Task 4 to the solution set we can add Task 3 and Task 5 which will be
a more optimal solution.
Example 3
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
Interval Scheduling
 This problem can be solved using the greedy approach of choosing the next
element of the Task list based on some property the Tasks have and
iteratively building up a solution.
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
R: set of requests
Initialize S to be the empty set
While R is not empty
Choose i in R where v(i) is minimized
Add i to S
Remove all requests that conflict with i from R
Return S
Generic Algorithm
Interval Scheduling
 Shortest Duration
o A natural solution would be to select Tasks based on their duration.
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
R: set of requests
Initialize S to be the empty set
While R is not empty
Choose i in R where f(i) - s(i) is minimized
Add i to S
Remove all requests that conflict with i from R
Return S
Interval Scheduling
 If we run the previous algorithm with Example 3 what is the result?
1. Choose Task 2
2. Remove Task 4 and Task 5 since they conflict with Task 2
3. Choose Task 3
4. Remove Task 1 since it conflicts with Task 3
5. Exit algorithm
Input
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
Interval Scheduling
 If we run the previous algorithm with Example 3 what is the result?
An optimal solution for this example can be found by observation.
[Task 3, Task 4, Task 5] has the most amount of intervals that do not conflict
with each other.
[Task 2, Task 3] is not an optimal solution so this is not the correct way to
greedily solve this problem.
Input
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
Interval Scheduling
 Earliest Start Time
o Since shortest duration does not work, we need to find a new parameter to use
for choosing the Tasks.
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
R: set of requests
Initialize S to be the empty set
While R is not empty
Choose i in R where s(i) is minimized
Add i to S
Remove all requests that conflict with i from R
Return S
Interval Scheduling
 If we run the previous algorithm with Example 3 what is the result?
1. Choose Task 3
2. Remove Task 1 since it is in conflict with Task 3
3. Choose Task 4
4. Remove Task 2 since it is in conflict with Task 4
5. Choose Task 5
Input
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
If the algorithm breaks ties correctly
(Task 1 and Task 3 in this
example), then an optimal solution
would emerge.
Interval Scheduling
 If we run the previous algorithm with Example 3 what is the result?
Counter example
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
Interval Scheduling
 If we run the previous algorithm with Example 3 what is the result?
Counter example
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
The algorithm will first select Task 6 and remove all others that are in
conflict with it which happens to be everything else in the input set.
Interval Scheduling
 Minimum Number of Conflicts
o Previous example could work if we updated the algorithm to select Tasks that
have the minimum number of conflicts across the entire input Task set
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
If i and j are two distinct intervals, there is
a conflict if they overlap, i.e., if s(i) < f(j) and s(j) < f(i)
R: set of requests
Initialize S to be the empty set
While R is not empty
Choose i in R with the minimum number of conflicts
Add i to S
Remove all requests that conflict with i from R
Return S
Interval Scheduling
 Minimum Number of Conflicts
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
Counter example
The only optimal solution [Task 3, Task 4, Task 5, Task 8, Task 9, Task 10, Task 11]
Interval Scheduling
 Minimum Number of Conflicts
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
A simple run of the algorithm draft on this example:
1. Choose Task 13 since it only has 2 conflicts
2. Remove Task 9 and Task 10
3. Choose Task 8
4. Remove Task 12, Task 15, Task 17
5. Choose Task 11
...
Since the algorithm already did not place Task 9 and Task 10 in the solution set, we know this
will not be an optimal solution.
Interval Scheduling
 Earliest End or Finish Time
http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
R: set of requests
Initialize S to be the empty set
While R is not empty
Choose i in R where f(i) is the smallest
Add i to S
Remove all requests that conflict with i from R
Return S
Choose an activity that leaves the resource
available for as many other activities as possible
Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn
A ← ∅
for j = 1 to n {
if (job j compatible with A)
A ← A ∪ {j}
}
return A
A is the set of jobs selected
Interval Scheduling
Sorting n jobs takes O(n log n) time
O(n) time to go through the sorted list of n jobs
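The earliest-finish-time rule can be sketched as follows (the list-of-tuples representation and the function name are my own):

```python
def interval_schedule(jobs):
    """Greedy earliest-finish-time; jobs is a list of (start, finish)."""
    selected = []
    last_finish = float("-inf")
    for s, f in sorted(jobs, key=lambda job: job[1]):   # by finish time
        if s >= last_finish:          # compatible with everything selected
            selected.append((s, f))
            last_finish = f
    return selected
```

Sorting dominates the running time, matching the O(n log n) bound above.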
Correctness
 Algorithm produces a solution A
 Let O be any optimal allocation
 A and O need not be identical
o There can be multiple optimal allocations of the same size
 Show that |A| = |O|
https://www.cmi.ac.in/~madhavan
Correctness
 Let A = i1, i2, ... ik
o Jobs in A are sorted: f(i1) ≤ s(i2), f(i2) ≤ s(i3), …
 Let O = j1, j2, ... jm
o Jobs in O are sorted: f(j1) ≤ s(j2), f(j2) ≤ s(j3), …
 The goal is to show that k = m
https://www.cmi.ac.in/~madhavan
Correctness
 Claim For each r ≤ k, f(ir) ≤ f(jr)
o The greedy solution stays ahead of O
 Proof by induction on r
o r = 1: greedy algorithm chooses job i1 with earliest overall finish time
https://www.cmi.ac.in/~madhavan
Correctness
o r > 1: Assume, by induction that f(ir-1) ≤ f(jr-1)
o Then, it must be the case that f(ir) ≤ f(jr)
o If not, algorithm would choose jr rather than ir
https://www.cmi.ac.in/~madhavan
Correctness
 Suppose m > k
 We know that f(ik) ≤ f(jk)
 Consider job jk+1 in O
o The greedy algorithm terminates only when R is empty
 R is the set of requests or jobs
o Since f(ik) ≤ f(jk) ≤ s(jk+1), this job is compatible with A = i1, i2, ... ik
o After selecting ik, R still contains jk+1, so the algorithm could not have terminated: contradiction
https://www.cmi.ac.in/~madhavan
Interval Partitioning Problem
 Assume we have many identical resources available and we wish to
schedule all the requests using as few resources as possible
 Interval partitioning
o Lecture j starts at sj and finishes at fj
o The goal is to find the minimum number of classrooms to schedule all lectures
so that no two occur at the same time in the same room
Interval Partitioning Problem
Interval Partitioning
 Example This schedule uses 4 classrooms to schedule 10 lectures
[Figure: lectures a-j on a time-line from 9:00 to 4:30, placed in classrooms 1-4]
Interval Partitioning
 Example This schedule uses only 3 classrooms
[Figure: the same ten lectures rearranged into classrooms 1-3]
Is there any hope of using just two resources?
Is there any hope of using just two resources? NO
We need at least three resources.
Intervals a, b, and c all pass over a common point on the time-line, and hence
they all need to be scheduled on different resources.
 Suppose we define the depth of a set of intervals to be the maximum
number that pass over any single point on the time-line
 Depth of the schedule below = 3
[Figure: the three-classroom schedule; lectures a, b, c all contain 9:30]
Interval Partitioning
Number of classrooms needed ≥ depth
 Consider lectures in increasing order of start time: assign lecture to any
compatible classroom.
Sort intervals by starting time so that s1 ≤ s2 ≤ ... ≤ sn.
d ← 0
for j = 1 to n {
if (lecture j is compatible with some classroom k)
schedule lecture j in classroom k
else
allocate a new classroom d + 1
schedule lecture j in classroom d + 1
d ← d + 1
}
d is the number of allocated classrooms
Interval Partitioning
 Consider lectures in increasing order of start time: assign each lecture to any
compatible classroom
 Implementation O(n log n): sort by start time, and keep the classrooms in a
priority queue ordered by the finish time of their last scheduled lecture
Interval Partitioning
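A sketch of the algorithm above; the heap of per-classroom finish times is my implementation choice, not spelled out in the slides:

```python
import heapq

def min_classrooms(lectures):
    """Assign lectures (start, finish) to the fewest classrooms.
    The min-heap holds the finish time of the last lecture in each room."""
    rooms = []                        # min-heap of finish times
    for s, f in sorted(lectures):     # increasing start time
        if rooms and rooms[0] <= s:   # the earliest-freed room is compatible
            heapq.heapreplace(rooms, f)
        else:
            heapq.heappush(rooms, f)  # no compatible room: open a new one
    return len(rooms)
```

The number of rooms opened equals the depth of the interval set.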
Scheduling to Minimize Lateness
 We have a single resource and a set of n requests to use the resource
 Each request i taking a time ti
 Once a request starts to be served it continues using the resource until its
completion
 Each request i has a deadline di
 The goal is to schedule all jobs to minimize maximum lateness
Job j 1 2 3 4 5 6
tj 3 2 1 4 3 2
dj 6 8 9 9 14 15
Scheduling to Minimize Lateness
 Minimizing lateness problem
o Single resource that processes one job at a time
o Set of jobs
o Each job j requires tj units of processing time and is due at time dj
o If job j starts at time sj it finishes at time fj = sj + tj
o A job j is late if it misses its deadline, that is, if fj > dj
o Lateness: ℓj = max { 0, fj - dj }
o The goal is to schedule all jobs to minimize the maximum lateness L = maxj ℓj
This problem arises naturally when scheduling
jobs that need to use a single machine
Scheduling to Minimize Lateness
 Example
Job j 1 2 3 4 5 6
tj 3 2 1 4 3 2
dj 6 8 9 9 14 15
 Jobs scheduled in the order 3, 2, 6, 1, 5, 4
Lateness check
[Figure: time-line 0-15; job 3 finishes at 1 (d3 = 9), job 2 at 3 (d2 = 8),
job 6 at 5 (d6 = 15), job 1 at 8 (d1 = 6, lateness = 2), job 5 at 11 (d5 = 14),
job 4 at 15 (d4 = 9, lateness = 6)]
max lateness = 6
Minimizing Lateness
 Greedy template Consider jobs in some order
o [Shortest processing time first] Consider jobs in ascending order of processing time tj
o [Smallest slack] Consider jobs in ascending order of slack dj – tj
o [Earliest deadline first] Consider jobs in ascending order of deadline dj
 [Shortest processing time first] counterexample
Job j 1 2
tj 1 10
dj 100 10
Shortest-processing-time-first schedules job 1 first; job 2 then finishes at time 11,
one unit after its deadline, while scheduling job 2 first gives maximum lateness 0
Minimizing Lateness
 Consider jobs in some order
o [Smallest slack] Consider jobs in ascending order of slack dj – tj
 The ones that need to be started with minimal delay
counterexample
Job j 1 2
tj 1 10
dj 2 10
Sorting by increasing slack would place the second
job first in the schedule, and the first job would incur
a lateness of 9
[Figure: the earliest-deadline-first schedule on a time-line from 0 to 15; only job 4
misses its deadline, finishing at 10 with d4 = 9]
max lateness = 1
Minimizing Lateness
 Greedy algorithm Earliest deadline first
Job j 1 2 3 4 5 6
tj 3 2 1 4 3 2
dj 6 8 9 9 14 15
Sort the n jobs by deadline so that d1 ≤ d2 ≤ … ≤ dn
t ← 0
for j = 1 to n
Assign job j to the interval [t, t + tj]
sj ← t, fj ← t + tj
t ← t + tj
output the intervals [sj, fj]
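The earliest-deadline-first pseudocode, sketched in code and checked against the six-job example above (the (tj, dj) pair representation is my own):

```python
def edf_schedule(jobs):
    """Earliest-deadline-first; jobs is a list of (t, d) pairs.
    Returns (list of (start, finish) intervals, maximum lateness)."""
    t_now = 0
    schedule = []
    max_late = 0
    for t, d in sorted(jobs, key=lambda job: job[1]):   # by deadline
        start, finish = t_now, t_now + t
        schedule.append((start, finish))
        max_late = max(max_late, finish - d)            # lateness of this job
        t_now = finish
    return schedule, max_late
```

On the slides' example (t = 3,2,1,4,3,2 and d = 6,8,9,9,14,15) this reproduces max lateness = 1.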
Shortest Paths in a Graph
shortest path from Princeton CS department to Einstein's house
Shortest Path Problem
 Shortest path network
o Directed graph G = (V, E)
o Source s, destination t
o Length ℓe = length of edge e
 Shortest path problem: find shortest directed path from s to t
cost of path = sum of edge costs in path
[Figure: a directed graph with source s, destination t, and edge lengths]
Cost of path s-2-3-5-t
= 9 + 23 + 2 + 16 = 50
 Given a graph and a source vertex in graph, find shortest paths from
source to all vertices in the given graph
o Transport finished product from a factory to all retail outlets
o Courier company delivers items from a distribution centre to addressees
Dijkstra’s Shortest Path Algorithm
 For each destination
o Enumerate all the paths from source to that destination
o Calculate the cost of all enumerated paths
o Select the path with the min cost
 Takes > (n-1)! steps for a complete graph with n nodes
Dijkstra’s Shortest Path Algorithm
1) Create an empty set S
2) Assign a distance value to all vertices in the input graph
o Initialize all distance values as INFINITE.
o Assign distance value as 0 for the source vertex so that it is picked first
3) While S doesn’t include all vertices
a) Pick a vertex u which is not in S and has the minimum distance value
b) Add vertex u to S
c) Update the distance values of all adjacent vertices of u
o To update the distance values, iterate through all adjacent vertices
o For every adjacent vertex v, if the sum of the distance value of u and the weight of edge u-v is less than the
distance value of v, then update the distance value of v
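A sketch of these steps; the binary heap with lazy deletion is my implementation choice (the slides describe the O(n²) linear scan, analyzed below):

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra with a heap and lazy deletion; adj[u] = list of (v, weight)."""
    dist = {s: 0}
    done = set()                      # the set S of finalized vertices
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)      # vertex with minimum distance value
        if u in done:
            continue                  # stale heap entry: skip
        done.add(u)
        for v, w in adj[u]:           # update distances of neighbors
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```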
Dijkstra’s Shortest Path Algorithm
 Find shortest path from s to t
[Figure: a directed graph on nodes s, 2, 3, 4, 5, 6, 7, t with the edge lengths shown;
the trace below tracks the explored set S, the queue Q, and the tentative distances]
Initially S = { }, Q = { s, 2, 3, 4, 5, 6, 7, t }; d(s) = 0, every other distance is ∞
delmin s: S = { s }, Q = { 2, 3, 4, 5, 6, 7, t }; update d(2) = 9, d(6) = 14, d(7) = 15
delmin 2: S = { s, 2 }, Q = { 3, 4, 5, 6, 7, t }; update d(3) = 33
delmin 6: S = { s, 2, 6 }, Q = { 3, 4, 5, 7, t }; update d(3) = 32, d(5) = 44
delmin 7: S = { s, 2, 6, 7 }, Q = { 3, 4, 5, t }; update d(5) = 35, d(t) = 59
delmin 3: S = { s, 2, 3, 6, 7 }, Q = { 4, 5, t }; update d(5) = 34, d(4) = 51
delmin 5: S = { s, 2, 3, 5, 6, 7 }, Q = { 4, t }; update d(4) = 45, d(t) = 50
delmin 4: S = { s, 2, 3, 4, 5, 6, 7 }, Q = { t }; no distance improves
delmin t: S = { s, 2, 3, 4, 5, 6, 7, t }, Q = { }; done
https://en.wikipedia.org/wiki/Dijkstra’s_algorithm
 Each new shortest path we discover extends an earlier one
 By induction, assume we have identified shortest paths to all vertices already in the set S
 The next vertex at minimum distance is v, reached via some x already in S
 We cannot later find a shorter path to v through an edge (y, w) leaving S: w was not
chosen, so its tentative distance is at least that of v
Complexity
 Adjacency matrix
o Outer loop runs n times
o O(n) scan to find the vertex with minimum distance
o O(n) scan of the adjacency matrix row to find all neighbors
o Overall O(n^2)
Complexity
 Adjacency list
o Scan neighbors: O(m) total across all iterations
o However, finding the vertex with minimum distance still takes O(n) in each
iteration
o Overall O(n^2)
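The O(n) delmin scan above can be replaced by a binary heap. Below is a minimal Python sketch of that variant (the graph `g` is a small illustrative example, not the graph from the slides); with a heap, delmin is O(log n) and the overall cost drops to O((n + m) log n).

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary heap.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source.
    """
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]          # (distance, vertex) min-heap plays the role of Q
    visited = set()             # plays the role of S
    while pq:
        d, u = heapq.heappop(pq)        # delmin
        if u in visited:
            continue                    # stale entry: a shorter path was already found
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Illustrative graph
g = {
    's': [('a', 2), ('b', 5)],
    'a': [('b', 1), ('t', 7)],
    'b': [('t', 3)],
    't': [],
}
print(dijkstra(g, 's'))   # {'s': 0, 'a': 2, 'b': 3, 't': 6}
```

Instead of a decrease-key operation, this sketch pushes a fresh heap entry and skips stale ones on pop, a common simplification.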
Given a graph G=(V, E), a subgraph of G that connects all of the vertices and is
a tree is called a spanning tree
Spanning Trees
A spanning tree with the lowest total edge weight
Minimum Spanning Trees
Given a connected weighted undirected graph G, design
an algorithm that outputs a minimum spanning tree of G
Minimum Spanning Tree Problem
There are n^(n-2) spanning trees in a complete graph on n vertices (Cayley's formula)
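Cayley's formula can be sanity-checked by brute force for small n; the sketch below tries every subset of n−1 edges of the complete graph and counts those that form a tree:

```python
from itertools import combinations

def count_spanning_trees(n):
    """Count spanning trees of the complete graph K_n by brute force:
    try every subset of n-1 edges and keep the acyclic (hence spanning) ones."""
    edges = list(combinations(range(n), 2))
    count = 0
    for subset in combinations(edges, n - 1):
        parent = list(range(n))         # union-find for cycle detection

        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x

        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:                # edge closes a cycle
                acyclic = False
                break
            parent[ru] = rv             # union the two components
        if acyclic:                     # n-1 acyclic edges => spanning tree
            count += 1
    return count

print(count_spanning_trees(4), 4 ** (4 - 2))   # 16 16
```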
Applications
 MST is fundamental problem with diverse applications
o Network design
 Telephone
 Electrical
 Hydraulic
 TV cable
 Computer
 Road
Greedy Algorithms
 Kruskal's algorithm
 Prim's algorithm
 Reverse-Delete algorithm
 All three algorithms produce an MST
Initially, trees of the forest are the vertices (no edges)
In each step add the cheapest edge that does not create a cycle
Demo
Kruskal's Algorithm
1. Set A = ∅ and F = E, the set of all edges
2. Choose an edge e in F of minimum weight,
and check whether adding e to A creates a cycle
IF Yes, remove e from F
IF No, move e from F to A
3. IF F = ∅, stop and output the minimum
spanning tree (V, A). Otherwise go to step 2
Start with minimum cost edge
Keep extending the tree with smallest edge
Prim's Algorithm
Prim's Algorithm
ReachSet = {0}
UnReachSet = {1, 2, ..., N-1}
SpanningTree = {}
while ( UnReachSet ≠ empty )
{
Find edge e = (x, y) such that:
1. x ∈ ReachSet
2. y ∈ UnReachSet
3. e has smallest cost
SpanningTree = SpanningTree ∪ {e}
ReachSet = ReachSet ∪ {y}
UnReachSet = UnReachSet - {y}
}
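The pseudocode above can be sketched in Python with a heap of candidate crossing edges instead of a full scan (the graph `g` below is a small illustrative example, not the one from the demo):

```python
import heapq

def prim(graph, start=0):
    """Prim's algorithm with a min-heap of candidate crossing edges.

    graph: dict vertex -> list of (neighbor, weight) pairs (undirected).
    Returns (total weight, list of MST edges).
    """
    reached = {start}                       # ReachSet
    pq = [(w, start, y) for y, w in graph[start]]
    heapq.heapify(pq)
    total, tree = 0, []
    while pq and len(reached) < len(graph):
        w, x, y = heapq.heappop(pq)         # smallest-cost candidate edge
        if y in reached:
            continue                        # both endpoints already reached: skip
        reached.add(y)                      # ReachSet = ReachSet ∪ {y}
        tree.append((x, y))
        total += w
        for z, wz in graph[y]:
            if z not in reached:
                heapq.heappush(pq, (wz, y, z))
    return total, tree

g = {
    0: [(1, 1), (2, 4)],
    1: [(0, 1), (2, 3), (3, 2)],
    2: [(0, 4), (1, 3), (3, 5)],
    3: [(1, 2), (2, 5)],
}
print(prim(g))   # (6, [(0, 1), (1, 3), (1, 2)])
```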
http://www.mathcs.emory.edu/~cheung
Find a minimum cost spanning tree for this graph
1. Pick the starting node 0 and mark it as reached
2. Find the edge with minimum cost that connects a reached node to an
unreached node, add it to the MST, and mark the new node as reached
3. Repeating this step, edges are added in the order
(0,3), (3,4), (0,1), (1,7), (7,2), (2,5), (0,8), (5,6)
http://www.mathcs.emory.edu/~cheung
Complexity
The while loop runs V-1 times, and each iteration may scan all E edges to find
the cheapest edge crossing from ReachSet to UnReachSet, so this direct
implementation runs in O(VE)
Knapsack Problem
Fill it up with maximum value
https://en.wikipedia.org/wiki/Knapsack_problem
Fractional Knapsack Problem
https://algodaily.com/
https://algodaily.com/
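The fractional variant has a simple greedy solution: take items in decreasing value-per-weight order, splitting the last item if it does not fully fit. A minimal sketch (the item list is the classic textbook example):

```python
def fractional_knapsack(capacity, items):
    """Greedy fractional knapsack.

    items: list of (value, weight) pairs. Returns the maximum total value
    obtainable when items may be taken fractionally.
    """
    total = 0.0
    # Sort by value per unit weight, best ratio first
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

items = [(60, 10), (100, 20), (120, 30)]
print(fractional_knapsack(50, items))   # 240.0
```

Note that this greedy choice is optimal only for the fractional problem; the 0/1 knapsack problem requires dynamic programming.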
 Divide and conquer refers to a class of algorithmic techniques in which one
breaks the input into several parts, solves the problem in each part
recursively, and then combines the solutions to these sub-problems into an
overall solution
 Analyzing the running time of a divide and conquer algorithm generally
involves solving a recurrence relation
Divide and Conquer
 Binary tree = a tree where each node has at most 2 children nodes
Binary Trees
http://www.mathcs.emory.edu/~cheung/
 Perfect binary tree = a binary tree where each level contains the maximum
number of nodes
Binary Trees
http://www.mathcs.emory.edu/~cheung/
 Properties of the perfect binary tree
o The number of nodes at depth d in a perfect binary tree = 2^d
o Proof
o The number of nodes doubles every time the depth increases by 1
o Therefore, number of nodes at depth d = 2^d
Binary Trees
http://www.mathcs.emory.edu/~cheung/
 Properties of the perfect binary tree
o A perfect binary tree of height h has 2^(h+1) − 1 nodes
o Proof
o Number of nodes = 2^0 + 2^1 + ... + 2^h = 2^(h+1) − 1
Binary Trees
http://www.mathcs.emory.edu/~cheung/
 Properties of the perfect binary tree
o Number of leaf nodes in a perfect binary tree of height h = 2^h
o Proof
 All the leaf nodes in a perfect binary tree of height h have depth equal to h
Binary Trees
http://www.mathcs.emory.edu/~cheung/
 The minimum number of nodes in a binary tree of height h = h + 1
Binary Trees
http://www.mathcs.emory.edu/~cheung/
 The maximum number of nodes in a binary tree of height h = 2^(h+1) − 1
 Proof
o The perfect binary tree has the maximum number of nodes
o We have already shown that the number of nodes in a perfect binary tree = 2^(h+1) − 1
Binary Trees
http://www.mathcs.emory.edu/~cheung/
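These counting properties are easy to verify numerically; a quick sketch summing the per-depth counts:

```python
def perfect_tree_counts(h):
    """Return (total nodes, leaf nodes) of a perfect binary tree of height h,
    computed by summing the 2^d nodes found at each depth d."""
    nodes_per_depth = [2 ** d for d in range(h + 1)]
    return sum(nodes_per_depth), nodes_per_depth[-1]

for h in range(10):
    total, leaves = perfect_tree_counts(h)
    assert total == 2 ** (h + 1) - 1   # total-node formula
    assert leaves == 2 ** h            # all leaves sit at depth h
print("verified for h = 0..9")
```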
Recurrence Relations
Recurrence Tree
The test function prints a value 3 times for test(3) and
calls itself 4 times.
For n it will print n times and call itself n + 1 times
Recurrence Relation
T(n) = T(n-1) + 1 if n>0
T(n) = 1 if n=0
T(n) = T(n-1) + 1
T(n-1) = T(n-2)+1
T(n-2) = T(n-3)+1
T(n) = [T(n-2)+1] + 1 = T(n-2)+2
T(n) = [T(n-3)+1]+2
T(n) = T(n-3)+3
.
. Continue for k times
.
T(n) = T(n-k) + k
Assume n-k = 0 therefore n=k
T(n)= T(n-n)+n = T(0)+n
T(n)= 1+n
T(n)= O(n)
Recurrence Tree
T(n) = 1+2+3+...+(n-1)+n = n(n+1)/2
Recurrence Tree
T(n) = T(n-1) + 2n+2
T(n) = T(n-1) + n
T(n) = T(n-1) + n if n>0
T(n) = 1 if n=0
T(n) = T(n-1) + n
T(n-1) = T(n-2) + n-1
T(n-2) = T(n-3) + n-2
Substitute values
T(n) = T(n-2) + n-1 + n
T(n) = T(n-3) + n-2 + n-1 + n
.
. If we continue for k times
.
T(n) = T(n-k) +(n-(k-1))+(n-(k-2)) + ...+ (n-1) + n
Assume n-k=0 therefore k=n
T(n)= T(n-n) + (n-n+1)+ (n-n+2) + ...+ (n-1)+n
T(n) = T(0)+ 1+2+3+...+(n-1)+n = n(n+1)/2
T(n) = 1+n(n+1)/2
T(n) = O(n^2)
Dividing Functions
Recurrence Tree
n/2^k = 1
n=2^k
k = log n
O(log n)
Recurrence Tree
T(n) = T(n/2) + 1 if n>1
T(n) = 1 if n=1
T(n) = T(n/2) + 1
T(n/2) = T(n/2^2) + 1
T(n) = T(n/2^2) + 2
T(n) = T(n/2^3) + 3
.
.
.
T(n) = T(n/2^k) + k
Assume n/2^k = 1
Therefore n=2^k and k = log n
T(n)= T(1) + log(n) = 1+log(n) = O(log n)
T(n) = 2T(n/2) + n if n>1
T(n) = 1 if n=1
Each level of the recursion tree contributes n work; after k = log n levels
the total is n log n
Recurrence Tree
T(n) = 2T(n/2) + n
T(n/2) = 2T(n/2^2) + n/2
T(n) = 2[2T(n/2^2) + n/2] + n
T(n) = 2^2 T(n/2^2) + n + n
T(n/2^2 ) = 2T(n/2^3) + n/2^2
T(n) = 2^2[2T(n/2^3 ) + n/2^2] + n + n
T(n) = 2^3T(n/2^3 ) + n + n + n
T(n) = 2^3T(n/2^3 ) + 3n
.
.
.
T(n) = 2^k T(n/2^k )+ k n
Assume n/2^k = 1
Therefore n=2^k and k = log n
T(n) = n x 1 + k n
T(n) = n + n log n
T(n) = O(n log n)
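The closed form T(n) = n + n log n derived above can be checked directly against the recurrence for powers of two:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate the recurrence T(n) = 2T(n/2) + n with T(1) = 1,
    for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# Closed form: T(n) = n + n * log2(n)
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n + n * k        # here k = log2 n
print(T(1024))                      # 11264 = 1024 + 1024 * 10
```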
 Merge sort
o Divide array or list in two equal parts
o Separately sort left and right parts
o Merge or combine the two sorted parts to get the full array sorted
Divide and Conquer
 Merge Operation
o How can we efficiently merge two sorted lists?
Divide and Conquer
Traverse B and C simultaneously from left to right and
write the smallest element at the current positions to A
Example 1
Merge Sort Example
https://en.wikipedia.org/wiki/Merge_sort
Example 2
Merge ( result[], a[], b[] )
{
k = 0 // k = output location in result[]
i = 0 // i = first unprocessed element in a[]
j = 0 // j = first unprocessed element in b[]
while ( k < result.length )
{
if ( neither a[] nor b[] is exhausted )
{
if ( a[i] < b[j] )
// Copy smaller over and shift
result[k++] = a[i++]
else
// Copy smaller over and shift
result[k++] = b[j++]
}
else if ( a[] is exhausted )
result[k++] = b[j++]
else // b[] is exhausted
result[k++] = a[i++]
}
}
Merge the two sorted arrays a and b into one sorted array result
MergeSort(list []) {
// Base case: a list with fewer than 2 elements is already sorted
if (list.length < 2) return
// Copy the first half of list into a new array
int firstLength = list.length / 2
int[] one = new int[firstLength]
for (int n = 0; n < firstLength; n++)
one[n] = list[n]
// Copy the second half of list into another array
int secondLength = list.length - firstLength
int[] two = new int[secondLength]
for (int n = firstLength; n < list.length; n++)
two[n - firstLength] = list[n]
// Sort the two smaller arrays
MergeSort(one)
MergeSort(two)
// Merge the sorted halves back into list
Merge(list, one, two)
}
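The same algorithm as a short runnable Python sketch (returning a new sorted list rather than sorting in place):

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    result.extend(a[i:])        # at most one of these tails is non-empty
    result.extend(b[j:])
    return result

def merge_sort(lst):
    """Split in half, sort each half recursively, then merge."""
    if len(lst) < 2:            # base case: 0 or 1 elements
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```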
 Merge Operation
o Merge just iterates over the arrays, which together hold at most n elements
o Thus the Merge function has a running time of O(n)
 Merge Sort
o The merge sort function breaks a problem of size n into two sub-problems of
size n/2 each
Divide and Conquer Algorithm
 Laptop executes 10^8 compares per second
 Supercomputer executes 10^12 compares per second
 Good algorithms are better than supercomputers
Divide and Conquer Algorithm
 Dynamic programming, like the divide-and-conquer method, solves
problems by combining the solutions to subproblems
o Divide-and-conquer algorithms partition the problem into disjoint
subproblems, solve the subproblems recursively, and then combine
their solutions to solve the original problem
o Dynamic programming applies when the subproblems overlap
 When subproblems share sub-subproblems
Dynamic Programming
https://www.cmi.ac.in/~madhavan
 Factorial
o F(0) = 1
o F(n) = n x F(n-1)
Dynamic Programming
https://www.cmi.ac.in/~madhavan
 Recursive Program
Dynamic Programming
https://www.cmi.ac.in/~madhavan
Factorial(n)
if (n <= 0)
return 1
else
return n x Factorial(n-1)
 Factorial(n-1) is a subproblem of Factorial(n)
 Solution can be derived by combining solutions to subproblems
 Fibonacci numbers
o Fib(0) = 0
o Fib(1) = 1
o Fib(n) = Fib(n-1) + Fib(n-2)
o 0 1 1 2 3 5 8 13 21 34 55 89 144
Dynamic Programming
https://www.cmi.ac.in/~madhavan
Fib(n)
if (n==0 or n==1)
value = n
else
value = Fib(n-1) + Fib(n-2)
return value
https://www.cmi.ac.in/~madhavan
Compute Fib(5) (animation frames condensed): the recursion tree unfolds as
Fib(5) = Fib(4) + Fib(3)
Fib(4) = Fib(3) + Fib(2), Fib(3) = Fib(2) + Fib(1)
and so on down to the base cases Fib(1) = 1 and Fib(0) = 0; the values then
combine back up the tree: Fib(2) = 1, Fib(3) = 2, Fib(4) = 3, Fib(5) = 5
 Overlapping subproblems
o Wasteful recomputation
o Computation tree grows exponentially
Dynamic Programming
https://www.cmi.ac.in/~madhavan
[Recursion tree for Fib(5): Fib(3) is computed twice and Fib(2) three times]
 Never re-evaluate a subproblem
o Build a table of values already computed
 Memory table
o Memoization
 Remind yourself that this value has already been seen before
Dynamic Programming
https://www.cmi.ac.in/~madhavan
 Memoization
o Store each newly computed value in a table
o Look up table before starting a recursive computation
o Computation tree is linear
Dynamic Programming
https://www.cmi.ac.in/~madhavan
Computing Fib(5) with the table (animation frames condensed): each value is
computed once, and every later request is answered by a table lookup, so the
computation tree is linear. The table fills in as follows:
n  Fib(n)
0  0
1  1
2  1
3  2
4  3
5  5
 Memoized Fibonacci
Dynamic Programming
https://www.cmi.ac.in/~madhavan
Fib(n)
if (Fibtable[n])
return Fibtable[n]
if (n==0 or n==1)
value = n
else
value = Fib(n-1) + Fib(n-2)
Fibtable[n] = value
return value
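The memoized pseudocode above, as a runnable Python sketch (a mutable default argument stands in for Fibtable, persisting across calls):

```python
def fib(n, table={}):
    """Memoized Fibonacci, following the Fibtable pseudocode.

    table caches every computed value, so each Fib(k) is evaluated
    at most once and the computation tree is linear.
    """
    if n in table:                  # look up table before recursing
        return table[n]
    value = n if n <= 1 else fib(n - 1) + fib(n - 2)
    table[n] = value                # store each newly computed value
    return value

print([fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Without the table this call tree is exponential; with it, fib(50) and beyond return instantly.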

Algorithm in Computer, Sorting and Notations
 
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
 
DAA-Unit1.pptx
DAA-Unit1.pptxDAA-Unit1.pptx
DAA-Unit1.pptx
 
VCE Unit 01 (2).pptx
VCE Unit 01 (2).pptxVCE Unit 01 (2).pptx
VCE Unit 01 (2).pptx
 
Introduction to Data Structures Sorting and searching
Introduction to Data Structures Sorting and searchingIntroduction to Data Structures Sorting and searching
Introduction to Data Structures Sorting and searching
 
2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx2. Introduction to Algorithm.pptx
2. Introduction to Algorithm.pptx
 

More from Sayed Chhattan Shah

Introduction to System Programming
Introduction to System ProgrammingIntroduction to System Programming
Introduction to System ProgrammingSayed Chhattan Shah
 
Introduction to Differential Equations
Introduction to Differential EquationsIntroduction to Differential Equations
Introduction to Differential EquationsSayed Chhattan Shah
 
Cloud and Edge Computing Systems
Cloud and Edge Computing SystemsCloud and Edge Computing Systems
Cloud and Edge Computing SystemsSayed Chhattan Shah
 
Introduction to Internet of Things
Introduction to Internet of ThingsIntroduction to Internet of Things
Introduction to Internet of ThingsSayed Chhattan Shah
 
5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...
5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...
5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...Sayed Chhattan Shah
 
IEEE 802.11 Architecture and Services
IEEE 802.11 Architecture and ServicesIEEE 802.11 Architecture and Services
IEEE 802.11 Architecture and ServicesSayed Chhattan Shah
 
Routing in Mobile Ad hoc Networks
Routing in Mobile Ad hoc NetworksRouting in Mobile Ad hoc Networks
Routing in Mobile Ad hoc NetworksSayed Chhattan Shah
 
Keynote Talk on Recent Advances in Mobile Grid and Cloud Computing
Keynote Talk on Recent Advances in Mobile Grid and Cloud ComputingKeynote Talk on Recent Advances in Mobile Grid and Cloud Computing
Keynote Talk on Recent Advances in Mobile Grid and Cloud ComputingSayed Chhattan Shah
 
Keynote on Mobile Grid and Cloud Computing
Keynote on Mobile Grid and Cloud ComputingKeynote on Mobile Grid and Cloud Computing
Keynote on Mobile Grid and Cloud ComputingSayed Chhattan Shah
 
Introduction to Mobile Ad hoc Networks
Introduction to Mobile Ad hoc NetworksIntroduction to Mobile Ad hoc Networks
Introduction to Mobile Ad hoc NetworksSayed Chhattan Shah
 
Tips on Applying for a Scholarship
Tips on Applying for a ScholarshipTips on Applying for a Scholarship
Tips on Applying for a ScholarshipSayed Chhattan Shah
 
Introduction to Parallel and Distributed Computing
Introduction to Parallel and Distributed ComputingIntroduction to Parallel and Distributed Computing
Introduction to Parallel and Distributed ComputingSayed Chhattan Shah
 

More from Sayed Chhattan Shah (17)

Introduction to System Programming
Introduction to System ProgrammingIntroduction to System Programming
Introduction to System Programming
 
Introduction to Differential Equations
Introduction to Differential EquationsIntroduction to Differential Equations
Introduction to Differential Equations
 
Cloud and Edge Computing Systems
Cloud and Edge Computing SystemsCloud and Edge Computing Systems
Cloud and Edge Computing Systems
 
Introduction to Internet of Things
Introduction to Internet of ThingsIntroduction to Internet of Things
Introduction to Internet of Things
 
IoT Network Technologies
IoT Network TechnologiesIoT Network Technologies
IoT Network Technologies
 
5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...
5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...
5G Network: Requirements, Design Principles, Architectures, and Enabling Tech...
 
Data Center Networks
Data Center NetworksData Center Networks
Data Center Networks
 
IEEE 802.11 Architecture and Services
IEEE 802.11 Architecture and ServicesIEEE 802.11 Architecture and Services
IEEE 802.11 Architecture and Services
 
Routing in Mobile Ad hoc Networks
Routing in Mobile Ad hoc NetworksRouting in Mobile Ad hoc Networks
Routing in Mobile Ad hoc Networks
 
Keynote Talk on Recent Advances in Mobile Grid and Cloud Computing
Keynote Talk on Recent Advances in Mobile Grid and Cloud ComputingKeynote Talk on Recent Advances in Mobile Grid and Cloud Computing
Keynote Talk on Recent Advances in Mobile Grid and Cloud Computing
 
Keynote on Mobile Grid and Cloud Computing
Keynote on Mobile Grid and Cloud ComputingKeynote on Mobile Grid and Cloud Computing
Keynote on Mobile Grid and Cloud Computing
 
Introduction to Mobile Ad hoc Networks
Introduction to Mobile Ad hoc NetworksIntroduction to Mobile Ad hoc Networks
Introduction to Mobile Ad hoc Networks
 
Cloud Robotics
Cloud RoboticsCloud Robotics
Cloud Robotics
 
Introduction to Cloud Computing
Introduction to Cloud ComputingIntroduction to Cloud Computing
Introduction to Cloud Computing
 
Tips on Applying for a Scholarship
Tips on Applying for a ScholarshipTips on Applying for a Scholarship
Tips on Applying for a Scholarship
 
Cluster and Grid Computing
Cluster and Grid ComputingCluster and Grid Computing
Cluster and Grid Computing
 
Introduction to Parallel and Distributed Computing
Introduction to Parallel and Distributed ComputingIntroduction to Parallel and Distributed Computing
Introduction to Parallel and Distributed Computing
 

Recently uploaded

Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpinRaunakKeshri1
 
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...anjaliyadav012327
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
The byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxThe byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxShobhayan Kirtania
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...fonyou31
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...Pooja Nehwal
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 

Recently uploaded (20)

Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
JAPAN: ORGANISATION OF PMDA, PHARMACEUTICAL LAWS & REGULATIONS, TYPES OF REGI...
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
The byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxThe byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptx
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 

Algorithm Design and Analysis

  • 1. Algorithm Design and Analysis Sayed Chhattan Shah Associate Professor of Computer Science Mobile Grid and Cloud Computing Lab Department of Information Communication Engineering Hankuk University of Foreign Studies Korea www.mgclab.com
  • 2. Acknowledgements ▪ The material in these slides is taken from different sources including: o Algorithm Design, First Edition o Introduction to Algorithms, Third Edition o The Stony Brook Algorithm Repository o Algorithms and Complexity Course by Atri Rudra o NPTEL Design and Analysis of Algorithms by Madhavan Mukund o AlgoDaily
  • 5.
  • 6. Check whether a given number N is positive or negative Read N IF (N==0) Print N is neither positive nor negative IF (N>0) Print N is a positive number IF (N<0) Print N is a negative number
  • 8. SORTING BY COLORS Sorting cards with colors on them into piles of the same color 1) Pick up all of the cards. 2) Pick a card from your hand and look at the color of the card. 3) If there is already a pile of cards of that color, put this card on that pile. 4) If there is no pile of cards of that color, make a new pile of just this card color. 5) If there is still a card in your hand, go back to the second step. 6) If there is not still a card in your hand, then the cards are sorted.
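The six steps above can be sketched in code (a minimal Python sketch; the function name and card representation are illustrative, not from the slides):

```python
def sort_by_colors(cards):
    """Sort cards into piles of the same color, following the slide's steps."""
    piles = {}                      # step 1: start with the full hand, no piles
    for card in cards:              # steps 2 and 5: take each card in turn
        color = card["color"]       # step 2: look at the color of the card
        if color in piles:          # step 3: a pile of that color already exists
            piles[color].append(card)
        else:                       # step 4: make a new pile of just this color
            piles[color] = [card]
    return piles                    # step 6: no cards left in hand, so sorted
```

For example, `sort_by_colors([{"color": "red"}, {"color": "blue"}, {"color": "red"}])` produces one red pile with two cards and one blue pile with one card.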
  • 9. ▪ Each of these examples is an algorithm, a set of instructions for solving a problem ▪ The word algorithm is derived from the name of the 9th century Persian mathematician Al-Khwarizmi
  • 10. ▪ Algorithms are especially important to computers because computers are general purpose machines for solving problems ▪ In order for a computer to be useful, we must give it a problem to solve and a technique for solving the problem ▪ Through the use of algorithms, we can make computers intelligent by programming them with various algorithms to solve problems
  • 11. ▪ Algorithms typically have following characteristics o Name o Description o Input Algorithm receives input o Output Produces output o Generality The algorithm applies to set of inputs o Order of Operations Exact order of steps to perform o Precision The steps are precisely stated o Finiteness The algorithm terminates o Correctness The output produced by algorithm is correct
  • 12.
  • 13. ▪ The algorithm receives three values a, b, and c as input and produces a value large as output ▪ The steps are stated sufficiently precisely ▪ The algorithm terminates after finitely many steps, correctly answering the given question ▪ The algorithm is general o It can find the largest of any three numbers
  • 14. PROBLEM Given a list of positive numbers, return the largest number on the list INPUTS A list L of positive numbers OUTPUTS A number max, which will be the largest number on the list ALGORITHM 1) Set max to 0 2) For each number x in the list L, compare it to max. If x is larger, set max to x 3) Output max An Example Algorithm
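The example algorithm above translates directly into code (a Python sketch; the function name is illustrative). Note that initializing max to 0 relies on the stated assumption that the list contains positive numbers:

```python
def find_largest(numbers):
    """Return the largest number in a list L of positive numbers."""
    max_value = 0                 # step 1: set max to 0 (valid for positive inputs)
    for x in numbers:             # step 2: compare each number x in L to max
        if x > max_value:
            max_value = x         # if x is larger, set max to x
    return max_value              # step 3: output max
```

Calling `find_largest([7, 2, 9, 4])` returns 9.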
  • 15. How to specify the operations in an algorithm? ▪ Write the algorithm in plain English o Plain English is too wordy and ambiguous ▪ Often an English sentence can be interpreted in many different ways ▪ Write the algorithm in a programming language o These languages are collections of basic operations that a computer understands o Without knowledge of the programming language, it would be difficult for you to know what this algorithm does: it sums the numbers from 1 to 10 and displays the answer on the computer screen
  • 16. ▪ Combine the familiarity of plain English with the structure and order of programming languages ▪ A good compromise is structured English
  • 18. Algorithms for solving the problem of sorting A problem of sorting a list of numbers
  • 19. A problem of sorting a list of numbers Input Output
  • 21. Computer must perform six comparisons (7 < 8), (7 > 5), (5 > 2), (2 < 4), (2 < 6), and finally (2 < 3) Simple sort algorithm
  • 22. Six more comparisons are required to determine that 3 is smallest (7 < 8), (7 > 5), (5 < MAX), (5 > 4), (4 < 6), and finally (4 > 3)
  • 23.
  • 24.
  • 25. The Selection Sort ▪ The array is virtually split into a sorted and an unsorted part. ▪ The smallest element is selected from the unsorted array and swapped with the leftmost element, and that element becomes a part of the sorted array. ▪ Animation
  • 26. The Selection Sort array[] = 64 25 12 22 11 Find the minimum element in array[0...4] and place it at beginning 11 25 12 22 64 Find the minimum element in array[1...4] and place it at beginning of array[1...4] 11 12 25 22 64 Find the minimum element in array[2...4] and place it at beginning of array[2...4] 11 12 22 25 64 Find the minimum element in array[3...4] and place it at beginning of array[3...4] 11 12 22 25 64 https://en.wikipedia.org/wiki/Selection_sort
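The trace above can be reproduced with a short implementation of selection sort (a Python sketch; the function name is illustrative):

```python
def selection_sort(array):
    """In-place selection sort: grow a sorted prefix one element at a time."""
    n = len(array)
    for i in range(n - 1):
        # find the index of the minimum element in the unsorted part array[i..n-1]
        min_index = i
        for j in range(i + 1, n):
            if array[j] < array[min_index]:
                min_index = j
        # swap it to position i, making it part of the sorted prefix
        array[i], array[min_index] = array[min_index], array[i]
    return array
```

Running it on the slide's input, `selection_sort([64, 25, 12, 22, 11])` yields `[11, 12, 22, 25, 64]`.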
  • 28. The Insertion Sort ▪ Insertion sort is a simple sorting algorithm that works similarly to the way you sort playing cards in your hands
  • 29. The Insertion Sort ▪ The array is virtually split into a sorted and an unsorted part. Values from the unsorted part are picked and placed at the correct position in the sorted part ▪ Animation
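The pick-and-place idea above can be sketched as follows (a Python sketch; the function name is illustrative):

```python
def insertion_sort(array):
    """In-place insertion sort: insert each value into the sorted prefix."""
    for j in range(1, len(array)):
        key = array[j]            # next value picked from the unsorted part
        i = j - 1
        # shift larger elements of the sorted prefix one slot to the right
        while i >= 0 and array[i] > key:
            array[i + 1] = array[i]
            i -= 1
        array[i + 1] = key        # drop the value into its correct position
    return array
```

For example, `insertion_sort([5, 2, 4, 6, 1, 3])` yields `[1, 2, 3, 4, 5, 6]`.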
  • 31. Bubble Sort ▪ Bubble sort works by repeatedly comparing each pair of adjacent elements and swapping them if they are in the wrong order ▪ Animation
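The repeated compare-and-swap of adjacent pairs can be sketched as follows (a Python sketch; the function name and the early-exit flag are illustrative additions):

```python
def bubble_sort(array):
    """Bubble sort: repeatedly swap adjacent elements that are out of order."""
    n = len(array)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):        # the last i elements are already in place
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]
                swapped = True
        if not swapped:                   # no swaps means the array is sorted
            break
    return array
```

For example, `bubble_sort([7, 8, 5, 2, 4, 6, 3])` yields `[2, 3, 4, 5, 6, 7, 8]`.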
  • 33. Linear and binary search algorithms
  • 34. Linear search algorithm visualization
  • 35. Binary search algorithm visualization https://cs50.harvard.edu/
  • 36. Binary search algorithm visualization https://cs50.harvard.edu/
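The halving process visualized above can be sketched in code (a Python sketch; the function name is illustrative, and the array must already be sorted):

```python
def binary_search(array, target):
    """Return the index of target in a sorted array, or -1 if it is absent."""
    low, high = 0, len(array) - 1
    while low <= high:
        mid = (low + high) // 2       # examine the middle element
        if array[mid] == target:
            return mid
        elif array[mid] < target:
            low = mid + 1             # target can only be in the right half
        else:
            high = mid - 1            # target can only be in the left half
    return -1                         # search space exhausted: not found
```

For example, `binary_search([2, 3, 4, 5, 6, 7, 8], 5)` returns 3, while searching for 1 returns -1 after only a few halving steps.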
  • 37. ▪ Math Preliminaries in Algorithm Analysis https://tutorial.math.lamar.edu/Classes/CalcI/SummationNotation.aspx Algorithm Analysis
  • 38. ▪ Computational complexity or simply complexity of an algorithm is the amount of resources required to run it ▪ Time complexity ▪ A measure of the amount of time required to execute an algorithm ▪ Space complexity ▪ Amount of memory space required to execute an algorithm https://en.wikipedia.org/wiki/Computational_complexity Algorithm Analysis
  • 39. ▪ The analysis of algorithms is the process of finding the computational complexity of algorithms ▪ The amount of time, storage, or other resources needed to execute them https://en.wikipedia.org/wiki/Analysis_of_algorithms Algorithm Analysis
  • 40. Algorithm Analysis ▪ An algorithm that is space-efficient uses the least amount of computer memory to solve the problem ▪ An algorithm that is time-efficient uses the least amount of time to solve the problem
  • 41. ▪ How do we compare the time efficiency of two algorithms that solve the same problem? Algorithm Analysis
  • 42. ▪ How do we compare the time efficiency of two algorithms that solve the same problem? ▪ Manual Algorithm Analysis
  • 43. ▪ How do we compare the time efficiency of two algorithms that solve the same problem? ▪ Experimental Approach o Implement algorithms in a programming language, and run them to compare their time requirements Algorithm Analysis
  • 44. ▪ Experimental Approach o Comparing the programs instead of algorithms has difficulties because the results would depend on ▪ How are the algorithms coded? • We should not compare implementations, because they are sensitive to programming style that may cloud the issue of which algorithm is inherently more efficient ▪ What computer should we use? • We should compare the efficiency of the algorithms independently of a particular computer Algorithm Analysis
  • 45. ▪ When we analyze algorithms, we should employ mathematical techniques that analyze algorithms independently of specific implementations and computers ▪ To analyze algorithms o Count the number of primitive operations ▪ Evaluating an expression (x + y) ▪ Assigning a value to a variable (x ←5) ▪ Comparing two numbers (x < y) ▪ Returning from a method o Express the efficiency of algorithms using growth functions Algorithm Analysis
  • 46. Each operation in an algorithm has a cost Each operation takes a certain amount of time but it is constant count = count + 1 A sequence of operations count = count + 1 Cost c1 sum = sum + count Cost c2 Total Cost = c1 + c2
  • 47. Cost of basic operations https://algs4.cs.princeton.edu/lectures
  • 48. Most primitive operations take constant time https://algs4.cs.princeton.edu/lectures
  • 49. If Statement Cost Times if (n < 0) c1 1 absval = -n c2 1 else absval = n c3 1 Total Cost <= c1 + max(c2, c3)
  • 50. Simple Loop Cost Times i = 1 c1 1 sum = 0 c2 1 while (i <= n) { c3 n+1 i = i + 1 c4 n sum = sum + i c5 n } Total Cost = c1 + c2 + (n + 1)*c3 + n*c4 + n*c5
  • 51. Nested Loop Cost Times i=1 c1 1 sum = 0 c2 1 while (i <= n) { c3 n+1 j=1 c4 n while (j <= n) { c5 n*(n+1) sum = sum + i c6 n*n j = j + 1 c7 n*n } i = i +1 c8 n } Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 +n*n*c6 + n*n*c7 + n*c8
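The frequency counts in the table above can be verified empirically by instrumenting the same loop structure (a Python sketch; the function and variable names are illustrative):

```python
def nested_loop_counts(n):
    """Count how often the inner while-test and the inner body execute."""
    inner_tests = 0
    inner_body = 0
    i = 1
    while i <= n:                 # outer loop runs n times
        j = 1
        while True:
            inner_tests += 1      # models the test (j <= n): n+1 times per pass
            if j > n:
                break
            inner_body += 1       # models the body: n times per pass
            j += 1
        i += 1
    return inner_tests, inner_body
```

For n = 5 this gives n*(n+1) = 30 inner-loop tests and n*n = 25 body executions, matching the cost table.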
  • 52.
  • 53. for (int i = 1; i <= n; i++) { perform 100 operations A for (int j = 1; j <= n; j++) { perform 2 operations B } }
  • 54. for (int i = 1; i <= n; i++) { perform 100 operations A for (int j = 1; j <= n; j++) { perform 2 operations B } } Block A executes n times and block B executes n*n times, so Total Operations = 100n + 2n^2
  • 55. General Rules for Estimation ▪ Consecutive Statements Just add the running times of those consecutive statements ▪ If Else Never more than the running time of the test plus the larger of running times of S1 and S2 ▪ Loops The running time of a loop is at most the running time of the statements inside of that loop times the number of iterations ▪ Nested Loops Running time of a nested loop containing a statement in the inner most loop is the running time of statement multiplied by the product of the sized of all loops
  • 56. Linear Search int linearSearch(int array[], int n, int x) { int i; for (i = 0; i < n; i++) if (array[i] == x) return i; return -1; /* not found; returning 0 would be mistaken for a valid index */ }
  • 57. ▪ Worst Case Analysis The maximum amount of time that an algorithm requires to solve a problem of size n o This gives an upper bound for the time complexity of an algorithm o Normally, we try to find the worst-case behavior of an algorithm ▪ Best Case Analysis The minimum amount of time that an algorithm requires to solve a problem of size n o The best-case behavior of an algorithm is NOT so useful Asymptotic Analysis
  • 58. ▪ Average Case Analysis The average amount of time that an algorithm requires to solve a problem of size n o Sometimes, it is difficult to find the average-case behavior of an algorithm o We have to look at all possible data organizations of a given size n and the probability distribution over these organizations o Worst-case analysis is more common than average-case analysis Algorithmic Runtime
  • 59. Linear Search int linearSearch(int array[], int n, int x) { int i; for (i = 0; i < n; i++) if (array[i] == x) return i; return -1; } Worst case performance x does not exist Best case performance x matches the first element
  • 60. Linear Search int linearSearch(int array[], int n, int x) { int i; for (i = 0; i < n; i++) if (array[i] == x) return i; return -1; } Worst case performance x does not exist T(n) = n Best case performance x matches the first element T(n) = 1
  • 61. ▪ Growth rate of algorithm o How fast the time of an algorithm grows as a function of problem size o Problem or input size depends on the particular problem: ▪ For a search problem, the problem size is the number of elements in the search space ▪ For a sorting problem, the problem size is the number of elements in the given list Asymptotic Analysis
  • 62. Asymptotic Analysis ▪ Asymptotic analysis is an analysis of algorithms that focuses on o Analyzing problems of large input size o Consider only the leading term of the formula o Ignore the coefficient of the leading term
  • 63. Example T(n) = 10n^3 + n^2 + 40n + 800 If n = 1,000, then T(n) = 10,001,040,800 The error is only about 0.01% if we drop all but the dominating n^3 term Asymptotic Analysis
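The dominant-term claim can be checked numerically (a small Python sketch; the function name is illustrative):

```python
def relative_error_dropping_lower_terms(n):
    """Compare T(n) = 10n^3 + n^2 + 40n + 800 with its leading term 10n^3."""
    t = 10 * n**3 + n**2 + 40 * n + 800
    leading = 10 * n**3
    # percent error introduced by dropping all but the dominating term
    return t, (t - leading) / t * 100
```

For n = 1,000 this returns T(n) = 10,001,040,800 and a relative error of roughly 0.0104%, confirming the slide's figure.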
  • 64.
  • 65. Basic functions that often appear in algorithm analysis 1. Constant function f(n) = c 2. Linear function f(n) = n 3. Quadratic function f(n) = n^2 4. Cubic function f(n) = n^3 5. Log function f(n) = log n 6. Log linear function f(n) = n log n 7. Exponential function f(n) = b^n
  • 66. Constant Function ▪ An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size ▪ Array: accessing any element
  • 67. Linear Function ▪ An algorithm is said to run in linear time if its execution time is directly proportional to the input size ▪ Time grows linearly as input size increases Examples Linear search, traversing, finding the minimum or maximum Computing the maximum max ← a1 for i = 2 to n { if (ai > max) max ← ai }
  • 68. Quadratic function ▪ An algorithm is said to run in quadratic time if its execution time is proportional to the square of the input size ▪ This function arises in algorithm analysis whenever we use nested loops ▪ The outer loop performs primitive operations in linear time; for each of its iterations, the inner loop also performs primitive operations in linear time ▪ Many simple algorithms with nested loops, such as the elementary sorting algorithms, have quadratic time complexity
  • 69. Logarithmic function ▪ An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size ▪ Example ▪ Binary Search
  • 70. Exponential Function ▪ For a given variable n, the function returns b^n, where b is the base and n is the exponent ▪ This function is also common in algorithm analysis ▪ The growth rate of the exponential function is faster than that of all the other functions
  • 71.
  • 72.
  • 73. Which algorithm is the most efficient?
  • 74. Assume that we have a computer which can operate at a speed of 1 million instructions per second
  • 75. 75
  • 76. https://cathyatseneca.gitbooks.io/data-structures-and-algorithms/content/analysis/notations.html Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis Asymptotic Notations
  • 78. Let T(n) be a function—the worst-case running time of a certain algorithm on an input of size n. Given another function f(n), we say that T(n) is O(f(n)) if, for sufficiently large n, the function T(n) is bounded above by a constant multiple of f(n). Asymptotic Upper Bounds
  • 79. Note that O(·) expresses only an upper bound, not the exact growth rate of the function
  • 85. Big-O: Functions Ranking ▪ O(1) constant time ▪ O(log n) log time ▪ O(n) linear time ▪ O(n log n) log linear time ▪ O(n^2) quadratic time ▪ O(n^3) cubic time ▪ O(2^n) exponential time BETTER WORSE
  • 86. Big O Complexity Chart https://www.bigocheatsheet.com
  • 87. (1) for (i=1; i<=n; i++) (2) for (j=1; j<=n; j++) (3) print(i,j)
  • 88. (1) for (i=1; i<=n; i++) (2) for (j=1; j<=n; j++) (3) print(i,j)
  • 89. (1) for (i=1; i<=n; i++) (2) for (j=1; j<=i; j++) (3) print(i,j)
  • 90. (1) for (i=1; i<=n; i++) (2) for (j=1; j<=i; j++) (3) print(i,j) Line (1) is obviously executed n+1 times Line (2): loop j depends not on n but on i Frequency count of line 2 in summation form
  • 91. Frequency count of line 3 in summation form (1) for (i=1; i<=n; i++) (2) for (j=1; j<=i; j++) (3) print(i,j)
  • 92. (1) for (i=1; i<=n; i++) (2) for (j=1; j<=i; j++) (3) print(i,j) Total frequency count in terms of n
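The total frequency count of the dependent loop can be checked by direct counting (a Python sketch; the function name is illustrative): the body print(i,j) runs 1 + 2 + ... + n = n(n+1)/2 times.

```python
def dependent_loop_count(n):
    """Count executions of print(i, j) when the inner loop runs j = 1..i."""
    count = 0
    for i in range(1, n + 1):          # outer loop: i = 1..n
        for j in range(1, i + 1):      # inner loop bound depends on i, not n
            count += 1                 # stands in for print(i, j)
    return count                       # equals n*(n+1)/2
```

For n = 100 the count is 100*101/2 = 5050, matching the closed-form sum.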
  • 93. (1) for (i=1; i<=100; i++) (2) for (j=1; j<=50; j++) (3) print(i,j)
  • 94. (1) for (i=1; i<=100; i++) (2) for (j=1; j<=50; j++) (3) print(i,j) Line (1): 100+1 times Line (2): loop j 50+1 times, and repeated 100 times by the outer loop i (50+1)(100) = 5100 Line (3): 1 time, repeated by 50 times by loop j, and 100 times by the loop i (1)(50)(100) = 5000 f(n) = 10201 O(1)
  • 96. Time Complexity of Selection Sort ▪ To find the minimum element from the array of n elements, n−1 comparisons are required. ▪ After inserting the minimum element in its proper position, the size of an unsorted array reduces to n−1 and then n−2 comparisons are required to find the minimum in the unsorted array.
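The analysis above can be made concrete with a short Python sketch of selection sort (an illustrative implementation, not the slides' exact code):

```python
def selection_sort(a):
    """Sort list a in place.
    Comparisons: (n-1) + (n-2) + ... + 1 = n(n-1)/2, i.e. O(n^2)."""
    n = len(a)
    for i in range(n - 1):
        # Find the index of the minimum element in the unsorted suffix a[i:]
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # Insert the minimum in its proper position; the unsorted part shrinks by one
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```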
  • 97.
  • 98. Let tⱼ denote the number of times the while loop test in line 5 is executed for that value of j
  • 99.
  • 100.
  • 101. The worst case occurs when the array is in reverse sorted order
  • 102.
  • 103. Introduction to Basic Data Structures
  • 104. Arrays ▪ Array is a linear data structure consisting of a collection of elements ▪ Elements are accessed via index https://lucasmagnum.medium.com
  • 105. In the worst case it takes O(n) time Add o at index i
  • 106. In the worst case it takes O(n) time Remove i
  • 107. Arrays ▪ Analysis o Add and remove run in O(N) o Index based lookup O(1) o Lookup O(N)
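A sketch of the element shifting behind these bounds, using a plain Python list as the array (the helper names are mine):

```python
def add_at(arr, i, x):
    """Insert x at index i. Every element after i shifts one slot right,
    which costs O(n - i): O(n) in the worst case (i = 0)."""
    arr.append(None)                     # grow by one slot
    for k in range(len(arr) - 1, i, -1):
        arr[k] = arr[k - 1]
    arr[i] = x
    return arr

def remove_at(arr, i):
    """Remove the element at index i by shifting the tail left: also O(n)."""
    for k in range(i, len(arr) - 1):
        arr[k] = arr[k + 1]
    arr.pop()                            # shrink by one slot
    return arr
```

Index-based lookup (`arr[i]`) needs no shifting, which is why it stays O(1).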
  • 108. Linked List ▪ A linked list is a linear data structure in which nodes are arranged in a linear order o The order in a linked list is determined by a pointer in each node ▪ Each node stores o Data o Link to the next node The last node is linked to a terminator used to signify the end of the list
  • 109. Linked List ▪ Search for a node in the List ▪ The worst case Time Complexity for retrieving a node from anywhere in the list is O(n) https://www.educative.io/edpresso/what-is-a-singly-linked-list
  • 110. Linked List ▪ Add a node to the List ▪ The worst case Time Complexity o Front of the list O(1) o End of the list O(n) o Anywhere in the list O(n) https://www.educative.io/edpresso/what-is-a-singly-linked-list
  • 111. Linked List ▪ Remove a node from the list ▪ The worst case Time Complexity o Front of the list O(1) o End of the list O(n) o Anywhere in the list O(n) https://www.educative.io/edpresso/what-is-a-singly-linked-list
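A minimal singly linked list in Python illustrating the bounds above (class and method names are mine; only add-to-front and search are shown):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None          # link to the next node; None is the terminator

class LinkedList:
    def __init__(self):
        self.head = None

    def add_front(self, data):
        """Add a node at the front of the list: O(1)."""
        node = Node(data)
        node.next = self.head
        self.head = node

    def search(self, target):
        """Walk the links from head; O(n) in the worst case."""
        cur = self.head
        while cur is not None:
            if cur.data == target:
                return True
            cur = cur.next
        return False
```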
  • 112. Stacks ▪ A stack is a linear data structure ▪ Insertion and deletion of items takes place at one end called top of the stack o INSERT operation on a stack is often called PUSH o DELETE operation is often called POP ▪ Insertion and deletion follow LIFO principle o LIFO Last In First Out https://visualgo.net
  • 114. Stacks ▪ An array implementation of a stack S
  • 115. Each of the three stack operations takes O(1) time The space used is O(n)
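An array-backed stack sketch in Python mirroring the slide's implementation idea (method names are mine):

```python
class Stack:
    """Array implementation of a stack.
    PUSH, POP and the emptiness test each take O(1) time; space is O(n)."""
    def __init__(self):
        self.items = []           # top of the stack is the end of the list

    def push(self, x):            # INSERT operation
        self.items.append(x)

    def pop(self):                # DELETE operation, LIFO order
        if not self.items:
            raise IndexError("stack underflow")
        return self.items.pop()

    def is_empty(self):
        return len(self.items) == 0
```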
  • 116. Stacks ▪ Applications o Page visited history in a Web browser o Undo sequence in a text editor https://visualgo.net
  • 118. Queue ▪ Queue also stores objects ▪ INSERT operation on a queue is called ENQUEUE ▪ DELETE operation is called DEQUEUE ▪ It has two ends o Elements are inserted at one end o Elements are deleted from the other end ▪ Insert and delete operations follow FIFO principle o FIFO First In First Out
  • 119. A queue implemented using an array Q [1..12] The queue has 5 elements, in locations Q [7..11]
  • 120. A queue implemented using an array Q [1..12] The configuration of the queue after the calls ENQUEUE(Q, 17), ENQUEUE(Q, 3) and ENQUEUE(Q, 5)
  • 121. A queue implemented using an array Q [1..12] The configuration of the queue after the call DEQUEUE(Q) returns the key value 15 formerly at the head of the queue. The new head has key 6.
  • 122. The pseudocode assumes that n = Q.length Each operation takes O(1) time
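A circular-array queue sketch in Python along the lines of the slides' Q[1..n] implementation (0-based indices here; names are mine):

```python
class Queue:
    """Circular-array queue; ENQUEUE and DEQUEUE each take O(1) time."""
    def __init__(self, n):
        self.q = [None] * n
        self.head = 0             # index of the front element
        self.tail = 0             # index where the next element is inserted
        self.size = 0

    def enqueue(self, x):
        if self.size == len(self.q):
            raise OverflowError("queue overflow")
        self.q[self.tail] = x
        self.tail = (self.tail + 1) % len(self.q)   # wrap around the array
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue underflow")
        x = self.q[self.head]
        self.head = (self.head + 1) % len(self.q)
        self.size -= 1
        return x
```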
  • 123. Queue ▪ Applications o Round Robin Schedulers https://visualgo.net
  • 124.
  • 125. Algorithm Design and Analysis Sayed Chhattan Shah Associate Professor of Computer Science Mobile Grid and Cloud Computing Lab Department of Information Communication Engineering Hankuk University of Foreign Studies Korea www.mgclab.com
  • 126.  Graphs were being studied long before computers were invented  Graphs describe o Road maps o Airline routes o Course prerequisites  Graph algorithms run in o Large communication networks o The software that makes the Internet function o Programs that determine optimal placement of components on a silicon chip Graph Theory
  • 127.  A graph consists of a set of vertices or nodes and a set of edges or relations between pairs of vertices  Edges represent paths or connections between vertices Graph Theory
  • 128.  Undirected graph G = (V, E) o V = nodes o E = edges between pairs of nodes o Graph size parameters: n = |V| and m = |E| Graph Theory V = { 1, 2, 3, 4, 5, 6, 7, 8 } E = { 1-2, 1-3, 2-3, 2-4, 2-5, 3-5, 3-7, 3-8, 4-5, 5-6, 7-8 } n = 8 m = 11
  • 129.  The edges of a graph are directed if the existence of an edge from A to B does not necessarily imply an edge from B to A  A graph with directed edges is called a directed graph or digraph  A graph with undirected edges is an undirected graph or simply a graph Graph Theory
  • 130.  The edges in a graph may have associated values known as their weights  A graph with weighted edges is known as a weighted graph Graph Theory
  • 131. Graph Theory  A path in an undirected graph G = (V, E) is a sequence P of nodes v1, v2, …, vk-1, vk with the property that each consecutive pair vi, vi+1 is joined by an edge in E  A path is simple if all nodes are distinct Two paths from U to V
  • 132. Graph Theory  A cycle is a path v1, v2, …, vk-1, vk in which v1 = vk, k > 2, and the first k-1 nodes are all distinct. cycle C = 1-2-4-5-3-1 A cycle is a path that begins and ends on the same vertex
  • 133.  Degree o Number of edges incident on a node The degree of 5 is 3 Graph Theory
  • 134. Degree = in-degree + out-degree in-degree = number of edges entering out-degree = number of edges leaving out-degree(1)=2 in-degree(1)=0 out-degree(2)=2 in-degree(2)=2 out-degree(3)=1 in-degree(3)=4 Graph Theory
  • 135. Graph Theory  An undirected graph is connected if for every pair of nodes u and v, there is a path between u and v
  • 136.  Subgraph o A subgraph of a graph G is a graph whose vertex and edge sets are subsets of those of G  A supergraph of a graph G is a graph that contains G as a subgraph Graph Theory
  • 137.  Complete graph  Every pair of distinct vertices is connected by a unique edge Graph Theory
  • 138.  Planar graphs can be drawn on a plane such that no two edges intersect Graph Theory
  • 139.  Non linear data structure o data structures in which data items are not arranged in a sequence Graph Theory
  • 143. Graphs and Networks
Graph (Network) | Vertices (Nodes) | Edges (Arcs) | Flow
Communications | telephone exchanges, computers, satellites | cables, fiber optics, microwave relays | voice, video, packets
Circuits | gates, registers, processors | wires | current
Mechanical | joints | rods, beams, springs | heat, energy
Hydraulic | reservoirs, pumping stations, lakes | pipelines | fluid, oil
Financial | stocks, currency | transactions | money
Transportation | airports, rail yards, street intersections | highways, railbeds, airway routes | freight, vehicles, passengers
  • 144. Trees  An undirected graph is a tree if it is connected and does not contain a cycle  Theorem o Let G be an undirected graph on n nodes. Any two of the following statements imply the third  G is connected  G does not contain a cycle  G has n-1 edges
  • 145. Theorem An undirected graph is a tree if and only if there is a unique simple path between any two of its vertices
  • 146. Pizza Shop Tree Owner Jake Manager Brad Chef Carol Waitress Waiter Cook Helper Joyce Chris Max Len
  • 147.
  • 149. Forest A forest is a graph that contains no simple circuits but need not be connected; each of its connected components is a tree
  • 150. Forest Tree Forest A graph Not a tree or forest
  • 151. Rooted Trees A rooted tree is a tree in which one vertex has been designated as the root and every edge is directed away from the root (figure: a tree and the same tree rooted at 1, labeling the root r, the parent of a vertex v, and a child of v)
  • 153. (figure: a tree with vertices a–i, labeling the root node, the parent of g, siblings, a leaf, and an internal vertex) A vertex that has children is called an internal vertex
  • 154. How many internal vertices? Owner Jake Manager Brad Chef Carol Waitress Waiter Cook Helper Joyce Chris Max Len
  • 155. a b c d e f g h i ancestors of h and i
  • 156. a b c d e f g h i subtree with b as its root subtree with c as its root
  • 157. a is the parent of b; b is the child of a; c, d, e are siblings; a, b, d are ancestors of f; c, d, e, f, g are descendants of b; c, e, f, g are leaves of the tree; a, b, d are internal vertices of the tree (at least one child); the figure also marks the subtree with d as its root
  • 158. Trees  A rooted tree is called an m-ary tree if every internal vertex has no more than m children  The tree is called a full m-ary tree if every internal vertex has exactly m children  An m-ary tree with m=2 is called a binary tree Full binary tree Full 3-ary tree Full 5-ary tree Not full 3-ary tree
  • 159. A tree with n vertices has n-1 edges Trees
  • 160. The level of a vertex v in a rooted tree is the length of the unique path from the root to this vertex The level of the root is defined to be zero The height of a rooted tree is the maximum of the levels of vertices Height = 4 Level 0 1 2 3 4 Trees
  • 161. Binary Tree  Every vertex in a binary tree has at most 2 children  Each child is designated as either left child or right child  A full binary tree is a binary tree in which each vertex has either 2 children or zero children
  • 163. (figure: a binary tree with vertices a–f, labeling b as the left child of a, f as the right child of c, and the left and right subtrees of a)
  • 164. Binary Search Tree  Data are associated with vertices  The data are arranged so that, for each vertex v in T, each data item in the left subtree of v is less than the data item in v, and each data item in the right subtree of v is greater than the data item in v
  • 167. Graph Representation: Adjacency Matrix  An n × n matrix where A[u, v] = 1 if the graph contains the edge (u, v) and 0 otherwise o Space proportional to n² o Presence of a particular edge can be checked in Θ(1) time o Identifying all edges takes Θ(n²) time
   1 2 3 4 5 6 7 8
1  0 1 1 0 0 0 0 0
2  1 0 1 1 1 0 0 0
3  1 1 0 0 1 0 1 1
4  0 1 0 0 1 0 0 0
5  0 1 1 1 0 1 0 0
6  0 0 0 0 1 0 0 0
7  0 0 1 0 0 0 0 1
8  0 0 1 0 0 0 1 0
  • 168. Graph Representation: Adjacency List  Node indexed array of lists o Two representations of each edge o Space proportional to m + n o Checking if (u, v) is an edge takes O(degree(u)) time o Identifying all edges takes Θ(m + n) time degree(u) = number of neighbors of u
1 → 2, 3
2 → 1, 3, 4, 5
3 → 1, 2, 5, 7, 8
4 → 2, 5
5 → 2, 3, 4, 6
6 → 5
7 → 3, 8
8 → 3, 7
  • 169. Comparison of the Two Representations  Adjacency Matrix o Space proportional to n² o Checking if (u, v) is an edge takes Θ(1) time o Identifying all edges takes Θ(n²) time o Suitable for densely connected graphs  Adjacency list o Space proportional to m + n o Checking if (u, v) is an edge takes O(degree(u)) time o Identifying all edges takes Θ(m + n) time o Needs linked lists for programming o Suitable for large-scale, sparsely connected graphs degree(u) = number of neighbors of u
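Both representations can be built in a few lines; a Python sketch for an undirected graph with vertices 0..n-1 (function names are mine):

```python
def to_adj_matrix(n, edges):
    """n x n matrix with A[u][v] = 1 iff edge (u, v) exists. Space O(n^2)."""
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1     # undirected: store both directions
    return A

def to_adj_list(n, edges):
    """Node-indexed array of lists. Space O(m + n)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)          # each edge gets two representations
    return adj
```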
  • 170.  Operations on a Graph  Traversing a graph  Searching a graph  Adding a node  Adding an edge  Deleting a node  Deleting an edge  Updating weight of an edge  Determining whether there is an edge between two nodes  Finding all neighbors of a node  Union  Intersection Graph Theory
  • 171.  Traversals of graphs o Most graph algorithms involve visiting each vertex in a systematic order o The two most common traversal algorithms  Breadth-first search  Depth-first search Graph Theory
  • 172. BFS and Shortest Path Problem  Given any source vertex s, BFS visits the other vertices at increasing distances away from s o Distance is the number of edges on a path from s  Example (graph with vertices 0–9): consider s = vertex 1  Nodes at distance 1 = 2, 3, 7, 9  Nodes at distance 2 = 8, 6, 5, 4  Nodes at distance 3 = 0
  • 173.
  • 174. Source vertex: 2.  Visited table (vertices 0–9): F F F F F F F F F F.  Q = { }.  Initialize the visited table (all False) and initialize Q to be empty
  • 175. Visited table: F F T F F F F F F F.  Q = { 2 }.  Flag that 2 has been visited and place source 2 on the queue
  • 176. Visited table: F T T F T F F F T F.  Q = {2} → { 8, 1, 4 }.  Dequeue 2, mark neighbors 1, 4 and 8 as visited, and place all unvisited neighbors of 2 on the queue
  • 177. Visited table: T T T F T F F F T T.  Q = { 8, 1, 4 } → { 1, 4, 0, 9 }.  Dequeue 8, mark neighbors 0 and 9 as visited, and place all unvisited neighbors of 8 on the queue. Notice that 2 is not placed on the queue again; it has been visited
  • 178. Visited table: T T T T T F F T T T.  Q = { 8, 1, 4 } → { 4, 0, 9, 3, 7 }.  Dequeue 1 and place all unvisited neighbors of 1 on the queue; only nodes 3 and 7 haven't been visited yet, so mark them as visited
  • 179. Visited table: T T T T T F F T T T.  Q = { 4, 0, 9, 3, 7 } → { 0, 9, 3, 7 }.  Dequeue 4; 4 has no unvisited neighbors
  • 180. Visited table: T T T T T F F T T T.  Q = { 0, 9, 3, 7 } → { 9, 3, 7 }.  Dequeue 0; 0 has no unvisited neighbors
  • 181. Visited table: T T T T T F F T T T.  Q = { 9, 3, 7 } → { 3, 7 }.  Dequeue 9; 9 has no unvisited neighbors
  • 182. Visited table: T T T T T T F T T T.  Q = { 3, 7 } → { 7, 5 }.  Dequeue 3 and place neighbor 5 on the queue; mark vertex 5 as visited
  • 183. Visited table: T T T T T T T T T T.  Q = { 7, 5 } → { 5, 6 }.  Dequeue 7 and place neighbor 6 on the queue; mark vertex 6 as visited
  • 184. Visited table: T T T T T T T T T T.  Q = { 5, 6 } → { 6 }.  Dequeue 5; 5 has no unvisited neighbors
  • 185. Visited table: T T T T T T T T T T.  Q = { 6 } → { }.  Dequeue 6; 6 has no unvisited neighbors
  • 186. Q = { }: STOP, Q is empty.  What did we discover? Looking at the visited table: there exists a path from source vertex 2 to all vertices in the graph
  • 187. Applications of BFS  What can we do with the BFS code we discussed? o Is there a path from source s to a vertex v? o Is an undirected graph connected?
  • 188. Applications of BFS  What can we do with the BFS code we discussed? o Is there a path from source s to a vertex v?  Check flag[v] o Is an undirected graph connected?  Scan array flag[ ]  If there exists flag[u] = false then …
  • 189. Example  Apply BFS algorithm on the following graph.  Source vertex is 1.
Each vertex is put on the queue exactly once, when it is first encountered, so there are 2|V| queue operations. Over the course of execution, the inner loop looks at each edge once in directed graphs or twice in undirected graphs, and therefore takes O(|E|) time. The overall running time of this algorithm is linear: O(|V| + |E|)
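A BFS sketch in Python matching this analysis (adjacency-list input with vertices 0..n-1; the example graph in the usage is mine, not the slides'):

```python
from collections import deque

def bfs(adj, s):
    """Breadth-first search from s; returns (visited table, visit order).
    Each vertex enters the queue at most once and each edge of an
    undirected graph is scanned twice, so the time is O(|V| + |E|)."""
    visited = [False] * len(adj)
    visited[s] = True             # flag s when it is placed on the queue
    Q = deque([s])
    order = []
    while Q:
        u = Q.popleft()           # dequeue
        order.append(u)
        for w in adj[u]:
            if not visited[w]:
                visited[w] = True # mark when first encountered
                Q.append(w)       # enqueue unvisited neighbors only
    return visited, order
```

Afterward, `visited[v]` answers "is there a path from s to v?", and scanning the table for a False entry tells us the graph is not connected.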
  • 192. The for loop takes time proportional to Σᵥ degree(v) = 2|E|
  • 193. Shortest Path Recording  BFS we saw only tells us whether a path exists from source s to other vertices v o It doesn’t tell us the path o We need to modify the algorithm to record the path
  • 194. Record where you came from
  • 195. Visited table: F T T F T F F F T F.  Q = {2} → { 8, 1, 4 }.  Dequeue 2 and place all unvisited neighbors of 2 on the queue; mark the neighbors as visited and record in Pred that we came from 2.  Pred (vertices 0–9): - 2 - - 2 - - - 2 -
  • 196. BFS Finished.  Visited table: T T T T T T T T T T.  Q = { }.  Pred (vertices 0–9): 8 2 - 1 2 3 7 1 2 8.  Pred can now be traced backward to report the path
  • 197. Pred (vertices 0–9): 8 2 - 1 2 3 7 1 2 8.  Report path from s to v: Path(2-0) ⇒ Path(2-6) ⇒ Path(2-1) ⇒
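A sketch of BFS with predecessor recording and the backward trace (function names are mine; the test uses a small path graph of my own):

```python
from collections import deque

def bfs_paths(adj, s):
    """BFS from s that records pred[v]: the vertex we came from."""
    pred = [None] * len(adj)
    visited = [False] * len(adj)
    visited[s] = True
    Q = deque([s])
    while Q:
        u = Q.popleft()
        for w in adj[u]:
            if not visited[w]:
                visited[w] = True
                pred[w] = u          # record where we came from
                Q.append(w)
    return pred

def path(pred, s, v):
    """Trace pred backward from v to s, then reverse to report the path."""
    p = [v]
    while p[-1] != s:
        if pred[p[-1]] is None:
            return None              # v is unreachable from s
        p.append(pred[p[-1]])
    return p[::-1]
```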
  • 198. BFS application: Connected Component  Find all nodes reachable from starting node s Connected component containing node 1 = { 1, 2, 3, 4, 5, 6, 7, 8 }
  • 199. BFS application: Connected Component  We can re-use the previous BFS to compute the connected components of a graph G
  • 200. BFS application: Connected Component  We can re-use the previous BFS to compute the connected components of a graph G A graph with 3 components BFS_connectedComponents ( G ) { // Component number i = 1; for every vertex v flag[v] = false; for every vertex v if ( flag[v] == false ) { print ( “Component ” + i++ ); BFS( v ); } }
  • 201. Depth-First Search  In a depth-first search, o start at a vertex, o visit it, o choose one adjacent vertex to visit; o then, choose a vertex adjacent to that vertex to visit, o and so on until you go no further; o then back up and see whether a new vertex can be found
  • 202. Example (figure: a graph with vertices 0–6; the legend distinguishes visited, being-visited, and unvisited vertices)
  • 203. Mark 0 as being visited.  Discovery or visit order: 0
  • 204. Choose an adjacent vertex that is not being visited.  Discovery or visit order: 0
  • 205. Choose an adjacent vertex that is not being visited.  Discovery or visit order: 0, 1
  • 206. Recursively choose an adjacent vertex that is not being visited.  Discovery or visit order: 0, 1, 3
  • 207. Recursively choose an adjacent vertex that is not being visited.  Discovery or visit order: 0, 1, 3
  • 208. Recursively choose an adjacent vertex that is not being visited.  Discovery or visit order: 0, 1, 3, 4
  • 209. There are no vertices adjacent to 4 that are not being visited.  Discovery or visit order: 0, 1, 3, 4
  • 210. Mark 4 as visited.  Finish order: 4.  Discovery or visit order: 0, 1, 3, 4
  • 211. Return from the recursion to 3; all adjacent nodes to 3 are being visited.  Finish order: 4
  • 212. Mark 3 as visited.  Finish order: 4, 3
  • 213. Return from the recursion to 1.  Finish order: 4, 3
  • 214. All vertices adjacent to 1 are being visited.  Finish order: 4, 3
  • 215. Mark 1 as visited.  Finish order: 4, 3, 1
  • 216. Return from the recursion to 0.  Finish order: 4, 3, 1
  • 217. 2 is adjacent to 1 and is not being visited.  Finish order: 4, 3, 1
  • 218. 2 is adjacent to 1 and is not being visited.  Finish order: 4, 3, 1.  Discovery or visit order: 0, 1, 3, 4, 2
  • 219. 5 is adjacent to 2 and is not being visited.  Finish order: 4, 3, 1.  Discovery or visit order: 0, 1, 3, 4, 2
  • 220. 5 is adjacent to 2 and is not being visited.  Finish order: 4, 3, 1.  Discovery or visit order: 0, 1, 3, 4, 2, 5
  • 221. 6 is adjacent to 5 and is not being visited.  Finish order: 4, 3, 1.  Discovery or visit order: 0, 1, 3, 4, 2, 5
  • 222. 6 is adjacent to 5 and is not being visited.  Finish order: 4, 3, 1.  Discovery or visit order: 0, 1, 3, 4, 2, 5, 6
  • 223. There are no vertices adjacent to 6 not being visited; mark 6 as visited.  Finish order: 4, 3, 1.  Discovery or visit order: 0, 1, 3, 4, 2, 5, 6
  • 224. There are no vertices adjacent to 6 not being visited; mark 6 as visited.  Finish order: 4, 3, 1, 6.  Discovery or visit order: 0, 1, 3, 4, 2, 5, 6
  • 225. Return from the recursion to 5.  Finish order: 4, 3, 1, 6
  • 226. Mark 5 as visited.  Finish order: 4, 3, 1, 6
  • 227. Mark 5 as visited.  Finish order: 4, 3, 1, 6, 5
  • 228. Return from the recursion to 2.  Finish order: 4, 3, 1, 6, 5
  • 229. Mark 2 as visited.  Finish order: 4, 3, 1, 6, 5
  • 230. Mark 2 as visited.  Finish order: 4, 3, 1, 6, 5, 2
  • 231. Return from the recursion to 0.  Finish order: 4, 3, 1, 6, 5, 2
  • 232. There are no nodes adjacent to 0 not being visited.  Finish order: 4, 3, 1, 6, 5, 2
  • 233. Mark 0 as visited.  Finish order: 4, 3, 1, 6, 5, 2, 0.  Discovery or visit order: 0, 1, 3, 4, 2, 5, 6
  • 234. DFS(G, v) v is the vertex where the search starts Stack S start with an empty stack For each vertex u set visited[u] = false PUSH S, v while (S is not empty) do u = POP S if (not visited[u]) then visited[u] = true For each unvisited neighbor w of u PUSH S, w end if end while END DFS() Depth-First Search
  • 235. DFS(G, v) v is the vertex where the search starts Stack S start with an empty stack For each vertex u set visited[u] = false PUSH S, v while (S is not empty) do u = POP S if (not visited[u]) then visited[u] = true For each unvisited neighbor w of u PUSH S, w end if end while END DFS() Depth-First Search 1 2 3 4 5 6
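The same stack-based pseudocode in runnable Python (the visit order depends on the order in which neighbors are pushed, so it can differ from the recursive trace above):

```python
def dfs(adj, v):
    """Iterative DFS following the slide's pseudocode; returns visit order."""
    visited = [False] * len(adj)
    order = []
    S = [v]                        # stack, starting with the source v
    while S:
        u = S.pop()                # POP S
        if not visited[u]:
            visited[u] = True
            order.append(u)
            for w in adj[u]:       # PUSH each unvisited neighbor
                if not visited[w]:
                    S.append(w)
    return order
```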
  • 236. Connected Component  Can we re-use the previous DFS to compute the connected components of a graph G?
  • 237. Cycle detection  Given a graph G = (V, E) cycle detection problem is to determine if there is a cycle in the graph
  • 238. Cycle detection  Can we use DFS to detect a cycle in an undirected graph?
  • 239.  During DFS, for any current vertex x, if there is an adjacent vertex y that is already visited and y is not the direct parent of x, then there is a cycle in the graph 1 2 3 5 4
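A sketch of this parent-tracking rule (it assumes a simple undirected graph, so no parallel edges; function names are mine):

```python
def has_cycle(adj):
    """DFS-based cycle detection in an undirected graph: a visited
    neighbor that is not the direct parent closes a cycle."""
    n = len(adj)
    visited = [False] * n

    def dfs(x, parent):
        visited[x] = True
        for y in adj[x]:
            if not visited[y]:
                if dfs(y, x):
                    return True
            elif y != parent:      # back edge to a non-parent vertex: cycle
                return True
        return False

    # start a DFS in every component
    return any(not visited[v] and dfs(v, -1) for v in range(n))
```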
  • 240. A coloring of a graph is an assignment of a color to each vertex such that no neighboring vertices have the same color Graph Coloring
  • 245. A bipartite graph is an undirected graph G = (V, E) in which V can be partitioned into 2 sets V1 and V2 such that (u, v) ∈ E implies either u ∈ V1 and v ∈ V2 OR v ∈ V1 and u ∈ V2 u1 u2 u3 u4 v1 v2 v3 V1 V2 Graph Theory
  • 246. Bipartite Graphs  Applications o Stable marriage: men = red, women = blue o Scheduling: machines = red, jobs = blue a bipartite graph
  • 247. Bipartite Graph v1 v2 v3 v6 v5 v4 v7 a bipartite graph G
  • 248. Bipartite Graph v1 v2 v3 v6 v5 v4 v7 v2 v4 v5 v7 v1 v3 v6 a bipartite graph G another drawing of G
  • 249. How can we know if a given graph is bipartite?
  • 250. A graph is bipartite if and only if it can be colored using two colors such that no edge joins two vertices of the same color; all vertices in the same set then share a color. How can we know if a given graph is bipartite? Bipartite Graph
  • 251. 1. Assign RED color to the source vertex o put into set U 2. Color all the neighbors with BLUE color o put into set V 3. Color all neighbor’s neighbors with RED color o put into set U 4. This way, assign a color to all vertices such that it satisfies all the constraints of the m way coloring problem where m = 2. 5. While assigning colors, if we find a neighbor which is colored with the same color as the current vertex, then the graph cannot be colored with 2 colors, i.e., the graph is not Bipartite Bipartite Graph
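The two-coloring test sketched in Python with BFS (assumes vertices 0..n-1; it handles disconnected graphs by starting from every uncolored vertex; the function name is mine):

```python
from collections import deque

def is_bipartite(adj):
    """Two-color the graph with BFS: RED (0) for the source, BLUE (1)
    for its neighbors, RED for the neighbors' neighbors, and so on.
    A neighbor already holding the current vertex's color means the
    graph is not bipartite."""
    color = [None] * len(adj)
    for s in range(len(adj)):          # restart in each component
        if color[s] is not None:
            continue
        color[s] = 0                   # RED for the source vertex
        Q = deque([s])
        while Q:
            u = Q.popleft()
            for w in adj[u]:
                if color[w] is None:
                    color[w] = 1 - color[u]   # opposite color
                    Q.append(w)
                elif color[w] == color[u]:
                    return False
    return True
```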
  • 252. Strong Connectivity  Nodes u and v are mutually reachable if there is a path from u to v and also a path from v to u.  A graph is strongly connected if every pair of nodes is mutually reachable.  Lemma. Let s be any node. G is strongly connected iff every node is reachable from s, and s is reachable from every node.  Proof (⇒) Follows from the definition.  Proof (⇐) Path from u to v: concatenate the u-s path with the s-v path. Path from v to u: concatenate the v-s path with the s-u path. s v u
  • 253. Strong Connectivity  Test if a graph is strongly connected? strongly connected not strongly connected
  • 254. Algorithm  Perform DFS or BFS starting from every vertex in the graph  If every search visits every vertex in the graph, then the graph is strongly connected strongly connected not strongly connected
  • 255. Algorithm  Perform DFS or BFS starting from every vertex in the graph  If every search visits every vertex in the graph, then the graph is strongly connected  O(|V|(|V|+|E|)) strongly connected not strongly connected
  • 256. Algorithm  Can determine if G is strongly connected in O(m + n) time o Pick any node s. o Run BFS from s in G o Run BFS from s in Grev o Return true iff all nodes reached in both BFS executions reverse orientation of every edge in G strongly connected not strongly connected On reversing all edges of the graph, the type of graph wouldn’t change. Strongly connected graph will remain strongly connected.
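A sketch of this O(m + n) test: BFS from node 0 in G and again in G with every edge reversed (function names are mine; adjacency lists over vertices 0..n-1):

```python
from collections import deque

def reaches_all(adj, s):
    """True iff every node is reachable from s (plain BFS)."""
    seen = [False] * len(adj)
    seen[s] = True
    Q = deque([s])
    while Q:
        u = Q.popleft()
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                Q.append(w)
    return all(seen)

def strongly_connected(adj):
    """O(m + n): run BFS from node 0 in G and in G reversed."""
    rev = [[] for _ in adj]
    for u, nbrs in enumerate(adj):
        for v in nbrs:
            rev[v].append(u)           # reverse orientation of every edge
    return reaches_all(adj, 0) and reaches_all(rev, 0)
```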
  • 257. Directed Acyclic Graphs  A DAG is a directed graph that contains no directed cycles
  • 258.  Directed acyclic graphs can be used to encode precedence relations or dependencies in a natural way  Precedence constraints o Edge (vi, vj) means task vi must occur before vj  Applications o Course prerequisite graph course vi must be taken before vj o Compilation module vi must be compiled before vj o Pipeline of computing jobs output of job vi needed to determine input of job vj Directed Acyclic Graphs
  • 260. Directed Acyclic Graphs  A topological order of a directed graph G = (V, E) is an ordering of its nodes as v1, v2, …, vn so that for every edge (vi, vj) we have i < j DAG Topological ordering v2 v3 v6 v5 v4 v7 v1 v1 v2 v3 v4 v5 v6 v7 A linear ordering of its vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering
  • 261.
  • 262. The graph has many valid topological sorts:
5, 7, 3, 11, 8, 2, 9, 10
3, 5, 7, 8, 11, 2, 9, 10
5, 7, 3, 8, 11, 10, 9, 2
7, 5, 11, 3, 10, 8, 9, 2
5, 7, 11, 2, 3, 8, 9, 10
3, 7, 8, 5, 11, 10, 2, 9
  • 263. Topological sort or order If the graph has a cycle, all courses in the cycle become impossible to take
  • 264. Directed Acyclic Graphs v1 vi vj vn the supposed topological order: v1, …, vn the directed cycle C
  • 265. Directed Acyclic Graphs  Every DAG has a topological ordering, so how do we find one efficiently?
  • 267. Algorithm to compute a topological ordering of G
  • 269. v2 Topological order: v1 v2 v3 v6 v5 v4 v7
  • 270. v3 Topological order: v1, v2 v3 v6 v5 v4 v7
  • 271. v4 Topological order: v1, v2, v3 v6 v5 v4 v7
  • 272. v5 Topological order: v1, v2, v3, v4 v6 v5 v7
  • 273. v6 Topological order: v1, v2, v3, v4, v5 v6 v7
  • 274. v7 Topological order: v1, v2, v3, v4, v5, v6 v7
  • 275. Topological order: v1, v2, v3, v4, v5, v6, v7. v2 v3 v6 v5 v4 v7 v1 v1 v2 v3 v4 v5 v6 v7
  • 276. Running Time  Identifying a node v with no incoming edges, and deleting it from G, can be done in O(n) time  Since the algorithm runs for n iterations, the total running time is O(n²)  Can we achieve a running time of O(m + n) using the same high level algorithm?
  • 277. Topological Sort Algorithm 1) Store each vertex’s indegree in an array 2) Initialize a queue with all indegree zero vertices 3) While there are vertices remaining in the queue: o Dequeue and output a vertex o Reduce indegree of all vertices adjacent to it by 1 o Enqueue any of these vertices whose indegree became zero
  • 278. Running Time  Initialize indegree array O(|E|)  Initialize Queue with indegree 0 vertices O(|V|)  Dequeue and output vertex O(|V|) o |V| vertices, each takes only O(1) to dequeue and output  Reduce indegree of all vertices adjacent to a vertex and Enqueue any indegree 0 vertices O(|E|)  Total time = O(|V| + |E|)
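The steps above (Kahn's algorithm) can be sketched in Python; returning None on a cycle is my addition, mirroring the earlier observation that a cycle makes the ordering impossible:

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm in O(|V| + |E|).
    Returns a topological order of vertices 0..n-1, or None on a cycle."""
    adj = [[] for _ in range(n)]
    indegree = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1                 # 1) store each vertex's indegree

    # 2) initialize a queue with all indegree-zero vertices
    Q = deque(v for v in range(n) if indegree[v] == 0)
    order = []
    while Q:                             # 3) while vertices remain in the queue
        u = Q.popleft()                  #    dequeue and output a vertex
        order.append(u)
        for w in adj[u]:
            indegree[w] -= 1             #    reduce indegree of its neighbors
            if indegree[w] == 0:
                Q.append(w)              #    enqueue vertices that hit zero
    # a cycle leaves some vertices with positive indegree forever
    return order if len(order) == n else None
```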
  • 279. Running Time  Topological sort using DFS
  • 280. Running Time  Topological sort using DFS o Order nodes in the reverse of the order in which DFS finishes visiting them
  • 281.
  • 282. Algorithm Design and Analysis Sayed Chhattan Shah Associate Professor of Computer Science Mobile Grid and Cloud Computing Lab Department of Information Communication Engineering Hankuk University of Foreign Studies Korea www.mgclab.com
  • 283. Optimization problems  An optimization problem is the problem of finding the best solution from all feasible solutions o Greedy Method o Dynamic Programming
  • 285. Greedy Algorithms  A greedy algorithm always makes the choice that looks best at the moment o It makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution  Greedy algorithms do not always yield optimal solutions
  • 287. Interval Scheduling  Suppose there are n meeting requests, and meeting i takes time (s(i), f(i)) o Meeting i starts at s(i) and ends at f(i)  The constraint is that no two meetings can be scheduled together if their intervals overlap  The goal is to schedule as many meetings as possible
  • 288. Interval Scheduling  Suppose we have a set of n proposed activities that wish to use a resource, such as a lecture hall, which can serve only one activity at a time.  The goal is to schedule as many activities as possible
  • 289.  We have a set of jobs (also called tasks or requests)  Job j starts at sj and finishes at fj  Two jobs are compatible if they don't overlap  The goal is to find a maximum subset of mutually compatible jobs (figure: jobs a–h on a timeline from 0 to 11) Interval Scheduling
  • 290. Interval Scheduling  Sample Input and Output Output [Task 2, Task 3, Task 4] is an optimal solution because these tasks have no conflicts with each other and any set with 4 tasks will have at least two intervals in conflict. Input http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
  • 291. Interval Scheduling  Example 1 Assume that the input intervals do not overlap each other Output Set is the same as the input solution set since neither Tasks conflict with each other Input http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
  • 292. R: set of requests Initialize S to be the empty set While R is not empty Choose i in R Add i to S Return S
  • 293. Interval Scheduling  Assume that the maximum number of conflicts that any Task can have is 1 Output In this case it doesn’t matter which Task we choose. We can have a final set of [Task 1, Task 2] or [Task 2, Task 3] and both solutions have a total of two tasks in them, so both of them are considered optimal. Input http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
  • 294. R: set of requests Initialize S to be the empty set While R is not empty Choose i in R Add i to S Remove all requests that conflict with i from R Return S
  • 295. Interval Scheduling  What if a Task can have an arbitrary number of conflicts? In this example, observe that blindly choosing a Task to add to the solution set will not work. If we choose Task 1, we can choose Task 2 or Task 5 to add to the solution set, since those two are the only Tasks which do not conflict with Task 1. If we instead add Task 4 to the solution set, we can also add Task 3 and Task 5, which yields a better solution. Example 3 http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
  • 296. Interval Scheduling  This problem can be solved using the greedy approach of choosing the next element of the Task list based on some property the Tasks have and iteratively building up a solution. http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html R: set of requests Initialize S to be the empty set While R is not empty Choose i in R where v(i) is minimized Add i to S Remove all requests that conflict with i from R Return S Generic Algorithm
  • 297. Interval Scheduling  Shortest Duration o A natural solution would be to select Tasks based on their duration. http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html R: set of requests Initialize S to be the empty set While R is not empty Choose i in R where f(i) - s(i) is minimized Add i to S Remove all requests that conflict with i from R Return S
  • 298. Interval Scheduling  If we run the previous algorithm with Example 3 what is the result? 1. Choose Task 2 2. Remove Task 4 and Task 5 since they conflict with Task 2 3. Choose Task 3 4. Remove Task 1 since it conflicts with Task 3 5. Exit algorithm Input http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
  • 299. Interval Scheduling  If we run the previous algorithm with Example 3 what is the result? An optimal solution for this example can be found by observation. [Task 3, Task 4, Task 5] has the largest number of intervals that do not conflict with each other. [Task 2, Task 3] is not an optimal solution, so this is not the correct way to greedily solve this problem. Input http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
  • 300. Interval Scheduling  Earliest Start Time o Since shortest duration does not work, we need to find a new parameter to use for choosing the Tasks. http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html R: set of requests Initialize S to be the empty set While R is not empty Choose i in R where s(i) is minimized Add i to S Remove all requests that conflict with i from R Return S
  • 301. Interval Scheduling  If we run the previous algorithm with Example 3 what is the result? 1. Choose Task 3 2. Remove Task 1 since it is in conflict 3. Choose Task 4 4. Remove Task 2 5. Choose Task 5 Input http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html If the algorithm broke ties correctly (between Task 1 and Task 3 in this example), an optimal solution would emerge.
  • 302. Interval Scheduling  If we run the previous algorithm with Example 3 what is the result? Counter example http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html
  • 303. Interval Scheduling  If we run the previous algorithm with Example 3 what is the result? Counter example http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html The algorithm will first select Task 6 and remove all others that are in conflict with it which happens to be everything else in the input set.
  • 304. Interval Scheduling  Minimum Number of Conflicts o Previous example could work if we updated the algorithm to select Tasks that have the minimum number of conflicts across the entire input Task set http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html If i and j are two distinct intervals, there is a conflict if s(j)<f(i)<f(j) or s(i)<f(j)<f(i)
  • 305. http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html R: set of requests Initialize S to be the empty set While R is not empty Choose i in R with the minimum number of conflicts Add i to S Remove all requests that conflict with i from R Return S
  • 306. Interval Scheduling  Minimum Number of Conflicts http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html Counter example The only optimal solution [Task 3, Task 4, Task 5, Task 8, Task 9, Task 10, Task 11]
  • 307. Interval Scheduling  Minimum Number of Conflicts http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html A simple run of the algorithm draft on this example: 1. Choose Task 13 since it only has 2 conflicts 2. Remove Task 9 and Task 10 3. Choose Task 8 4. Remove Task 12, Task 15, Task 17 5. Choose Task 11 ... Since the algorithm already did not place Task 9 and Task 10 in the solution set, we know this will not be an optimal solution.
  • 308. Interval Scheduling  Earliest End or Finish Time http://www-student.cse.buffalo.edu/~atri/cse331/support/examples/interval_scheduling/index.html R: set of requests Initialize S to be the empty set While R is not empty Choose i in R where f(i) is the smallest Add i to S Remove all requests that conflict with i from R Return S Choose an activity that leaves the resource available for as many other activities as possible
  • 309. Interval Scheduling  Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn  A ← ∅  for j = 1 to n { if (job j compatible with A) A ← A ∪ {j} }  return A  // A is the set of jobs selected
  • 310. Interval Scheduling: Greedy Algorithm  Sorting n jobs takes O(n log n) time  O(n) time to go through the sorted list of n jobs  Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn  A ← ∅  for j = 1 to n { if (job j compatible with A) A ← A ∪ {j} }  return A  // A is the set of jobs selected
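The earliest-finish-time greedy algorithm above can be sketched in Python (a minimal illustration; the function name and the sample jobs are our own, not from the slides):

```python
def interval_schedule(jobs):
    """Return a maximum set of mutually compatible (start, finish) jobs."""
    selected = []
    last_finish = float("-inf")
    # Sort by finish time: O(n log n); the scan below is O(n)
    for start, finish in sorted(jobs, key=lambda j: j[1]):
        if start >= last_finish:      # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

jobs = [(0, 6), (1, 4), (3, 5), (3, 8), (4, 7), (5, 9), (6, 10), (8, 11)]
print(interval_schedule(jobs))        # [(1, 4), (4, 7), (8, 11)]
```

Because the jobs are sorted by finish time, checking compatibility only needs the finish time of the last job selected.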
  • 311. Correctness  The algorithm produces a solution A  Let O be any optimal solution  A and O need not be identical o There can be multiple optimal solutions of the same size  Show that |A| = |O| https://www.cmi.ac.in/~madhavan
  • 312. Correctness  Let A = i1, i2, ... ik o Jobs in A are sorted: f(i1) ≤ s(i2), f(i2) ≤ s(i3), …  Let O = j1, j2, ... jm o Jobs in O are sorted: f(j1) ≤ s(j2), f(j2) ≤ s(j3), …  The goal is to show that k = m https://www.cmi.ac.in/~madhavan
  • 313. Correctness  Claim For each r ≤ k, f(ir) ≤ f(jr) o The greedy solution stays ahead of O  Proof by induction on r o r = 1: greedy algorithm chooses job i1 with earliest overall finish time https://www.cmi.ac.in/~madhavan
  • 314. Correctness o r > 1: Assume, by induction, that f(ir-1) ≤ f(jr-1) o Then, it must be the case that f(ir) ≤ f(jr) o If not, the greedy algorithm would have chosen jr rather than ir https://www.cmi.ac.in/~madhavan
  • 315. Correctness  Suppose m > k  We know that f(ik) ≤ f(jk)  Consider job jk+1 in O o The greedy algorithm terminates only when R is empty  (R is the set of requests or jobs) o Since f(ik) ≤ f(jk) ≤ s(jk+1), job jk+1 is compatible with A = i1, i2, ... ik o So after selecting ik, R still contains jk+1 and the algorithm would not have stopped. Contradiction https://www.cmi.ac.in/~madhavan
  • 316. Interval Partitioning Problem  Assume we have many identical resources available and we wish to schedule all the requests using as few resources as possible
  • 317. Interval Partitioning Problem  Interval partitioning o Lecture j starts at sj and finishes at fj o The goal is to find the minimum number of classrooms needed to schedule all lectures so that no two occur at the same time in the same room
  • 318. Interval Partitioning  Example: ten lectures a through j spread over a 9:00 to 4:30 time-line
  • 319. Interval Partitioning  Example This schedule uses 4 classrooms to schedule the 10 lectures
  • 320. Interval Partitioning  Example This schedule uses only 3 classrooms
  • 321. Interval Partitioning  Example This schedule uses only 3 classrooms Is there any hope of using just two resources?
  • 322. Interval Partitioning  Example This schedule uses only 3 classrooms Is there any hope of using just two resources? NO We need at least three resources. Intervals a, b, and c all pass over a common point on the time-line, and hence they all need to be scheduled on different resources.
  • 323. Interval Partitioning  Suppose we define the depth of a set of intervals to be the maximum number that pass over any single point on the time-line  Depth of the schedule below = 3 (a, b, c all contain 9:30)
  • 324. Interval Partitioning  Suppose we define the depth of a set of intervals to be the maximum number that pass over any single point on the time-line  Depth of the schedule below = 3 (a, b, c all contain 9:30)  Number of classrooms needed ≥ depth
  • 325. Interval Partitioning  Consider lectures in increasing order of start time: assign lecture to any compatible classroom.  Sort intervals by starting time so that s1 ≤ s2 ≤ ... ≤ sn. d ← 0 for j = 1 to n { if (lecture j is compatible with some classroom k) schedule lecture j in classroom k else allocate a new classroom d + 1, schedule lecture j in classroom d + 1, d ← d + 1 }  // d = number of allocated classrooms
  • 326. Interval Partitioning  Consider lectures in increasing order of start time: assign lecture to any compatible classroom.  Implementation O(n log n): sort by start time, then keep the classrooms' finish times in a priority queue  Sort intervals by starting time so that s1 ≤ s2 ≤ ... ≤ sn. d ← 0 for j = 1 to n { if (lecture j is compatible with some classroom k) schedule lecture j in classroom k else allocate a new classroom d + 1, schedule lecture j in classroom d + 1, d ← d + 1 }  // d = number of allocated classrooms
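The classroom-assignment loop above can be sketched in Python using a min-heap of classroom finish times (a minimal illustration; the function name and sample lectures are our own):

```python
import heapq

def min_classrooms(lectures):
    """Assign (start, finish) lectures to the fewest classrooms.

    Returns the number of classrooms used, which equals the depth
    of the interval set."""
    finish_times = []                       # min-heap: one finish time per classroom
    for start, finish in sorted(lectures):  # increasing start time
        if finish_times and finish_times[0] <= start:
            heapq.heapreplace(finish_times, finish)  # reuse the freed classroom
        else:
            heapq.heappush(finish_times, finish)     # allocate a new classroom
    return len(finish_times)

# Three lectures all overlap at time 2, so three rooms are forced
print(min_classrooms([(0, 3), (1, 4), (2, 5), (3, 6)]))  # 3
```

Each lecture costs one O(log n) heap operation, so the total is O(n log n) after sorting.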
  • 327. Scheduling to Minimize Lateness  We have a single resource and a set of n requests to use the resource  Each request i takes time ti  Once a request starts to be served it continues using the resource until its completion  Each request i has a deadline di  The goal is to schedule all jobs to minimize maximum lateness  Example jobs (tj, dj): job 1 (3, 6), job 2 (2, 8), job 3 (1, 9), job 4 (4, 9), job 5 (3, 14), job 6 (2, 15)
  • 328. Scheduling to Minimize Lateness  Minimizing lateness problem o Single resource that processes one job at a time o Set of jobs o Each job j requires tj units of processing time and is due at time dj o If job j starts at time sj it finishes at time fj = sj + tj o A job j is late if it misses its deadline, that is, if fj > dj o Lateness: ℓj = max { 0, fj - dj } o The goal is to schedule all jobs to minimize the maximum lateness L = max ℓj This problem arises naturally when scheduling jobs that need to use a single machine
  • 329. Scheduling the jobs in the order 1, 2, 3 incurs a maximum lateness of 0
  • 330. Scheduling to Minimize Lateness  Example  Jobs scheduled in the order 3, 2, 6, 1, 5, 4  Lateness check: job 1 finishes at 8 against deadline 6 (lateness = 2), and job 4 finishes at 15 against deadline 9, so max lateness = 6
  • 331. Minimizing Lateness  Greedy template Consider jobs in some order o [Shortest processing time first] Consider jobs in ascending order of processing time tj o [Smallest slack] Consider jobs in ascending order of slack dj – tj o [Earliest deadline first] Consider jobs in ascending order of deadline dj
  • 332. Minimizing Lateness  Consider jobs in some order o [Shortest processing time first] Consider jobs in ascending order of processing time tj  Counterexample: job 1 (t1 = 1, d1 = 100), job 2 (t2 = 10, d2 = 10). Shortest-first runs job 1 first, so job 2 finishes at 11 and is late, while running job 2 first makes both jobs finish on time.
  • 333. Minimizing Lateness  Consider jobs in some order o [Smallest slack] Consider jobs in ascending order of slack dj - tj  The ones that need to be started with minimal delay  Counterexample: job 1 (t1 = 1, d1 = 2, slack 1), job 2 (t2 = 10, d2 = 10, slack 0)
  • 334. Minimizing Lateness  Consider jobs in some order o [Smallest slack] Consider jobs in ascending order of slack dj - tj  Counterexample: job 1 (t1 = 1, d1 = 2, slack 1), job 2 (t2 = 10, d2 = 10, slack 0)  Sorting by increasing slack would place the second job first in the schedule, and the first job would then incur a lateness of 9
  • 335. Minimizing Lateness  Greedy algorithm: Earliest deadline first  Scheduling the example jobs in deadline order 1, 2, 3, 4, 5, 6 gives max lateness = 1
  • 336. Minimizing Lateness  Greedy algorithm: Earliest deadline first  Sort n jobs by deadline so that d1 ≤ d2 ≤ … ≤ dn  t ← 0  for j = 1 to n  Assign job j to interval [t, t + tj]  sj ← t, fj ← t + tj  t ← t + tj  output intervals [sj, fj]
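The earliest-deadline-first schedule above can be sketched in Python (an illustrative implementation; the function name is our own, and the sample jobs are the six from the example):

```python
def edf_schedule(jobs):
    """jobs: list of (t, d) = (processing time, deadline).

    Schedule in order of earliest deadline and return the max lateness."""
    t = 0
    max_lateness = 0
    for proc, deadline in sorted(jobs, key=lambda j: j[1]):
        t += proc                                    # job occupies [t - proc, t]
        max_lateness = max(max_lateness, t - deadline)
    return max_lateness

# The six (tj, dj) jobs from the example above
jobs = [(3, 6), (2, 8), (1, 9), (4, 9), (3, 14), (2, 15)]
print(edf_schedule(jobs))  # 1
```

Only job 4 finishes after its deadline (at time 10 against deadline 9), matching the max lateness of 1 shown on the slide.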
  • 337. Shortest Paths in a Graph shortest path from Princeton CS department to Einstein's house
  • 338. Shortest Path Problem  Shortest path network o Directed graph G = (V, E) o Source s, destination t o Length ℓe = length of edge e  Shortest path problem: find shortest directed path from s to t  cost of path = sum of edge costs in path s 3 t 2 6 7 4 5 23 18 2 9 14 15 5 30 20 44 16 11 6 19 6 Cost of path s-2-3-5-t = 9 + 23 + 2 + 16 = 50
  • 339.  Given a graph and a source vertex in the graph, find shortest paths from the source to all vertices in the given graph o Transport finished product from a factory to all retail outlets o Courier company delivers items from a distribution centre to addressees Dijkstra's Shortest Path Algorithm
  • 340.  For each destination o Enumerate all the paths from source to that destination o Calculate the cost of all enumerated paths o Select the path with the min cost Dijkstra’s Shortest Path Algorithm
  • 341.  For each destination o Enumerate all the paths from source to that destination o Calculate the cost of all enumerated paths o Select the path with the min cost  Takes > (n-1)! steps for a complete graph with n nodes Dijkstra’s Shortest Path Algorithm
  • 342. 1) Create an empty set S 2) Assign a distance value to all vertices in the input graph o Initialize all distance values as INFINITE. o Assign distance value as 0 for the source vertex so that it is picked first 3) While S doesn’t include all vertices a) Pick a vertex u which is not there in S and has minimum distance value b) Add vertex u to S c) Update distance value of all adjacent vertices of u o To update the distance values, iterate through all adjacent vertices o For every adjacent vertex v, if sum of distance value of u and weight of edge u-v, is less than the distance value of v, then update the distance value of v Dijkstra’s Shortest Path Algorithm
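The steps above can be sketched in Python. One common substitution is made here: a min-heap stands in for the linear scan in step 3a (names and the small example graph are illustrative, not from the slides):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]} adjacency list with non-negative weights.

    Returns the shortest distance from source to every vertex."""
    dist = {u: float("inf") for u in graph}   # step 2: all distances INFINITE
    dist[source] = 0                          # source distance 0, picked first
    heap = [(0, source)]
    done = set()                              # the set S of finalized vertices
    while heap:
        d, u = heapq.heappop(heap)            # vertex not in S with min distance
        if u in done:
            continue
        done.add(u)
        for v, w in graph[u]:                 # step 3c: update adjacent vertices
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {'s': [('a', 9), ('b', 14)], 'a': [('b', 2), ('t', 24)],
         'b': [('t', 11)], 't': []}
print(dijkstra(graph, 's'))  # {'s': 0, 'a': 9, 'b': 11, 't': 22}
```

With the heap, each edge relaxation costs O(log n), giving O((n + m) log n) rather than the O(n^2) of the array-based scan discussed later.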
  • 343. Dijkstra's Shortest Path Algorithm  Find shortest path from s to t s 3 t 2 6 7 4 5 24 18 2 9 14 15 5 30 20 44 16 11 6 19 6
  • 350. Dijkstra trace: S = { s, 2 }, Q = { 3, 4, 5, 6, 7, t } (delmin)
  • 352. Dijkstra trace: S = { s, 2, 6 }, Q = { 3, 4, 5, 7, t } (delmin)
  • 353. Dijkstra trace: S = { s, 2, 6, 7 }, Q = { 3, 4, 5, t }
  • 354. Dijkstra trace: S = { s, 2, 6, 7 }, Q = { 3, 4, 5, t } (delmin)
  • 355. Dijkstra trace: S = { s, 2, 3, 6, 7 }, Q = { 4, 5, t }
  • 356. Dijkstra trace: S = { s, 2, 3, 6, 7 }, Q = { 4, 5, t } (delmin)
  • 357. Dijkstra trace: S = { s, 2, 3, 5, 6, 7 }, Q = { 4, t }
  • 358. Dijkstra trace: S = { s, 2, 3, 5, 6, 7 }, Q = { 4, t } (delmin)
  • 359. Dijkstra trace: S = { s, 2, 3, 4, 5, 6, 7 }, Q = { t }
  • 360. Dijkstra trace: S = { s, 2, 3, 4, 5, 6, 7 }, Q = { t } (delmin)
  • 361. Dijkstra trace: S = { s, 2, 3, 4, 5, 6, 7, t }, Q = { }
  • 364. Correctness  Each new shortest path we discover extends an earlier one  By induction, assume we have identified shortest paths to all vertices already in the set  The next vertex at minimum distance is v, reached via x  A path that leaves the set through some other vertex y and continues through w cannot reach v by a shorter route, since edge weights are non-negative
  • 365. Complexity  Adjacency matrix o Outer loop runs n times o O(n) scan to find the vertex with minimum distance o O(n) scan of the adjacency matrix row to find all neighbors o Overall O(n^2)
  • 366. Complexity  Adjacency list o Scan neighbors o O(m) across all iterations o However, finding the vertex with minimum distance still takes O(n) in each iteration o Overall O(n^2)
  • 367. Given a graph G=(V, E) a subgraph of G that connects all of the vertices and is a tree is called a spanning tree Spanning Trees
  • 372. Spanning tree with the lowest total edge weights Minimum Spanning Trees
  • 373. Given a connected weighted undirected graph G, design an algorithm that outputs a minimum spanning tree of G Minimum Spanning Tree Problem
  • 376. There are n^(n-2) spanning trees in a complete graph on n labeled vertices (Cayley's formula)
  • 377. Applications  MST is fundamental problem with diverse applications o Network design  Telephone  Electrical  Hydraulic  TV cable  Computer  Road
  • 378. Greedy Algorithms  Kruskal's algorithm  Prim's algorithm  Reverse-Delete algorithm  All three algorithms produce an MST
  • 379. Initially, trees of the forest are the vertices (no edges) In each step add the cheapest edge that does not create a cycle Demo Kruskal's Algorithm
  • 380. 1. Set A = ∅ and F = E, the set of all edges 2. Choose an edge e in F of minimum weight, and check whether adding e to A creates a cycle IF Yes, remove e from F IF No, move e from F to A 3. IF F = ∅, stop and output the minimum spanning tree (V, A). Otherwise go to step 2
  • 383. 1. Set A = ∅ and F = E, the set of all edges 2. Choose an edge e in F of minimum weight, and check whether adding e to A creates a cycle IF Yes, remove e from F IF No, move e from F to A 3. IF F = ∅, stop and output the minimum spanning tree (V, A). Otherwise go to step 2
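The cycle check in step 2 is usually implemented with a union-find (disjoint-set) structure; a minimal Python sketch of Kruskal's algorithm along those lines (function names and the sample graph are our own):

```python
def kruskal(n, edges):
    """n vertices 0..n-1; edges: list of (weight, u, v).

    Returns the edges of a minimum spanning tree."""
    parent = list(range(n))              # union-find forest, one tree per vertex

    def find(x):                         # find root, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):        # cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                     # same root would mean a cycle
            parent[ru] = rv              # merge the two components
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # [(0, 1, 1), (1, 3, 2), (1, 2, 3)]
```

Two endpoints with the same root already lie in the same tree of the forest, so adding the edge would create a cycle; otherwise the edge safely merges two trees.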
  • 384. Prim's Algorithm  Start from a chosen vertex  Keep extending the tree with the smallest-cost edge that connects a tree vertex to a new vertex
  • 385. Prim's Algorithm ReachSet = {0} UnReachSet = {1, 2, ..., N-1} SpanningTree = {} while ( UnReachSet ≠ empty ) { Find edge e = (x, y) such that: 1. x ∈ ReachSet 2. y ∈ UnReachSet 3. e has smallest cost SpanningTree = SpanningTree ∪ {e} ReachSet = ReachSet ∪ {y} UnReachSet = UnReachSet - {y} } http://www.mathcs.emory.edu/~cheung
  • 386. Find a minimum cost spanning tree for this graph http://www.mathcs.emory.edu/~cheung
  • 387. We pick the starting node: node 0 and mark it as reached http://www.mathcs.emory.edu/~cheung
  • 388. Find an edge with minimum cost that connects a reached node to an unreached node http://www.mathcs.emory.edu/~cheung
  • 389. Add the edge (0,3) to the MST and mark the node 3 as reached http://www.mathcs.emory.edu/~cheung
  • 390. Find an edge with minimum cost that connects a reached node to an unreached node http://www.mathcs.emory.edu/~cheung
  • 391. Add the edge (3,4) to the MST and mark the node 4 as reached http://www.mathcs.emory.edu/~cheung
  • 392. Find an edge with minimum cost that connects a reached node to an unreached node http://www.mathcs.emory.edu/~cheung
  • 393. Add the edge (0,1) to the MST and mark the node 1 as reached http://www.mathcs.emory.edu/~cheung
  • 394. Next edge added is (1,7) http://www.mathcs.emory.edu/~cheung
  • 395. Next edge added is (7,2) http://www.mathcs.emory.edu/~cheung
  • 396. Next edge added is (2,5) http://www.mathcs.emory.edu/~cheung
  • 397. Next edge added is (0,8) http://www.mathcs.emory.edu/~cheung
  • 398. Next edge added is (5,6) http://www.mathcs.emory.edu/~cheung
  • 400. ReachSet = {0} UnReachSet = {1, 2, ..., N-1} SpanningTree = {} while ( UnReachSet ≠ empty ) { Find edge e = (x, y) such that: 1. x ∈ ReachSet 2. y ∈ UnReachSet 3. e has smallest cost SpanningTree = SpanningTree ∪ {e} ReachSet = ReachSet ∪ {y} UnReachSet = UnReachSet - {y} }
  • 402. Complexity O(VE): the loop runs V - 1 times, and each iteration may scan all E edges to find the smallest crossing edge http://www.mathcs.emory.edu/~cheung
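The loop above can be sketched in Python; here a min-heap of candidate edges replaces the full edge scan, which brings the cost down from O(VE) to O(E log V) (function names and the sample graph are our own):

```python
import heapq

def prim(graph, start=0):
    """graph: {u: [(v, w), ...]} undirected adjacency list.

    Returns the MST edges as (x, y, w), starting from 'start'."""
    reached = {start}                         # the ReachSet
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)                       # candidate edges leaving the tree
    mst = []
    while heap and len(reached) < len(graph):
        w, x, y = heapq.heappop(heap)         # smallest-cost crossing edge
        if y in reached:
            continue                          # stale entry: y was reached already
        reached.add(y)
        mst.append((x, y, w))
        for v, wt in graph[y]:                # new crossing edges from y
            if v not in reached:
                heapq.heappush(heap, (wt, y, v))
    return mst

graph = {0: [(1, 4), (2, 1)], 1: [(0, 4), (2, 2), (3, 5)],
         2: [(0, 1), (1, 2), (3, 8)], 3: [(1, 5), (2, 8)]}
print(prim(graph))  # [(0, 2, 1), (2, 1, 2), (1, 3, 5)]
```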
  • 403. Knapsack Problem Fill it up with maximum value https://en.wikipedia.org/wiki/Knapsack_problem
  • 407. Algorithm Design and Analysis Sayed Chhattan Shah Associate Professor of Computer Science Mobile Grid and Cloud Computing Lab Department of Information Communication Engineering Hankuk University of Foreign Studies Korea www.mgclab.com
  • 408.  Divide and conquer refers to a class of algorithmic techniques in which one breaks the input into several parts, solves the problem in each part recursively, and then combines the solutions to these sub-problems into an overall solution  Analyzing the running time of a divide and conquer algorithm generally involves solving a recurrence relation Divide and Conquer
  • 409.  Binary tree = a tree where each node has at most 2 children nodes Binary Trees http://www.mathcs.emory.edu/~cheung/
  • 410.  Perfect binary tree = a binary tree where each level contains the maximum number of nodes Binary Trees http://www.mathcs.emory.edu/~cheung/
  • 411.  Properties of the perfect binary tree o The number of nodes at depth d in a perfect binary tree = 2^d o Proof o The number of nodes doubles every time the depth increases by 1 o Therefore, the number of nodes at depth d = 2^d Binary Trees http://www.mathcs.emory.edu/~cheung/
  • 412.  Properties of the perfect binary tree o A perfect binary tree of height h has 2^(h+1) - 1 nodes o Proof o Number of nodes = 2^0 + 2^1 + ... + 2^h = 2^(h+1) - 1 Binary Trees http://www.mathcs.emory.edu/~cheung/
  • 413.  Properties of the perfect binary tree o Number of leaf nodes in a perfect binary tree of height h = 2^h o Proof  All the leaf nodes in a perfect binary tree of height h have depth equal to h, and depth h contains 2^h nodes Binary Trees http://www.mathcs.emory.edu/~cheung/
  • 414.  The minimum number of nodes in a binary tree of height h = h + 1 Binary Trees http://www.mathcs.emory.edu/~cheung/
  • 415.  The maximum number of nodes in a binary tree of height h = 2^(h+1) - 1  Proof o The perfect binary tree has the maximum number of nodes o We have already shown that the number of nodes in a perfect binary tree = 2^(h+1) - 1 Binary Trees http://www.mathcs.emory.edu/~cheung/
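The node-count formulas above can be checked numerically; a small Python sketch (the range of heights tested is an arbitrary choice):

```python
# Verify the perfect-binary-tree formulas for a few heights.
for h in range(5):
    total = sum(2**d for d in range(h + 1))   # add up the nodes level by level
    assert total == 2**(h + 1) - 1            # total nodes in a perfect tree
    assert 2**h == (total + 1) // 2           # leaves (nodes at depth h) = 2^h
print("formulas hold for h = 0..4")
```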
  • 417. Recurrence Tree The test function prints a value 3 times for test(3) and calls itself 4 times. For input n it prints n times and calls itself n + 1 times
  • 418. Recurrence Relation  T(n) = T(n-1) + 1 if n > 0  T(n) = 1 if n = 0
  • 419. T(n) = T(n-1) + 1 T(n-1) = T(n-2)+1 T(n-2) = T(n-3)+1 T(n) = [T(n-2)+1] + 1 = T(n-2)+2 T(n) = [T(n-3)+1]+2 T(n) = T(n-3)+3 . . Continue for k times . T(n) = [T(n-k)+k] Assume n-k = 0 therefore n=k T(n)= T(n-n)+n = T(0)+n T(n)= 1+n T(n)= O(n)
  • 422. T(n) = 1+2+3+...+(n-1)+n = n(n+1)/2 Recurrence Tree
  • 424. T(n) = T(n-1) + n if n > 0  T(n) = 1 if n = 0
  • 425. T(n) = T(n-1) + n T(n-1) = T(n-2) + n-1 T(n-2) = T(n-3) + n-2 Substitute values T(n) = T(n-2) + n-1 + n T(n) = T(n-3) + n-2 + n-1 + n . . If we continue for k times . T(n) = T(n-k) +(n-(k-1))+(n-(k-2)) + ...+ (n-1) + n Assume n-k=0 therefore k=n T(n)= T(n-n) + (n-n+1)+ (n-n+2) + ...+ (n-1)+n T(n) = T(0)+ 1+2+3+...+(n-1)+n = n(n+1)/2 T(n) = 1+n(n+1)/2 T(n) = O(n^2)
  • 428. Recurrence Tree  The input is halved at every level: after k levels n/2^k = 1, so n = 2^k and k = log n, giving O(log n)
  • 429. T(n) = T(n/2) + 1 if n>1 T(n) = 1 if n=1
  • 430. T(n) = T(n/2) + 1 T(n/2) = T(n/2^2) + 1 T(n) = T(n/2^2) + 2 T(n) = T(n/2^3) + 3 . . . T(n) = T(n/2^k) + k Assume n/2^k = 1 Therefore n=2^k and k = log n T(n)= T(1) + log(n) = 1+log(n) = O(log n)
  • 432. T(n) = 2T(n/2) + n if n>1 T(n) = 1 if n=1
  • 433. Recurrence Tree  Each of the k levels contributes total work n; since k = log n, the total is n log n
  • 434. T(n) = 2T(n/2) + n T(n/2) = 2T(n/2^2) + n/2 T(n) = 2[2T(n/2^2) + n/2] + n T(n) = 2^2 T(n/2^2) + n + n T(n/2^2) = 2T(n/2^3) + n/2^2 T(n) = 2^2[2T(n/2^3) + n/2^2] + n + n T(n) = 2^3T(n/2^3) + n + n + n T(n) = 2^3T(n/2^3) + 3n . . . T(n) = 2^k T(n/2^k) + k n Assume n/2^k = 1 Therefore n=2^k and k = log n T(n) = n x 1 + k n T(n) = n + n log n T(n) = O(n log n)
  • 435.  Merge sort o Divide array or list in two equal parts o Separately sort left and right parts o Merge or combine the two sorted parts to get the full array sorted Divide and Conquer
  • 436.  Merge Operation o How can we efficiently merge two sorted lists? Divide and Conquer Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 437. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 438. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 439. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 440. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 441. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 442. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 443. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 444. Traverse B and C simultaneously from left to right and write the smallest element at the current positions to A
  • 445.  Merge sort o Divide array or list in two equal parts o Separately sort left and right parts o Merge or combine the two sorted parts to get the full array sorted Divide and Conquer
  • 449. Merge the two sorted arrays a and b into one sorted array result  Merge ( result[], a[], b[] ) { i = 0, j = 0, k = 0 // i, j = first unprocessed elements of a, b; k = output location in result for ( every output position in result[] ) { if ( neither a[] nor b[] is exhausted ) { if ( a[i] < b[j] ) { result[k] = a[i]; i++ } // copy the smaller element over and shift else { result[k] = b[j]; j++ } } else if ( a[] is exhausted ) { result[k] = b[j]; j++ } else { result[k] = a[i]; i++ } k++ } }
  • 450. MergeSort(list []) { if (list.length <= 1) return // base case: already sorted // Copy the first half of list into a new array int firstLength = list.length / 2; int[] one = new int[firstLength]; for (int n = 0; n < firstLength; n++) one[n] = list[n]; // Copy the second half of list into another array int secondLength = list.length - firstLength; int[] two = new int[secondLength]; for (int n = firstLength; n < list.length; n++) two[n - firstLength] = list[n]; // Sort the two smaller arrays MergeSort(one); MergeSort(two); // Merge the sorted halves back into list Merge(list, one, two) }
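Putting the two routines together, a compact runnable sketch in Python (the sample values are illustrative):

```python
def merge_sort(lst):
    """Divide, recursively sort the halves, then merge: O(n log n)."""
    if len(lst) <= 1:                    # base case: already sorted
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])         # sort left half
    right = merge_sort(lst[mid:])        # sort right half
    # Merge: a single O(n) two-pointer pass over the sorted halves
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])              # at most one of these is non-empty
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```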
  • 451. Divide and Conquer Algorithm  Merge Operation o Merge just iterates over the arrays, which together hold at most n elements o Thus the Merge function has a running time of O(n)  Merge Sort o The merge sort function breaks a problem of size n into two sub-problems of size n/2 each
  • 452. Divide and Conquer Algorithm  Laptop executes 10^8 compares per second  Supercomputer executes 10^12 compares per second  Good algorithms are better than supercomputers
  • 454. Algorithm Design and Analysis Sayed Chhattan Shah Associate Professor of Computer Science Mobile Grid and Cloud Computing Lab Department of Information Communication Engineering Hankuk University of Foreign Studies Korea www.mgclab.com
  • 455.  Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems o Divide-and-conquer algorithms partition the problem into disjoint subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem o Dynamic programming applies when the subproblems overlap  When subproblems share sub-subproblems Dynamic Programming https://www.cmi.ac.in/~madhavan
  • 456.  Factorial o F(0) = 1 o F(n) = n x F(n-1) Dynamic Programming https://www.cmi.ac.in/~madhavan
  • 457.  Recursive Program Dynamic Programming https://www.cmi.ac.in/~madhavan Factorial(n) if (n <= 0) return 1 else return n x Factorial(n-1)
  • 458.  Recursive Program Dynamic Programming https://www.cmi.ac.in/~madhavan Factorial(n) if (n <= 0) return 1 else return n x Factorial(n-1)  Factorial(n-1) is a subproblem of Factorial(n)  Solution can be derived by combining solutions to subproblems
  • 459.  Fibonacci numbers o Fib(0) = 0 o Fib(1) = 1 o Fib(n) = Fib(n-1) + Fib(n-2) o 0 1 1 2 3 5 8 13 21 34 55 89 144 Dynamic Programming https://www.cmi.ac.in/~madhavan
  • 460.  Fibonacci numbers o Fib(0) = 0 o Fib(1) = 1 o Fib(n) = Fib(n-1) + Fib(n-2) Dynamic Programming https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value
  • 461. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5)
  • 462. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3)
  • 463. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2)
  • 464. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1)
  • 465. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0)
  • 466. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0
  • 467. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1
  • 468. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1
  • 469. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1 2
  • 470. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1 2 Fib(1) Fib(0)
  • 471. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) Fib(1) Fib(0) 1 0 1 1 2 1 0
  • 472. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) Fib(1) Fib(0) 1 0 1 1 2 1 0 1
  • 473. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) Fib(1) Fib(0) 1 0 1 1 2 1 0 1 3
  • 474. https://www.cmi.ac.in/~madhavan Fib(n) if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) return value Compute Fib(5) [figure: complete recursion tree for Fib(5); leaf values 0 and 1 combine up the tree as 1, 1, 2, 3 and finally Fib(5) = 5]
  • 475.  Overlapping subproblems o Wasteful recomputation o Computation tree grows exponentially Dynamic Programming https://www.cmi.ac.in/~madhavan Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) Fib(1) Fib(0) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1 1 1 1 0 0 1 1 2 3 5 2
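The wasteful recomputation can be made concrete by counting calls in the naive recursion (an illustrative sketch, not from the slides; the helper name `fib_calls` is made up here):

```python
def fib_calls(n):
    """Return (Fib(n), number of calls the naive recursion makes)."""
    if n == 0 or n == 1:
        return n, 1
    v1, c1 = fib_calls(n - 1)
    v2, c2 = fib_calls(n - 2)
    # One call for this node plus every call in both subtrees
    return v1 + v2, 1 + c1 + c2
```

Already for Fib(5) the naive recursion makes 15 calls to compute the value 5, and the call count grows exponentially with n, matching the computation tree in the slide.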
  • 476.  Never re-evaluate a subproblem o Build a table of values already computed  Memory table o Memoization  Remind yourself that this value has already been seen before Dynamic Programming https://www.cmi.ac.in/~madhavan
  • 477.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) Fib(5)
  • 478.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) Fib(5) Fib(4) Fib(3)
  • 479.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2)
  • 480.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1)
  • 481.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0)
  • 482.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1
  • 483.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 0 0 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0
  • 484.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 0 0 2 1 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1
  • 485.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 0 0 2 1 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1
  • 486.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 0 0 2 1 3 2 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1 2
  • 487.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 0 0 2 1 3 2 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1 2 1
  • 488.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 0 0 2 1 3 2 4 3 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1 2 1 3
  • 489.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan n Fib (n) 1 1 0 0 2 1 3 2 4 3 5 5 Fib(5) Fib(4) Fib(3) Fib(3) Fib(2) Fib(2) Fib(1) Fib(1) Fib(0) 1 0 1 1 1 2 3 2
  • 490.  Memoization o Store each newly computed value in a table o Look up table before starting a recursive computation o Computation tree is linear Dynamic Programming https://www.cmi.ac.in/~madhavan [figure: completed memo table n → Fib(n): 0→0, 1→1, 2→1, 3→2, 4→3, 5→5, beside the pruned recursion tree for Fib(5)]
  • 491.  Memoized Fibonacci Dynamic Programming https://www.cmi.ac.in/~madhavan Fib(n) if (Fibtable[n]) return Fibtable[n] if (n==0 or n==1) value = n else value = Fib(n-1) + Fib(n-2) Fibtable[n] = value return value
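The memoized pseudocode maps directly to Python, with a dictionary playing the role of Fibtable (a sketch under that assumption, not the course's own code):

```python
fibtable = {}  # memory table: n -> Fib(n)

def fib(n):
    # Look up the table before starting a recursive computation
    if n in fibtable:
        return fibtable[n]
    if n == 0 or n == 1:
        value = n
    else:
        value = fib(n - 1) + fib(n - 2)
    # Store each newly computed value so every subproblem is solved once
    fibtable[n] = value
    return value
```

Because each Fib(k) is computed at most once and then looked up, the computation tree is linear in n rather than exponential.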