Credit: Nusrat Jahan & Fahima Hossain, Dept. of CSE, JnU, Dhaka.
Randomized Algorithms (Advanced Algorithms): Deterministic vs. Non-Deterministic, Las Vegas, and Monte Carlo Algorithms.
2. Presented To:
Dr. Md. Manowarul Islam
Associate Professor
Dept. of Computer Science & Engineering,
Jagannath University, Dhaka
Presented By:
Nusrat Jahan (ID: B-150305002)
Fahima Hossain (ID: B-150305036)
3. Outline
Deterministic vs Non-Deterministic
Deterministic Algorithm
Randomized Algorithms
Types of Randomized Algorithms
Las Vegas
Monte Carlo
Las Vegas
Quick Sort
Smallest Enclosing Circle
Monte Carlo
Minimum Cut
Primality Testing
Smallest Enclosing Disk
Minimum Spanning Tree
Michael's Algorithms
5. Deterministic vs Non-Deterministic
Deterministic Algorithm:
• In a deterministic algorithm, a given input always makes the computer produce the same output, passing through the same sequence of states.
• Can solve the problem in polynomial time.
• The next step is always determined.
Non-Deterministic Algorithm:
• In a non-deterministic algorithm, the same input may produce different outputs in different runs.
• Can’t solve the problem in polynomial time.
• The next step cannot be determined.
6. Deterministic Algorithm
Goal of a Deterministic Algorithm
The solution produced by the algorithm is correct.
The number of computational steps is the same across different runs of the algorithm on the same input.
7. Deterministic Algorithm
Problem with Deterministic Algorithms
Given a computational problem –
• It may be difficult to formulate an algorithm with a good running time, or
• An adversary may exploit the algorithm’s worst-case running time as the number of inputs grows.
Remedies:
Efficient heuristics,
Approximation algorithms,
Randomized algorithms.
9. Randomized Algorithm
What is a Randomized Algorithm?
• An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm.
• A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic.
• A randomized algorithm is one that makes random choices during its execution.
10. To overcome the problem of exploitable running times in deterministic algorithms, randomized algorithms are used.
A randomized algorithm uses uniform random bits, also called pseudo-random numbers, as an input to guide its behavior (output).
Randomized algorithms rely on the statistical properties of random numbers (e.g., randomized quicksort).
It tries to achieve good performance in the average case.
11. Why Use Randomized Algorithms
Simple and easy to implement. For example, Karger's min-cut algorithm.
Fast, and produce optimum output with very high probability.
To improve efficiency with faster runtimes. For example, we could use a randomized quicksort algorithm. Deterministic quicksort can be quite slow on certain worst-case inputs (e.g., input that is almost sorted), but randomized quicksort is fast on all inputs.
To improve memory usage. Random sampling as a way to sparsify the input, and then working with this smaller input, is a common technique.
In parallel/distributed computing, each machine only has a part of the data but still has to make decisions that affect global outcomes. Randomization plays a key role in informing these decisions.
14. Las Vegas
Always produces correct output.
Running time is random.
Time complexity depends on a random value and is evaluated as an expected value.
So correctness is deterministic; time complexity is probabilistic.
The expected running time should be polynomial.
Uses:
1. Improving performance. Ex.: randomized quicksort
2. Searching in a solution space
16. Divide and Conquer
The design of quicksort is based on the divide-and-conquer paradigm.
Divide: Partition the array A[p..r] into two subarrays A[p..q-1] and A[q+1..r] such that
A[i] <= A[q] for all i in [p..q-1]
A[i] > A[q] for all i in [q+1..r]
Conquer: Recursively sort A[p..q-1] and A[q+1..r]
Combine: Nothing to do here
17. Deterministic QuickSort Algorithm
The Problem
• Given an array A containing n (comparable) elements, sort them in increasing/decreasing order.
• Here the pivot element is chosen as either the leftmost or the rightmost element.
QSORT(A, p, r)
• If p < r then,
• Compute q ← Partition(A, p, r)
• QSORT(A, p, q − 1)
• QSORT(A, q + 1, r)
19. Deterministic QuickSort Algorithm
• The running time is dependent on the PARTITION procedure.
• Each time the PARTITION procedure is called, it selects a pivot element. Thus, there can be at most n calls to PARTITION over the entire execution of the quicksort algorithm.
• PARTITION takes O(1) time plus an amount of time that is proportional to the number of iterations of its for loop.
• The running time of QUICKSORT is O(n + X), where X is the number of comparisons performed in the for loop of PARTITION.
21. Randomized QuickSort Algorithm
Randomized-Quicksort(A, p, r)
if p < r then
q := Randomized-Partition(A, p, r);
Randomized-Quicksort(A, p, q-1);
Randomized-Quicksort(A, q+1, r);
Randomized-Partition(A, p, r)
i := Random(p, r);
swap(A[i], A[r]);
q := Partition(A, p, r);
return q;
Almost the same as Partition in deterministic quicksort, but now the pivot element is not the rightmost/leftmost element; rather, an element from A[p..r] is chosen uniformly at random.
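The Randomized-Quicksort and Randomized-Partition routines above can be sketched in Python. Since the slides do not spell out PARTITION itself, the Lomuto-style partition used below is an assumption:

```python
import random

def randomized_partition(A, p, r):
    # Pick the pivot uniformly at random from A[p..r] and move it to A[r],
    # then run a standard (Lomuto-style) partition around it.
    k = random.randint(p, r)
    A[k], A[r] = A[r], A[k]
    pivot = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1                      # final position q of the pivot

def randomized_quicksort(A, p=0, r=None):
    # Sorts A in place; expected O(n log n) comparisons on every input.
    if r is None:
        r = len(A) - 1
    if p < r:
        q = randomized_partition(A, p, r)
        randomized_quicksort(A, p, q - 1)
        randomized_quicksort(A, q + 1, r)
    return A
```

For example, `randomized_quicksort([5, 2, 9, 1, 5, 6])` sorts the list in place; because the pivot index is random, no fixed input can force the worst case on every run.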
22. Randomized QuickSort Algorithm
Given values x1, x2, ..., xn at positions 1..n, pick a random position m and let xm be the pivot value. Swap xm with the last element xn, and then run the usual partition procedure.
23. Randomized QuickSort Algorithm
Goal
The running time of quicksort depends mostly on the number of comparisons performed in all calls to the Randomized-Partition routine.
Let X denote the random variable counting the number of comparisons in all calls to Randomized-Partition.
24. What was the main problem in deterministic quicksort?
Suppose we are given an already sorted array of n elements and perform quicksort on it, with the last element as pivot.
Each partition call fixes only the pivot in its final position: the 1st element is fixed, then partition runs on the remaining n-1 elements, fixing the 2nd element, then on n-2 elements, and so on.
In that case the algorithm doesn’t really divide and conquer.
This is the worst case: it has to scan from the first element to the last element every time.
25. What happens in the case of randomized quicksort?
Given the array 1 2 3 4 5 6, with p and r marking its two ends, pick a random index i (say the one holding the value 4), swap A[i] with A[r], and then perform the partition function with 4 as the pivot.
26. What happens in the case of randomized quicksort?
After the swap the array is 1 2 3 6 5 4, with pivot x = 4. Starting from i = p-1, the index j scans the array, asking at each step whether A[j] <= x: the elements 1, 2, and 3 pass the test and stay on the left side; 6 and 5 fail it; finally the pivot is swapped into place, giving 1 2 3 4 5 6 with 4 in its final position.
27. Comparison
Deterministic Quicksort
Best Case: O(n log n)
Worst Case: O(n^2)
Randomized Quicksort
Expected Case: O(n log n)
Expected Worst Case: O(n^2)
In the worst case, the randomized function can pick the index of a corner (extreme) element every time.
But it is rare to pick the corner element.
28. Average Runtime vs Expected Runtime
Average runtime: averaged over all inputs of a deterministic algorithm.
Expected runtime: the expected value of the runtime random variable of a randomized algorithm.
It effectively “averages” over all sequences of random numbers.
30. Smallest Enclosing Circle
Problem Definition
Given n points in a plane, compute the smallest-radius circle that encloses all n points.
Also known as the minimum covering circle problem, bounding circle problem, or smallest enclosing circle problem.
31. Smallest Enclosing Circle
Applications: Facility location problem (1-center problem)
Best deterministic algorithm: [Nimrod Megiddo, 1983]
⬦ O(n) time complexity, but quite complex; uses advanced geometry.
Randomized Las Vegas algorithm: [Emo Welzl, 1991]
⬦ Expected O(n) time complexity, very simple; uses elementary geometry.
⬦ The algorithm is recursive.
⬦ Based on a linear programming algorithm of Raimund Seidel.
34. Monte Carlo
It may produce an incorrect answer.
We are able to bound the probability of this happening.
By running it many times with independent random choices, we can make the failure probability arbitrarily small, at the expense of running time.
E.g., the randomized min-cut algorithm.
35. Monte Carlo Example
⬥ Suppose we want to find a number among n given numbers that is larger than or equal to the median.
⬥ Suppose A1 < … < An.
⬥ We want Ai such that i ≥ n/2. It’s obvious that the best deterministic algorithm needs O(n) time to produce the answer, and n may be very large! Suppose n is 100,000,000,000!
⬥ Choose 100 of the numbers with equal probability.
⬥ Find the maximum among these numbers, and return that maximum.
36. Monte Carlo Example
The running time of the given algorithm is O(1).
The probability of failure is 1/(2^100): the answer is wrong only if all 100 sampled numbers fall below the median.
The algorithm may return a wrong answer, but the probability of this is far smaller than that of a hardware failure or even an earthquake!
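This sampling idea fits in a few lines of Python. The function name is ours; the sample size of 100 follows the slide:

```python
import random

def probably_at_least_median(A, samples=100):
    # Monte Carlo: sample 100 elements uniformly at random (with
    # replacement) and return the largest one.  The answer is below
    # the median only if *every* sample lands in the lower half,
    # which happens with probability at most (1/2)**samples.
    return max(random.choice(A) for _ in range(samples))
```

The work done is independent of n (only 100 array accesses), and the failure probability 1/(2^100) is exactly the bound stated above.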
38. Closest Pair of Points
Problem Statement: given an array of n points in the plane, find the closest pair of points in the array.
The distance between two points p and q can be found by the following formula:
|pq| = sqrt((px − qx)^2 + (py − qy)^2)
39. Algorithm
⬥ Input: An array of n points P[ ].
⬥ Output: Smallest distance between two points in the given array.
P[ ] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17}
40. Algorithm Cont….
Sort the array according to the x-coordinates as a preprocessing step.
P[ ] = {13, 12, 11, 0, 14, 16, 1, 10, 17, 9, 2, 15, 3, 8, 4, 5, 7, 6}
1. Find the middle point in the sorted array. We can take P[n/2] as the middle point.
2. Divide the array into two halves. The first subarray contains the points P[0] to P[n/2], and the second subarray contains the points P[n/2+1] to P[n-1].
PL = {13, 12, 11, 0, 14, 16, 1, 10, 17}  PR = {9, 2, 15, 3, 8, 4, 5, 7, 6}
41. Algorithm Cont….
3. Recursively find the smallest distance in each of the two subarrays. Let the distances be dl and dr, and let d be their minimum:
d = min(dl, dr)
4. Finally, check the vertical strip of points within distance d of the dividing line, since a pair closer than d may straddle the two halves.
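The steps above can be sketched as a self-contained Python function. The helper structure is ours; the strip check follows the standard textbook treatment of this algorithm:

```python
import math

def closest_pair(points):
    # Divide-and-conquer closest pair; returns the smallest distance.
    P = sorted(points)                       # pre-sort by x-coordinate

    def solve(P):
        n = len(P)
        if n <= 3:                           # brute force on tiny inputs
            return min(math.dist(P[i], P[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = n // 2
        mid_x = P[mid][0]
        d = min(solve(P[:mid]), solve(P[mid:]))    # d = min(dl, dr)
        # Check the vertical strip of half-width d around the middle line.
        strip = sorted((p for p in P if abs(p[0] - mid_x) < d),
                       key=lambda p: p[1])
        for i in range(len(strip)):
            for j in range(i + 1, len(strip)):
                if strip[j][1] - strip[i][1] >= d:
                    break                    # no closer pair further down
                d = min(d, math.dist(strip[i], strip[j]))
        return d

    return solve(P)
```

For example, `closest_pair([(0, 0), (3, 4), (1, 1), (10, 10)])` returns sqrt(2), the distance between (0, 0) and (1, 1).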
42. Our knowledge: the insertion time depends on whether the closest pair is changed or not.
If the output is the same: 1 clock tick. If the output is not the same: |D| clock ticks.
With a random insertion order, one can show that the expected total number of clock ticks used by D is O(n).
44. Minimum Cut
⬥ The min-cut of a weighted graph is defined as the minimum sum of weights of the (at least one) edges that, when removed from the graph, divide the graph into two groups.
⬥ The algorithm works by shrinking (contracting) the graph until only two nodes are left.
⬥ The minimum value in the list of runs is the minimum cut value of the graph.
45. Minimum Cut Cont....
• At each step an edge is picked and contracted, and the next move is made in the contracted network graph.
Some points to take into consideration when working with min-cut:
• A cut of a connected graph is obtained by dividing the vertex set V of graph G into 2 sets V1 & V2.
• There are no common vertices in V1 & V2; that is, the two sets are disjoint.
• V1 ∪ V2 = V
46. Minimum Cut Cont….
Algorithm:
1. Repeat steps 2 to 4 until only two vertices are left.
2. Pick an edge e(u,v) at random.
3. Merge u and v.
4. Remove self-loops from E.
5. Return |E|.
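A runnable sketch of this contraction algorithm for an unweighted multigraph. Contracting edges in a uniformly random order via union-find is equivalent to picking a random edge at each step; repeating the whole run and keeping the best answer boosts the success probability. The helper names and trial count are our choices:

```python
import random

def karger_min_cut(edges, trials=100):
    # edges: list of (u, v) pairs of a connected multigraph.
    # One contraction run finds the min cut with probability >= 2/(n(n-1)),
    # so we repeat `trials` times and keep the smallest cut found.
    return min(_contract(list(edges)) for _ in range(trials))

def _contract(edges):
    # Union-find over the vertices; contract edges in random order
    # (skipping self-loops) until only two super-vertices remain.
    vertices = {v for e in edges for v in e}
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    remaining = len(vertices)
    random.shuffle(edges)
    for u, v in edges:
        if remaining == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:                        # not a self-loop: merge u and v
            parent[ru] = rv
            remaining -= 1
    # Surviving edges are those whose endpoints lie in different groups.
    return sum(1 for u, v in edges if find(u) != find(v))
```

On a triangle every run returns 2, the true min cut; on graphs with a bridge, repeated trials find the cut of size 1 with overwhelming probability.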
50. Minimum Cut
Problem definition: Given a connected graph G=(V,E) on n vertices and m edges, compute the smallest set of edges whose removal makes G disconnected.
Best deterministic algorithm: [Stoer and Wagner, 1997]
• O(mn) time complexity.
Randomized Monte Carlo algorithm: [Karger, 1993]
• O(m log n) time complexity.
Error probability: n^(−c) for any c that we desire.
51. Applications of the Minimum Cut Algorithm
Partitioning items in a database,
Identifying clusters of related documents,
Network reliability,
Network design,
Circuit design, etc.
52. Smallest Enclosing Disk
The Problem: Given a set of n points, P = {p1, p2, ….. pn} in 2D, compute a disk of minimum radius that contains all the points in P.
Trivial Solution: Consider each triple of points pi, pj, pk ∈ P, and check whether every other point in P lies inside the circle defined by pi, pj, pk.
Time complexity: O(n^4)
An Easily Implementable Efficient Solution: Consider the furthest-point Voronoi diagram. Each of its vertices represents a circle containing all the points in P. Choose the one with minimum radius.
Time complexity: O(n log n)
53. Goal: compute a disk containing k points and having radius at most 2·r_opt,
where r_opt = the smallest radius of a disk containing k points.
k = 2 means the problem is the closest pair problem.
It’s a simple form of clustering. (Example: k = 5.)
54. Smallest Enclosing Disk Cont….
A Simple Randomized Algorithm
We generate a random permutation of the points in P.
Notations:
Pi = {p1, p2,…,pi}.
Di = the smallest enclosing disk of Pi.
An incremental procedure. Result:
• If pi ∈ Di−1 then Di = Di−1.
• If pi ∉ Di−1 then pi lies on the boundary of Di.
55. Smallest Enclosing Disk Cont….
Algorithm MINIDISC(P)
Input: A set P of n points in the plane.
Output: The smallest enclosing disk for P.
1. Compute a random permutation of P = {p1, p2,…,pn}.
2. Let D2 be the smallest enclosing disk for {p1, p2}.
3. for i = 3 to n do
4. if pi ∈ Di−1
5. then Di = Di−1
6. else Di = MINIDISCWITHPOINT({p1, p2,…,pi}, pi)
7. Return Dn.
56. Smallest Enclosing Disk Cont….
Algorithm MINIDISCWITHPOINT(P, q)
Idea: Incrementally add points from P one by one and compute the smallest enclosing circle under the assumption that the point q (the 2nd parameter) is on the boundary.
Input: A set of points P, and another point q.
Output: Smallest enclosing disk for P with q on the boundary.
57. Smallest Enclosing Disk Cont….
Algorithm MINIDISCWITH2POINTS(P, q1, q2)
Idea: Now we have two fixed boundary points, so we need to choose another point from P \ {q1, q2} to obtain the smallest enclosing disk containing P.
58. Time Complexity
MINIDISCWITHPOINT needs O(n) time if we do not consider the time taken in the calls to the routine MINIDISCWITH2POINTS.
Worst case: O(n^3)
Expected case: MINIDISCWITH2POINTS needs O(n) time.
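The MINIDISC / MINIDISCWITHPOINT / MINIDISCWITH2POINTS cascade can be sketched in Python as below. The circumcircle formula, the epsilon tolerance, and the helper names are our implementation choices; degenerate collinear triples are not handled in this sketch:

```python
import math
import random

def _circle_from2(a, b):
    # Smallest circle through two points: they span a diameter.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, math.dist(a, b) / 2)

def _circle_from3(a, b, c):
    # Circumcircle of three non-collinear points.
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), a))

def _inside(disk, p, eps=1e-9):
    return disk is not None and math.dist((disk[0], disk[1]), p) <= disk[2] + eps

def minidisc(points):
    # MINIDISC: incremental pass over a random permutation of the points.
    P = list(points)
    random.shuffle(P)
    D = None
    for i, p in enumerate(P):
        if not _inside(D, p):          # p_i not in D_{i-1}: p_i is on the boundary
            D = _minidisc_with_point(P[:i], p)
    return D

def _minidisc_with_point(P, q):
    # MINIDISCWITHPOINT: q is known to lie on the boundary.
    D = (q[0], q[1], 0.0)
    for i, p in enumerate(P):
        if not _inside(D, p):
            D = _minidisc_with_2points(P[:i], p, q)
    return D

def _minidisc_with_2points(P, q1, q2):
    # MINIDISCWITH2POINTS: q1 and q2 are both on the boundary.
    D = _circle_from2(q1, q2)
    for p in P:
        if not _inside(D, p):
            D = _circle_from3(q1, q2, p)   # assumes non-collinear triple
    return D
```

For the four corners of a square, `minidisc` returns the circle through the diagonal: center at the square's midpoint, radius half the diagonal.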
59. Primality Testing
Prime Number: divisible only by 1 and the number itself.
Ex: 2, 3, 5, 7, 11, ………
Mathematical Result: it suffices to test the divisors 2, ….. , √N.
Algorithm:
for i = 2 to N-1
{
if (N mod i == 0)
return -1
}
return 1
Time Complexity: O(N)
61. Primality Testing Cont….
Fermat’s Theorem: If n is prime, then a^(n−1) ≡ 1 (mod n) for any integer 0 < a < n.
n = 3, a = 2: 2^(3−1) % 3 = 4 % 3 = 1
n = 5, a = 2: 2^(5−1) % 5 = 16 % 5 = 1
n = 5, a = 3: 3^(5−1) % 5 = 81 % 5 = 1
62. Primality Testing Cont….
Input: a number n to test, and a sufficiently large number of random integers a < n.
Algorithm:
for (i = 1 to large) do
{
a = random(1, n-1)
z = a^(n−1) mod n
if (z ≠ 1)
return False
}
return True
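In Python the modular exponentiation a^(n−1) mod n is best done with the built-in three-argument pow. A sketch of the Fermat test above, where the parameter k (our name) stands in for "large"; note, as a caveat not on the slide, that Carmichael numbers can fool this particular test:

```python
import random

def fermat_is_probably_prime(n, k=20):
    # Monte Carlo primality test based on Fermat's little theorem:
    # if n is prime, a^(n-1) ≡ 1 (mod n) for every 1 < a < n.
    # Composites usually fail for a random a (Carmichael numbers
    # are the known exception that this simple test can miss).
    if n < 2:
        return False
    if n in (2, 3):
        return True
    for _ in range(k):
        a = random.randint(2, n - 2)
        if pow(a, n - 1, n) != 1:     # fast modular exponentiation
            return False              # definitely composite
    return True                       # probably prime
```

A False answer is always correct (a Fermat witness proves compositeness); a True answer is correct with high probability, matching the one-sided Monte Carlo behavior described earlier.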
63. Primality Testing
Applications:
RSA cryptosystem
Algebraic algorithms
Best deterministic algorithm: [Agrawal, Kayal and Saxena]
O(n^6) time complexity.
Randomized Monte Carlo algorithm: [Rabin, 1980]
O(k n^2) time complexity.
Error probability: 2^(−k) for any k that we desire.
For k = 50, this probability is about 10^(−15).
65. Bloom Filter
⬥ A randomized data structure for fast searching
⬥ Keys represented in compressed storage as a bit array
⬥ Two operations supported – Insert and Search
⬥ Search returns YES (present) or NO (not present)
⬥ NO is always correct; YES is correct with a probability
⬥ Similar to Monte Carlo with one-sided error
⬥ Many practical applications in networks, content search, etc.
66. Bloom Filter Operation
A bit array A[0..m-1] of size m
Initially all bits are set to 0
A set of k random and independent hash functions h0, h1, …, h(k−1), each producing a hash value between 0 and m – 1
Insert key x:
Compute yi = hi(x) for i = 0, 1,….,k – 1
Set A[yi] = 1 for i = 0, 1, …,k – 1
(y0, y1, y2,…,y(k−1)) is called the signature of x
Search for a key x:
Compute yi = hi(x) for i = 0, 1,…,k – 1
Answer YES if A[yi] = 1 for all i, NO otherwise
67. A NO answer is always correct:
if x was inserted, the corresponding yi’s must have been set to 1 for all i.
A YES answer may or may not be correct:
the yi’s may have been set to 1 by inserts of other keys.
Note that for a false positive, all of them must have been set to 1 by inserts of other keys.
This could be the insert of more than one key, each setting some of the bits.
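The insert/search operations above map directly onto a small Python class. Here the k hash functions are simulated by salting SHA-256, which is our implementation choice, not part of the slides:

```python
import hashlib

class BloomFilter:
    # m-bit array with k hash functions; NO answers are always
    # correct, YES answers may be false positives.
    def __init__(self, m=1024, k=5):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _hashes(self, x):
        # Derive k hash values in [0, m-1] from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def insert(self, x):
        # Set every bit of the signature (y0, ..., y_{k-1}) of x.
        for y in self._hashes(x):
            self.bits[y] = 1

    def search(self, x):
        # YES only if every signature bit of x is set.
        return all(self.bits[y] for y in self._hashes(x))
```

With only a few keys inserted, very few of the m bits are set, so the chance that all k signature bits of an absent key are already 1 (a false positive) is small; a NO answer remains correct unconditionally.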
70. Classifying Randomized Algorithms by Their Methods
Avoiding Worst-Case Inputs: Obtained by hiding the details of the algorithm from the adversary. Since the algorithm is chosen randomly, the adversary can’t pick an input that is bad for all of the choices.
Sampling: Randomness is used for choosing a simple random sample, without replacement, of k items from a population of unknown size n in a single pass over the items. In this way, the adversary can’t steer us toward non-representative samples.
71. Classifying Randomized Algorithms by Their Methods
Hashing: Obtained by selecting a hash function at random from a family of hash functions. This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary.
Building Random Structures: A randomized algorithm can build a structure so that the probability that it has the desired property is substantial.
Symmetry Breaking: Randomization can break deadlocks that leave the progress of multiple processes stymied.
72. Advantages of Randomized Algorithms
The algorithm is usually simple and easy to implement,
The algorithm is fast with very high probability, and
It produces optimum output with very high probability.
73. Difficulties in Randomized Algorithms
There is a finite probability of getting an incorrect answer. However, the probability of getting a wrong answer can be made arbitrarily small by the repeated use of randomness.
Analysis of the running time, or of the probability of getting a correct answer, is usually difficult.
Getting truly random numbers is impossible; one needs to depend on pseudo-random numbers. So the result depends heavily on the quality of the random number generator used as part of the algorithm.
Another practical disadvantage of randomized algorithms is the possibility of hardware failure.
75. Applications
Sorting: With randomized quicksort, no user always gets the worst case; everybody gets expected O(n log n) time.
Cryptography: Randomized algorithms have huge applications in cryptography, e.g., the RSA cryptosystem.
Load balancing.
Number-theoretic applications: primality testing.
Data structures: hashing, sorting, searching, order statistics, and computational geometry.
Algebraic identities: polynomial and matrix identity verification; interactive proof systems.
76. Applications
Mathematical programming: faster algorithms for linear programming; rounding linear-program solutions to integer-program solutions.
Graph algorithms: minimum spanning trees, shortest paths, minimum cuts.
Counting and enumeration: matrix permanent; counting combinatorial structures.
Parallel and distributed computing: deadlock avoidance, distributed consensus.
Probabilistic existence proofs: show that a combinatorial object arises with non-zero probability among objects drawn from a suitable probability space.