This document provides an overview of greedy algorithms and their use in solving optimization problems. It discusses key aspects of greedy algorithms, including making locally optimal choices at each step, optimal substructure, and the greedy choice property. Two problems addressed in detail are the activity selection problem and building Huffman trees. The activity selection problem can be solved optimally using a greedy approach by always selecting the activity with the earliest finish time. Huffman trees provide data compression by assigning codes to characters based on frequency, with more common characters having shorter codes, and can be constructed greedily by repeatedly combining the two subtrees with the lowest weights.
3. Greedy Algorithms
Efficient method to solve some optimization problems
The solutions to an optimization problem must satisfy a global optimum
  More difficult to verify
Simplification: choose the solution that looks best at each step
  This is called a locally optimal solution
Advantages:
  Simpler to build the solution
  Less time / Better complexity
Disadvantage:
  The locally optimal solution does not always lead to the globally optimal solution
  May not correctly solve the problem (but may provide good approximations)
4. Greedy Algorithms (2)
At each step, we choose the best solution according to the local optimum (greedy) choice
  We abandon all the other possible solutions
  The solving paths that are not considered by the greedy choice are discarded!
We'll look at two problems that have a greedy solution that leads to the global optimum as well:
  Activity selection
  Huffman trees
Greedy is an algorithm design technique (pattern)!
5. General Greedy Scheme
SolveGreedy(Local_choice, Problem)
  partial_sols = InitialSolution(Problem); // determine the starting point
  final_sols = Φ;
  WHILE (partial_sols ≠ Φ)
    FOREACH (s IN partial_sols)
      IF (s is a solution for Problem) {
        final_sols = final_sols U {s};
        partial_sols = partial_sols \ {s};
      } ELSE // can we optimize the current solving path locally?
        IF (CanOptimize(s, Local_choice, Problem)) // YES
          partial_sols = partial_sols \ {s} U OptimizeLocally(s, Local_choice, Problem)
        ELSE // NO
          partial_sols = partial_sols \ {s};
  RETURN final_sols;
Most times we follow only a single solving path!
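As an illustration only, the general scheme above can be rendered in Python; the helper names (`complete`, `extend`) and the toy coin-change usage are placeholders for the problem-specific parts, not from the slides:

```python
def solve_greedy(initial, is_solution, optimize_locally):
    """Generic greedy scheme: refine partial solutions until each one
    either becomes a full solution or cannot be improved any further."""
    partial_sols = [initial]
    final_sols = []
    while partial_sols:
        s = partial_sols.pop()
        if is_solution(s):
            final_sols.append(s)
        else:
            better = optimize_locally(s)  # locally optimal extension, or None
            if better is not None:
                partial_sols.append(better)
            # else: this solving path is abandoned (dead end)
    return final_sols

# Toy usage: greedy change-making for amount 9 with coins 5, 2, 1
coins = [5, 2, 1]
def complete(sol):
    return sum(sol) == 9
def extend(sol):
    rem = 9 - sum(sol)
    # pick the largest coin that still fits (the greedy local choice)
    return next((sol + [c] for c in coins if c <= rem), None)

print(solve_greedy([], complete, extend))  # [[5, 2, 2]]
```

Note that, as the last slide line says, only a single solving path is followed here: each rejected alternative is simply never generated.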
6. Activity Selection Problem
Given a set of n activities that require exclusive use of a common resource for a given period of time, determine the largest subset of non-overlapping activities
  These activities are called mutually compatible
There might be more than a single solution
  We want to identify one of these best solutions
  Similar to DP, not suitable for finding all possible solutions
Notations:
  S = {a1, ..., an} are the activities
  Each activity has a start time, si, and a finish time, fi
  Each activity requires the common resource for the interval [si, fi)
7. Activity Selection Problem (2)
E.g.
  Activity = classes, Resource = classroom
  Activity = processes, Resource = CPU
There exist some other activity selection problems that are more difficult:
  Maximize the usage time of the resource
  Maximize income if each activity pays for the usage of the resource
8. Example – from CLRS
We can devise a greedy solution if we consider the activities sorted by their finish times

  i     1  2  3  4  5   6   7   8   9
  s[i]  1  2  4  1  5   8   9  11  13
  f[i]  3  5  7  8  9  10  11  14  16

Solution: {a1, a3, a6, a8}
Not unique: {a2, a5, a7, a9}
9. Define the Sub-Problems
First, define the similar sub-problems
Let's consider the subset of activities that:
  Start after ai finishes (start after fi)
  Finish before aj starts (finish before sj)
They are compatible with all activities that:
  Finish before fi
  Start after sj
Si,j = {all ak in S | fi <= sk < fk <= sj}
We also add two invented activities:
  a0 = [-INF, 0)
  an+1 = [INF, INF + 1)
10. Define the Sub-Problems (2)
S0,n+1 = S = the entire set of activities
When the activities are sorted by their finish time: f0 <= f1 <= f2 <= ... <= fn <= fn+1
Si,j = Φ if i >= j:
  any ak in Si,j would satisfy fi <= sk < fk <= sj < fj => fi < fj, which contradicts fi >= fj (from the sorted order)
Therefore, the sub-problems are Si,j with 0 <= i < j <= n+1
11. Optimal Substructure
Suppose an optimal solution to Si,j includes the activity ak
Then, we need to solve two sub-problems:
  Si,k: all activities that start after ai and finish before ak
  Sk,j: all activities that start after ak and finish before aj
Therefore, the solution to Si,j is made of:
  The solution to Si,k
  ak
  The solution to Sk,j
Because ak is compatible with both Si,k and Sk,j
|Solution to Si,j| = |Solution to Si,k| + 1 + |Solution to Sk,j|
12. Optimal Substructure (2)
If an optimal solution to Si,j includes ak, then the sub-solutions for Si,k and Sk,j must also be optimal
  (cut-and-paste argument: a better sub-solution would yield a better solution to Si,j)
Ai,j = optimal solution for Si,j
Ai,j = Ai,k U {ak} U Ak,j, if Si,j is not empty and we know ak
c[i, j] = |Ai,j| = maximum size of a subset of mutually compatible activities in Si,j
c[i, j] = 0 if i >= j
13. Recursive Formulation
As we do not know the value of k, we must try all the possible choices in order to find it:
  c[i, j] = 0, if Si,j = Φ
  c[i, j] = max over ak in Si,j of (c[i, k] + 1 + c[k, j]), otherwise
Now, we can solve this problem using DP
  O(n^2) sub-problems
  O(n) choices at each step
  O(n^3) complexity for the DP solution
We can find a better one by using a greedy strategy!
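For comparison with the greedy solution, here is a memoized top-down sketch of this DP. The sentinel activities a0 and an+1 follow the slides; the compatibility test f[k] <= s[j] reflects the half-open interval convention [si, fi), and the function name is my own:

```python
from functools import lru_cache

def dp_activity_count(s, f):
    """O(n^3) DP from the recursive formulation: c(i, j) is the maximum
    size of a set of mutually compatible activities within S_ij.
    Sentinels a0 = [-inf, 0) and a_{n+1} = [inf, inf) bracket the input."""
    n = len(s)
    s = [float("-inf")] + list(s) + [float("inf")]
    f = [0] + list(f) + [float("inf")]

    @lru_cache(maxsize=None)
    def c(i, j):
        best = 0
        for k in range(i + 1, j):               # try every ak as the split
            if f[i] <= s[k] and f[k] <= s[j]:   # ak fits inside S_ij
                best = max(best, c(i, k) + 1 + c(k, j))
        return best

    return c(0, n + 1)

# The data from the CLRS example slide
s = [1, 2, 4, 1, 5, 8, 9, 11, 13]
f = [3, 5, 7, 8, 9, 10, 11, 14, 16]
print(dp_activity_count(s, f))  # 4
```

This matches the size of {a1, a3, a6, a8} but at a much higher cost than the greedy algorithm that follows.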
14. Greedy Choice
Theorem
  If Si,j is not empty and am is the activity with the earliest finish time in Si,j
  Then, am is used by at least one of the maximum size subsets of mutually compatible activities in Si,j
  Si,m = Φ, therefore only Sm,j needs to be solved
Proof idea: for any other solution to Si,j, we can replace the activity that finishes earliest in this solution (let's call it ak) with am, and these activities are still mutually compatible, as am finishes no later than ak
15. Greedy Choice (2)
The previous theorem offers the greedy choice
The number of sub-problems considered in the optimal solution at each step:
  DP: 2
  Greedy: 1
The number of choices to be considered at each step:
  DP: j-i-1
  Greedy: 1
As we have a single choice and a single sub-problem to solve, we can solve the problem top-down
16. Greedy Solution
In order to solve Si,j:
  Just choose the activity with the earliest finish time in Si,j: am
  Then, solve Sm,j
In order to solve S = S0,n+1:
  First choice am1 (it is always a1 – why?) for S0,n+1
  Then need to solve Sm1,n+1
  Second choice am2 for Sm1,n+1
  Then need to solve Sm2,n+1
  ...
17. Recursive Algorithm
Because the greedy algorithm considers the activities sorted by their finish time, we first need to sort by the finish time!
  O(n log n)
RecursiveActivitySelection(s, f, i, n)
  m = i + 1
  // find the first activity (the one with the earliest finish time)
  // that starts after activity i finishes
  WHILE (m <= n AND s[m] < f[i])
    m++
  IF (m <= n) THEN
    RETURN {am} U RecursiveActivitySelection(s, f, m, n)
  RETURN Φ
Initial call: RecursiveActivitySelection(s, f, 0, n)
Complexity: Θ(n) – go through each activity once
18. Iterative Algorithm
Can turn the recursive algorithm into an iterative one
IterativeActivitySelection(s, f, n)
  A = {a1}
  i = 1
  FOR (m = 2..n)
    IF (s[m] >= f[i]) // am is compatible with the last selected activity
      A = A U {am}
      i = m
  RETURN A
Complexity: Θ(n) – go once through each activity
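The iterative algorithm fits in a few lines of Python; below it runs on the data from the CLRS example slide (0-based indices here, so the result corresponds to {a1, a3, a6, a8}):

```python
def select_activities(s, f):
    """Greedy activity selection. Assumes activities are already sorted
    by finish time. Returns the 0-based indices of the chosen activities."""
    if not s:
        return []
    selected = [0]       # the first activity always belongs to some optimum
    last_finish = f[0]
    for m in range(1, len(s)):
        if s[m] >= last_finish:      # compatible with the last chosen one
            selected.append(m)
            last_finish = f[m]
    return selected

# Data from the example slide (activities a1..a9)
s = [1, 2, 4, 1, 5, 8, 9, 11, 13]
f = [3, 5, 7, 8, 9, 10, 11, 14, 16]
print(select_activities(s, f))  # [0, 2, 5, 7] -> {a1, a3, a6, a8}
```

If the input is not sorted, sorting the (s, f) pairs by f first preserves the O(n log n) bound from the recursive version.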
19. Huffman Trees
Efficient method of compressing files
  Especially text files
Builds a Huffman tree in a greedy fashion
  Specific for the encoded text/file
  It is used for compressing the file
  The compressed file and the Huffman tree are used to recreate the original file
Example text: "ana are mere"
20. Huffman Trees (2)
K – set of keys that are encoded (the characters in the original text file)
In the original text, all the keys are represented on the same number of bits
Objective: we want to find an alternative representation for each key such that:
  The keys that are most frequent are represented on a smaller number of bits than the ones that are less frequent
  We are able to distinguish easily in this new representation what the keys in the original file were
Example: text files
  Original representation: char – 8 bits or ASCII – 7 bits
  New representation: 1 bit for the most frequent character in the encoded text, and so on...
21. Huffman Trees (3)
Huffman encoding tree:
  An ordered binary tree
  Only the leaves contain the keys from the set K
  All internal nodes must have exactly 2 children
  The edges are coded:
    0 – left edge
    1 – right edge
The code in the new representation for each key is the sequence of edge codes from the root to the leaf containing that key
Start from the frequency of appearance of each key in the original file: p(k) for each k in K
Example: "ana are mere"
  p(a) = p(e) = 0.25; p(n) = p(m) = 0.083; p(r) = p(' ') = 0.166
22. The Huffman Tree
T – encoding tree for the set of keys K
code_length(k) – the length of the code for key k in tree T
level(k, T) – the level in tree T of the leaf corresponding to key k
The cost of an encoding tree T for a set of keys K that have the frequencies p:

  Cost(T) = Σ(k in K) code_length(k) * p(k) = Σ(k in K) level(k, T) * p(k)

Huffman Tree = An encoding tree of minimum cost for a set of keys K with frequencies p
  The codes in this tree are called Huffman codes
Optimization problem!
23. Building the Huffman Tree
We can devise a greedy algorithm for building a Huffman tree for any set of keys K
Steps:
1. For each key k in K build a simple tree with a single node that contains k and has the weight w = p(k). Let the forest of trees be called Forest.
2. Choose any two trees from Forest that have the minimum weights. Let them be t1 and t2.
3. Remove t1 and t2 from Forest and add a new tree that:
   a) Has a new root r that does not contain any key (as it is not a leaf)
   b) Has t1 and t2 as the two descendants of r
   c) Has the weight w(r) = w(t1) + w(t2)
4. Repeat steps 2 and 3 until Forest contains a single tree
   => the Huffman tree
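Since the pseudocode itself is left for the whiteboard, here is one possible Python sketch of steps 1-4, using a min-heap as Forest and character counts in place of the frequencies p(k); the nested-pair tree representation is my own choice:

```python
import heapq
from collections import Counter

def build_huffman_codes(text):
    """Greedily build a Huffman tree and return the code of each key.
    A tree is either a key (a leaf) or a (left, right) pair (an internal
    node); heap entries are (weight, tiebreak, tree) so that trees are
    never compared directly."""
    freq = Counter(text)
    forest = [(w, i, key) for i, (key, w) in enumerate(freq.items())]
    heapq.heapify(forest)
    tiebreak = len(forest)
    while len(forest) > 1:
        # Step 2: pick the two trees of minimum weight...
        w1, _, t1 = heapq.heappop(forest)
        w2, _, t2 = heapq.heappop(forest)
        # Step 3: ...and replace them with a tree whose root holds no key
        heapq.heappush(forest, (w1 + w2, tiebreak, (t1, t2)))
        tiebreak += 1
    codes = {}
    def walk(tree, code):
        if isinstance(tree, tuple):
            walk(tree[0], code + "0")   # 0 - left edge
            walk(tree[1], code + "1")   # 1 - right edge
        else:
            codes[tree] = code or "0"   # corner case: a single key
    walk(forest[0][2], "")
    return codes

codes = build_huffman_codes("ana are mere")
# The frequent keys ('a', 'e') receive shorter codes than the rare
# ones ('n', 'm'); the total encoded length is 30 bits for this text.
```

Tie-breaking may produce different trees than the one on the slides, but any tree built this way has the same minimum cost.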
26. Algorithm for building the Huffman Tree
On the whiteboard
Straightforward from the pseudocode
27. Decoding the File
Encoded text: 0010100011000101000111001101011
  a n a ' ' a r e ' ' m e r e
We also need the Huffman tree
Starting from the first bit, we walk the tree from the root to the first leaf we encounter
  When at a leaf, append the key corresponding to that leaf to the decoded text
  Go to the root again and repeat until we reach the end of the encoded text
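The decoding walk can be sketched as follows, with a leaf represented as a key and an internal node as a (left, right) pair; the tiny tree in the usage line is made up for illustration and is not the actual tree for "ana are mere":

```python
def decode(bits, tree):
    """Walk the Huffman tree from the root; each bit picks a child,
    and reaching a leaf emits its key and restarts from the root."""
    out = []
    node = tree
    for bit in bits:
        node = node[0] if bit == "0" else node[1]
        if not isinstance(node, tuple):  # reached a leaf
            out.append(node)
            node = tree                  # go back to the root
    return "".join(out)

# Hypothetical tree: 'a' = 0, 'n' = 10, 'r' = 11
tree = ("a", ("n", "r"))
print(decode("010011", tree))  # anar
```

Because only leaves carry keys, no code is a prefix of another, so the walk never emits a key too early.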
28. Greedy Algorithms – Conclusions
Greedy algorithms that build the globally optimal solution can be devised for some problems that have an optimal substructure
Steps for devising a greedy algorithm:
  Determine the optimal substructure
  Develop a recursive solution
  Prove that at any stage of recursion, one of the optimal choices is the greedy choice. Therefore, it's always safe to make the greedy choice
  Show that all but one of the sub-problems resulting from the greedy choice are empty
  Develop a recursive greedy algorithm
  Convert it to an iterative algorithm
29. Greedy Algorithms – Conclusions (2)
Properties for optimization problems that accept correct greedy solutions:
  Optimal substructure
  Greedy choice property
Preprocessing is essential for efficient greedy algorithms:
  E.g. sort some data prior to processing it with the greedy algorithm
30. Greedy vs. DP
Similarities:
  Optimization problems
  Optimal substructure (including division into sub-problems)
  Make a choice at each step
Differences:
  Greedy: 1 choice, 1 sub-problem to be solved
  Greedy is top-down, DP is bottom-up
  Greedy has the greedy choice property
  Greedy does not use memoization, as the other sub-problems are not important (they are discarded if they are not used by the greedy choice)
31. Knapsack Problem
Given a set of n items:
  Values v[i]
  Weights w[i]
Which are the items that should be carried in order to maximize the total value that can be carried in a knapsack of total weight W?
  Optimization problem
Similar to the change-making problem:
  Given a set of denominations (coins and banknotes for a currency), find the minimum number of coins and banknotes needed to change a given amount of money
32. Knapsack Problem (2)
Can be solved efficiently if:
  We are allowed to carry fractions of the items
    Fractional knapsack problem
    Greedy solution: sort the items according to the ratio v[i]/w[i] and choose the items in decreasing order of this ratio until the knapsack is full
  We are not allowed to carry fractions of the items, but the values for weights and values are relatively small integers
    Integer (0/1) knapsack problem
    DP solution: on whiteboard
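The fractional greedy can be sketched as follows; the item data in the usage line is a common textbook example, not from the slides:

```python
def fractional_knapsack(values, weights, capacity):
    """Greedy fractional knapsack: take items in decreasing order of
    value/weight ratio, taking only the fitting fraction of the last one."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if capacity <= 0:
            break
        take = min(w, capacity)       # the whole item, or what still fits
        total += v * (take / w)
        capacity -= take
    return total

# Items (value, weight) = (60, 10), (100, 20), (120, 30); capacity 50:
# take the first two whole, then 20/30 of the third -> 60 + 100 + 80
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0
```

The same greedy rule fails for the 0/1 variant (taking by ratio can leave capacity unused), which is why the integer case needs DP.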
33. Knapsack Problem (3)
However, in the general case:
  Real values for weights
  Very high values for weights
The problem can only be solved exactly using a backtracking approach
The problem is NP-complete
  The class of the hardest problems in NP (at this moment, it's considered that these problems cannot be solved in polynomial time)
34. References
CLRS – Chapter 16
MIT OCW – Introduction to Algorithms – video lecture 16
http://www.math.fau.edu/locke/Greedy.htm