The document provides resources for learning data structures and algorithms. It lists popular books, training websites, and YouTube channels, then covers data structures such as arrays, lists, stacks, queues, hash tables, trees, and graphs, explaining each with examples. It also discusses common algorithms for sorting, searching, and graph processing, including topological sort, minimum spanning trees, and shortest-path finding.
INFORMATIVE ESSAY
The purpose of the Informative Essay assignment is to choose a job or task that you know how to do and then write a minimum of 2 full pages, maximum of 3 full pages, Informative Essay teaching the reader how to do that job or task. You will follow the organization techniques explained in Unit 6.
Here are the details:
1. Read the Lecture Notes in Unit 6. You may also find the information in Chapter 10.5 in our text on Process Analysis helpful. The lecture notes will really be the most important to read in writing this assignment. However, here is a link to that chapter that you may look at in addition to the lecture notes:
https://open.lib.umn.edu/writingforsuccess/chapter/10-5-process-analysis/ (Links to an external site.)
2. Choose your topic, that is, the job or task you want to teach. As the notes explain, this should be a job or task that you already know how to do, and it should be something you can do well. At this point, think about your audience (reader). Will your reader need any knowledge or experience to do this job or task, or will you write these instructions for a general reader where no experience is required to perform the job?
3. Plan your outline to organize this essay. Unit 6 notes offer advice on this organization process. Be sure to include an introductory paragraph that has the four main points presented in the lecture notes.
4. Write the essay. It will need to be at least 2 FULL pages long, maximum of 3 full pages long. You will use the MLA formatting that you used in previous essays from Units 3, 4, and 5.
5. Be sure to include a title for your essay.
6. After writing the essay, be sure to take time to read it several times for revision and editing. It would be helpful to have at least one other person proofread it as well before submitting the assignment.
Quiz2
# comments start with #
# to quit q()
# two steps to install any library
#install.packages("rattle")
#library(rattle)
setwd("D:/AJITH/CUMBERLANDS/Ph.D/SEMESTER 3/Data Science & Big Data Analy (ITS-836-51)/RStudio/Week2")
getwd()
x <- 3 # x is a vector of length 1
print(x)
v1 <- c(2,4,6,8,10)
print(v1)
print(v1[3])
v <- c(1:10) #creates a vector of 10 elements numbered 1 through 10. More complicated data
print(v)
print(v[6])
# Import test data
test<-read.csv("CVEs.csv")
test1<-read.csv("CVEs.csv", sep=",")
test2<-read.table("CVEs.csv", sep=",")
write.csv(test2, file="out.csv")
# Write CSV in R
write.table(test1, file = "out1.csv",row.names=TRUE, na="",col.names=TRUE, sep=",")
head(test)
tail(test)
summary(test)
head_rows <- head(test) # avoid naming variables "head"/"tail": that shadows the built-in functions
tail_rows <- tail(test)
cor(test$X, test$index)
sd(test$index)
var(test$index)
plot(test$index)
hist(test$index)
str(test$index)
quit()
Quiz3
setwd("C:/Users/ialsmadi/Desktop/University_of_Cumberlands/Lectures/Week2/RScripts")
getwd()
# Import test data
data<-read.csv("yearly_sales.csv")
#A 5-number summary is a set of 5 descriptive statistics for summarizing a continuous univariate data set.
#It consists o ...
This presentation covers IPython: how to install it, how to get started with it, how to plot graphs, how to animate and style them, the different types of graphs, and the functions IPython provides to help you.
Monads and Monoids: from daily java to Big Data analytics in Scala
Finally, after two decades of evolution, Java 8 made a step towards functional programming. What can Java learn from other mature functional languages? How can obscure mathematical abstractions such as Monad or Monoid be leveraged in practice? People usually find them scary and difficult to understand. Oleksiy will explain these concepts in simple words to give a feel for a powerful tool applicable in many domains, from daily Java and Scala routines to Big Data analytics with Storm or Hadoop.
Effective Numerical Computation in NumPy and SciPy, by Kimikazu Kato
Presented at PyCon JP 2014.
Video is available at
http://bit.ly/1tXYhw6
This talk explores case studies of effective usage of Numpy/Scipy and shows that the computational speed sometimes improves drastically with the appropriate derivation of formulas and performance-conscious implementation. I especially focus on scipy.sparse, the module for sparse matrices, which is often useful in the areas of machine learning and natural language processing.
Matplotlib (HendraPurnama31)
Matplotlib is a Python 2D plotting library that produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
Abstract: This PDSG workshop introduces the basics of Python libraries used in machine learning. Libraries covered are NumPy, Pandas and Matplotlib.
Level: Fundamental
Requirements: One should have some knowledge of programming and some statistics.
Accelerate your Kubernetes clusters with Varnish Caching, by Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic*, by Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024, by Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
The Art of the Pitch: WordPress Relationships and Sales, by Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
DA_02_algorithms.pptx
1. Become a Data Architect – session 2
Data Structures & Algorithms
There are many sites with questions and answers, books, youtube videos, etc.
Books:
• Introduction to Algorithms, 3rd Edition (The MIT Press)
• Cracking the Coding Interview: 189 Programming Questions and Solutions 6th Edition - by Gayle Laakmann McDowell
• Computer Algorithms - by Horowitz, Sahni, Rajsekaran
• Data Structures and Algorithms - by Aho, Ullman, Hopcroft
• Algorithm Design: Foundations, Analysis, and Internet Examples by Goodrich, Tamassia
• Designing Data-Intensive Applications - by Martin Kleppmann
Training websites:
- https://leetcode.com/
- https://www.hackerrank.com/
- https://www.educative.io/
- https://www.interviewcake.com/
Youtube - multiple channels, for example: Gaurav Sen
- https://www.youtube.com/watch?v=_5vrfuwhvlQ
- https://www.youtube.com/watch?v=zaRkONvyGr8
etc.
2. =======================================
array - arr[i] - elements have same type
=======================================
list - lst[i] - like array but:
- elements may have different size and type
- elements may be complex structures (lists, dicts, etc.)
[1,2,3]
[1,"dog",[2,3]]
=======================================
tuple - like a list, but immutable (cannot be changed)
(1,2,3)
((1,2),(3,4))
=======================================
string - "abcde fghij" - ordered set of characters
=======================================
sequence - any ordered set (list, tuples, string, ...)
=======================================
index - numbering elements in a sequence (usually starts with 0)
aa[i]
aa[i][j]
=======================================
slice - subset of elements of a sequence, for example:
aa = "mama papa"
012345678
bb = aa[2:5] # "ma " (2 - included, 5 - not included)
=======================================
3. =======================================
stack (LIFO = Last In First Out; also called FILO, First In Last Out)
st = [1,2,3]
st.append(4) [1,2,3,4]
aa = st.pop() aa=4, st=[1,2,3]
=======================================
queue (FIFO = First In First Out)
import collections
qq = collections.deque()
for ii in range(6):
qq.append(ii) # qq == deque([0, 1, 2, 3, 4, 5])
aa = qq.popleft() # 0
bb = qq.popleft() # 1
# qq == deque([2, 3, 4, 5])
=======================================
Priority Queue
Suppose that we create a queue of tickets.
Each ticket is described by a tuple with two numbers:
(priority_num, ticket_num)
priority_num : 1..10 (1=highest, 10=lowest priority)
ticket_num : sequentially growing number
As tickets enter the queue, we sort the queue
(by priority_num, ticket_num)
so that tickets with higher priority (lower priority_num)
will move forward.
And within same priority, "earlier" tickets (with lower
ticket_num) will move forward.
The queue is kept sorted before a new element is added,
so sorting simply means moving the new element forward
until it finds its place.
Usually priority queue is implemented using
a min-heap structure (see later in this document)
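The ticket queue above maps directly onto Python's heapq module (a min-heap, covered later in this document). A minimal sketch, with made-up ticket data: tuples sort by priority_num first, then by ticket_num, so equal priorities keep arrival order.

```python
import heapq

pq = []
heapq.heappush(pq, (3, 1))   # priority 3, ticket 1
heapq.heappush(pq, (1, 2))   # priority 1, ticket 2
heapq.heappush(pq, (1, 3))   # priority 1, ticket 3
heapq.heappush(pq, (10, 4))  # priority 10, ticket 4

# heappop always returns the smallest tuple: highest priority, earliest ticket
order = [heapq.heappop(pq) for _ in range(len(pq))]
print(order)  # [(1, 2), (1, 3), (3, 1), (10, 4)]
```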
5. hash (hashmap, dict)
stores key-value pairs
famous for having constant-time for insert/delete/read
aa = {"k1":"v1", "k2":55} # {'k1':'v1', 'k2':55}
aa['k3'] = 33 # {'k1':'v1', 'k2':55, 'k3':33}
# deleting
if 'k2' in aa:
    del aa['k2'] # {'k1':'v1', 'k3':33}
# upsert
if 'k2' in aa:
    aa['k2'] += 1
else:
    aa['k2'] = 1
Merging two dicts:
starting python 3.5: z = {**d1, **d2}
starting python 3.9: z = d1 | d2
Using defaultdict:
from collections import defaultdict
dd = defaultdict(list)
dd['k1'].append(1)
dd['k1'].append(2)
dd['k1'].append(3)
for kk in dd.keys():
    print(f"{kk} => {dd[kk]}") # k1 => [1, 2, 3]
How hash works:
We make an array of buckets.
The key is mapped to one of the buckets
using a simple hashing function
The tuple (key,val) is placed in this bucket.
Reading/deleting by key - same hashing function is used
to locate the item
# ------------------------------
Example of hashing function:
# ------------------------------
hash = 0
for char in key_str:
    hash = hash*33 + ord(char)
idx = hash % num_buckets
# ------------------------------
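A minimal sketch of the bucket scheme above; the bucket count and the per-bucket lists for collisions are illustrative choices, not prescribed by the text.

```python
class ToyHash:
    # Minimal separate-chaining hash map: an array of buckets,
    # each bucket holding (key, val) tuples.
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key_str):
        h = 0
        for char in key_str:
            h = h * 33 + ord(char)
        return h % len(self.buckets)   # modulo picks the bucket

    def set(self, key, val):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, val)  # overwrite existing key
                return
        bucket.append((key, val))

    def get(self, key):
        # Reading uses the same hashing function to locate the bucket
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

h = ToyHash()
h.set("k1", "v1")
h.set("k2", 55)
print(h.get("k2"))  # 55
```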
6. Consistent hashing (ring hash)
Imagine that you have a layer of web servers followed by a layer of app servers
• How do you do load balancing between the layers?
• How do you determine which server on the next layer to go to?
• How do you do it consistently, to take advantage of caching on each layer?
• How do you re-hash the system when you scale the number of servers up or down?
Consistent hashing solves these problems by providing a distribution scheme
which does not directly depend on the number of servers.
Idea:
1. Make a circle with a big number of "positions" (2^32 .. 2^160)
This circle is called "hash ring".
2. Map each server to 1024 random positions on circle
3. Map each request (IP/port/...) to the same circle, and go clockwise
to find the server to process this request
4. Continue going clockwise to find a 2nd server (for backup)
• https://medium.com/system-design-blog/consistent-hashing-b9134c8a9062
• https://en.wikipedia.org/wiki/Consistent_hashing
• https://www.youtube.com/watch?v=zaRkONvyGr8
Original MIT Thesis by Daniel Lewin (1998):
"Consistent hashing and random trees : algorithms for caching in distributed networks"
• https://dspace.mit.edu/handle/1721.1/9947
• https://github.com/papers-we-love/papers-we-love/blob/master/distributed_systems/consistent-hashing-and-random-trees.pdf
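The steps above can be sketched as follows; using md5 as the ring hash, a 2^32-position ring, and 8 virtual positions per server (instead of 1024) are illustrative simplifications.

```python
import bisect
import hashlib

def _pos(s):
    # Map any string to a position on the ring (2**32 positions here).
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % 2**32

class HashRing:
    def __init__(self, servers, vnodes=8):
        # Each server is mapped to several pseudo-random ring positions.
        self.ring = sorted((_pos(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.positions = [p for p, _ in self.ring]

    def server_for(self, request_key):
        # Go "clockwise": first position >= the request's position,
        # wrapping around at the end of the ring.
        i = bisect.bisect_left(self.positions, _pos(request_key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["app1", "app2", "app3"])
print(ring.server_for("10.0.0.7:443"))  # same server every time for this key
```

Adding or removing a server only moves the keys that fall between its positions and the next ones clockwise, which is the consistency property the slide describes.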
7. Bloom filter
Imagine that we have a huge lookup table (~millions of words)
and we are testing whether a word is in this table or not.
We may use a huge hash on disk, causing big disk I/O.
A Bloom filter uses a small compact bitmap instead of the big hash.
This allows it to reject most negatives,
while allowing very few false positives.
The negatives are definitely negatives,
but the positives are only "maybe" positives.
An empty Bloom filter is a bit array of m bits, all set to 0.
There must also be k different hash functions defined,
each of which maps an element to one of m bits (sets it to 1).
So if we use k hash functions - we can get up to "k" bits set.
Example: m=30, k=10
To query for an element (test whether it is in the set),
feed it to each of the k hash functions to get k array positions.
If any of the bits at these positions is 0, the element is not in the set.
If all are 1, then it may be positive - or false-positive.
If the map is big enough, the bit patterns will be sparse,
and the probability of false positives very low.
- https://hur.st/bloomfilter
Bloom filter was developed by
Burton Howard Bloom, MIT
graduate, in 1970.
https://www.cs.princeton.edu/courses/archive/spr05/cos598E/bib/p422-bloom.pdf
https://www.quora.com/Where-can-one-find-a-photo-and-biographical-details-for-Burton-Howard-Bloom-inventor-of-the-Bloom-filter
Bloom filters are called filters because they are often used as a cheap first pass to filter out segments of a dataset that do not match a query.
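A minimal sketch of the m=30, k=10 example above; deriving the k hash functions by salting sha256 is an illustrative choice, and with such a small bitmap the false-positive rate is high in practice.

```python
import hashlib

class BloomFilter:
    # m-bit array (stored as one Python int), k hash functions.
    def __init__(self, m=30, k=10):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, word):
        # k "different" hash functions, simulated by salting one hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{word}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, word):
        for p in self._positions(word):
            self.bits |= 1 << p          # set up to k bits

    def might_contain(self, word):
        # Any 0 bit -> definitely absent; all 1s -> "maybe" present.
        return all(self.bits >> p & 1 for p in self._positions(word))

bf = BloomFilter()
bf.add("cat")
print(bf.might_contain("cat"))  # True
```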
8. BST (Binary Search Tree)
rebalancing BST (rotations)
height of the tree h = log2(N)..N, search time ~h
=======================================
Trie - prefix tree, good for words/characters
(one node may have 26 children)
typically implemented using dictionaries
=======================================
Linked List
head of the list
going through the list
finding a cycle in a linked list
using the "tortoise and hare" technique (slow and fast runners)
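The slow/fast-runner cycle check can be sketched as follows; Node is a minimal made-up list cell.

```python
class Node:
    def __init__(self, val):
        self.val, self.next = val, None

def has_cycle(head):
    # Floyd's cycle detection: fast moves two steps per slow step,
    # so they can only meet if the list loops back on itself.
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
print(has_cycle(a))  # False
c.next = b           # create a cycle: 1 -> 2 -> 3 -> 2 -> ...
print(has_cycle(a))  # True
```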
9. Binary Heaps (MaxHeap and MinHeap)
Data structure which looks like binary tree
where each parent is >= (or <= for min-heap) than values of children
Binary heaps are a common way of implementing priority queues.
Given an array
we can "heapify" it in place by swapping elements:
0 - top (root) element
1,2 - next layer (left and right children)
3,4,5,6 - next layer
etc.
0 1 2 3 4 5 6 ...
--- ------- ...
last parent's idx = floor(N/2) - 1
Time complexity:
building heap - O(N)
push (at bottom) and adjust - O(lg(N))
pop (from top) and adjust - O(lg(N))
heap sort - O(N*log(N))
https://stackoverflow.com/questions/9755721/how-can-building-a-heap-be-on-time-complexity
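Python's heapq module implements exactly these operations on a plain list; a small demonstration with made-up numbers:

```python
import heapq

arr = [5, 3, 8, 1, 9, 2]
heapq.heapify(arr)          # O(N) in-place min-heap: arr[0] is the smallest
print(arr[0])               # 1

# Children of the node at index i live at indexes 2*i+1 and 2*i+2.
assert arr[0] <= arr[1] and arr[0] <= arr[2]

heapq.heappush(arr, 0)      # push at the bottom, bubble up: O(log N)
print(heapq.heappop(arr))   # 0  (pop from the top, fix the heap: O(log N))
```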
10. Sorting:
• in-place
• stable sort (preserves order of duplicates)
• bubble sort O(n^2)
(go through whole list swapping neighbors until nothing to swap)
• selection sort O(n^2)
divide list into two portions: left sorted, right not sorted yet
on each step: select min value from right - append to left
• insertion sort O(n^2)
(take one element, insert it in its place on the left, repeat)
• merge sort O(n log n)
binary division, sort small pieces, then merge them layer by layer
• heap sort O(n log n)
make MinHeap in the array,
take min value, out of the heap, fix the heap, repeat.
• quick sort O(n log n) - or O(n^2) in worst case
pick an element in the middle called a pivot
move to the right of it all elements which are > pivot
move to the left of it all elements which are < pivot
Now recursively apply the same process to left and right subarrays.
• Timsort O(n) - O(n log n)
used in Python; a combination of the merge sort and insertion sort algorithms.
Takes advantage of runs of consecutive ordered elements
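As one worked example, the quicksort steps above can be sketched as follows (a non-in-place version for clarity; real implementations usually partition in place):

```python
def quick_sort(aa):
    # Pick the middle element as pivot; smaller values go left,
    # larger go right, then recurse on each side.
    if len(aa) <= 1:
        return aa
    pivot = aa[len(aa) // 2]
    left = [x for x in aa if x < pivot]
    mid = [x for x in aa if x == pivot]
    right = [x for x in aa if x > pivot]
    return quick_sort(left) + mid + quick_sort(right)

print(quick_sort([7, 2, 9, 2, 5]))  # [2, 2, 5, 7, 9]
```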
11. topological sort
schedule a sequence of jobs based on their dependencies.
Example: gnu-make files for compilation, etc.
Dependencies may be represented by a graph:
jobs are points (vertices of a graph)
x ---> y means that we need to calculate "x" before "y"
(x needed for calculating "y")
Kahn algorithm:
jobs=[1,2,3,4, 5, 6]
dependencies = [(1,3), (3,2), (3,4), (5,6)]
# (m, n) where n is a dependency of m
1. for each job calculate number of dependencies NoD
2. find jobs with no dependencies - put them into a queue.
3. process the queue one by one like this:
take queue element, append to the output
take its dependencies, reduce their NoD by one
if NoD for any of them becomes zero, add those to the queue
Complexity: O(Njobs + Ndependencies)
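The three steps of Kahn's algorithm, applied to the jobs/dependencies example above, can be sketched as:

```python
from collections import deque, defaultdict

def topo_sort(jobs, dependencies):
    # (m, n) means job n must run before job m.
    nod = {j: 0 for j in jobs}        # NoD = number of dependencies per job
    dependents = defaultdict(list)    # n -> jobs waiting on n
    for m, n in dependencies:
        nod[m] += 1
        dependents[n].append(m)
    queue = deque(j for j in jobs if nod[j] == 0)  # step 2
    order = []
    while queue:                                   # step 3
        n = queue.popleft()
        order.append(n)
        for m in dependents[n]:
            nod[m] -= 1
            if nod[m] == 0:
                queue.append(m)
    if len(order) != len(jobs):
        raise ValueError("dependency cycle detected")
    return order

print(topo_sort([1, 2, 3, 4, 5, 6], [(1, 3), (3, 2), (3, 4), (5, 6)]))
# [2, 4, 6, 3, 5, 1]
```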
12. Searches:
• Binary search
• DFS (Depth-First Search) - recursion
• BFS (Breadth-First Search) - queue, level by level
Note: Recursion can always be rewritten as iteration
def factorial_recursive(n):
    if n <= 1:
        return 1
    else:
        return n * factorial_recursive(n-1)
def factorial_iterative(n):
    ff = 1
    for ii in range(1, n+1):
        ff *= ii
    return ff
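A BFS sketch to go with the list above, using a queue to expand the graph level by level (the graph itself is a made-up example):

```python
from collections import deque

def bfs(graph, start):
    # Visit nodes level by level; "seen" prevents revisiting.
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(g, "a"))  # ['a', 'b', 'c', 'd']
```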
13. Algorithm complexity (time and space):
Big "O" notation
O(1), O(N) , O(N^2) , O(N*log(N)), O(p*k), etc.
==================================
- bottom-up and top-down algorithms
- divide-and-conquer
- greediness
==================================
declarative vs imperative (procedural) programming
==================================
functional programming
• declarative
• evaluate functions instead of simply setting values
(recursion instead of for-loop)
• create/modify functions at run time
• pass functions as arguments
• use pure functions (avoid side effects (global vars, ...))
==================================
dynamic programming = memoization
(caching results to reuse them instead of recalculating)
common to use a dictionary for that
# procedural:
def factorial(n):
    f = 1
    for i in range(1, n+1):
        f = f*i
    return f
factorial(4) # 24
factorial(6) # 720
# --------------------------------
# functional
def multiply(x, y):
    return x * y
def factorial(n):
    from functools import reduce
    return reduce(multiply, range(1, n+1))
factorial(4) # 24
factorial(6) # 720
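A memoization sketch using a dict cache, applied to Fibonacci numbers; the default-argument cache is a compact illustrative trick, not the only way to keep the dictionary around.

```python
def fib(n, cache={}):
    # Results are cached in a dict and reused instead of recalculated.
    if n not in cache:
        cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return cache[n]

print(fib(10))  # 55
print(fib(50))  # 12586269025  (fast, thanks to the cache)
```

Without memoization the naive recursive version is O(2^n); with the cache each value is computed once.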
14. Bit manipulation
binary positive, negative, addition, shifting, masks
& – Bitwise AND
| – Bitwise OR
~ – Bitwise NOT
^ – XOR
<< – Left Shift
>> – Right Shift
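Small worked examples of the operators above:

```python
x = 0b1010            # 10
print(x & 0b0110)     # 2   (AND keeps common bits:   0b0010)
print(x | 0b0110)     # 14  (OR merges bits:          0b1110)
print(x ^ 0b0110)     # 12  (XOR keeps differing bits: 0b1100)
print(x << 1)         # 20  (left shift = multiply by 2)
print(x >> 1)         # 5   (right shift = floor-divide by 2)
print(~x)             # -11 (NOT flips all bits; two's complement)
```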
15. Typical algorithmic tasks:
• merging two sorted lists
• finding largest word
• finding duplicates (use hash)
• func(N) as func of smaller numbers
• calculate Fibonacci numbers
• do coin-change (using memoization)
• recursive staircase (1,2,3 steps - how many ways?)
• shortest reach path (using BFSearch)
• balanced parentheses (using stack)
• queue with 2 stacks
• keep contacts in a Trie
• mirroring BST
• finding the 2nd largest value in BST (binary tree)
(or verifying that the tree is correct)
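One of the tasks above, balanced parentheses with a stack, as a sketch:

```python
def balanced(s):
    # Push every opening bracket; each closing bracket must match
    # the most recent unmatched opener (top of the stack).
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # leftover openers mean unbalanced

print(balanced("{[()]}"))  # True
print(balanced("([)]"))    # False
```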
16. Turing Machine (TM) (Turing, 1936) - a very simple abstract
computational machine.
An infinite memory tape is divided into discrete "cells".
The machine positions its "head" over a cell and "reads" it.
Then it uses a "finite table" of instructions/rules to:
- optionally update the cell
- move head by one position left or right
- or halt the computation.
NTM (Non-deterministic TM) - more than one possible action
can be taken in some situations. A NTM effectively is able
to duplicate itself at any time, and have each duplicate
take a different execution path.
Turing-complete set of rules - a set which can be used
to simulate a Turing Machine.
Intractable problem - can be solved in theory, but in practice
takes too many resources (time/memory)
Time-complexity classes:
P (Polynomial time by a deterministic Turing machine)
NP (Nondeterministic Polynomial)
NP-hard is the class of decision problems to which all problems
in NP can be reduced in polynomial time
by a deterministic Turing machine.
NP-complete is the intersection of NP-hard and NP.
NP-complete is the class of decision problems in NP
to which all other problems in NP can be reduced
in polynomial time by a deterministic Turing machine.
17. Minimum spanning tree (MST) is a tree that:
- Contains all the nodes (vertices) of the graph.
- has no cycles
- has minimal total length (sum of "weights" of edges)
Example - a cable company wanting to lay lines to multiple houses
while minimizing the amount of cable laid to save money.
Kruskal's algorithm (1956) - a minimum-spanning-tree algorithm
which finds an edge of the least possible weight
that connects any two trees in the forest.
- https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
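A Kruskal sketch on a made-up weighted graph; the tiny union-find (find with path halving) used to track which tree each node belongs to is an implementation choice the text does not prescribe.

```python
def kruskal(num_nodes, edges):
    # Take edges lightest-first; skip any edge whose endpoints
    # are already in the same tree.
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # different trees: edge joins them
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Made-up example graph: (weight, node, node)
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges))  # [(1, 1, 2), (2, 2, 3), (3, 0, 2)]
```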
Shortest Path finding (road navigators, etc.)
- https://en.wikipedia.org/wiki/Shortest_path_problem
Multiple algorithms:
- Dijkstra - single-source shortest path problem with non-negative edge weight.
- Bellman–Ford - single-source problem if edge weights may be negative.
- A* search algorithm - single pair shortest path using heuristics
to try to speed up the search.
- Floyd–Warshall - all pairs shortest paths.
- Johnson's - all pairs shortest paths, and may be faster
than Floyd–Warshall on sparse graphs.
- Viterbi - shortest stochastic path problem with an additional
probabilistic weight on each node.
- etc.
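As one example from the list, a Dijkstra sketch on a made-up graph with non-negative edge weights: a min-heap always expands the currently closest unvisited node.

```python
import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip
        for nb, w in graph[node]:
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd             # found a shorter path to nb
                heapq.heappush(heap, (nd, nb))
    return dist

# Made-up example graph: node -> [(neighbor, weight), ...]
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```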
==================================
Combinatorial search algorithms - achieve efficiency by
reducing the effective size of the search space
or by employing heuristics.
Classic combinatorial search problems include solving
the eight queens puzzle or evaluating moves in games
with a large game tree, such as reversi or chess.