2. CONTENT OF THIS COURSE
Here’s what you’ll find in this course:
● Analyzing the algorithm.
● Linear search.
● Binary Search.
● Sorting.
3. Analyzing the algorithm
To create a good algorithm, we have to make sure we get the best performance out of
it. In this section, we are going to discuss the ways we can analyze the time complexity
of basic functions.
4. Worst, average, and best cases
Worst-case analysis is a calculation of the upper bound on the running time of the
algorithm. In the Search() function, the worst case is an element that does not appear
in the arr array, for instance 60: the function has to iterate through all elements of arr
and still cannot find the element.
Average-case analysis is a calculation of the running time of the algorithm over all
possible inputs, including an element that is not present in the arr array.
Best-case analysis is a calculation of the lower bound on the running time of the
algorithm. In the Search() function, the lower bound is the first element of the arr array,
which is 42. When we search for element 42, the function only iterates through the
arr array once, so the arrSize doesn’t matter.
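The Search() function referred to above is not shown on the slide; here is a minimal linear-search sketch consistent with the description (the array contents other than the first element 42 are made up for illustration):

```python
def search(arr, target):
    """Linear search: return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

arr = [42, 17, 8, 23, 4]          # 42 is the first element, as in the slide
print(search(arr, 42))  # best case: found immediately at index 0
print(search(arr, 60))  # worst case: 60 is absent, every element is checked
```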
5. Big Theta, Big-O, and Big Omega
After discussing asymptotic analysis and the three cases in algorithms, let's discuss
asymptotic notation to represent the time complexity of an algorithm. There are three
asymptotic notations that are mostly used in an algorithm; they are Big Theta, Big-O, and
Big Omega
6. Big Theta, Big-O, and Big Omega
Big Theta notation (Θ) bounds a function from both above and below. As we saw
previously in asymptotic analysis, it also omits constant factors from the notation.
7. Big Theta, Big-O, and Big Omega
Big-O notation (O) bounds a function from above only, using the upper bound of an
algorithm. For the previous function, f(n) = 4n + 1, we can say that the time complexity
of f(n) is O(n). Using Big Theta notation, we can say that the worst-case time
complexity of f(n) is Θ(n) and the best-case time complexity of f(n) is Θ(1).
8. Big Theta, Big-O, and Big Omega
Big Omega notation (Ω) is the contrary of Big-O notation: it uses the lower bound to
analyze time complexity. In other words, it corresponds to the best case in Big Theta
notation. For f(n), we can say that the time complexity is Ω(1).
9. O(N) — "worst case" definition
In the worst case, I need to do approximately N steps for an input of size N.
10. O(N) — "scaling" definition
For every new item that gets added to my input, I need to do a new step. We say "our runtime
scales linearly with the size of our input".
11. Ω(1) — "best case" definition
In the best case, I only need to do a constant number of steps to find my solution.
12. O(N²) — "worst case" definition
In the worst case, I need to do approximately N² steps if
my input size is N.
13. Ω(N²) — "best case" definition
In the best case, I need to do approximately N² steps if
my input size is N.
14. O(N²) — "scaling" definition
For every new item that gets added to my input, I need to do approximately N new
steps. We say "our runtime scales quadratically with the size of our input".
15. Ω(N) — "best case" definition
In the best case, I need to do approximately N steps if
my input size is N.
16. O(log2(n)) — "worst case" definition
In the worst case, I need to do log2(n) steps to find my
solution.
17. O(log2(n)) — "scaling" definition
I don't need to take another step in my algorithm until I
double my input.
18. Binary search
Suppose you’re searching for a word in a dictionary. The word starts with K. You could
start at the beginning and keep flipping pages until you get to the Ks. But you’re more likely
to start at a page in the middle, because you know the Ks are going to be near the middle
of the dictionary.
20. Binary search
Now suppose you log on to Facebook. When you do, Facebook has to verify that you have an
account on the site. So, it needs to search for your username in its database. Suppose your
username is kareem. Facebook could start from the As and search for your name—but it
makes more sense for it to begin somewhere in the middle. This is a search problem. And all
these cases use the same algorithm to solve the problem: binary search.
21. Binary search
Binary search is an algorithm; its input is a sorted list of elements (I’ll explain later why it
needs to be sorted). If an element you’re looking for is in that list, binary search returns the
position where it’s located. Otherwise, binary search returns null.
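The slides describe binary search in prose only; a minimal Python sketch of the algorithm as described (returning None when the element is absent):

```python
def binary_search(items, target):
    """Return the position of target in the sorted list items, or None."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        guess = items[mid]
        if guess == target:
            return mid
        if guess < target:
            low = mid + 1      # target can only be in the upper half
        else:
            high = mid - 1     # target can only be in the lower half
    return None

print(binary_search([1, 3, 5, 7, 9], 7))   # → 3
print(binary_search([1, 3, 5, 7, 9], 4))   # → None
```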
24. Binary search
Suppose you’re looking for a word in the dictionary. The dictionary has 240,000 words. In the
worst case, how many steps do you think each search will take?
Simple search could take 240,000 steps if the word you’re looking for is the very last one in
the book. With each step of binary search, you cut the number of words in half until you’re
left with only one word.
25. Binary search
So binary search will take 18 steps—a big difference! In general, for any list of n elements,
binary search will take log2 n steps to run in the worst case, whereas simple search will
take n steps.
27. Binary search
For a list of 1,024 elements, log2 1,024 = 10, because 2^10 = 1,024. So for a list of 1,024
numbers, you’d have to check 10 numbers at most.
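The step counts quoted above can be checked directly with the standard math module (the 4-billion figure comes from the running-time discussion that follows):

```python
import math

# binary search takes at most ceil(log2(n)) steps for a sorted list of n items
for n in (1_024, 240_000, 4_000_000_000):
    print(f"{n:,} items -> at most {math.ceil(math.log2(n))} steps")
```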
29. Binary search-Running time
Generally you want to choose the most efficient algorithm— whether you’re trying to
optimize for time or space.
Back to binary search. How much time do you save by using it? Well, the first approach
was to check each number, one by one. If this is a list of 100 numbers, it takes up to 100
guesses. If it’s a list of 4 billion numbers, it takes up to 4 billion guesses. So the maximum
number of guesses is the same as the size of the list. This is called linear time. Binary
search is different. If the list is 100 items long, it takes at most 7 guesses. If the list is 4
billion items, it takes at most 32 guesses. Powerful, eh? Binary search runs in logarithmic
time (or log time, as the natives call it). Here’s a table summarizing our findings today
31. Binary search-Big O notation
Big O notation is special notation that tells you how fast an algorithm is.
Big O establishes a worst-case run time
32. Visualizing different Big O run times
Here’s a practical example you can follow at home with a few pieces of paper and a pencil.
Suppose you have to draw a grid of 16 boxes.
Algorithm 1 One way to do it is to draw 16 boxes, one at a time. Remember, Big O
notation counts the number of operations. In this example, drawing one box is one
operation. You have to draw 16 boxes. How many operations will it take, drawing
one box at a time?
34. Visualizing different Big O run times
You can “draw” twice as many boxes with every fold, so you can draw 16 boxes in 4 steps.
What’s the running time for this algorithm? Come up with running times for both
algorithms before moving on. Answers: Algorithm 1 takes O(n) time, and algorithm 2 takes
O(log n) time.
35. Some common Big O run times
Here are five Big O run times that you’ll encounter a lot, sorted from fastest to slowest:
● O(log n), also known as log time. Example: binary search.
● O(n), also known as linear time. Example: simple search.
● O(n * log n). Example: a fast sorting algorithm, like quicksort.
● O(n²). Example: a slow sorting algorithm, like selection sort.
● O(n!). Example: a really slow algorithm, like the traveling salesperson.
36. Visualizing different Big O run times
Algorithm speed isn’t measured in seconds, but in growth of
the number of operations.
Instead, we talk about how quickly the run time of an
algorithm increases as the size of the input increases.
37. The traveling salesperson
You might have read that last section and thought, “There’s no way I’ll ever run
into an algorithm that takes O(n!) time.” Well, let me try to prove you wrong!
Here’s an example of an algorithm with a really bad running time. This is a
famous problem in computer science, because its growth is appalling and some
very smart people think it can’t be improved. It’s called the traveling salesperson
problem
38. The traveling salesperson
● You have a salesperson.
● The salesperson has to go to five cities.
● This salesperson, whom I’ll call Opus, wants to hit all five cities while traveling the minimum
distance. Here’s one way to do that: look at every possible order in which he could travel to the
cities.
39. The traveling salesperson
● He adds up the total distance and then picks the path with the lowest distance. There are 120
permutations with 5 cities, so it will take 120 operations to solve the problem for 5 cities. For 6
cities, it will take 720 operations (there are 720 permutations). For 7 cities, it will take 5,040
operations!
40. The traveling salesperson
In general, for n items, it will take n! (n factorial) operations to compute the result. So this is O(n!) time,
or factorial time. It takes a lot of operations for everything except the smallest numbers. Once you’re
dealing with 100+ cities, it’s impossible to calculate the answer in time—the Sun will collapse first. This is
a terrible algorithm! Opus should use a different one, right? But he can’t. This is one of the unsolved
problems in computer science. There’s no fast known algorithm for it, and smart people think it’s
impossible to have a smart algorithm for this problem. The best we can do is come up with an
approximate solution; see chapter 10 for more.
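The brute-force approach described above can be sketched directly; the five city coordinates below are made up for illustration:

```python
from itertools import permutations
from math import dist, factorial

# hypothetical city coordinates
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6), "E": (3, 7)}

def tour_length(order):
    """Total distance of visiting the cities in the given order."""
    return sum(dist(cities[a], cities[b]) for a, b in zip(order, order[1:]))

# look at every possible order: n! permutations for n cities
best = min(permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))
print(factorial(5), factorial(6), factorial(7))  # → 120 720 5040
```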
41. Selection Sort-How Memory Works
Imagine you go to a show and need to check your things. A chest of drawers is available.
Each drawer can hold one element. You want to store two things, so you ask for two drawers.
44. Selection Sort-How Memory Works
And you’re ready for the show! This is basically how your computer’s memory works. Your
computer looks like a giant set of drawers, and each drawer has an address.
fe0ffeeb is the address of a slot in memory. Each time you want to store an item in memory, you
ask the computer for some space, and it gives you an address where you can store your item. If you
want to store multiple items, there are two basic ways to do so: arrays and lists. I’ll talk about arrays
and lists next, as well as the pros and cons of each. There isn’t one right way to store items for
every use case, so it’s important to know the differences.
45. Arrays and linked lists
Sometimes you need to store a list of elements in memory. Suppose you’re writing an app to
manage your todos. You’ll want to store the todos as a list in memory. Should you use an array, or a
linked list? Let’s store the todos in an array first, because it’s easier to grasp. Using an array means
all your tasks are stored contiguously (right next to each other) in memory.
46. Arrays and linked lists
Now suppose you want to add a fourth task. But the next drawer is taken up by someone else’s
stuff!
It’s like going to a movie with your friends and finding a place to sit— but another friend joins you,
and there’s no place for them. You have to move to a new spot where you all fit. In this case, you
need to ask your computer for a different chunk of memory that can fit four tasks. Then you need
to move all your tasks there.
47. Arrays and linked lists
If another friend comes by, you’re out of room again—and you all have to move a second time!
What a pain. Similarly, adding new items to an array can be a big pain. If you’re out of space and
need to move to a new spot in memory every time, adding a new item will be really slow. One easy
fix is to “hold seats”: even if you have only 3 items in your task list, you can ask the computer for 10
slots, just in case. Then you can add 10 items to your task list without having to move.
48. Arrays and linked lists
This is a good workaround, but you should be aware of a couple of downsides:
You may not need the extra slots that you asked for, and then that memory will
be wasted. You aren’t using it, but no one else can use it either.
You may add more than 10 items to your task list and have to move anyway. So
it’s a good workaround, but it’s not a perfect solution. Linked lists solve this
problem of adding items.
49. Linked lists
With linked lists, your items can be anywhere in memory.
Each item stores the address of the next item in the list. A bunch of random memory addresses are
linked together.
50. Which are used more: arrays or lists?
Obviously, it depends on the use case. But arrays see a lot of use because they allow random
access. There are two different types of access: random access and sequential access. Sequential
access means reading the elements one by one, starting at the first element. Linked lists can only
do sequential access. If you want to read the 10th element of a linked list, you have to read the first
9 elements and follow the links to the 10th element. Random access means you can jump directly
to the 10th element. You’ll frequently hear me say that arrays are faster at reads. This is because
they provide random access. A lot of use cases require random access, so arrays are used a lot.
Arrays and lists are used to implement other data structures, too.
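The "each item stores the address of the next item" idea above can be sketched with a tiny node class (the names are illustrative):

```python
class Node:
    """A linked-list item: a value plus a reference to the next item."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

# build the list 3 -> 1 -> 4; the nodes can live anywhere in memory
head = Node(3, Node(1, Node(4)))

# sequential access only: reaching the 3rd element means following 2 links
node = head
for _ in range(2):
    node = node.next
print(node.value)  # → 4
```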
51. EXERCISES
Would you use an array or a linked list to implement this queue? (Hint: Linked lists are good for
inserts/deletes, and arrays are good for random access. Which one are you going to be doing here?)
Let’s run a thought experiment. Suppose Facebook keeps a list of usernames. When someone tries
to log in to Facebook, a search is done for their username. If their name is in the list of usernames,
they can log in. People log in to Facebook pretty often, so there are a lot of searches through this
list of usernames. Suppose Facebook uses binary search to search the list. Binary search needs
random access—you need to be able to get to the middle of the list of usernames instantly.
Knowing this, would you implement the list as an array or a linked list?
52. Bubble Sort
Bubble sort compares each successive pair of elements in an unordered list and
swaps the elements if they are not in order.
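A minimal sketch of bubble sort as described above (the sample list is an assumption):

```python
def bubble_sort(items):
    """Compare successive pairs and swap them when they are out of order."""
    items = list(items)                  # sort a copy
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # the last i items are already placed
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:                  # a full pass with no swaps: sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```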
54. Selection Sort
Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n²)
time complexity, making it inefficient on large lists, and generally performs worse than
the similar insertion sort. Selection sort is noted for its simplicity, and it has performance
advantages over more complicated algorithms in certain situations, particularly where
auxiliary memory is limited.

The algorithm divides the input list into two parts: the sublist of items already sorted,
which is built up from left to right at the front (left) of the list, and the sublist of items
remaining to be sorted that occupy the rest of the list. Initially, the sorted sublist is
empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding
the smallest (or largest, depending on sorting order) element in the unsorted sublist,
exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order),
and moving the sublist boundaries one element to the right.
55. Selection Sort
Suppose you have a bunch of music on your computer. For each artist, you have a play count.
You want to sort this list from most to least played, so that you can rank your favorite artists. How
can you do it?
56. Selection Sort
One way is to go through the list and find the most-played artist. Add that artist to a new list.
58. Selection Sort
Let’s put on our computer science hats and see how long this will take to run. Remember that O(n)
time means you touch every element in a list once. For example, running simple search over the list
of artists means looking at each artist once.
59. Selection Sort
To find the artist with the highest play count, you have to check each item in the list. This takes O(n)
time, as you just saw. So you have an operation that takes O(n) time, and you have to do that n
times:
This takes O(n × n) time, or O(n²) time.
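The repeated "find the most-played artist" pass above can be sketched as follows (the play counts are made up):

```python
def find_largest(items):
    """One O(n) pass: index of the largest element."""
    largest = 0
    for i in range(1, len(items)):
        if items[i] > items[largest]:
            largest = i
    return largest

def selection_sort(items):
    """n scans of O(n) each: O(n × n) = O(n²) overall."""
    items = list(items)
    result = []
    while items:
        result.append(items.pop(find_largest(items)))
    return result

play_counts = [156, 141, 61, 143, 94]   # hypothetical plays per artist
print(selection_sort(play_counts))      # → [156, 143, 141, 94, 61]
```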
61. Merge Sort
Merge Sort is a divide-and-conquer algorithm. It divides the input list of length n in
half successively until there are n lists of size 1. Then, pairs of lists are merged
together with the smaller first element among the pair of lists being added in each
step. Through successive merging and through comparison of first elements, the
sorted list is built.
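A minimal sketch of merge sort as described: split down to size-1 lists, then merge pairs by the smaller first element.

```python
def merge(left, right):
    """Merge two sorted lists, always taking the smaller first element."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]   # append whichever half remains

def merge_sort(items):
    """Divide the list in half until size 1, then merge the halves back up."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```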
63. Greedy Algorithm
When using a device like this, odds are you want to minimize the
number of coins you’re dispensing for each customer, lest you
have to press levers more times than are necessary. Fortunately,
computer science has given cashiers everywhere ways to
minimize numbers of coins due: greedy algorithms.
64. Greedy Algorithm
According to the National Institute of Standards and Technology
(NIST), a greedy algorithm is one "that always takes the best
immediate, or local, solution while finding an answer. Greedy
algorithms find the overall, or globally, optimal solution for some
optimization problems, but may find less-than-optimal solutions for
some instances of other problems."
65. Greedy Algorithm
What’s all that mean? Well, suppose that a cashier owes a customer
some change and on that cashier’s belt are levers that dispense quarters,
dimes, nickels, and pennies. Solving this "problem" requires one or more
presses of one or more levers. Think of a "greedy" cashier as one who
wants to take, with each press, the biggest bite out of this problem as
possible. For instance, if some customer is owed 41¢, the biggest first
(i.e., best immediate, or local) bite that can be taken is 25¢. (That bite is
"best" inasmuch as it gets us closer to 0¢ faster than any other coin
would.)
66. Greedy Algorithm
Note that a bite of this size would whittle what was a 41¢ problem down to
a 16¢ problem, since 41 - 25 = 16. That is, the remainder is a similar but
smaller problem. Needless to say, another 25¢ bite would be too big
(assuming the cashier prefers not to lose money), and so our greedy
cashier would move on to a bite of size 10¢, leaving him or her with a 6¢
problem. At that point, greed calls for one 5¢ bite followed by one 1¢ bite,
at which point the problem is solved. The customer receives one quarter,
one dime, one nickel, and one penny: four coins in total.
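The greedy cashier above can be sketched as:

```python
def greedy_change(cents, coins=(25, 10, 5, 1)):
    """Take the biggest possible 'bite' (coin) at every step."""
    dispensed = []
    for coin in coins:                 # coins in decreasing order
        while cents >= coin:
            dispensed.append(coin)
            cents -= coin
    return dispensed

print(greedy_change(41))  # → [25, 10, 5, 1]: four coins, as worked out above
```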
67. Dynamic Programming
The term "dynamic programming" was invented by Richard Bellman to
"sound cool" to management, ensuring that there would continue to be
funding for his research.
In fact, dynamic programming is simply the concept of saving
computed results for reuse later, much like a lookup table.
Just like how hash tables and tries are types of data structures,
dynamic programming is a type of algorithm that solves a problem by
saving the results each time, and looking up the answer if it’s needed
in the future, saving that extra computation.
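A minimal sketch of the lookup-table idea; Fibonacci is an assumed example, not from the slides:

```python
from functools import lru_cache

@lru_cache(maxsize=None)            # save each computed result for reuse
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)  # repeated subproblems hit the cache

print(fib(50))  # → 12586269025, instant thanks to the saved results
```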
69. Breadth-first search
Breadth-first search allows you to find the shortest distance between two
things. But shortest distance can mean a lot of things! You can use breadth-first
search to:
● Write a checkers AI that calculates the fewest moves to victory
● Find the doctor closest to you in your network
70. Introduction to graphs
Suppose you’re in San Francisco, and you want to go from Twin Peaks to the
Golden Gate Bridge. You want to get there by bus, with the minimum number
of transfers. Here are your options.
71. Introduction to graphs
Aha! Now the Golden Gate Bridge shows up. So it takes three steps to get
from Twin Peaks to the bridge using this route.
72. Introduction to graphs
There are other routes that will get you to the bridge too, but they’re longer
(four steps). The algorithm found that the shortest route to the bridge is
three steps long. This type of problem is called a shortest-path problem.
You’re always trying to find the shortest something. It could be the shortest
route to your friend’s house. It could be the smallest number of moves to
checkmate in a game of chess. The algorithm to solve a shortest-path
problem is called breadth-first search.
73. To figure out how to get from Twin Peaks to the Golden
Gate Bridge, there are two steps:
1. Model the problem as a graph. (What is a graph? A graph models a set of connections.)
2. Solve the problem using breadth-first search.
74. Breadth-first search
We looked at a search algorithm in chapter 1: binary search. Breadth-first
search is a different kind of search algorithm: one that runs on graphs. It can
help answer two types of questions:
● Question type 1: Is there a path from node A to node B?
● Question type 2: What is the shortest path from node A to node B?
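A minimal BFS sketch answering the second question; the bus-route graph below is made up so that the shortest route takes three steps, as in the Twin Peaks example:

```python
from collections import deque

def shortest_path_length(graph, start, goal):
    """BFS: explore neighbors level by level, so the first hit is shortest."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, steps = queue.popleft()
        if node == goal:
            return steps
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, steps + 1))
    return None  # no path from start to goal

# hypothetical bus-route graph (stops and connections are made up)
routes = {
    "twin peaks": ["stop A", "stop B"],
    "stop A": ["stop C"],
    "stop B": ["stop C", "stop D"],
    "stop C": ["golden gate bridge"],
    "stop D": [],
}
print(shortest_path_length(routes, "twin peaks", "golden gate bridge"))  # → 3
```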
77. Recursion
● Suppose you’re digging through your grandma’s attic and come across a mysterious locked
suitcase.
● Grandma tells you that the key for the suitcase is probably in this other box.
● This box contains more boxes, with more boxes inside those boxes. The key is in a box somewhere.
What’s your algorithm to search for the key? Think of an algorithm before you read on.
78. Recursion
1. Make a pile of boxes to look through.
2. Grab a box, and look through it.
3. If you find a box, add it to the pile to look through later.
4. If you find a key, you’re done!
5. Repeat.
79. Recursion
Recursion is where a function calls itself. Here’s the second way in
pseudocode:
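The pseudocode itself is not reproduced here; below is a sketch of the recursive version in Python, where the Box, Key, and Item classes are hypothetical stand-ins:

```python
class Item:
    def is_box(self): return False
    def is_key(self): return False

class Key(Item):
    def is_key(self): return True

class Box(Item):
    def __init__(self, *items):
        self.items = items
    def is_box(self): return True

def look_for_key(box):
    """Look at each item; recurse into nested boxes (the recursive case)."""
    for item in box.items:
        if item.is_box():
            found = look_for_key(item)    # a box inside a box: recurse
            if found is not None:
                return found
        elif item.is_key():
            return item                   # found the key
    return None

attic = Box(Box(Item()), Box(Box(Key()), Item()))
print(look_for_key(attic) is not None)  # → True
```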
Recursion is used when it makes the solution clearer. There’s no performance
benefit to using recursion; in fact, loops are sometimes better for
performance. I like this quote by Leigh Caldwell on Stack Overflow: “Loops may
achieve a performance gain for your program. Recursion may achieve a
performance gain for your programmer. Choose which is more important in
your situation!”
81. Recursion
Write out this code and run it. You’ll notice a problem: this function will run
forever!
82. Recursion
When you write a recursive function, you have to tell it when to stop recursing.
That’s why every recursive function has two parts: the base case, and the
recursive case. The recursive case is when the function calls itself. The base
case is when the function doesn’t call itself again … so it doesn’t go into an
infinite loop.
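A minimal countdown sketch showing both parts; the countdown function is an assumed example in the spirit of the slides:

```python
def countdown(i, out=None):
    """Collect i, i-1, ..., 0 into a list."""
    out = [] if out is None else out
    out.append(i)
    if i <= 0:                        # base case: stop recursing
        return out
    return countdown(i - 1, out)      # recursive case: the function calls itself

print(countdown(3))  # → [3, 2, 1, 0], then it stops: no infinite loop
```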