Wolaita Sodo University
School of Informatics
Department of Computer Science
Course title: Introduction to Artificial Intelligence
Compiled by: Eyob S. (MSc)
Chapter Three
Searching and Planning
Problem
•A problem is a specific task or challenge that requires finding a solution or making
a decision.
•In artificial intelligence, problems can vary in complexity and scope, ranging from
simple tasks like arithmetic calculations to complex challenges such as image
recognition, natural language processing, game playing, and optimization.
•Each problem has a defined set of initial states, possible actions or moves, and a
goal state that needs to be reached or achieved.
The following steps are required to solve a problem:
•Problem definition: detailed specification of inputs and acceptable system
solutions.
•Problem analysis: analyze the problem thoroughly (techniques to solve it).
•Knowledge Representation: collect detailed information about the problem and
define all possible techniques (represent the task knowledge that is necessary to
solve the problem).
•Problem-solving: selection of the best technique (choosing the best problem-solving
technique and applying it to the particular problem).
Cont…
A well-defined problem can be described by:
• Initial state: that the agent starts in (begins the search).
• Operator or successor function: generates the legal states that result from applying the
available actions in a state. Example (8-puzzle): four actions that move the blank tile
(Left, Right, Up, Down).
• State space: all states reachable from initial by any sequence of actions.
• Path: sequence through state space.
• Path cost: function that assigns a cost to a path. Cost of a path is the sum of costs
of individual actions along the path (assigns a numeric cost to each path).
• Goal test: test to determine if at goal state (determines whether a given state is a
goal state or not).
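As an illustration of these components, the sketch below captures them as an abstract problem definition in Python. This is a minimal sketch, not part of the original slides; the class and method names (Problem, actions, result, goal_test, step_cost) are assumptions chosen for readability.

```python
# A minimal sketch of the problem components listed above.
# Class and method names are illustrative assumptions, not a fixed API.

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state    # the state the agent starts in
        self.goal_state = goal_state

    def actions(self, state):
        """Return the legal actions available in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Successor function: the state that results from applying `action`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` is a goal state."""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Cost of one action; a path cost is the sum of its step costs."""
        return 1
```

Any concrete problem (such as the 8-puzzle later in this chapter) can then be expressed by filling in actions and result.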
Problem solving agents
• An important aspect of intelligence is goal-based
problem solving.
• The solution of many problems can be described
by finding a sequence of actions that lead to a
desirable goal.
• Each action changes the state and the aim is to
find the sequence of actions and states that lead
from the initial (start) state to a final (goal) state.
• Problem-solving agents decide what to do by
finding sequences of actions that lead to desirable
states.
• Rational agents, or problem-solving agents, in
AI mostly use these search strategies or
algorithms to solve a specific problem and
provide the best result.
• Problem-solving agents are goal-based agents
and use an atomic representation of states.
• The agent’s task is to find out which sequence of
actions will get to a goal state.
[Figure: an agent interacting with its environment; it perceives through sensors and acts through actuators.]
Problem Solving by Searching
What is Search?
• Search is the process of considering various possible sequences of operators applied to the
initial state, and finding out a sequence which finishes in a goal state.
• Search is the systematic examination of states to find path from the start/root state to the
goal state (step by step procedure to solve a search problem in a given search space).
• Solution will be a sequence of operations (actions) leading from initial state to goal state
(plan) i.e. the output of a search algorithm is a solution, that is, a path from the initial state
to a state that satisfies the goal test.
• Rational agents, or problem-solving agents, in AI mostly use these search strategies or
algorithms to solve a specific problem and provide the best result.
• Problem-solving agents are goal-based agents and use some technique of state
representation.
• In general, the process of looking for a sequence of actions that reaches the goal is called
search.
Problem States
• A state is a representation of the relevant elements of the problem at a given moment.
• Two special states are defined:
• Initial state (starting point)
• Final state (goal state)
• The state space is the set of all states reachable from the initial state.
• It forms a graph (or map) in which the nodes are states and the arcs between nodes
are actions.
• A path in the state space is a sequence of states connected by a sequence of
actions.
• The solution of the problem is part of the map formed by the state space.
Problem Solution
• A solution in the state space is a path from the initial state to a goal state or,
sometimes, just a goal state.
• Path/solution cost: function that assigns a numeric cost to each path, the cost of
applying the operators to the states.
• Solution quality is measured by the path cost function, and an optimal
solution has the lowest path cost among all solutions.
• A solution always ends in the final goal state.
• How important the cost is depends on the problem and the type of solution required.
• An optimal solution has the lowest path cost among all the solutions.
Cont…
Problem-solving agents are goal-directed agents and cover the
following phases:
1. Goal Formulation: set of one or more (desirable) world states.
2. Problem formulation: what actions and states to consider given a goal
and an initial state.
3. Search for solution: given the problem, search for a solution that is a
sequence of actions to achieve the goal starting from the initial state.
4. Execution of the solution.
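The four phases can be summarized in a short agent skeleton. This is a hedged sketch, assuming the goal-formulation and problem-formulation helpers and the search routine are supplied by the caller; the function names are illustrative only.

```python
# A rough sketch of the four phases above. The helper callables
# (formulate_goal, formulate_problem, search) are assumed to be provided.

def simple_problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    goal = formulate_goal(percept)              # 1. goal formulation
    problem = formulate_problem(percept, goal)  # 2. problem formulation
    solution = search(problem)                  # 3. search for a solution
    if solution is not None:
        for action in solution:                 # 4. execute the solution
            yield action
```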
Example 1: The 8-puzzle
State:
✓Initial state: the starting configuration, i.e. the locations of the tiles and the empty square
✓Operators/ Actions: Blank moves left, right, up, and down
✓Goal state: Match G
✓Path cost: each step costs 1, so the path cost is the length of the path to the goal
Compute the following 8-puzzle problem using different approaches
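One possible encoding of the 8-puzzle, given as a sketch only: a state is a 9-tuple in row-major order with 0 standing for the blank, and the helper names (legal_actions, result) are assumptions rather than a prescribed API.

```python
# 8-puzzle sketch: a state is a tuple of 9 entries in row-major order,
# with 0 standing for the blank square. Each move costs 1.

MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def legal_actions(state):
    """Actions that slide the blank without leaving the 3x3 board."""
    row, col = divmod(state.index(0), 3)
    actions = []
    if row > 0: actions.append("Up")
    if row < 2: actions.append("Down")
    if col > 0: actions.append("Left")
    if col < 2: actions.append("Right")
    return actions

def result(state, action):
    """Successor function: swap the blank with the neighbouring tile."""
    blank = state.index(0)
    target = blank + MOVES[action]
    new_state = list(state)
    new_state[blank], new_state[target] = new_state[target], new_state[blank]
    return tuple(new_state)
```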
Cont…Example
[Figure: the start state and goal state configurations of the 8-puzzle]
Search Algorithm Terminologies
• A search problem has three main factors (search space, start state, and goal test); the related terminology is as follows:
• Search Space: the set of all possible solutions (states) that the system may have.
• Start State: the state from which the agent begins the search.
• Goal test: a function which observes the current state and returns whether the goal state has been
achieved or not.
• Search tree: a tree representation of the search problem. The root of the search tree is the root node,
which corresponds to the initial state.
• Actions: a description of all the actions available to the agent.
• Transition model: a description of what each action does; it can be represented as a transition model.
• Path Cost: a function which assigns a numeric cost to each path.
• Solution: an action sequence which leads from the start node to the goal node.
• Optimal Solution: a solution that has the lowest cost among all solutions.
Search Strategies
• A search algorithm takes a problem as input and returns a solution in the form of an action
sequence.
• Once a solution is found, the actions it recommends can be carried out. This is called the execution
phase.
• A strategy is defined by picking the order of node expansion
• Performance Measures:
– Completeness – does it always find a solution if one exists?
– Time complexity – number of nodes generated/expanded (how long does it take to find a
solution?)
– Space complexity – maximum number of nodes in memory
– Optimality – does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b – maximum branching factor of the search tree
– d – depth of the least-cost solution
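– Worked example: with branching factor b and solution depth d, breadth-first search may generate on the order of b + b^2 + … + b^d = O(b^d) nodes, which is why both its time and space requirements grow exponentially with d.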
Properties of Search Algorithms
The following are the four essential properties used to compare the efficiency of search
algorithms:
• Completeness: A search algorithm is said to be complete if it is guaranteed to return a
solution whenever at least one solution exists for any input.
• Optimality: If the solution found by an algorithm is guaranteed to be the best solution
(lowest path cost) among all solutions, then it is said to be an optimal solution.
• Time Complexity: A measure of the time an algorithm takes to complete its task.
• Space Complexity: The maximum storage space required at any point during
the search, expressed in terms of the complexity of the problem.
Types of search algorithms in AI
• Search algorithms are one of the most important areas of artificial intelligence. They help to find
solutions to complex problems.
• There are far too many powerful search algorithms to cover in a single chapter.
• Based on the search problem, we can classify search algorithms as Uninformed search and
Informed search.
Uninformed Search algorithms
• The uninformed search algorithm does not have any domain knowledge, such as the closeness or
location of the goal state. These algorithms explore the search space without any prior
knowledge about the goal state. They rely on a general search strategy to find a solution.
• It only knows how to traverse the given tree and how to recognize
the goal state.
• This type of algorithm is also known as a Blind search or Brute-Force algorithm.
1. BFS (Breadth-first search):
• It is one of the most common search strategies.
• It searches breadthwise (side-to-side) in a tree or a graph.
• It generally starts from the root node and examines the neighbor nodes and then moves to
the next level.
• It uses a First-In, First-Out (FIFO) queue, and it finds the path with the fewest steps to the
solution.
• BFS is used where space complexity is not a major concern.
• It is a blind/exhaustive search algorithm in which the search starts from the initial node and
proceeds level by level until the goal node is found.
Cont…Example 1
• Here, let’s take node A as the start state and node F as the goal state.
• The BFS algorithm starts with the start state and then visits the nodes level by level until it
reaches the goal state.
• In this example, it starts from A, then travels to the next level and visits B and C, and then
travels to the next level and visits D, E, F and G. Here, the goal state is defined as F, so the traversal
will stop at F.
Now, consider the following tree. [Figure: a tree with root A; A has children B and C, B has children D and E, and C has children F and G.]
Cont…
• The path of traversal is: A —-> B —-> C —-> D —-> E —-> F
• The fringe evolves as follows (see also the BFS sketch below):
FRINGE = (A)
FRINGE = (B, C)
FRINGE = (C, D, E)
FRINGE = (D, E, F, G)
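The FIFO fringe shown above can be reproduced with a short breadth-first search routine. The sketch below is illustrative: it assumes the tree is stored as an adjacency dictionary, and it returns the solution path (A, C, F), while the slide lists the order in which nodes are visited (A, B, C, D, E, F).

```python
from collections import deque

# Breadth-first search sketch: FIFO fringe, goal test when a node is taken
# off the queue, so nodes are visited level by level.

def bfs(graph, start, goal):
    fringe = deque([[start]])        # queue of paths; the oldest path is expanded first
    visited = {start}
    while fringe:
        path = fringe.popleft()      # FIFO
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                fringe.append(path + [child])
    return None

# The tree from Example 1 above (adjacency dictionary is an assumed encoding):
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(tree, "A", "F"))           # ['A', 'C', 'F']
```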
Cont…Example 2
• Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data
structures.
• It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a
‘search key’), and explores all of the neighbor nodes at the present depth prior to moving
on to the nodes at the next depth level.
[Figure: a tree with root 1; node 1 has children 2 and 3, node 2 has children 4 and 5,
node 3 has children 6 and 7, node 4 has child 8, and node 6 has child 9
(reconstructed from the fringe trace below).]
The fringe (FIFO queue) evolves as follows, one line per node expansion:
FRINGE = (1)
FRINGE = (2, 3)
FRINGE = (3, 4, 5)
FRINGE = (4, 5, 6, 7)
FRINGE = (5, 6, 7, 8)
FRINGE = (6, 7, 8)
FRINGE = (7, 8, 9)
FRINGE = (8, 9)
Advantages of BFS
• BFS will never be trapped in unwanted nodes (dead ends), since it explores the tree level by level.
• If the graph has more than one solution, then BFS will return the optimal
solution, i.e. the one with the shortest path.
Disadvantages of BFS
• BFS stores all the nodes of the current level before it goes to the next level.
• It requires a lot of memory to store the nodes.
• BFS takes more time to reach goal states that are far away from the root.
Cont…
2. DFS (Depth First Search):
• The depth-first search uses Last-in, First-out (LIFO) strategy and hence it can
be implemented by using stack.
• DFS uses backtracking. That is, it starts from the initial state and explores
each path to its greatest depth before it moves to the next path.
• It is also a blind/exhaustive search algorithm in which the search starts at the initial
node and goes down to some depth in one direction.
• If the goal node is found, it stops the search process; otherwise, it backtracks.
DFS will follow:
Root node —-> Left node —-> Right node
• Now, consider the same example tree
mentioned.
• Here, it starts from the start state A, travels
to B, and then goes to D. Since D has no
unvisited children, it backtracks to B and
explores the next child, E. E also has no
unvisited children, so it backtracks to B and
then to A. From A it goes to C and then to
F. F is our goal state, so the search stops
there.
Cont…Example 1
The path of traversal is:
A —-> B —-> D —-> E —-> C —-> F
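A matching sketch of DFS with an explicit LIFO stack, under the same assumptions as the BFS sketch (adjacency-dictionary tree, illustrative only); it returns the solution path A, C, F while visiting nodes in the order A, B, D, E, C, F described above.

```python
# Depth-first search sketch using an explicit LIFO stack.
# The adjacency dictionary mirrors the tree used in the BFS example above.

def dfs(graph, start, goal):
    stack = [[start]]                 # LIFO: the newest path is expanded first
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the left-most child is explored first
        for child in reversed(graph.get(node, [])):
            stack.append(path + [child])
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs(tree, "A", "F"))            # ['A', 'C', 'F']
```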
Advantages of DFS
• It takes less memory compared to BFS.
• It can reach a solution faster than BFS when the solution lies deep in the tree.
• DFS does not need to examine every node of a level before going deeper.
Disadvantages of DFS
• DFS does not always guarantee to find a solution.
• As DFS goes deep down, it may get trapped in an infinite loop.
Cont…
3. UCS (Uniform cost search)
• Uniform cost search is considered the best search algorithm for a weighted graph or graph with costs.
• It searches the graph by giving maximum priority to the lowest cumulative cost.
• Uniform cost search can be implemented using a priority queue (higher priority is given to the lower cumulative cost).
Consider the below graph, where each edge has a pre-defined cost (see the sketch after the example).
Cont…
Here, S is the start node and G is the
goal node.
From S, G can be reached in the
following ways:
S, A, E, F, G -> 19
S, B, E, F, G -> 18
S, B, D, F, G -> 19
S, C, D, F, G -> 23
Here, the path with the least cost
is S, B, E, F, G
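A sketch of uniform cost search with a priority queue is given below. Note that the slides quote only the total cost of each S-to-G path, not the individual edge costs, so the edge costs in the sketch are assumptions chosen purely so that the four totals (19, 18, 19, 23) come out as stated.

```python
import heapq

# Uniform cost search sketch: the priority queue is ordered by cumulative cost.
# The edge costs below are illustrative assumptions, chosen only so that the
# four S-to-G path totals above (19, 18, 19, 23) are reproduced.

GRAPH = {
    "S": {"A": 5, "B": 4, "C": 6},
    "A": {"E": 7},
    "B": {"E": 7, "D": 7},
    "C": {"D": 9},
    "E": {"F": 4},
    "D": {"F": 5},
    "F": {"G": 3},
    "G": {},
}

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]             # (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path so far
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph[node].items():
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

print(uniform_cost_search(GRAPH, "S", "G"))      # (18, ['S', 'B', 'E', 'F', 'G'])
```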
Cont…
Advantages of UCS
• This algorithm is optimal/best as the selection of paths is based on the lowest cost.
Disadvantages of UCS
• The algorithm considers only the cumulative path cost, not the number of steps taken to reach it.
If some step costs are zero, this may even result in an infinite loop.
Informed Search algorithms
• Informed search algorithms use domain knowledge.
• In an informed search, problem information is available which can guide the
search.
• Informed search strategies can find a solution more efficiently than an uninformed
search strategy.
• Informed search is also called a heuristic/experiential search.
• A heuristic is a technique that is not always guaranteed to find the best solution, but it is
designed to find a good solution in a reasonable time.
• Informed search can solve complex problems which could not be solved efficiently
in another way.
• Informed search algorithms types:
• Greedy Search
• A* Search
1. Greedy best-first search algorithm
• Greedy best-first search uses the properties of both depth-first search and
breadth-first search.
• Greedy best-first search expands the node that appears best at the moment, i.e. the node that
seems closest to the goal.
• The closest node is selected by using the heuristic function.
• Thus, the quality of the heuristic function determines the practical usability of
greedy search.
Cont…
• Consider the below graph with the heuristic values.
Cont…Example
Here, A is the start node and H is the goal
node.
Greedy best-first search starts with A and
then examines its neighbors B and C.
Here, the heuristic of B is 12 and that of C is 4.
The best node at the moment is C, hence it
goes to C. From C, it explores the neighbors F
and G. The heuristic of F is 8 and that of G is 2.
Hence it goes to G. From G, it goes to H,
whose heuristic is 0, which is also our goal
state.
Cont…
The path of traversal is:
A —-> C —-> G —-> H
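A sketch of greedy best-first search, ordering the fringe by heuristic value alone. The graph edges and heuristic values are reconstructed from the example above; edges not mentioned in the example are omitted, and h(A) is an assumed value (it never influences the result, since A is the start node).

```python
import heapq

# Greedy best-first search sketch: the fringe is ordered by heuristic value only.
# Graph and heuristics are reconstructed from the example; h(A) is assumed.

GRAPH = {"A": ["B", "C"], "B": [], "C": ["F", "G"], "F": [], "G": ["H"], "H": []}
H = {"A": 13, "B": 12, "C": 4, "F": 8, "G": 2, "H": 0}

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # ordered by heuristic only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph[node]:
            heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

print(greedy_best_first(GRAPH, H, "A", "H"))   # ['A', 'C', 'G', 'H']
```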
Cont…
Advantages of Greedy best-first search
• Greedy best-first search is more efficient compared with breadth-first search and
depth-first search.
Disadvantages of Greedy best-first search
• In the worst-case scenario, the greedy best-first search algorithm may behave
like an unguided DFS.
• Greedy best-first search may also get trapped in an infinite loop, for example when repeated states are not detected.
Cont…
2. A* (A-star) search algorithm
• A* search algorithm is a combination of both uniform cost search and greedy
best-first search algorithms.
• It uses the advantages of both: heuristic guidance together with the actual path cost.
• It uses a heuristic function to find the shortest path.
• A* search algorithm uses the sum of both the cost and heuristic of the node to
find the best path.
• Consider the following graph with the heuristics values as follows.
Cont…Example
Let A be the start node and H be the goal node.
First, the algorithm will start with A. From A, it can
go to B, C, H.
• Note: A* search uses the sum of path cost and
heuristics value to determine the path.
Here, from A to B, the sum of cost and heuristics is
1 + 3 = 4.
From A to C, it is 2 + 4 = 6.
From A to H, it is 7 + 0 = 7.
Here, the lowest cost is 4 and the path A to B is
chosen. The other paths will be on hold.
Now, from B, it can go to D or E.
From A to B to D, the cost is 1 + 4 + 2 = 7.
From A to B to E, it is 1 + 6 + 6 = 13.
The lowest cost is 7. Path A to B to D is chosen and
compared with the other paths which are on hold.
Cont…
• Among the paths on hold, the path A to C has the
lowest estimated cost, which is 6.
• Hence, A to C is chosen and the other paths are kept
on hold.
• From C, it can now go to F or G.
• From A to C to F, the cost is 2 + 3 + 3 = 8.
• From A to C to G, the cost is 2 + 2 + 1 = 5.
• The lowest cost is 5, which is also less than the costs
of the other paths on hold. Hence, the path A to
C to G is chosen.
• From G, it can go to H, whose total cost is 2 + 2 + 2 +
0 = 6.
• Here, 6 is less than the cost of the other paths which
are on hold.
• Also, H is our goal state. The algorithm will
terminate here.
The path of traversal is:
A —-> C —-> G —-> H
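The same example can be checked with a short A* sketch, where the fringe is ordered by f(n) = g(n) + h(n), the path cost so far plus the heuristic. The edge costs and heuristic values are taken from the worked example above; h(A) is not given in the slides and is set to 0 here, which does not affect the result.

```python
import heapq

# A* search sketch: the fringe is ordered by f(n) = g(n) + h(n).
# Edge costs and heuristics are taken from the worked example; h(A) is assumed 0.

GRAPH = {
    "A": {"B": 1, "C": 2, "H": 7},
    "B": {"D": 4, "E": 6},
    "C": {"F": 3, "G": 2},
    "G": {"H": 2},
    "D": {}, "E": {}, "F": {}, "H": {},
}
H = {"A": 0, "B": 3, "C": 4, "D": 2, "E": 6, "F": 3, "G": 1, "H": 0}

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for child, step in graph[node].items():
            g2 = g + step
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None

print(a_star(GRAPH, H, "A", "H"))   # (6, ['A', 'C', 'G', 'H'])
```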
Cont…
Advantages of A* search algorithm
• Among the algorithms discussed here, A* generally gives the best results.
• This algorithm can be used to solve very complex problems, and it is optimal
(provided the heuristic is admissible).
Disadvantages of A* search algorithm
• The A* search is based on heuristics and costs; with a poor heuristic it may not produce the
shortest path.
• Its memory usage is high, as it keeps all generated nodes in memory.
Comparison of uninformed and informed search algorithms
• Uninformed search is also known as blind search whereas informed search
is also called heuristics search.
• Uninformed search does not require much information.
• Informed search requires domain-specific details.
• Compared to uninformed search, informed search strategies are more
efficient; the time complexity of uninformed search strategies is higher.
• In general, informed search handles the problem better than blind search.
Adversarial search
• Adversarial Search has become fundamental to Artificial Intelligence (AI),
focusing on decision-making in competitive scenarios.
• It's commonly associated with games and strategic interactions.
• In this approach, an AI agent aims to make optimal decisions while anticipating
the actions of an opponent with opposing goals.
• The key idea behind Adversarial Search in Artificial Intelligence is to model
the interaction as a game, often using principles from game theory.
• It involves constructing a game tree that represents all possible moves and
outcomes, and selecting the best action for the AI agent based on a strategy that
minimizes the maximum potential loss (hence the term Minimax).
• Adversarial search is neither purely informed nor uninformed. It lies
somewhere in between, incorporating elements of both.
Cont…
The Minimax algorithm
• The Minimax algorithm is a fundamental technique in Adversarial Search, particularly
in AI's decision-making processes within competitive environments.
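A minimal sketch of the Minimax recursion over a small, explicit game tree. The tree and its leaf utilities are illustrative only (they are not taken from the slides); real game engines combine this recursion with pruning and evaluation functions.

```python
# Minimal Minimax sketch over an explicit game tree. Leaf values are the
# utilities for the maximizing player; the tiny example tree is illustrative.

def minimax(node, maximizing):
    """Return the value of `node` assuming both players play optimally."""
    if isinstance(node, (int, float)):      # leaf: utility value
        return node
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# Example: MAX to move at the root, MIN at the next level, utilities at the leaves.
game_tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(game_tree, maximizing=True))   # 3
```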
Chess and Adversarial Search
• Chess is a prime example of Adversarial Search in action. Powerful chess engines, like
Deep Blue and AlphaZero, employ advanced techniques to search vast game trees and
select optimal moves against human opponents; the same ideas apply to simpler games such as tic-tac-toe.
• Checkers is another two-player game that can use adversarial search. A computer
program called “Chinook” was developed specifically to play in the World Checkers
Championship.
• In general, adversarial search is a method applied to a situation where you are planning
while another actor prepares against you. Your plans, therefore, could be affected by
your opponent’s actions.
Cont…Other applications
• Planning and scheduling
Dynamic game theory
• Dynamic game theory is a branch of game theory that analyzes strategic
interactions between players who make decisions sequentially over time.
• Unlike static games, where players make decisions simultaneously, dynamic games
involve a sequence of moves, allowing players to respond to each other's actions.
• Example:
• Strategic Game AI: AI agents use dynamic game theory to model the
opponent's strategies and predict their moves.
• Real-Time Strategy Games: Dynamic game theory helps these agents adapt to
changing game states and opponent actions.
• Competitive Multi-Agent Systems: In competitive scenarios, agents strive to
maximize their individual rewards.
Challenges and Future Directions
• While dynamic game theory offers significant potential, there are still challenges to
overcome:
1. Scalability: As the number of agents and the complexity of the game increase, the
computational cost of solving dynamic games can become prohibitive.
2. Incomplete Information: Real-world scenarios often involve uncertainty and incomplete
information, making it difficult to apply traditional game-theoretic techniques.
3. Learning and Adaptation: AI agents need to be able to learn from experience and adapt
to changing environments, which requires robust learning algorithms.
To address these challenges, researchers are exploring techniques like:
• Reinforcement Learning: training AI agents to learn optimal strategies through trial and
error.
• Approximate Dynamic Programming: developing efficient algorithms for solving large-
scale dynamic games.
Reading Assignment
• Informed search algorithm - Graph Search
• Properties of each search algorithm
• Dynamic game theory in detail
Thank you!!
Any Query?
