SEARCH ALGORITHMS IN ARTIFICIAL INTELLIGENCE
BEST FIRST SEARCH, A*, IDA*, LOCAL & ADVERSARIAL SEARCH TECHNIQUES
BEST FIRST SEARCH
• Uses a heuristic to estimate the cost from the current node to the goal
• Selects node with lowest heuristic value (h(n))
• Not guaranteed to find optimal path
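Below is a minimal sketch of greedy best-first search in Python, assuming the graph is an adjacency dict and h is a table of precomputed heuristic values (both are hypothetical toy data). The frontier is ordered by h(n) alone, which is exactly why the returned path may not be optimal.

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node with the lowest h(n).
    graph: dict node -> iterable of neighbours; h: dict of heuristic values."""
    frontier = [(h[start], start)]          # priority queue ordered by h(n) only
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                    # reconstruct the path on reaching the goal
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in came_from:        # path cost is ignored, hence no optimality guarantee
                came_from[nbr] = node
                heapq.heappush(frontier, (h[nbr], nbr))
    return None

# Hypothetical toy graph and heuristic values
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(greedy_best_first_search(graph, h, 'A', 'D'))   # ['A', 'C', 'D']
```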
A* ALGORITHM
• Combines path cost and heuristic: f(n) = g(n) + h(n)
• Optimal and complete if h(n) is admissible
• Widely used in pathfinding (e.g., GPS navigation)
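A sketch of A* under similar assumptions (a hypothetical weighted adjacency dict and an admissible heuristic table): the only change from best-first search is that the priority queue is ordered by f(n) = g(n) + h(n), and a cheaper rediscovered path to a node replaces the old one.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search on a weighted graph.
    graph: dict node -> list of (neighbour, edge_cost); h: dict of admissible heuristic values."""
    frontier = [(h[start], 0, start, [start])]   # (f = g + h, g, node, path so far)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(nbr, float('inf')):   # keep only the cheapest known path to nbr
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + h[nbr], new_g, nbr, path + [nbr]))
    return None, float('inf')

# Hypothetical weighted graph; h never overestimates the remaining cost
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(a_star(graph, h, 'A', 'D'))   # (['A', 'B', 'C', 'D'], 3)
```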
ITERATIVE DEEPENING A* (IDA*)
• Combines depth-first search with A*’s cost function
• Uses iterative cost threshold to control depth
• Memory-efficient and optimal when h(n) is admissible
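A compact IDA* sketch on the same hypothetical weighted graph: each iteration is a depth-first search cut off when f = g + h exceeds the current bound, and the next bound is the smallest f-value that overshot.

```python
import math

def ida_star(graph, h, start, goal):
    """IDA*: depth-first search bounded by an f = g + h threshold that grows
    to the smallest f-value exceeding the previous bound."""
    def dfs(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f, None                       # report the f-value that broke the bound
        if node == goal:
            return f, list(path)
        next_bound = math.inf
        for nbr, cost in graph[node]:
            if nbr not in path:                  # avoid cycles along the current path
                path.append(nbr)
                t, found = dfs(nbr, g + cost, bound, path)
                path.pop()
                if found is not None:
                    return t, found
                next_bound = min(next_bound, t)
        return next_bound, None

    bound = h[start]
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == math.inf:                    # exhausted: no solution exists
            return None

# Same hypothetical toy data as the A* sketch
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(ida_star(graph, h, 'A', 'D'))   # ['A', 'B', 'C', 'D']
```

Only the current path is stored, so memory grows linearly with depth rather than with the size of the frontier.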
DEPTH-FIRST BRANCH AND BOUND (DFBNB)
• Performs DFS and prunes paths exceeding the current best cost
• Reduces memory usage compared to A*
• May be slower due to backtracking
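A sketch of depth-first branch and bound on the same hypothetical graph: plain recursive DFS keeps the cheapest complete solution found so far and abandons any partial path whose cost already matches or exceeds it. Memory is just the current path plus the incumbent, at the price of possibly redoing work through backtracking.

```python
import math

def dfbnb(graph, start, goal):
    """Depth-first branch and bound: DFS that remembers the best complete solution
    (the incumbent) and prunes partial paths that cannot improve on it."""
    best_cost = math.inf
    best_path = None

    def dfs(node, g, path):
        nonlocal best_cost, best_path
        if g >= best_cost:                    # prune: cannot beat the incumbent
            return
        if node == goal:
            best_cost, best_path = g, list(path)
            return
        for nbr, cost in graph[node]:
            if nbr not in path:               # avoid cycles on the current path
                path.append(nbr)
                dfs(nbr, g + cost, path)
                path.pop()

    dfs(start, 0, [start])
    return best_path, best_cost

# Hypothetical weighted graph
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)], 'D': []}
print(dfbnb(graph, 'A', 'D'))   # (['A', 'B', 'C', 'D'], 3)
```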
ADMISSIBLE HEURISTICS & DOMAIN RELAXATION
• Admissible heuristic: never overestimates cost to goal
• Domain relaxation: simplifies constraints to create a heuristic
• Example: Manhattan distance for 8-puzzle
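As an illustration of domain relaxation, here is the Manhattan-distance heuristic for the 8-puzzle: if we relax the rule that tiles block each other, every tile could slide straight to its goal cell, so summing those distances never overestimates the true cost. States are assumed to be flat 3x3 tuples with 0 as the blank (a hypothetical encoding).

```python
def manhattan_distance(state, goal):
    """Admissible 8-puzzle heuristic from a relaxed domain: sum over tiles of the
    row + column distance between the tile's current cell and its goal cell."""
    total = 0
    for tile in range(1, 9):                      # skip the blank (0)
        i, j = divmod(state.index(tile), 3)       # current row, column
        gi, gj = divmod(goal.index(tile), 3)      # goal row, column
        total += abs(i - gi) + abs(j - gj)
    return total

# Hypothetical example instance
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(manhattan_distance(state, goal))   # 2: tiles 5 and 8 are each one move away
```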
LOCAL SEARCH: SATISFACTION VS OPTIMIZATION
• Satisfaction: find any solution meeting constraints
• Optimization: find best solution (minimum cost, max value)
• Used in CSPs and combinatorial optimization
N-QUEENS PROBLEM EXAMPLE
• Place N queens on an N×N board so that no queen attacks another
• Constraints: no two queens share row, column, or diagonal
• Can be solved using local search techniques
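A conflict-counting evaluation function is the usual starting point for local search on N-Queens. The sketch below assumes the common encoding state[i] = row of the queen in column i, which rules out column conflicts by construction.

```python
def conflicts(state):
    """Number of attacking queen pairs; state[i] = row of the queen in column i,
    so only row and diagonal attacks need to be counted."""
    n, count = len(state), 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diag = abs(state[i] - state[j]) == j - i
            if same_row or same_diag:
                count += 1
    return count

print(conflicts([1, 3, 0, 2]))   # 0 -> a valid 4-queens solution
print(conflicts([0, 1, 2, 3]))   # 6 -> all queens on one diagonal
```

Local search then just tries to drive this count to zero, e.g. by moving one queen within its column per step.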
HILL CLIMBING
• Starts from random state, iteratively improves solution
• Only moves to better neighboring states
• May get stuck in local maxima or plateaus
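A generic hill-climbing sketch, assuming the caller supplies an objective f, a neighbours function and a start state (all hypothetical here, illustrated on a simple 1-D function with a single peak):

```python
import random

def hill_climb(f, neighbours, start, max_steps=1000):
    """Steepest-ascent hill climbing: move to the best neighbour only if it
    improves on the current state; otherwise stop."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=f)
        if f(best) <= f(current):        # no improving neighbour -> local maximum (or plateau)
            return current
        current = best
    return current

# Hypothetical objective: a 1-D function with a single peak at x = 7
f = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
start = random.randint(-20, 20)
print(hill_climb(f, neighbours, start))   # 7
```

With a multi-peaked objective the same loop would stop at whichever local maximum is reachable from the random start, which motivates the stochastic methods below.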
SIMULATED ANNEALING
• Variation of hill climbing with probabilistic moves
• Accepts worse states to escape local optima
• Cooling schedule controls acceptance probability
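A simulated-annealing sketch with a geometric cooling schedule; the cost and neighbour functions, starting temperature and decay rate are all illustrative assumptions, shown on a bumpy 1-D objective whose global minimum lies near x ≈ -0.5:

```python
import math
import random

def simulated_annealing(cost, neighbour, start, t0=10.0, cooling=0.995, steps=5000):
    """Simulated annealing: always accept improvements, accept worse moves with
    probability exp(-delta / T); the temperature T decays geometrically."""
    current, t = start, t0
    best = current
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling                      # cool down; uphill moves become rarer over time
    return best

# Hypothetical bumpy 1-D objective with many local minima
cost = lambda x: x * x + 10 * math.sin(3 * x)
neighbour = lambda x: x + random.uniform(-1, 1)
print(round(simulated_annealing(cost, neighbour, start=8.0), 2))   # usually close to -0.5
```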
GENETIC ALGORITHMS
• Inspired by natural evolution
• Uses population, crossover, mutation for exploration
• Good for large, complex search spaces
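A minimal genetic-algorithm sketch using tournament selection, one-point crossover and bit-flip mutation on bit-string individuals; the population size, generation count and mutation rate are arbitrary illustrative values, demonstrated on the toy one-max problem (maximise the number of 1-bits):

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation.
    Individuals are bit strings (lists of 0/1)."""
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():                                        # tournament of size 2
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        next_gen = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = random.randint(1, length - 1)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]                   # bit-flip mutation
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

# Toy "one-max" objective: fitness is simply the number of 1-bits
fitness = sum
best = genetic_algorithm(fitness)
print(best, fitness(best))   # typically all (or nearly all) ones
```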
ADVERSARIAL SEARCH: MINIMAX
• Used in two-player games like chess, tic-tac-toe
• Assumes opponent plays optimally
• Builds game tree to choose best move
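A minimax sketch over a game tree encoded as nested lists, with numeric leaves giving the utility for the maximizing player (a hypothetical 2-ply example tree):

```python
def minimax(node, maximizing):
    """Minimax over a game tree given as nested lists; leaves are numeric
    payoffs from the maximizing player's point of view."""
    if not isinstance(node, list):                 # leaf: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Hypothetical 2-ply game tree: MAX to move, each sublist is a MIN node
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, maximizing=True))   # 3: MIN replies yield 3, 2 and 2, so MAX picks 3
```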
ALPHA-BETA PRUNING
• Optimizes minimax by pruning branches that cannot affect the final decision
• Maintains same result as minimax but faster
• Reduces number of nodes evaluated
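The same nested-list game tree evaluated with alpha-beta pruning: it returns the identical value as the minimax sketch above while skipping branches that the alpha/beta bounds prove irrelevant.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Alpha-beta pruning on nested-list game trees: same value as minimax,
    but branches outside the (alpha, beta) window are cut off."""
    if not isinstance(node, list):                 # leaf
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                      # MIN above will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:                      # MAX above already has a better option
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))   # 3, same result as plain minimax
```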
