Branch and bound is a method for systematically searching a solution space that uses bounding functions to avoid generating subtrees that do not contain an answer node. It has a branching function that determines the order of exploring nodes and a bounding function that prunes the search tree efficiently. Branch and bound refers to methods where all children of the current node are generated before exploring other nodes. It generalizes breadth-first and depth-first search strategies. The method uses estimates of node costs and upper bounds to prune portions of the search space that cannot contain optimal solutions.
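The pruning idea above can be sketched with a least-cost branch and bound for the 0/1 knapsack problem. This is an illustrative sketch, not taken from the document: each node fixes include/exclude decisions for a prefix of the items, the bounding function is the greedy fractional-knapsack upper bound, and any subtree whose bound cannot beat the best known profit is pruned. Items are assumed pre-sorted by profit/weight ratio.

```python
import heapq

def knapsack_bb(profits, weights, capacity):
    """Least-cost branch and bound for 0/1 knapsack (maximization).

    Illustrative sketch; assumes items are pre-sorted by profit/weight.
    """
    n = len(profits)

    def bound(i, profit, weight):
        # Optimistic estimate: fill the remaining capacity fractionally.
        for j in range(i, n):
            if weight + weights[j] <= capacity:
                weight += weights[j]
                profit += profits[j]
            else:
                return profit + profits[j] * (capacity - weight) / weights[j]
        return profit

    best = 0
    # Max-heap via negated bound: explore the most promising node first.
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, profit, weight)
    while heap:
        neg_b, i, profit, weight = heapq.heappop(heap)
        if -neg_b <= best or i == n:      # bounding step: prune this subtree
            continue
        # Branch 1: include item i (if it fits).
        if weight + weights[i] <= capacity:
            best = max(best, profit + profits[i])
            heapq.heappush(heap, (-bound(i + 1, profit + profits[i],
                                         weight + weights[i]),
                                  i + 1, profit + profits[i],
                                  weight + weights[i]))
        # Branch 2: exclude item i.
        heapq.heappush(heap, (-bound(i + 1, profit, weight),
                              i + 1, profit, weight))
    return best
```

For example, with profits (40, 30, 50), weights (2, 5, 10), and capacity 10, the search returns 70 (items 1 and 2) without ever fully expanding the subtree that excludes the first item.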
The document describes the greedy method algorithm design technique. It works in steps, selecting the best available option at each step until all options are exhausted. Many problems can be formulated as finding a feasible subset that optimizes an objective function. A greedy algorithm works in stages, making locally optimal choices at each stage to arrive at a global optimal solution. Several examples are provided to illustrate greedy algorithms for problems like change making, machine scheduling, container loading, knapsack problem, job sequencing with deadlines, and single-source shortest paths. Pseudocode is given for some of the greedy algorithms.
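Change making, the first of the examples listed, illustrates the staged structure of a greedy algorithm well. The sketch below is illustrative: at each stage it makes the locally optimal choice of the largest coin that still fits, which yields a global optimum for canonical coin systems such as 1/5/10/25 (though not for arbitrary denominations).

```python
def make_change(amount, denominations):
    """Greedy change making: at each stage pick the largest coin that fits.

    Optimal for canonical coin systems; may be suboptimal for arbitrary
    denominations. Illustrative sketch.
    """
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:      # take as many of this coin as possible
            amount -= d
            coins.append(d)
    return coins
```

For instance, making change for 67 with coins 1, 5, 10, 25 greedily yields 25, 25, 10, 5, 1, 1.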
The document describes using a branch and bound algorithm to solve the Travelling Salesman Problem (TSP). It starts from node 1 and explores the solution space by calculating costs of paths through different nodes. It maintains costs and paths of explored "live nodes" and explores the node with the lowest cost at each step. After node 10 with cost 28 is explored, node 11 with the same cost of 28 is explored by extending the path through node 3.
The document discusses different types of problem-solving agents and search algorithms. It describes single-state, sensorless, contingency, and exploration problem types. It also summarizes common uninformed search strategies like breadth-first search, uniform-cost search, depth-first search, depth-limited search, and iterative deepening search and analyzes their properties in terms of completeness, time complexity, space complexity, and optimality. Examples of problems that can be modeled as state space searches are also provided, like the vacuum world, 8-puzzle, and robotic assembly problems.
This document summarizes several disk scheduling algorithms used in operating systems to efficiently manage disk drive access. It describes the First Come First Served (FCFS), Shortest Seek Time First (SSTF), SCAN, C-SCAN, and C-LOOK algorithms. Each algorithm is illustrated with an example request queue to show how it would schedule and service the requests to minimize the total disk head movement and seek time. The document concludes by noting that SSTF and LOOK are commonly used default algorithms but performance depends on the workload, and different algorithms may perform better for systems with heavy disk loads.
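The SSTF policy mentioned above can be sketched in a few lines: repeatedly service whichever pending request is closest to the current head position. This is an illustrative sketch, not the document's own code; real schedulers must also guard against starvation of far-away requests.

```python
def sstf(requests, head):
    """Shortest Seek Time First disk scheduling (illustrative sketch).

    Returns the service order and the total head movement.
    """
    pending = list(requests)
    order, total = [], 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))  # nearest request
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
        order.append(nxt)
    return order, total
```

With the classic example queue 98, 183, 37, 122, 14, 124, 65, 67 and the head at cylinder 53, SSTF services 65, 67, 37, 14, 98, 122, 124, 183 for a total movement of 236 cylinders.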
This document discusses constraint satisfaction problems (CSPs) and techniques for solving them. It begins by defining CSPs as problems with variables, domains of possible values, and constraints limiting assignments. Backtracking search and heuristics like minimum remaining values are described as standard approaches. Constraint propagation techniques like forward checking and arc consistency are explained, which aim to detect inconsistencies earlier. The 4-queens problem is provided as an example CSP.
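The n-queens example above maps naturally onto plain backtracking search. In this illustrative sketch the variables are columns (one queen per column), the domains are rows, and the constraints forbid shared rows and diagonals; constraint checks before each descent play the role the text describes for detecting inconsistencies.

```python
def queens(n):
    """Backtracking search for the n-queens CSP (illustrative sketch).

    Returns one solution as a list giving the row of the queen in each
    column, or None if no solution exists.
    """
    def consistent(rows, r):
        c = len(rows)
        return all(r != rr and abs(r - rr) != c - cc
                   for cc, rr in enumerate(rows))

    def backtrack(rows):
        if len(rows) == n:
            return rows
        for r in range(n):             # try each value in the domain
            if consistent(rows, r):    # check constraints before descending
                result = backtrack(rows + [r])
                if result:
                    return result
        return None                    # dead end: backtrack

    return backtrack([])
```

For the 4-queens instance this finds the placement with queens at rows 1, 3, 0, 2 of columns 0 through 3.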
The document summarizes a student project to program a computer to play the game of Nim. The goals were to get the computer to play itself using trial and error, then allow a human player and enforce the game rules. Arrays were added so the computer could remember previous winning moves. While Nim won't be useful outside class, learning to teach a computer through reinforcement learning could assist with electrical and computer engineering careers.
Adversarial search is a technique used in game playing to determine the best move when facing an opponent who is also trying to maximize their score. It involves searching through possible future game states called a game tree to evaluate the best outcome. The minimax algorithm searches the entire game tree to determine the optimal move by assuming the opponent will make the best counter-move. Alpha-beta pruning improves on minimax by pruning branches that cannot affect the choice of move. Modern game programs use techniques like precomputed databases, sophisticated evaluation functions, and extensive search to defeat human champions at games like checkers, chess, and Othello.
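The alpha-beta improvement described above can be sketched as follows. The `moves` and `evaluate` callables are hypothetical interfaces introduced for this sketch: `moves(state)` yields (move, child) pairs and `evaluate(state)` scores leaf positions.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax with alpha-beta pruning (illustrative sketch).

    Branches whose value cannot affect the final choice are cut off.
    """
    children = list(moves(state))
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for _, child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:          # opponent will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for _, child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, moves, evaluate))
            beta = min(beta, value)
            if alpha >= beta:          # maximizer will avoid this branch
                break
        return value
```

On a small explicit tree whose leaves score 3, 5, 2, 9, the search returns the minimax value 3 while pruning the last leaf of the second subtree.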
Informed and Uninformed Search Strategies, by Amey Kerkar
1. The document discusses various search strategies used to solve problems including uninformed search strategies like breadth-first search and depth-first search, and informed search strategies like best-first search and A* search that use heuristics.
2. It provides examples and explanations of breadth-first search, depth-first search, hill climbing, and best-first search algorithms. Key advantages and disadvantages of each strategy are outlined.
3. The document focuses on explaining control strategies for problem solving, different types of search strategies classified as uninformed or informed, and algorithms for breadth-first search, depth-first search, hill climbing, and best-first search.
This document discusses partial redundancy elimination, which aims to minimize redundant expression evaluations by identifying partially redundant expressions and inserting computations to make them fully redundant so they can be eliminated. It describes partial redundancy as occurring when an expression is evaluated along at least one path to a program point but not all paths. The document outlines the key steps of partial redundancy elimination as inserting computations to make partially redundant expressions fully redundant, then eliminating the redundant expressions. It also discusses some challenges around identifying candidate expressions and safe insertion points.
Best-first search is a heuristic search algorithm that expands the most promising node first. It uses an evaluation function f(n) that estimates the cost to reach the goal from each node n. Nodes are ordered in the fringe by increasing f(n). A* search is a special case of best-first search that uses an admissible heuristic function h(n) and is guaranteed to find the optimal solution.
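The fringe ordering by f(n) described above can be sketched as an A* search. The `neighbors` and `h` callables are hypothetical interfaces for this sketch: `neighbors(n)` yields (next_node, step_cost) pairs, and `h` is assumed admissible (never overestimates), which is what guarantees optimality.

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: fringe ordered by f(n) = g(n) + h(n) (illustrative sketch).

    g is the cost so far; h an admissible heuristic estimate to the goal.
    Returns the cheapest path as a list of nodes, or None.
    """
    fringe = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # better route to nxt
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

On a small weighted graph where the direct route S to B to G costs 5, A* finds the cheaper S to A to B to G route of cost 4.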
Syntax-Directed Translation: Syntax-Directed Definitions, Evaluation Orders for SDD's, Applications of Syntax-Directed Translation, Syntax-Directed Translation Schemes, and Implementing L-Attributed SDD's. Intermediate-Code Generation: Variants of Syntax Trees, Three-Address Code, Types and Declarations, Type Checking, Control Flow, Back patching, Switch-Statements
Example of Iterative Deepening Search and Bidirectional Search, by Abhijeet Agarwal
This presentation gives examples of iterative deepening search and bidirectional search, along with definitions and related theory for both. If you have any questions, please ask in a comment or by mail and I will be happy to help.
Uninformed search algorithms traverse a search space without any additional information about the state or structure. The document describes several uninformed search algorithms:
- Breadth-first search expands all nodes at the current search level before moving to the next level. It is complete but uses more memory.
- Depth-first search prioritizes exploring paths fully before backtracking. It uses less memory than breadth-first search but may get stuck in infinite loops.
- Depth-limited search limits the depth of depth-first search to avoid infinite paths, making it incomplete but avoiding excessive memory usage.
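The depth-limited strategy in the last point can be sketched as follows; the `successors` callable is a hypothetical interface for this sketch that returns a node's children.

```python
def depth_limited_search(node, goal, successors, limit):
    """Depth-limited DFS (illustrative sketch).

    Explores like depth-first search but refuses to descend below
    `limit`, avoiding infinite paths at the cost of completeness.
    Returns a path to the goal, or None.
    """
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff: do not descend further
    for child in successors(node):
        path = depth_limited_search(child, goal, successors, limit - 1)
        if path:
            return [node] + path
    return None
```

Calling this with limits 0, 1, 2, ... in turn gives iterative deepening search, which regains completeness while keeping depth-first memory usage.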
In the field of artificial intelligence (AI), planning refers to the process of developing a sequence of actions or steps that an intelligent agent should take to achieve a specific goal or solve a particular problem. AI planning is a fundamental component of many AI systems and has applications in various domains, including robotics, autonomous systems, scheduling, logistics, and more. Here are some key aspects of planning in AI:
Definition of Planning: Planning involves defining a problem, specifying the initial state, setting a goal state, and finding a sequence of actions or a plan that transforms the initial state into the desired goal state while adhering to certain constraints.
State-Space Representation: In AI planning, the problem is often represented as a state-space, where each state represents a snapshot of the system, and actions transform one state into another. The goal is to find a path through this state-space from the initial state to the goal state.
Search Algorithms: AI planning typically relies on search algorithms to explore the state-space efficiently. Uninformed search algorithms, such as depth-first search and breadth-first search, can be used, as well as informed search algorithms, like A* search, which incorporates heuristics to guide the search.
Heuristics: Heuristics are used in planning to estimate the cost or distance from a state to the goal. Heuristic functions help inform the search algorithms by providing an estimate of how close a state is to the solution. Good heuristics can significantly improve the efficiency of the search.
Plan Execution: Once a plan is generated, the next step is plan execution, where the agent carries out the actions in the plan to achieve the desired goal. This often requires monitoring the environment to ensure that the actions are executed as planned.
Temporal and Hierarchical Planning: In more complex scenarios, temporal planning deals with actions that have temporal constraints, and hierarchical planning involves creating plans at multiple levels of abstraction, making planning more manageable in complex domains.
Partial and Incremental Planning: Sometimes, it may not be necessary to create a complete plan from scratch. Partial and incremental planning allows agents to adapt and modify existing plans to respond to changing circumstances.
Applications: Planning is used in a wide range of applications, from manufacturing and logistics (e.g., scheduling production and delivery) to robotics (e.g., path planning for robots) and game playing (e.g., chess and video games).
Challenges: Challenges in AI planning include dealing with large search spaces, handling uncertainty, addressing resource constraints, and optimizing plans for efficiency and performance.
AI planning is a critical component in creating intelligent systems that can autonomously make decisions and solve complex problems.
Quadratic probing is an open addressing scheme for resolving hash collisions in hash tables. It operates by taking the original hash index and adding successive values of a quadratic polynomial until an open slot is found. This helps avoid clustering better than linear probing but does not eliminate it. Quadratic probing provides good memory caching due to locality of reference, though linear probing has greater locality. The algorithm takes the initial hash value and subsequent values using a quadratic function to probe for open slots.
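The probe sequence described above can be sketched as an insert routine: try slots h, h+1², h+2², ... modulo the table size until an empty slot is found. This is an illustrative sketch; it assumes the table is not full and is sized so the probe sequence reaches an open slot (a prime table size with load factor under one half suffices).

```python
def insert_quadratic(table, key):
    """Insert `key` into an open-addressed table using quadratic probing.

    Illustrative sketch: probes h, h+1^2, h+2^2, ... (mod len(table));
    empty slots hold None.
    """
    m = len(table)
    h = hash(key) % m
    for i in range(m):
        slot = (h + i * i) % m       # quadratic probe sequence
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("no open slot found")
```

For a table of size 7, the keys 3, 10, 17 all hash to index 3; quadratic probing places them at slots 3, 4 (offset 1²), and 0 (offset 2², wrapping around).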
The document discusses different search methods for problem solving, including uninformed search, heuristic search, and informed search using heuristic functions. It provides examples of heuristic functions that estimate the cost to reach the goal state and explores greedy best-first search and A* search algorithms. A* combines the cost to reach a node and a heuristic estimate of remaining cost to ensure optimal, efficient search.
Tower of Hanoi using an AI technique (means-ends analysis), by Shubham Nimiwal
The document describes a project report on solving the Tower of Hanoi problem using means-ends analysis. It explains the problem statement of moving disks of different sizes between three poles according to rules. It then outlines using means-ends analysis to break the problem into smaller sub-problems and recursively solve them. The report details an implementation of the algorithm in Java code. It concludes that applying this technique to solve TOH showed how humans also use means-ends analysis.
The document describes the insertion sort algorithm sorting the array [15, 9, 9, 10, 12, 1, 11, 3, 9]. It works by taking each element from the unsorted part of the array and inserting it into the correct position in the sorted part. It iterates through the array, swapping elements if the current element is less than the element preceding it, until the subarray is sorted.
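The procedure described above amounts to the standard insertion sort, sketched here and run on the same array:

```python
def insertion_sort(a):
    """Insertion sort: grow a sorted prefix, inserting each next element
    into place by shifting larger elements one slot to the right."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift elements greater than key
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # insert key at its position
    return a
```

Sorting [15, 9, 9, 10, 12, 1, 11, 3, 9] this way yields [1, 3, 9, 9, 9, 10, 11, 12, 15].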
This document discusses top-down parsing and LL parsing. It begins by defining top-down parsing as discovering a parse tree from top to bottom using a preorder traversal and leftmost derivation. It then explains that LL parsing is a technique of top-down parsing that uses a parser stack, parsing table, and driver function. It identifies left recursion and left factoring as problems for LL(1) parsers, as they prevent deterministic parsing. It provides examples of eliminating left recursion and left factoring through grammar transformations.
The document describes the Crow Search Algorithm, a nature-inspired metaheuristic optimization algorithm. It mimics the social behavior of crows, including hiding and stealing foods. The algorithm initializes a population of solutions representing crows. It updates each crow's position based on the position of a randomly selected crow, and each crow's memory based on fitness. This balances exploration and exploitation to optimize problems.
RBFS is a memory bounded search algorithm similar to A* that uses heuristics. It recursively searches the problem space, but only keeps the current search path and sibling nodes in memory at a given time. When exploring a subtree no longer looks promising, it is abandoned to save space. The space complexity of RBFS is linear in the search depth, same as IDA*. RBFS inherits heuristic values h(n) from parent nodes to child nodes to guide the search when reexploring subtrees.
The document discusses the branch and bound algorithm for solving the 15-puzzle problem. It describes the key components of branch and bound including live nodes, e-nodes, and dead nodes. It also defines the cost function used to evaluate nodes as the sum of the path length and number of misplaced tiles. The algorithm generates all possible child nodes from the current node and prunes the search tree by comparing node costs to avoid exploring subtrees without solutions.
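The cost function defined above (path length plus misplaced tiles) can be sketched directly. In this illustrative sketch, boards are flat tuples with 0 marking the blank, which is not counted as misplaced; a 15-puzzle uses length-16 tuples, and the test below uses a smaller 8-puzzle board for brevity.

```python
def puzzle_cost(board, goal, path_length):
    """Branch and bound cost estimate for a sliding-tile puzzle node:
    c(x) = path length so far + number of non-blank tiles out of place.
    Boards are flat tuples; 0 marks the blank (illustrative sketch)."""
    misplaced = sum(1 for t, g in zip(board, goal)
                    if t != 0 and t != g)
    return path_length + misplaced
```

For an 8-puzzle board two moves deep with one tile out of place, the cost is 2 + 1 = 3.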
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf, by JenishaR1
Replicate human intelligence
Solve Knowledge-intensive tasks
An intelligent connection of perception and action
Building a machine which can perform tasks that require human intelligence, such as:
Proving a theorem
Playing chess
Plan some surgical operation
Driving a car in traffic
Creating systems which can exhibit intelligent behavior, learn new things by themselves, demonstrate, explain, and advise their users.
What Comprises Artificial Intelligence?
Artificial Intelligence is not just a part of computer science; it is a vast field that draws on many other disciplines. To create AI we should first understand how intelligence is composed: intelligence is an intangible faculty of the brain, a combination of reasoning, learning, problem solving, perception, language understanding, and more.
To achieve the above capabilities in a machine or software, Artificial Intelligence requires the following disciplines:
Mathematics
Biology
Psychology
Sociology
Computer Science
Neuron study
Statistics
Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
High accuracy with fewer errors: AI systems are less error-prone and more accurate, because they make decisions based on prior experience and information.
High speed: AI systems can make decisions very quickly; this is how an AI system can beat a chess champion at chess.
High reliability: AI machines are highly reliable and can perform the same action many times with consistent accuracy.
Useful for risky areas: AI machines can help in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.
Digital assistants: AI is very useful for providing digital assistance to users; for example, various e-commerce websites currently use AI to show products matching customer requirements.
Useful as a public utility: AI can serve public utilities, such as self-driving cars that make journeys safer and hassle-free, facial recognition for security, and natural language processing for communicating with humans in human language.
Video: https://www.facebook.com/atscaleevents/videos/1693888610884236/ . Talk by Brendan Gregg from Facebook's Performance @Scale: "Linux performance analysis has been the domain of ancient tools and metrics, but that's now changing in the Linux 4.x series. A new tracer is available in the mainline kernel, built from dynamic tracing (kprobes, uprobes) and enhanced BPF (Berkeley Packet Filter), aka, eBPF. It allows us to measure latency distributions for file system I/O and run queue latency, print details of storage device I/O and TCP retransmits, investigate blocked stack traces and memory leaks, and a whole lot more. These lead to performance wins large and small, especially when instrumenting areas that previously had zero visibility. This talk will summarize this new technology and some long-standing issues that it can solve, and how we intend to use it at Netflix."
The document discusses various search algorithms used in artificial intelligence including uninformed and informed search methods. It provides details on breadth-first search, depth-first search, uniform cost search, and heuristic search approaches like hill climbing, greedy best-first search, and A* search. It also covers general problem solving techniques and evaluating search performance based on completeness, time complexity, space complexity, and optimality.
Adaptive Huffman coding is an improvement over standard Huffman coding that allows the Huffman tree to be adapted as additional symbols are encoded. It determines codeword mappings using a running estimate of symbol probabilities. This allows it to better exploit locality in the data. The algorithm works in two phases: first, it transforms the existing Huffman tree to maintain optimality when a symbol's weight is incremented; second, it increments the weight. This process is repeated as each new symbol is encoded.
The branch and bound method searches a tree model of the solution space for discrete optimization problems. It generates nodes that define problem states, with solution states satisfying the problem constraints. Bounding functions are used to prune subtrees without optimal solutions to avoid full exploration. The method searches the state space tree using techniques like first-in-first-out (FIFO) or least cost search, which selects the next node to expand based on estimated distance to an optimal solution.
Here are the portions of the state space tree generated by LCBB and FIFOBB for the given knapsack problems:
a) n=5, (p1,p2,p3,p4,p5)=(10,15,6,8,4), (w1,w2,w3,w4,w5)=(4,6,3,4,2) and m=12
LCBB: [state space tree figure; the node layout did not survive text extraction]
FIFOBB: [state space tree figure; the node layout did not survive text extraction]
b) n=5, (p1,p2,p3,
Sociology
Computer Science
Neurons Study
Statistics Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
High Accuracy with less errors: AI machines or systems are prone to less errors and high accuracy as it takes decisions as per pre-experience or information.
High-Speed: AI systems can be of very high-speed and fast-decision making, because of that AI systems can beat a chess champion in the Chess game.
High reliability: AI machines are highly reliable and can perform the same action multiple times with high accuracy.
Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb, exploring the ocean floor, where to employ a human can be risky.
Digital Assistant: AI can be very useful to provide digital assistant to the users such as AI technology is currently used by various E-commerce websites to show the products as per customer requirement.
Useful as a public utility: AI can be very useful for public utilities such as a self-driving car which can make our journey safer and hassle-free, facial recognition for security purpose, Natural language processing to communicate with the human in human-language, etc.
Video: https://www.facebook.com/atscaleevents/videos/1693888610884236/ . Talk by Brendan Gregg from Facebook's Performance @Scale: "Linux performance analysis has been the domain of ancient tools and metrics, but that's now changing in the Linux 4.x series. A new tracer is available in the mainline kernel, built from dynamic tracing (kprobes, uprobes) and enhanced BPF (Berkeley Packet Filter), aka, eBPF. It allows us to measure latency distributions for file system I/O and run queue latency, print details of storage device I/O and TCP retransmits, investigate blocked stack traces and memory leaks, and a whole lot more. These lead to performance wins large and small, especially when instrumenting areas that previously had zero visibility. This talk will summarize this new technology and some long-standing issues that it can solve, and how we intend to use it at Netflix."
The document discusses various search algorithms used in artificial intelligence including uninformed and informed search methods. It provides details on breadth-first search, depth-first search, uniform cost search, and heuristic search approaches like hill climbing, greedy best-first search, and A* search. It also covers general problem solving techniques and evaluating search performance based on completeness, time complexity, space complexity, and optimality.
Adaptive Huffman coding is an improvement over standard Huffman coding that allows the Huffman tree to be adapted as additional symbols are encoded. It determines codeword mappings using a running estimate of symbol probabilities. This allows it to better exploit locality in the data. The algorithm works in two phases: first, it transforms the existing Huffman tree to maintain optimality when a symbol's weight is incremented; second, it increments the weight. This process is repeated as each new symbol is encoded.
2. General Method of Branch and Bound:
• Branch and Bound is another method to systematically search a solution space. Just like
backtracking, we use bounding functions to avoid generating subtrees that do not contain an
answer node. However, Branch and Bound differs from backtracking in two important ways:
 It has a branching function, which can be depth first search, breadth first search, or
based on a bounding function.
 It has a bounding function, which goes far beyond the feasibility test as a means to
prune the search tree efficiently.
3. Branch and Bound refers to all state space search methods in which all children of the E-node are
generated before any other live node becomes the E-node.
• Branch and Bound is the generalization of both graph search strategies, BFS and D-search.
 A BFS-like state space search is called FIFO (first in, first out) search, since the list of live
nodes is kept as a first-in-first-out list (queue).
 A D-search-like state space search is called LIFO (last in, first out) search, since the list of
live nodes is kept as a last-in-first-out list (stack).
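The FIFO/LIFO distinction above can be sketched in a few lines of Python. The tree below is an illustrative example (not from the slides); the only difference between the two searches is which end of the live-node list the next E-node is taken from:

```python
from collections import deque

# A small illustrative state space tree (assumed for this demo):
# node -> children, generated left to right.
tree = {1: [2, 3, 4], 2: [5, 6], 3: [7], 4: [8, 9]}

def branch_and_bound(root, use_fifo=True):
    """Generate ALL children of the current E-node before any other live
    node becomes the E-node. FIFO (queue) gives a BFS-like order; LIFO
    (stack) gives a D-search order that expands the most recently
    generated child first."""
    live = deque([root])
    order = []                              # sequence in which nodes become the E-node
    while live:
        e = live.popleft() if use_fifo else live.pop()
        order.append(e)
        for child in tree.get(e, []):       # generate all children of the E-node
            live.append(child)
    return order

# FIFO (BFS-like) E-node order: [1, 2, 3, 4, 5, 6, 7, 8, 9]
# LIFO (D-search) E-node order: [1, 4, 9, 8, 3, 7, 2, 6, 5]
```

Note that under LIFO the search dives to node 4's subtree first, because 4 was the last child generated, mirroring how D-search follows the deepest available node.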
4. Different definitions
Definition 1: A live node is a node that has been generated but whose children have not yet been generated.
Definition 2: An E-node is a live node whose children are currently being explored. In other words, an E-node is a node currently
being expanded.
Definition 3: A dead node is a generated node that is not to be expanded or explored any further. All children of a dead node have
already been generated.
Definition 4: Branch-and-bound refers to all state space search methods in which all children of an E-node are generated before
any other live node can become the E-node.
Definition 5: The adjective "heuristic" means "related to improving problem-solving performance". As a noun it is also used in
regard to "any method or trick used to improve the efficiency of a problem-solving program". But imperfect methods are not
necessarily heuristic, or vice versa. "A heuristic (heuristic rule, heuristic method) is a rule of thumb, strategy, trick, simplification,
or any other kind of device which drastically limits search for solutions in large problem spaces. Heuristics do not guarantee
optimal solutions; in fact, they do not guarantee any solution at all. A useful heuristic offers solutions which are good enough most of
the time."
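Definitions 1-4 can be made concrete with a short trace. The tree and step count below are assumed purely for illustration; the sketch runs a FIFO expansion and reports each generated node's status:

```python
from collections import deque

# Illustrative tree (assumed for this demo, not from the slides).
tree = {1: [2, 3], 2: [4, 5], 3: []}

def classify(root, steps):
    """Run `steps` FIFO expansions and report each generated node's status:
    'live' - generated, children not yet generated (Definition 1)
    'dead' - all children already generated (Definition 3)
    The node being expanded on each step is the E-node (Definition 2)."""
    status = {root: "live"}
    live = deque([root])
    e_node = None
    for _ in range(steps):
        if not live:
            break
        e_node = live.popleft()             # this live node becomes the E-node
        for child in tree.get(e_node, []):
            status[child] = "live"
            live.append(child)
        status[e_node] = "dead"             # all of its children are now generated
    return status, e_node

status, e = classify(1, steps=1)
# After one expansion: node 1 is dead, its children 2 and 3 are live.
```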
5. Least Cost (LC) search:
• In both LIFO and FIFO Branch and Bound, the selection rule for the next E-node is rigid and blind. The selection rule for the
next E-node does not give any preference to a node that has a very good chance of getting the search to an answer node
quickly.
• The search for an answer node can be sped up by using an "intelligent" ranking function ĉ(x) for live nodes. The next E-node
is selected on the basis of this ranking function. The node x is assigned a rank using:
• ĉ(x) = f(h(x)) + ĝ(x), where ĉ(x) is the estimated cost of x,
• h(x) is the cost of reaching x from the root and f(.) is any non-decreasing function, and
• ĝ(x) is an estimate of the additional effort needed to reach an answer node from x.
• A search strategy that uses the cost function ĉ(x) = f(h(x)) + ĝ(x) to select as its next E-node
a live node with least ĉ(.) is called LC-search (Least Cost search).
• BFS and D-search are special cases of LC-search. If ĝ(x) = 0 and f(h(x)) = level of node x, then an LC-search
generates nodes by levels; this is essentially the same as BFS. If f(h(x)) = 0 and ĝ(x) > ĝ(y) whenever y is a child of x, then the search
is essentially a D-search.
• An LC-search coupled with bounding functions is called an LC-branch and bound search.
• We associate a cost c(x) with each node x in the state space tree. It is not possible to easily compute the function c(x),
so we compute an estimate ĉ(x) of c(x).
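The ranking rule above is just a priority queue keyed on ĉ(x) = f(h(x)) + ĝ(x). A minimal sketch, with f taken as the identity and the (h, ĝ) values invented for the demo:

```python
import heapq

def f(h):
    # Any non-decreasing function of the root-to-x cost h(x); identity here.
    return h

# (h(x), g_hat(x)) pairs for some hypothetical live nodes (assumed values).
nodes = {"a": (2, 7), "b": (3, 1), "c": (1, 9)}

# Rank c_hat(x) = f(h(x)) + g_hat(x); the next E-node is a live node of least rank.
heap = [(f(h) + g, name) for name, (h, g) in nodes.items()]
heapq.heapify(heap)
rank, nxt = heapq.heappop(heap)
# "b" has rank 3 + 1 = 4, the least, so it becomes the next E-node.
```

Setting ĝ(x) = 0 and f(h(x)) = level of x makes the rank equal to the node's depth, so nodes pop level by level (BFS); that is the special-case behavior the slide describes.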
6. Control abstraction of LC-search
• Let t be a state space tree and c() a cost function for the nodes in t. If x is a node in t,
then c(x) is the minimum cost of any answer node in the subtree with root x. Thus, c(t) is
the cost of a minimum-cost answer node in t.
• A heuristic ĉ(.) is used to estimate c(). This heuristic should be easy to
compute and generally has the property that if x is either an answer node or a
leaf node, then c(x) = ĉ(x).
• LC-search uses ĉ to find an answer node. The algorithm uses two functions, Least() and
Add(), to delete and add a live node from or to the list of live nodes, respectively.
• Least() finds a live node with least ĉ(). This node is deleted from the list of live nodes
and returned.
7. Algorithm LCSearch(t)
{ // Search t for an answer node.
    if *t is an answer node then { output *t; return; }
    E := t; // E-node.
    Initialize the list of live nodes to be empty;
    repeat
    {
        for each child x of E do
        {
            if x is an answer node then { output the path from x to t; return; }
            Add(x);              // x is a new live node.
            (x → parent) := E;   // Pointer for path to root.
        }
        if there are no more live nodes then
        {
            write ("No answer node");
            return;
        }
        E := Least();
    } until (false);
}
The root node is the first E-node. During the execution of LC-search, the list of live nodes contains all live nodes except the E-node; initially this list is empty. All the children of the E-node are examined. If one of the children is an answer node, the algorithm outputs the path from x to t and terminates. If a child of E is not an answer node, it becomes a live node: it is added to the list of live nodes and its parent field is set to E. When all the children of E have been generated, E becomes a dead node; this happens only if none of E's children is an answer node. If no live nodes remain at that point, the search terminates with the message "No answer node". Otherwise, Least(), by definition, correctly chooses the next E-node and the search continues from there. LC-search terminates only when either an answer node is found or the entire state space tree has been generated and searched.
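The control flow above can be sketched in Python, using a min-heap as the list of live nodes so that Least() and Add() become heap operations. The helper names (children, is_answer, c_hat) are assumptions for illustration, not part of the original pseudocode:

```python
import heapq

def lc_search(root, children, is_answer, c_hat):
    # children(x): iterable of the children of node x
    # is_answer(x): True if x is an answer node
    # c_hat(x): estimated cost c^(x) used to rank live nodes
    if is_answer(root):
        return [root]
    parent = {root: None}   # pointers for the path back to the root
    live = []               # min-heap of (c^(x), tie-breaker, x)
    count = 0
    E = root                # the current E-node
    while True:
        for x in children(E):
            parent[x] = E
            if is_answer(x):
                # output the path from x back to the root t
                path, node = [x], E
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            count += 1
            heapq.heappush(live, (c_hat(x), count, x))   # Add(x)
        if not live:
            return None     # "No answer node"
        _, _, E = heapq.heappop(live)                    # Least()
```

Choosing the node-number itself as ĉ(x) in a toy tree, the search expands cheapest live nodes first and returns the root-to-answer path.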
8. Bounding:
• A branch and bound method searches a state space tree using any search mechanism in which all the
children of the E-node are generated before another node becomes the E-node. We assume that each
answer node x has a cost c(x) associated with it and that a minimum-cost answer node is to be found.
Three common search strategies are FIFO, LIFO, and LC. The three search methods differ only in the
selection rule used to obtain the next E-node.
• A good bounding function helps to prune the tree efficiently, leading to a faster exploration of the solution space.
• A cost function ĉ(·) such that ĉ(x) ≤ c(x) is used to provide lower bounds on solutions obtainable from any node x. If upper is an upper bound on the cost of a minimum-cost solution, then all live nodes x with ĉ(x) > upper can be killed, since c(x) ≥ ĉ(x) > upper. The starting value for upper can be obtained by some heuristic or can be set to ∞.
• As long as the initial value for upper is not less than the cost of a minimum-cost answer node, the
above rules to kill live nodes will not result in the killing of a live node that can reach a minimum-cost
answer node. Each time a new answer node is found, the value of upper can be updated.
• Branch-and-bound algorithms are used for optimization problems; here we deal directly only with minimization problems. A maximization problem is easily converted to a minimization problem by changing the sign of the objective function.
9. To formulate the search for an optimal solution for a least-cost answer node in a state space
tree, it is necessary to define the cost function c(.), such that c(x) is minimum for all nodes
representing an optimal solution. The easiest way to do this is to use the objective function itself
for c(.).
For nodes representing feasible solutions, c(x) is the value of the objective function
for that feasible solution.
For nodes representing infeasible solutions, c(x) =∞.
For nodes representing partial solutions, c(x) is the cost of the minimum-cost node in
the subtree with root x.
Since c(x) is generally hard to compute, the branch-and-bound algorithm uses an estimate ĉ(x) such that ĉ(x) ≤ c(x) for all x.
10. FIFO Branch and Bound:
A FIFO branch-and-bound algorithm for the job sequencing problem can begin with upper
= ∞ as an upper bound on the cost of a minimum-cost answer node.
• Starting with node 1 as the E-node and using the variable tuple size formulation of Figure 8.4, nodes 2, 3, 4, and 5 are generated. Then u(2) = 19, u(3) = 14, u(4) = 18, and u(5) = 21.
• The variable upper is updated to 14 when node 3 is generated. Since ĉ(4) and ĉ(5) are greater than upper, nodes 4 and 5 get killed. Only nodes 2 and 3 remain alive.
• Node 2 becomes the next E-node. Its children, nodes 6, 7 and 8, are generated. Then u(6) = 9 and so upper is updated to 9. Since ĉ(7) = 10 > upper, node 7 gets killed. Node 8 is infeasible and so it is killed.
11. • Node 3 becomes the next E-node. Its children, nodes 9 and 10, are generated. Then u(9) = 8, so upper is updated to 8 and node 10 with ĉ(10) = 11 is killed. The next E-node is node 6; both its children are infeasible. Node 9's only child is also infeasible. The minimum-cost answer node is node 9, with a cost of 8.
• When implementing a FIFO branch-and-bound algorithm, it is not economical to kill live nodes with ĉ(x) > upper each time upper is updated. This is so because live nodes are in the queue in the order in which they were generated; hence, nodes with ĉ(x) > upper are distributed in some random way in the queue. Instead, a live node with ĉ(x) > upper can be killed when it is about to become the E-node.
• The FIFO-based branch-and-bound algorithm with an appropriate ĉ(·) and u(·) is called FIFOBB.
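As a sketch (not the textbook's exact FIFOBB), the scheme can be written with a FIFO queue, killing a node once on generation when ĉ(x) > upper and again, deferred, when it is dequeued to become the E-node. All helper names (children, is_answer, c_hat, u) are hypothetical:

```python
from collections import deque
from math import inf

def fifo_bb(root, children, is_answer, c_hat, u):
    # c_hat(x): lower bound c^(x) on the cheapest answer in x's subtree
    # u(x): upper bound on the cost of a minimum-cost answer below x
    # Returns the cost of a minimum-cost answer node, or None.
    upper = inf
    best = None
    queue = deque([root])
    while queue:
        E = queue.popleft()
        if c_hat(E) > upper:
            continue                      # deferred kill on becoming E-node
        for x in children(E):
            if c_hat(x) > upper:
                continue                  # killed on generation
            upper = min(upper, u(x))      # tighten the upper bound
            if is_answer(x):
                cost = u(x)               # exact cost at an answer node
                if best is None or cost < best:
                    best = cost
            else:
                queue.append(x)
    return best
```

In the toy instance used for testing, the branch with lower bound 5 is killed as soon as upper drops to 4, so the subtree below it is never generated.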
12. LC Branch and Bound:
An LC Branch-and-Bound search of the tree of Figure 8.4 will begin with upper = ∞
and node 1 as the first E-node.
When node 1 is expanded, nodes 2, 3, 4 and 5 are generated in that order. As in the case of FIFOBB, upper is updated to 14 when node 3 is generated, and nodes 4 and 5 are killed as ĉ(4) > upper and ĉ(5) > upper.
Node 2 is the next E-node, as ĉ(2) = 0 and ĉ(3) = 5. Nodes 6, 7 and 8 are generated, and upper is updated to 9 when node 6 is generated. So node 7 is killed, as ĉ(7) = 10 > upper. Node 8 is infeasible and so is killed. The only live nodes now are nodes 3 and 6.
Node 6 is the next E-node, as ĉ(6) = 0 < ĉ(3). Both its children are infeasible.
Node 3 becomes the next E-node. When node 9 is generated, upper is updated to 8 as u(9) = 8. So node 10, with ĉ(10) = 11, is killed on generation.
Node 9 becomes the next E-node. Its only child is infeasible. No live nodes remain. The search terminates with node 9 representing the minimum-cost answer node.
• The solution path is 1 → 3 → 9, with cost 5 + 3 = 8.
13. Traveling Salesperson Problem:
We start at a particular node, visit all nodes exactly once, and come back to the initial node with minimum cost.
• Let G = (V, E) be a connected graph and let c(i, j) be the cost of edge <i, j>, with c(i, j) = ∞ if <i, j> ∉ E, and let |V| = n, the number of vertices. Every tour starts at vertex 1 and ends at the same vertex. So the solution space is given by S = {(1, i1, i2, . . . , in-1, 1) | (i1, . . . , in-1) is a permutation of (2, 3, . . . , n)}, and |S| = (n − 1)!. The size of S can be reduced by restricting S so that (1, i1, i2, . . . , in-1, 1) ∈ S iff <ij, ij+1> ∈ E for 0 ≤ j ≤ n − 1, where i0 = in = 1.
14. Procedure for solving the traveling salesperson problem:
Step 1: Reduce the given cost matrix. A matrix is reduced if every row and column is reduced. A row (column) is said to be reduced if it contains at least one zero and all remaining entries are non-negative. This can be done as follows:
a) Row reduction: Take the minimum element of the first row and subtract it from all elements of the first row; next take the minimum element of the second row and subtract it from the second row. Apply the same procedure to all rows.
b) Find the sum of the elements that were subtracted from the rows.
c) Apply column reduction to the matrix obtained after row reduction.
Column reduction: Take the minimum element of the first column and subtract it from all elements of the first column; next take the minimum element of the second column and subtract it from the second column. Apply the same procedure to all columns.
d) Find the sum of the elements that were subtracted from the columns.
e) Obtain the cumulative reduced sum:
Cumulative reduced sum = row-wise reduction sum + column-wise reduction sum.
Associate the cumulative reduced sum with the starting state as its lower bound, and ∞ as its upper bound.
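Steps (a)–(e) can be sketched as follows, under the assumption that the cost matrix is a list of lists with missing edges represented by inf:

```python
from math import inf

def reduce_matrix(a):
    """Row- and column-reduce the cost matrix a in place; rows and
    columns consisting only of inf are skipped. Returns the cumulative
    reduced sum (row-wise + column-wise), i.e. the lower bound."""
    n = len(a)
    total = 0
    for i in range(n):                          # a) row reduction
        m = min(a[i])
        if 0 < m < inf:
            total += m                          # b) sum of row subtractions
            a[i] = [x - m for x in a[i]]        # (inf - m stays inf)
    for j in range(n):                          # c) column reduction
        m = min(a[i][j] for i in range(n))
        if 0 < m < inf:
            total += m                          # d) sum of column subtractions
            for i in range(n):
                a[i][j] -= m
    return total                                # e) cumulative reduced sum
```

The function mutates its argument; the returned cumulative reduced sum becomes the lower bound of the starting state.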
15. Step 2: Calculate the reduced cost matrix for every node R. Let A be the reduced cost matrix for node R, and let S be a child of R such that the tree edge (R, S) corresponds to including edge <i, j> in the tour. If S is not a leaf node, then the reduced cost matrix for S may be obtained as follows:
a) Change all entries in row i and column j of A to ∞.
b) Set A(j, 1) to ∞.
c) Reduce all rows and columns in the resulting matrix, except for rows and columns containing only ∞. Let r be the total amount subtracted to reduce the matrix.
d) Then ĉ(S) = ĉ(R) + A(i, j) + r, where ĉ(R) is the lower bound of the parent node R and ĉ(S) is the cost function (lower bound) of S.
Step 3: Repeat Step 2 until all nodes are visited.
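A sketch of Step 2 in the same style (hypothetical names; 0-based indices, so vertex 1 is stored at index 0, A is assumed to be the parent's already-reduced matrix, and c_R its lower bound):

```python
from math import inf

def child_cost(A, c_R, i, j):
    """Lower bound c^(S) = c^(R) + A(i, j) + r for the child S reached
    by adding edge <i, j>. Returns (c_S, reduced matrix for S)."""
    n = len(A)
    B = [row[:] for row in A]       # work on a copy of A
    edge = B[i][j]                  # A(i, j) before blanking
    for k in range(n):              # a) row i and column j become inf
        B[i][k] = inf
        B[k][j] = inf
    B[j][0] = inf                   # b) forbid returning to vertex 1 early
    r = 0                           # c) re-reduce; r = total subtracted
    for p in range(n):
        m = min(B[p])
        if 0 < m < inf:
            r += m
            B[p] = [x - m for x in B[p]]
    for q in range(n):
        m = min(B[p][q] for p in range(n))
        if 0 < m < inf:
            r += m
            for p in range(n):
                B[p][q] -= m
    return c_R + edge + r, B        # d) c^(S) = c^(R) + A(i, j) + r
```

Calling child_cost once per candidate edge from the current E-node yields the ĉ values that the branch-and-bound search compares when choosing the next node to expand.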
28. The overall tree organization is as follows:
The path of the traveling salesperson problem is: 1 → 4 → 2 → 5 → 3 → 1.
The minimum cost of the path is: 10 + 6 + 2 + 7 + 3 = 28.