ARTIFICIAL INTELLIGENCE
Course Code: 21CS3002
• Continuous Evaluation: 40 Marks
• End Semester Examination: 60 Marks
• L T P : 3 0 0
• Credits: 3
• Pre-Requisite : NIL
COURSE OBJECTIVE
• To learn the fundamentals of AI and the role of agents in AI.
• To understand the search which is the first building block of AI and its
applications.
• To understand and analyze the second building block of AI that is
knowledge representation and handling uncertainty.
• To understand the concepts of planning and learning to create smart
applications.
• To learn the applications of AI for NLP and Expert system designing.
COURSE LEARNING OUTCOMES (CLO)
• Identify problems that are amenable to solution by AI methods, and which AI methods may be
suited to solving a given problem.
• Solve problems such as constraint satisfaction, search, and optimization problems.
• Perform deduction using logic and reasoning algorithms.
• Handle uncertainty.
• Understand the role of planning and learning in automated control and smart applications.
• Formalize a given problem in the language/framework of different AI methods.
• Design and carry out an empirical evaluation of different algorithms on a problem formalization.
UNIT-I
Introduction
• Introduction to AI: Definitions, Historical foundations, Basic
Elements of AI, Characteristics of intelligent algorithms, AI
application areas.
• Agents: Definition of agents, Agent Environment, Agent
architectures (e.g., reactive, layered, cognitive), Multi-agent
systems- Collaborating agents, Competitive agents.
UNIT-II
Problem solving
• Problem solving: State space search; Production systems, search
space control: depth-first, breadth-first search, heuristic search - Hill
climbing, best-first search, branch and bound. Problem Reduction,
Constraint Satisfaction, Means-End Analysis, Game Playing.
UNIT-III
Handling Uncertainty
• Non-Monotonic Reasoning, Probabilistic reasoning, use of certainty
factors, Basics of Fuzzy logic.
• Knowledge Based Systems: Propositional Logic, FOPL, Clausal
Form, Resolution & Unification. Knowledge representation,
acquisition, organization & manipulation.
UNIT-IV
Planning
• Planning: The blocks world, Components of Planning Systems,
Goal stack Planning, Nonlinear planning, Hierarchical planning.
• Learning: Learning from examples, Learning by advice, Explanation-
based learning, Learning in problem solving, Definition and
examples of a broad variety of machine learning tasks, Classification,
Inductive learning, Simple statistical-based learning such as the Naive
Bayesian Classifier, decision trees, single-layer & multilayer
perceptrons.
UNIT-V
Natural Language Processing
• Language models, n-grams, Vector space models, Bag of words,
Text classification, Information retrieval, Pagerank, Information
extraction, Question-answering.
• Expert Systems: Need and justification for expert systems, Basic
Components & architecture of Expert systems, ES-Shells,
Representing & Using Domain Knowledge, Knowledge acquisition.
• Case Studies: IBM Watson, chatbots, MYCIN, R1.
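As a small illustration of the bag-of-words representation listed in this unit, here is a minimal Python sketch; the two toy documents and the whitespace tokenisation are illustrative assumptions, not part of the syllabus:

```python
def bag_of_words(docs):
    """Map each document to a vector of word counts over a shared vocabulary."""
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    index = {word: i for i, word in enumerate(vocab)}
    vectors = []
    for doc in docs:
        vec = [0] * len(vocab)
        for word in doc.lower().split():
            vec[index[word]] += 1  # count each occurrence of the word
        vectors.append(vec)
    return vocab, vectors

vocab, vectors = bag_of_words(["the cat sat", "the cat ate the fish"])
print(vocab)    # ['ate', 'cat', 'fish', 'sat', 'the']
print(vectors)  # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

Word order is discarded; only counts survive, which is exactly the simplification the bag-of-words model makes.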
TEXT BOOKS
• Stuart Russell, Peter Norvig, Artificial Intelligence: A Modern
Approach, Prentice Hall, Fourth edition, 2020.
• E. Rich and K. Knight, “Artificial Intelligence”, Tata McGraw Hill.
REFERENCE BOOKS
• Dan W. Patterson, “Introduction to Artificial Intelligence and Expert Systems”,
PHI
• Nils J. Nilsson, Artificial Intelligence: A New Synthesis, Morgan-Kaufmann, 1998.
• Biere, A., Heule, M., Van Maaren, H., Walsh, T., Handbook of Satisfiability, IOS
Press, 2009.
• Judea Pearl, Heuristics: Intelligent Search Strategies for Computer Problem
Solving, Addison- Wesley Publishing Company, 1984.
• C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
• Trevor Hastie, Robert Tibshirani, Jerome Friedman, The Elements of Statistical
Learning (ESL), Springer, 2009 (freely available online)
• Kevin Murphy, Machine Learning: A Probabilistic Perspective (MLAPP), MIT
Press, 2012
ARTIFICIAL INTELLIGENCE LAB
Course Code: 21CS3114
• Continuous Evaluation: 60 Marks
• End Semester Examination: 40 Marks
• L T P : 0 0 2
• Credits: 1
• Pre-Requisite : Basics of any Programming Language
COURSE OBJECTIVES (CO)
• To implement concepts of AI through different programming
languages.
• To understand the role of each component of AI in designing a
smart application.
COURSE LEARNING OUTCOMES (CLO)
• Understand the requirement of search strategies in AI.
• Understand and implement the concepts for uncertainty, knowledge
representation and learning.
• Learn to design applications while deciding the level of
requirement of each AI component (search, planning, learning,
uncertainty).
• Learn and understand the mapping and interaction among various
AI components for an automated/smart application.
LIST OF PROGRAMS
• WAP to solve the Water Jug Problem (using DFS and BFS).
• WAP to solve a problem using the Means-End Analysis technique (e.g.,
robot traversal).
• WAP to solve the 4-Queens Problem.
• WAP to solve the Travelling Salesman Problem.
• WAP to convert predicate logic to propositional logic.
• WAP for syntax checking of English sentences (English grammar).
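As one possible starting point for the water jug program above, here is a minimal Python sketch using BFS; the 4- and 3-litre jug capacities and the target of 2 litres are assumptions chosen for illustration:

```python
from collections import deque

def water_jug_bfs(cap_a=4, cap_b=3, target=2):
    """Breadth-first search over (jug A, jug B) states.

    Returns the shortest sequence of states from (0, 0) to any
    state holding `target` litres, or None if unreachable.
    """
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            # Reconstruct the path by walking parent links back to the start.
            path, s = [], (a, b)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        successors = [
            (cap_a, b), (a, cap_b),  # fill a jug
            (0, b), (a, 0),          # empty a jug
            # pour A -> B, then pour B -> A
            (a - min(a, cap_b - b), b + min(a, cap_b - b)),
            (a + min(b, cap_a - a), b - min(b, cap_a - a)),
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None

path = water_jug_bfs()
print(path)  # [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]
```

Because BFS explores states level by level, the first solution found is the shortest; swapping the deque's `popleft()` for `pop()` would turn this into DFS.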
LIST OF PROGRAMS
• WAP to develop an expert system for medical diagnosis.
• Develop any Rule based system for an application of your choice.
• WAP to study various fuzzification methods in fuzzy logic.
• Design fuzzy rule base system for tipping problem.
• WAP to design a single layer perceptron for linear logic gates.
• WAP to design multi-layer perceptron for non-linear logic gates.
LIST OF PROGRAMS
• Design a classifier for fruit classification using Bayesian and
Decision Tree classifier.
• Develop an algorithm for morphological derivation / verb derivation
and implement it.
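For the fruit-classification exercise above, a count-based Naive Bayes classifier can be sketched as follows; the (colour, shape) features, the toy data set, and the Laplace-smoothing constants are illustrative assumptions:

```python
from collections import Counter, defaultdict

def train_nb(samples):
    """Naive Bayes from counts: score = P(label) * product of P(feature | label)."""
    labels = Counter(lab for _, lab in samples)
    feat_counts = defaultdict(Counter)  # feat_counts[(position, value)][label]
    for feats, lab in samples:
        for i, v in enumerate(feats):
            feat_counts[(i, v)][lab] += 1

    def predict(feats):
        def score(lab):
            p = labels[lab] / len(samples)  # prior P(label)
            for i, v in enumerate(feats):
                # add-one (Laplace) smoothing so unseen values never zero out the score
                p *= (feat_counts[(i, v)][lab] + 1) / (labels[lab] + 2)
            return p
        return max(labels, key=score)

    return predict

# Toy labeled data: (colour, shape) -> fruit.
data = [(("red", "round"), "apple"), (("red", "round"), "apple"),
        (("yellow", "long"), "banana"), (("yellow", "long"), "banana")]
predict = train_nb(data)
print(predict(("red", "round")))    # apple
print(predict(("yellow", "long")))  # banana
```

The "naive" assumption is that features are independent given the label, which is what lets the score be a simple product of per-feature probabilities.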
Note
• Students can choose any programming language for
implementation, like Python, C, C++, Java, MATLAB, etc.
• Students will create a project in teams to analyse and apply the
concepts learnt.
Learning Resources
• Reference books and other materials
• Laboratory Manual
• Stuart Russell, Peter Norvig, Artificial Intelligence: A Modern
Approach, Prentice Hall, Fourth edition, 2020.
• E. Rich and K. Knight, “Artificial Intelligence”, Tata McGraw Hill.
UNIT-I
Introduction
• Introduction to AI: Definitions, Historical foundations, Basic
Elements of AI, Characteristics of intelligent algorithms, AI
application areas.
• Artificial intelligence (AI) refers to computer systems capable
of performing complex tasks that historically only a human
could do, such as reasoning, making decisions, or solving
problems.
• Today, the term “AI” describes a wide range of technologies that
power many of the services and goods we use every day – from
apps that recommend TV shows to chatbots that provide customer
support in real time.
• But do all of these really constitute artificial intelligence as most of
us envision it? And if not, then why do we use the term so often?
What is artificial intelligence?
• Artificial intelligence (AI) is the theory and development of
computer systems capable of performing tasks that historically
required human intelligence, such as recognizing speech, making
decisions, and identifying patterns.
• AI is an umbrella term that encompasses a wide variety of
technologies, including machine learning, deep learning, and
natural language processing (NLP).
Cont.
• Although the term is commonly used to describe a range of
different technologies in use today, many disagree on whether
these actually constitute artificial intelligence.
• Instead, some argue that much of the technology used in the real
world today actually constitutes highly advanced machine learning
that is simply a first step towards true artificial intelligence, or
“artificial general intelligence” (AGI).
Cont.
• Yet, despite the many philosophical disagreements over whether
“true” intelligent machines actually exist, when most people use the
term AI today, they’re referring to a suite of machine learning-
powered technologies, such as ChatGPT or computer vision, that
enable machines to perform tasks that previously only humans could
do, such as generating written content, steering a car, or analyzing
data.
The History of AI: A Timeline of Artificial Intelligence
• In recent years, the field of artificial intelligence (AI) has undergone
rapid transformation. Learn more about its development from the
1950s to the present.
• AI technologies now work at a far faster pace than human output
and have the ability to generate once unthinkable creative
responses, such as text, images, and videos, to name just a few of
the developments that have taken place.
Cont.
• The speed at which AI continues to expand is unprecedented, and
to appreciate how we got to this present moment, it’s worthwhile to
understand how it first began.
• AI has a long history stretching back to the 1950s, with
significant milestones at nearly every decade.
The beginnings of AI: 1950s
• In the 1950s, computing machines essentially functioned as large-scale
calculators.
• In fact, when organizations like NASA needed the answer to specific
calculations, like the trajectory of a rocket launch, they more regularly
turned to human “computers” or teams of women tasked with solving
those complex equations.
• Long before computing machines became the modern devices they
are today, a mathematician and computer scientist envisioned the
possibility of artificial intelligence. This is where AI's origins really
begin.
Alan Turing
• At a time when computing power was still largely reliant on human brains, the
British mathematician Alan Turing imagined a machine capable of advancing
far past its original programming.
• To Turing, a computing machine would initially be coded to work according to
that program but could expand beyond its original functions.
• At the time, Turing lacked the technology to prove his theory because computing
machines had not advanced to that point, but he’s credited with
conceptualizing artificial intelligence before it came to be called that.
• He also developed a means for assessing whether a machine thinks on par with
a human, which he called “the imitation game” but is now more popularly called
“the Turing test.”
Dartmouth conference
• During the summer of 1956, Dartmouth College mathematics professor
John McCarthy invited a small group of researchers from various
disciplines to participate in a summer-long workshop focused on
investigating the possibility of “thinking machines.”
• The group believed, “Every aspect of learning or any other feature of
intelligence can in principle be so precisely described that a
machine can be made to simulate it”
• Due to the conversations and work they undertook that summer, they are
largely credited with founding the field of artificial intelligence.
John McCarthy
• During the summer Dartmouth Conference—and two years after
Turing’s death—McCarthy conceived of the term that would come
to define the practice of human-like machines.
• In outlining the purpose of the workshop that summer, he described
it using the term it would forever be known as, “artificial
intelligence.”
Laying the groundwork: 1960s-1970s
• The early excitement that came out of the Dartmouth Conference grew
over the next two decades, with early signs of progress coming in the
form of a realistic chatbot and other inventions.
ELIZA
• Created by the MIT computer scientist Joseph Weizenbaum in 1966,
ELIZA is widely considered the first chatbot and was intended to
simulate therapy by repurposing the answers users gave into
questions that prompted further conversation—a technique modeled
on Rogerian psychotherapy.
Cont.
• Weizenbaum believed that this rather rudimentary back-and-forth
would prove the simplistic state of machine intelligence.
• Instead, many users came to believe they were talking to a human
professional.
• In a research paper, Weizenbaum explained, “Some subjects
have been very hard to convince that ELIZA…is not human.”
Shakey the Robot
• Between 1966 and 1972, the Artificial Intelligence Center at the
Stanford Research Institute (SRI) developed Shakey the Robot, a
mobile robot system equipped with sensors and a TV camera,
which it used to navigate different environments.
• The objective in creating Shakey was “to develop concepts and
techniques in artificial intelligence [that enabled] an
automaton to function independently in realistic environments,”
according to a paper SRI later published.
Cont.
• While Shakey’s abilities were rather crude compared to today’s
developments, the robot helped advance elements in AI,
including “visual analysis, route finding, and object
manipulation”.
American Association of Artificial Intelligence founded
• After the Dartmouth Conference in the 1950s, AI research began
springing up at venerable institutions like MIT, Stanford, and
Carnegie Mellon.
• The instrumental figures behind that work needed opportunities
to share information, ideas, and discoveries.
• To that end, the International Joint Conference on AI was held in
1977 and again in 1979, but a more cohesive society had yet to
arise.
Cont.
• The American Association of Artificial Intelligence was formed in
1979 to fill that gap.
• The organization focused on establishing a journal in the field,
holding workshops, and planning an annual conference.
• The society has evolved into the Association for the
Advancement of Artificial Intelligence (AAAI) and is “dedicated
to advancing the scientific understanding of the mechanisms
underlying thought and intelligent behavior and their
embodiment in machines”.
AI winter
• In 1973, the applied mathematician Sir James Lighthill published
a critical report on academic AI research, claiming that
researchers had essentially over-promised and under-delivered
when it came to the potential intelligence of machines.
• His condemnation resulted in stark funding cuts.
• The period between the late 1970s and early 1990s signaled an
“AI winter”—a term first used in 1984—that referred to the gap
between AI expectations and the technology’s shortcomings.
Early AI excitement quiets: 1980s-1990s
• The AI winter that began in the 1970s continued throughout much
of the following two decades, despite a brief resurgence in the
early 1980s.
• It wasn’t until the progress of the late 1990s that the field gained
more R&D funding to make substantial leaps forward.
First driverless car
• Ernst Dickmanns, a scientist working in Germany, invented the
first self-driving car in 1986.
• Technically a Mercedes van that had been outfitted with a
computer system and sensors to read the environment, the vehicle
could only drive on roads without other cars and passengers.
Deep Blue
• In 1996, IBM had its computer system Deep Blue—a chess-playing
computer program—compete against then-world chess champion
Garry Kasparov in a six-game match-up.
• At the time, Deep Blue won only one of the six games, but the following
year, it won the rematch. In fact, it took only 19 moves to win the final
game.
• Deep Blue didn’t have the functionality of today’s generative AI, but
it could process information at a rate far faster than the human brain.
• In one second, it could review 200 million potential chess moves.
AI growth: 2000-2019
• With renewed interest in AI, the field experienced significant growth
beginning in 2000.
Kismet
• You can trace the research for Kismet, a “social robot” capable of
identifying and simulating human emotions, back to 1997, but the project
came to fruition in 2000.
• Created in MIT’s Artificial Intelligence Laboratory and helmed by Dr.
Cynthia Breazeal, Kismet contained sensors, a microphone, and
programming that outlined “human emotion processes.” All of this
helped the robot read and mimic a range of feelings.
• "I think people are often afraid that technology is making us less human,”
Breazeal told MIT News in 2001.
• “Kismet is a counterpoint to that—it really celebrates our humanity.
This is a robot that thrives on social interactions”.
NASA Rovers
• Mars was orbiting much closer to Earth in 2004, so NASA took
advantage of that navigable distance by sending two rovers—
named Spirit and Opportunity—to the red planet.
• Both were equipped with AI that helped them traverse Mars’
difficult, rocky terrain, and make decisions in real-time rather than
rely on human assistance to do so.
IBM Watson
• Many years after IBM’s Deep Blue program successfully beat the
world chess champion, the company created another competitive
computer system in 2011 that would go on to play the hit US quiz
show Jeopardy.
• In the lead-up to its debut, Watson DeepQA was fed data from
encyclopedias and across the internet.
• Watson was designed to receive natural language questions and
respond accordingly, which it used to beat two of the show’s most
formidable all-time champions, Ken Jennings and Brad Rutter.
Siri and Alexa
• During a presentation about its iPhone product in 2011, Apple
showcased a new feature: a virtual assistant named Siri.
• Three years later, Amazon released its proprietary virtual assistant
named Alexa.
• Both had natural language processing capabilities that could
understand a spoken question and respond with an answer.
• Yet, they still contained limitations. Known as “command-and-
control systems,” Siri and Alexa are programmed to understand a
lengthy list of questions but cannot answer anything that falls
outside their purview.
Geoffrey Hinton and neural networks
• The computer scientist Geoffrey Hinton began exploring the idea
of neural networks (an AI system built to process data in a
manner similar to the human brain) while working on his PhD
in the 1970s.
• But it wasn’t until 2012, when he and two of his graduate students
displayed their research at the competition ImageNet, that the tech
industry saw the ways in which neural networks had progressed.
Cont.
• Hinton’s work on neural networks and deep learning—the
process by which an AI system learns to process a vast
amount of data and make accurate predictions—has been
foundational to AI processes such as natural language processing
and speech recognition.
• The excitement around Hinton’s work led to him joining Google in
2013.
• He eventually resigned in 2023 so that he could speak more
freely about the dangers of creating artificial general
intelligence.
Sophia citizenship
• Robotics made a major leap forward from the early days of Kismet
when the Hong Kong-based company Hanson Robotics created
Sophia in 2016, a “human-like robot” capable of facial expressions,
jokes, and conversation.
• Thanks to her innovative AI and ability to interface with humans,
Sophia became a worldwide phenomenon and would regularly
appear on talk shows, including late-night programs like The
Tonight Show.
Cont.
• Complicating matters, Saudi Arabia granted Sophia citizenship in
2017, making her the first artificially intelligent being to be given
that right.
• The move generated significant criticism among Saudi Arabian
women, who lacked certain rights that Sophia now held.
AlphaGo
• The ancient game of Go is considered straightforward to learn but
incredibly difficult—bordering on impossible—for any computer
system to play given the vast number of potential positions.
• It’s “a googol times more complex than chess” .
• Despite that, AlphaGo, an artificial intelligence program created by
the AI research lab Google DeepMind, went on to beat Lee Sedol,
one of the best players in the world, in 2016.
Cont.
• AlphaGo is a combination of neural networks and advanced search
algorithms trained to play Go using a method called reinforcement
learning, which strengthened its abilities over the millions of games
that it played against itself. When it bested Sedol, it proved that AI
could tackle once insurmountable problems.
AI surge: 2020-present
• The AI surge in recent years has largely come about thanks to
developments in generative AI—the ability for AI to generate
text, images, and videos in response to text prompts.
• Unlike past systems that were coded to respond to a set
inquiry, generative AI continues to learn from materials
(documents, photos, and more) from across the internet.
OpenAI and GPT-3
• The AI research company OpenAI built a generative pre-trained
transformer (GPT) that became the architectural foundation for its early
language models GPT-1 and GPT-2, which were trained on billions
of inputs.
• Even with that amount of learning, their ability to generate distinctive text
responses was limited.
• Instead, it was the large language model (LLM) GPT-3 that created a
growing buzz when it was released in 2020 and signaled a major
development in AI.
• GPT-3 has 175 billion parameters, which far exceeds the
1.5 billion parameters of GPT-2.
DALL-E
• An OpenAI creation released in 2021, DALL-E is a text-to-image
model.
• When users prompt DALL-E using natural language text, the
program responds by generating realistic, editable images.
• The first iteration of DALL-E used a version of OpenAI’s GPT-3
model with 12 billion parameters.
ChatGPT released
• In 2022, OpenAI released the AI chatbot ChatGPT, which
interacted with users in a far more realistic way than previous
chatbots thanks to its GPT-3 foundation, which was trained on
billions of inputs to improve its natural language processing
abilities.
• Users prompt ChatGPT for different responses, such as help
writing code or resumes, beating writer’s block, or conducting
research.
• However, unlike previous chatbots, ChatGPT can ask follow-
up questions and recognize inappropriate prompts.
Generative AI grows
• 2023 was a milestone year in terms of generative AI.
• Not only did OpenAI release GPT-4, which again built on its
predecessor’s power, but Microsoft integrated ChatGPT into
its search engine Bing and Google released its own chatbot
Bard.
• GPT-4 can now generate far more nuanced and creative
responses and engage in an increasingly vast array of activities,
such as passing the bar exam.
• 1941: Early computers had to be extensively rewired to run even a
single program, which made programming a complex task.
• 1943: First recognized work on AI by Warren McCulloch and Walter Pitts,
who proposed a model of the artificial neuron.
• 1949: Donald Hebb described how connection strengths between neurons
are updated and modified, now called Hebbian learning.
• 1950: Alan Turing published “Computing Machinery and Intelligence”,
proposing a test of a machine’s ability to exhibit human intelligence.
• 1956: Birth of AI: at the Dartmouth Conference, American scientists
adopted the term “artificial intelligence”.
• 1966: ELIZA, the first chatbot, created by Joseph Weizenbaum at MIT.
• 1972: Shakey the Robot completed at SRI.
• 1974-1980: The first AI winter, marked by stark funding cuts.
Artificial intelligence examples
• At the simplest level, machine learning uses algorithms trained on
data sets to create machine learning models that allow computer
systems to perform tasks like making song recommendations,
identifying the fastest way to travel to a destination, or translating
text from one language to another.
Some of the most common examples of AI in use today include:
• ChatGPT: a generative AI chatbot that responds to natural language prompts.
• Google Translate: translates text from one language to another.
• Netflix: recommends TV shows based on users’ viewing data.
What is artificial general intelligence (AGI)?
• Artificial general intelligence (AGI) refers to a theoretical state
in which computer systems will be able to achieve or exceed
human intelligence.
• In other words, AGI is “true” artificial intelligence, as depicted in
countless science fiction novels, television shows, movies, and
comics.
Cont.
• As for the precise meaning of “AI” itself, researchers don’t quite agree on how we
would recognize “true” artificial general intelligence when it appears.
• However, the most famous approach to identifying whether a machine is
intelligent or not is known as the Turing Test or Imitation Game, an experiment
that was first outlined by influential mathematician, computer scientist, and
cryptanalyst Alan Turing in a 1950 paper on computer intelligence.
• There, Turing described a three-player game in which a human “interrogator” is
asked to communicate via text with another human and a machine and judge
who composed each response.
• If the interrogator cannot reliably identify the human, then Turing says the
machine can be said to be intelligent.
Cont.
• To complicate matters, researchers and philosophers also can’t quite
agree whether we’re beginning to achieve AGI, if it’s still far off, or just
totally impossible.
• For example, while a recent paper from Microsoft Research and OpenAI
argues that GPT-4 is an early form of AGI, many other researchers
are skeptical of these claims and argue that they were just made for
publicity.
• Regardless of how far we are from achieving AGI, you can assume that
when someone uses the term artificial general intelligence, they’re
referring to the kind of sentient computer programs and machines that are
commonly found in popular science fiction.
Strong AI vs. Weak AI
• When researching artificial intelligence, you might have come
across the terms “strong” and “weak” AI.
• Though these terms might seem confusing, you likely already have
a sense of what they mean.
Strong AI
• Strong AI is essentially AI that is capable of human-level, general
intelligence.
• In other words, it’s just another way to say “artificial general
intelligence.”
Weak AI
• Weak AI, meanwhile, refers to the narrow use of widely available AI
technology, like machine learning or deep learning, to perform very
specific tasks, such as playing chess, recommending songs, or
steering cars.
• Also known as Artificial Narrow Intelligence (ANI), weak AI is
essentially the kind of AI we use daily.
Types of AI
• As researchers attempt to build more advanced forms of artificial
intelligence, they must also begin to formulate more nuanced
understandings of what intelligence or even consciousness precisely
mean.
• In their attempt to clarify these concepts, researchers have outlined four
types of artificial intelligence.
• Reactive machines
• Limited memory machines
• Theory of mind machines
• Self-aware machines
Reactive machines
• Reactive machines are the most basic type of artificial intelligence.
• Machines built in this way don’t possess any knowledge of previous
events but instead only “react” to what is before them in a given
moment.
• As a result, they can only perform certain advanced tasks within a
very narrow scope, such as playing chess, and are incapable of
performing tasks outside of their limited context.
Limited memory machines
• Machines with limited memory possess a limited understanding of past
events.
• They can interact more with the world around them than reactive
machines can.
• For example, self-driving cars use a form of limited memory to make turns,
observe approaching vehicles, and adjust their speed.
• However, machines with only limited memory cannot form a complete
understanding of the world because their recall of past events is limited
and only used in a narrow band of time.
Theory of mind machines
• Machines that possess a “theory of mind” represent an early form
of artificial general intelligence.
• In addition to being able to create representations of the world,
machines of this type would also have an understanding of other
entities that exist within the world.
• As of this moment, this reality has still not materialized.
Self-aware machines
• Machines with self-awareness are the theoretically most advanced
type of AI and would possess an understanding of the world, of
others, and of themselves.
• This is what most people mean when they talk about achieving AGI.
• Currently, this is a far-off reality.
What is generative artificial intelligence?
• Generative AI is a kind of artificial intelligence capable of
producing original content, such as written text or images, in
response to user inputs or "prompts."
• Text-generating models of this kind are known as large language models
(LLMs): complex, deep learning models trained on vast amounts of data
that can be interacted with using normal human language rather than
technical jargon.
AI benefits and dangers
• AI has a range of applications with the potential to transform how
we work and live.
• While many of these transformations are exciting, like self-
driving cars, virtual assistants, or wearable devices in the
healthcare industry, they also pose many challenges.
• It’s a complicated picture that often summons competing images: a
utopia for some, a dystopia for others.
• The reality is likely to be much more complex.
Potential Benefits
• Greater accuracy for certain repeatable tasks, such as assembling
vehicles or computers.
• Decreased operational costs due to greater efficiency of machines.
• Increased personalization within digital services and products.
• Improved decision-making in certain situations.
• Ability to quickly generate new content, such as text or images.
Potential Dangers
• Job loss due to increased automation.
• Potential for bias or discrimination as a result of the data set on which the
AI is trained.
• Possible cybersecurity concerns.
• Lack of transparency over how decisions are arrived at, resulting in less
than optimal solutions.
• Potential to create misinformation, as well as inadvertently violate laws
and regulations.
Cont.
• These are just some of the ways that AI provides benefits and
dangers to society.
• When using new technologies like AI, it’s best to keep a clear mind
about what it is and isn’t. With great power comes great
responsibility, after all.
Artificial Intelligence (AI): Definitions
• AI refers to the simulation of human intelligence processes by
machines, especially computer systems.
• These processes include learning (acquiring information),
reasoning (using that information to make decisions), problem-
solving, perception (understanding the environment), and language
processing.
Machine Learning (ML):
• A subset of AI, machine learning focuses on algorithms that allow
computers to learn from data without being explicitly
programmed.
• Instead of following predetermined instructions, ML systems
improve their performance by recognizing patterns in data and
making predictions or decisions based on it.
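The idea of learning patterns from data rather than following predetermined instructions can be illustrated with a minimal least-squares line fit in Python; the "hours studied vs. exam score" data set is made up for illustration:

```python
def fit_line(xs, ys):
    """Fit y = w*x + b by closed-form least squares (minimise squared error)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# "Training data": hours studied -> exam score (toy numbers).
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(w, b)       # 2.0 1.0 -- the pattern y = 2x + 1 was learned from the data
print(w * 5 + b)  # prediction for an unseen input: 11.0
```

No rule "y = 2x + 1" was ever written into the program; the parameters were recovered from the data, which is the essence of machine learning.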
Deep Learning:
• A subset of machine learning, deep learning uses neural networks
with many layers (hence "deep") to model complex patterns in large
datasets.
• It has driven advances in areas like image and speech recognition.
Neural Networks:
• Neural networks are a fundamental part of deep learning and are
inspired by the human brain.
• They consist of layers of nodes (neurons) that process and transmit
information.
• A network learns to recognize patterns by adjusting connections
between these neurons based on the data it’s trained on.
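A single-neuron sketch in Python shows how connections are adjusted based on training data; the AND-gate data set, the step activation, and the learning rate are illustrative assumptions (a full multi-layer network would use differentiable activations and backpropagation):

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Train one neuron with a step activation: on each wrong answer,
    nudge the connection weights toward the correct target."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1  # strengthen/weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled training data for the AND gate.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_gate])  # [0, 0, 0, 1]
```

This is also a direct starting point for the lab exercise on single-layer perceptrons for linear logic gates.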
Natural Language Processing (NLP):
• NLP is a branch of AI focused on the interaction between
computers and human languages.
• It enables machines to understand, interpret, and generate human
language, which is essential for applications like chatbots,
language translation, and voice recognition systems.
Computer Vision:
• A field within AI that allows machines to interpret and make
decisions based on visual inputs, like images or video.
• It's used in applications such as facial recognition, self-driving cars,
and medical image analysis.
84.
Reinforcement Learning:
• A type of machine learning where an agent learns to make
decisions by interacting with its environment.
• It takes actions and receives rewards or penalties, gradually
improving its strategy based on these feedback signals.
85.
Supervised Learning:
• In supervised learning, an algorithm is trained on labeled data
(input-output pairs).
• The model learns to predict the output for new, unseen inputs
based on this training.
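As a minimal illustration of learning from labeled input-output pairs, the sketch below implements a 1-nearest-neighbour classifier in plain Python; the exam-score data and pass/fail labels are invented for illustration.

```python
# Minimal supervised learning sketch: a 1-nearest-neighbour classifier.
# It "trains" by storing labeled (input, output) pairs and predicts the
# label of the stored input closest to a new input.

def predict(training_data, x):
    """Return the label of the training point whose input is nearest to x."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labeled examples (invented): exam score -> pass/fail
training_data = [(35, "fail"), (42, "fail"), (65, "pass"), (80, "pass")]
print(predict(training_data, 70))  # nearest stored score is 65 -> "pass"
```

The model here is simply the stored examples; more sophisticated algorithms compress the training data into learned parameters instead.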
86.
Unsupervised Learning:
• In unsupervised learning, the algorithm works with unlabeled data
and tries to find hidden patterns or groupings.
• It’s often used for clustering or anomaly detection.
87.
Basic Elements of AI
• AI consists of several core elements that contribute to its ability to
mimic human intelligence and solve complex problems. Below are
the basic elements of AI, explained simply:
• Data:-
• Data is the foundation of AI. Machines learn from large amounts of data,
which can include text, images, videos, numbers, and more.
• The quality and quantity of data directly impact the performance of AI models.
88.
Cont.
• Algorithms:-
• Algorithms are the step-by-step instructions that guide the AI system to
analyze data, make predictions, and perform tasks.
• They are the core of machine learning, deep learning, and other AI
techniques.
• Models:-
• Models are mathematical representations created by algorithms that capture
patterns and relationships in data.
• These models are used to make predictions or decisions based on new data.
Characteristics of intelligent algorithm
• An intelligent algorithm is one that is capable of solving complex
problems, learning from data, adapting over time, and making decisions
that seem "intelligent."
Below are some key characteristics that define intelligent algorithms:
Learning from Data:-
• Intelligent algorithms improve over time by learning from data.
• They are capable of recognizing patterns in large datasets without explicit
programming.
• This is central to machine learning (ML) and deep learning.
92.
Cont.
• Adaptability :-
• The algorithm can adapt its behavior or improve its performance based on new
data or experiences.
• It can "learn" from mistakes and make better predictions or decisions in future
scenarios.
• Problem-Solving Ability
• Intelligent algorithms can solve complex, multi-step problems.
• They break down larger issues into smaller, manageable tasks and find optimal
solutions through systematic approaches.
93.
Cont.
• Generalization :-
• These algorithms can generalize from past experiences to new, unseen
situations.
• This means they can apply learned knowledge to data that is similar but not
identical to the data they were trained on.
• Reasoning and Inference :-
• Intelligent algorithms can make logical deductions based on available data.
• They can reason about relationships between variables, draw conclusions,
and make informed decisions or predictions.
94.
Cont.
• Decision-Making:-
• Intelligent algorithms can make decisions, sometimes in real-time, based on the
data they process.
• They often choose the best possible action from a set of alternatives, weighing
various factors.
• Efficiency:-
• The algorithm should be able to solve problems effectively and in a reasonable
amount of time.
• Efficiency refers not just to speed but also to the use of computational resources
(like memory).
95.
Example: Self-driving car
• Learning from Data: It processes massive amounts of sensor data
(e.g., camera feeds, radar) to understand its environment.
• Adaptability: It adapts to new road conditions, changing traffic
signals, or new obstacles it hasn't encountered before.
• Problem-Solving: It solves the problem of navigation in complex
environments by planning paths and making real-time decisions.
• Generalization: It can drive in a variety of weather conditions or
unfamiliar locations, generalizing from its prior experiences.
96.
AI application areas
Healthcare
Autonomous Vehicles
Natural Language Processing (NLP)
Computer Vision
Finance and FinTech
Retail and E-commerce
Manufacturing and Industry
Customer Service
Education and EdTech
Marketing and Advertising
97.
AI application areas
Smart Homes and IoT
Entertainment and Media
Human Resources (HR) and Recruitment
Agriculture and Farming
Law and Legal Tech
Energy and Sustainability
Security and Surveillance
Robotics
Space Exploration
Arts and Creativity
Social Media and Content Creation
98.
Agents: Definition of agents
Agent:
• An entity that perceives its environment through sensors,
processes that information, and takes actions to achieve specific
goals through actuators.
99.
Key Characteristics of an Agent:
• Perception: An agent gathers data or information about its environment
using sensors (e.g., cameras, microphones, or other input devices).
• Action: Based on the perceived information, the agent performs actions
using actuators (e.g., motors, displays, etc.).
• Autonomy: An agent can make decisions and take actions without direct
human intervention.
• Goal-Oriented: Agents are typically designed to achieve specific
objectives or tasks.
• Intelligence: The agent may have the ability to learn and adapt to
changes in its environment to improve its performance over time.
100.
Examples of Agents:
• Autonomous vehicles: Vehicles that sense their surroundings
(perception) and navigate the roads (actions) to transport
passengers.
• Robots: Machines capable of performing tasks in environments
like factories or homes.
• Software agents: Programs like chatbots or recommendation
systems that interact with users or systems to achieve goals (e.g.,
answering questions or suggesting content).
101.
Types of Environments in AI
• An environment in artificial intelligence is the surrounding of the
agent.
• The agent takes input from the environment through sensors and
delivers the output to the environment through actuators.
• There are several types of environments:
102.
Cont.
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
103.
Fully Observable vs Partially Observable
• When an agent’s sensors can access the complete state of the
environment at each point in time, it is said to be a fully observable
environment; otherwise it is partially observable.
• Maintaining a fully observable environment is easy as there is no
need to keep track of the history of the surrounding.
• An environment is called unobservable when the agent has no
sensors at all.
104.
Examples:
• Chess – the board is fully observable, and so are the opponent’s
moves.
• Driving – the environment is partially observable because what’s
around the corner is not known.
105.
Deterministic vs Stochastic
• When the agent’s current state and chosen action completely
determine the next state, the environment is said to be
deterministic.
• A stochastic environment is random in nature: the next state
cannot be completely determined by the agent.
106.
Examples:
• Chess –there would be only a few possible moves for a chess
piece at the current state and these moves can be determined.
• Self-Driving Cars- the actions of a self-driving car are not unique, it
varies time to time.
107.
Competitive vs Collaborative
• An agent is said to be in a competitive environment when it competes
against another agent to optimize the output.
• The game of chess is competitive as the agents compete with each other
to win the game which is the output.
• An agent is said to be in a collaborative environment when multiple
agents cooperate to produce the desired output.
• When multiple self-driving cars are found on the roads, they cooperate
with each other to avoid collisions and reach their destination which is the
output desired.
108.
Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-
agent environment.
• A person left alone in a maze is an example of the single-agent
system.
• An environment involving more than one agent is a multi-agent
environment.
• The game of football is multi-agent as it involves 11 players in each
team.
109.
Dynamic vs Static
• An environment that keeps changing while the agent is acting is
said to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the
environment keeps changing every instant.
• An idle environment with no change in its state is called a static
environment.
• An empty house is static as there’s no change in the surroundings
when an agent enters.
110.
Discrete vs Continuous
• If an environment consists of a finite number of actions that can be
deliberated in the environment to obtain the output, it is said to be a
discrete environment.
• The game of chess is discrete as it has only a finite number of moves.
The number of moves might vary with every game, but still, it’s finite.
• An environment in which the possible actions cannot be counted,
i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments as their
actions are driving, parking, etc. which cannot be numbered.
111.
Episodic vs Sequential
• In an episodic task environment, the agent’s experience is divided
into atomic incidents or episodes.
• There is no dependency between current and previous incidents.
• In each incident, an agent receives input from the environment and then
performs the corresponding action.
• Example: Consider a pick-and-place robot used to detect defective
parts on a conveyor belt. Each time, the robot (agent) makes its
decision on the current part only, i.e. there is no dependency
between current and previous decisions.
112.
Cont.
• In a sequential environment, previous decisions can affect all
future decisions. The agent’s next action depends on what actions it
has taken previously and what actions it is supposed to take in the
future.
• Example: Checkers- Where the previous move can affect all the
following moves.
113.
Known vs Unknown
• In a known environment, the outcomes of all possible actions are
given. In an unknown environment, the agent must first gain
knowledge about how the environment works before it can make
good decisions.
114.
Agent architectures
• In AI, an agent is an entity that perceives its environment through
sensors and acts upon it using actuators.
• The architecture of an agent refers to the structure that defines how
an agent processes information, makes decisions, and performs
actions.
• Different architectures can be employed based on the type of agent
being designed, the tasks it performs, and its environment.
• Broadly, AI agents can be categorized into several types based on
their architecture, including reactive, layered, and cognitive agents.
115.
Reactive Agent Architecture
• Reactive agents operate based on simple condition-action rules,
often referred to as if-then rules.
• These agents do not maintain an internal model of the world or
history.
• They respond to stimuli or sensor inputs immediately with pre-
programmed responses.
• Reactive systems are often used in situations where complex
reasoning or planning is not necessary, and the environment is
relatively predictable.
116.
Key Characteristics:
• No internal model of the environment: The agent does not try to
predict future states or remember past actions.
• Immediate response to stimuli: Actions are directly linked to
environmental inputs.
• Efficiency: These agents are computationally simple and fast,
making them suitable for real-time applications.
117.
Types of Reactive Agents:
• Simple Reflex Agents:
• These agents act on the basis of current percepts, without considering the
history of past interactions.
• The action is decided by the current state of the environment.
• Model-Based Reflex Agents:
• These agents have a simple internal model of the world, which helps them
make decisions based on the current state and previous perceptions.
118.
Example:
• Roomba Vacuum Cleaner:
• A Roomba vacuum cleaner is a reactive agent. It responds to basic
environmental stimuli (e.g., obstacles or dirt detected via sensors) and takes
actions like moving around or cleaning.
• It doesn't need to plan its movements but reacts to the current environment.
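A reactive agent of this kind can be sketched as an ordered list of condition-action (if-then) rules mapping the current percept directly to an action, with no memory of past states. The percept fields and action names below are invented, Roomba-style examples.

```python
# Minimal simple reflex agent: the first rule whose condition matches
# the current percept determines the action. No internal model, no history.

RULES = [
    (lambda p: p["obstacle"], "turn"),        # if obstacle ahead, turn
    (lambda p: p["dirt"], "clean"),           # if dirt detected, clean
    (lambda p: True, "move_forward"),         # default action
]

def react(percept):
    """Return the action of the first rule whose condition holds."""
    for condition, action in RULES:
        if condition(percept):
            return action

print(react({"obstacle": False, "dirt": True}))   # -> clean
print(react({"obstacle": True, "dirt": True}))    # -> turn
```

Note that rule order matters: obstacle avoidance is placed before cleaning so safety-critical responses take priority.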
119.
Layered Agent Architecture
• The layered architecture divides the agent’s behavior into multiple
levels or layers, each responsible for a different aspect of the
agent's functioning.
• It is a more complex structure compared to reactive agents and
allows for hierarchical decision-making, where higher layers can
influence the behavior of lower layers.
120.
Key Characteristics:
• Multiple levels:
• The architecture is usually organized into layers, each of which performs
specific tasks.
• Modularity:
• Different layers can be replaced or modified independently, making the agent
flexible and adaptable.
• Separation of concerns:
• Each layer focuses on a specific responsibility, such as perception, decision-
making, and execution.
Example:
• Autonomous Vehicles:
• In an autonomous car, there may be a layered agent architecture where:
• Lower layers handle sensory data (e.g., from cameras, LiDAR) and basic motor
control (e.g., steering, braking).
• Middle layers handle tasks such as lane detection and obstacle avoidance.
• Higher layers deal with route planning, decision-making based on traffic
conditions, and strategic navigation.
• This layered approach ensures that different levels of decision-making
can operate independently while contributing to the overall goal.
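The layered idea can be sketched as a chain of functions, one per layer, where higher layers can override lower ones. The sensor threshold and layer behaviours below are invented and greatly simplified for illustration.

```python
# Sketch of a layered agent: perception at the bottom, tactical decisions
# in the middle, strategic goals on top. All values are illustrative.

def perception_layer(distance_m):
    # Lowest layer: turn a raw sensor reading into a symbolic fact.
    return {"obstacle_ahead": distance_m < 1.0}

def tactical_layer(facts):
    # Middle layer: local decisions such as obstacle avoidance.
    return "swerve" if facts["obstacle_ahead"] else "keep_lane"

def strategic_layer(tactic, destination_reached):
    # Highest layer: the overall goal can override lower layers.
    return "stop" if destination_reached else tactic

def decide(distance_m, destination_reached=False):
    facts = perception_layer(distance_m)
    tactic = tactical_layer(facts)
    return strategic_layer(tactic, destination_reached)

print(decide(0.5))        # obstacle 0.5 m ahead -> swerve
print(decide(5.0))        # clear road -> keep_lane
print(decide(5.0, True))  # destination reached -> stop
```

Because each layer is a separate function, one layer can be replaced or tuned without touching the others, which is the modularity benefit described above.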
123.
Cognitive Agent Architecture
• Cognitive agents are designed to model and simulate human-like
reasoning, learning, and problem-solving processes.
• These agents are more complex and resemble human cognitive
processes, such as memory, attention, learning, and decision-
making.
• Cognitive agents are typically capable of higher-level thinking and
perform tasks such as planning, goal-setting, and reasoning about
future states.
124.
Key Characteristics:
• Internal model of the world: Cognitive agents maintain a detailed
model of the environment and use it for planning and reasoning.
• Learning ability: They can improve their performance over time by
learning from past experiences.
• Deliberative behavior: Cognitive agents can engage in reasoning
and can make decisions based on goals, plans, and predictions of
future states.
• Memory and knowledge: These agents have the ability to store
and retrieve information about the environment and their past
actions.
125.
Example:
• IBM Watson: Watson is an AI system that uses natural language
processing (NLP) and machine learning to answer questions posed
in human language.
• It is a cognitive agent because it reasons about the meaning of
questions, searches large datasets, and uses its knowledge to
generate accurate answers.
• In applications such as medical diagnosis, Watson can process
vast amounts of data to reason about potential causes of symptoms
and recommend treatments.
126.
Summary Comparison of Architectures
• Complexity: Reactive – Low; Layered – Medium; Cognitive – High
• Internal Model: Reactive – None or minimal; Layered – Yes, but in layers; Cognitive – Detailed, complex model
• Decision-Making: Reactive – Immediate reaction to inputs; Layered – Hierarchical decision-making; Cognitive – Reasoning, learning, planning
• Use Case: Reactive – Simple, real-time systems; Layered – Complex systems with modular tasks; Cognitive – Complex problem-solving and reasoning tasks
• Example: Reactive – Roomba vacuum cleaner; Layered – Autonomous vehicles; Cognitive – IBM Watson
127.
Conclusion:
• Reactive agents are well-suited for simple, real-time tasks where
minimal processing is needed.
• Layered architectures allow for flexibility and modularity, handling
complex tasks with different levels of decision-making.
• Cognitive agents are the most advanced, capable of high-level
reasoning, learning, and adaptive decision-making, making them suitable
for tasks requiring complex human-like cognition.
• Understanding these architectures is essential for designing AI
systems that perform a wide range of tasks, from simple to highly
complex ones.
128.
Multi-Agent Systems (MAS)
• A Multi-Agent System (MAS) consists of multiple agents that interact with each
other in a shared environment.
• These agents can be autonomous entities capable of perceiving the environment,
reasoning, and acting.
• They may work together towards common goals, act independently, or engage in
competitive or cooperative interactions.
• The key idea is that the system's overall behavior emerges from the interaction of
these agents, rather than from a single central controller.
• Multi-agent systems are used to solve complex problems where a single agent would
be insufficient due to the size, complexity, or nature of the task.
• They are a key area of research in Artificial Intelligence (AI) and are widely applied in
areas such as robotics, distributed problem solving, game theory, and simulation.
129.
Key Characteristics of Multi-Agent Systems
• Autonomy: Each agent in the system is typically autonomous,
meaning it can operate independently and make decisions without
direct human intervention or central control.
• Interaction: Agents can communicate or interact with each other,
either directly or indirectly, to achieve their goals or to coordinate
tasks.
• Heterogeneity: Agents may vary in terms of their capabilities,
goals, and information. Some may have more resources,
knowledge, or power than others.
130.
Cont.
• Decentralization: Multi-agent systems are often decentralized, meaning that no
single agent has full control over the system. Instead, each agent has some level
of autonomy to make decisions.
• Cooperation/Coordination: Agents may cooperate and coordinate their actions
to achieve a common objective, which is known as collaborative multi-agent
systems.
• Competition: In other cases, agents may be in direct competition with each
other, seeking to maximize their own individual utility (known as competitive
multi-agent systems).
• Distributed Control: Control is distributed across the agents, which enables
scalability and fault tolerance, as the failure of one agent does not necessarily
collapse the whole system.
131.
Types of Multi-Agent Systems
• Cooperative Multi-Agent Systems:
• Agents in these systems work together to achieve a common goal or set of
goals.
• They share information and resources, collaborate on tasks, and may form
teams.
• Example: Robotic Swarms:
• In a robotic swarm system, multiple robots collaborate to perform tasks like
search and rescue or environmental monitoring.
• Each robot performs a small part of the task, and together they achieve the
overall goal.
• Communication and coordination between robots are essential for their success.
132.
Competitive Multi-Agent Systems:
• Agents in these systems are self-interested and compete against
each other to achieve individual goals.
• These systems can model economic, political, or strategic
environments, where each agent aims to maximize its own utility.
• Example: Auctions and Trading:
• In a multi-agent-based auction system, each agent represents a bidder, and
the agents compete to win an item by submitting bids.
• They may adapt their bidding strategies based on the actions of other agents.
133.
Mixed-Motive Multi-Agent Systems
• In these systems, agents have both cooperative and competitive
aspects.
• Agents may have some shared goals, but they may also pursue
individual objectives that sometimes conflict with the group's
objectives.
• Example: Traffic Control Systems:
• In a system for optimizing traffic flow, individual vehicles (agents) work
together to improve overall traffic conditions (cooperation), but each vehicle
also wants to reach its destination as quickly as possible (competition).
134.
Key Challenges in Multi-Agent Systems
• Coordination:
• Communication:
• Decision-Making and Negotiation:
• Scalability:
• Distributed Problem Solving:
135.
Applications of Multi-Agent Systems
• Robotics:
• Autonomous Vehicles:
• Game Theory and Auctions:
• Distributed Control Systems:
• Healthcare:
136.
Example: Multi-Agent System in a Traffic Control Scenario
• In a traffic control system using MAS, the agents could be traffic lights,
vehicles, or pedestrian crossing signals.
• Each agent operates autonomously and makes decisions based on local
data (e.g., the number of cars waiting at an intersection or the current
flow of traffic).
• Agents may need to coordinate with others, such as adjusting traffic light
timings based on real-time traffic conditions, to minimize congestion and
ensure smooth traffic flow.
• If a vehicle (agent) is stuck in traffic, it might communicate with nearby
vehicles to adjust driving routes or inform other agents about the delay.
137.
Conclusion
• Multi-Agent Systems (MAS) are powerful tools for solving complex
problems that involve multiple interacting entities.
• They can be cooperative, competitive, or a mix of both.
• By modeling real-world interactions among autonomous agents, MAS
offer scalable, efficient, and decentralized solutions to a wide variety of
problems across different domains, from robotics to economics and
healthcare.
• The challenges involved in MAS include coordination, communication,
decision-making, and scalability, but their potential for real-world
applications continues to grow.
138.
UNIT-II
Problem solving
• Problemsolving: State space search; Production systems, search
space control: depth-first, breadth-first search, heuristic search - Hill
climbing, best-first search, branch and bound. Problem Reduction,
Constraint Satisfaction End, Means-End Analysis, Game Playing.
141.
State space search
• State space search is a fundamental concept in Artificial
Intelligence (AI) used to find solutions to problems by exploring a
set of possible states.
• The idea is to model the problem as a space of states, where each
state represents a possible configuration of the problem.
• The goal is to find a sequence of actions (or steps) that lead from
the initial state to a goal state.
142.
State Space
• States: Each state represents a configuration or condition in the problem.
• State Space: This is the entire set of all possible states that can be
reached by applying actions from the initial state.
• Initial State: The state where the search begins.
• Goal State: The state that represents the solution to the problem.
• Operators/Actions: These are the transitions that move the system from
one state to another. They are defined by the problem and specify what
changes can be made to the current state.
143.
Search Strategy:
• State space search uses various strategies to explore the space and find
a solution. The common strategies are:
• Uninformed Search: Also called blind search, where no domain-specific
knowledge is used. Examples include:
• Breadth-First Search (BFS): Expands all nodes at the present depth
level before moving on to the next level.
• Depth-First Search (DFS): Explores as far down a branch as possible
before backtracking.
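A minimal BFS over an explicitly given graph, expanding all nodes at one depth before moving to the next, might look like the sketch below; the graph itself is a made-up example.

```python
# Breadth-first search: a FIFO frontier guarantees that shallower paths
# are expanded before deeper ones, so the first path to reach the goal
# is one with the fewest edges.
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()     # oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(bfs(graph, "A", "F"))  # -> ['A', 'B', 'D', 'F']
```

Swapping the `popleft()` for `pop()` (a LIFO frontier) would turn this into depth-first search, which explores one branch fully before backtracking.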
144.
Cont.
• Uniform Cost Search: Expands the node with the lowest path cost.
• Informed Search: Uses heuristics (domain-specific knowledge) to
make smarter decisions about which states to explore first.
Examples include:
• A* Search: Expands nodes in order of f(n) = g(n) + h(n), the cost of
the path so far plus a heuristic estimate of the cost from the current
state to the goal, making the search more efficient.
• Greedy Best-First Search: Chooses the node that appears to be
the closest to the goal based on a heuristic.
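A sketch of A* on a small weighted graph, always expanding the node with the lowest f(n) = g(n) + h(n). The graph and heuristic values below are invented, with h chosen so it never overestimates the true remaining cost (the condition under which A* returns an optimal path).

```python
# A* search: a priority queue ordered by f = g + h, where g is the cost
# of the path so far and h is a heuristic estimate of the remaining cost.
import heapq

def a_star(graph, h, start, goal):
    """Return (cost, path) of the cheapest path, or None if unreachable."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h[neighbour], new_g, neighbour, path + [neighbour]),
                )
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)]}
h = {"S": 5, "A": 4, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # -> (6, ['S', 'A', 'B', 'G'])
```

With h set to 0 everywhere, this reduces exactly to Uniform Cost Search.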
145.
Tree vs. Graph Search
• Tree Search: This approach generates a search tree, where nodes
can be repeated (leading to inefficiency).
• Graph Search: This approach stores already visited nodes,
ensuring that nodes are not expanded multiple times. This is more
efficient in many cases.
146.
Search Space Complexity
• The size of the state space can have a significant impact on the
performance of the search. Some challenges include:
• Time Complexity: Refers to how many nodes need to be
expanded to find a solution.
• Space Complexity: Refers to how much memory is required to
store the state space during the search.
147.
Water Jug Problem
• The Water Jug Problem is a classic problem in Artificial Intelligence
(AI) that involves using two jugs of different capacities to measure a
specific amount of water.
• It's a type of state space search problem, where the goal is to find a
sequence of operations that lead to the desired outcome.
148.
Problem Setup
• You are given two jugs with capacities X and Y liters. You need to measure Z liters of
water using these two jugs. You can perform the following operations:
• Fill any jug completely.
• Empty any jug.
• Pour water from one jug into the other until either the first jug is empty or the second jug is full.
• The goal is to determine if it's possible to measure exactly Z liters of water, and if so,
find the sequence of actions to do so.
149.
Example 1:
• For instance, suppose you have:
• Jug 1 with a capacity of 4 liters.
• Jug 2 with a capacity of 3 liters.
• You need to measure exactly 2 liters of water.
150.
Approach to Solve
• To solve this problem, we can model it as a state space search problem:
• States: Each state can be represented as a pair of values (x, y), where x
is the amount of water in Jug 1 and y is the amount of water in Jug 2.
• Initial State: (0, 0), meaning both jugs are initially empty.
• Goal State: (Z, y) or (x, Z), where Z is the target amount of water, and y
or x can be any value.
151.
The possible operations can be seen as
actions that change the state, like
• Fill Jug 1 completely: (0, 0) → (4, 0).
• Fill Jug 2 completely: (0, 0) → (0, 3).
• Pour water from Jug 1 into Jug 2 until Jug 2 is full or Jug 1 is empty,
and so on.
152.
Solving the Water Jug Problem
• We can solve it using Breadth-First Search (BFS), which guarantees
that we find the shortest solution, or Depth-First Search (DFS),
which might explore deeper into the state space but is not
guaranteed to be optimal.
• Steps in BFS Approach:
• Start from the initial state (0, 0).
• Apply each of the allowed operations to generate new states.
• Keep track of visited states to avoid revisiting them.
• Repeat until the goal state is reached.
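The BFS steps above can be sketched directly in code. Note that BFS returns a shortest sequence of states, which may be shorter than a hand-worked solution to the same instance.

```python
# BFS solver for the water jug problem: start at (0, 0), generate all
# successor states via fill/empty/pour operations, skip visited states,
# and stop when either jug holds the target amount.
from collections import deque

def water_jug_bfs(cap1, cap2, target):
    """Return the shortest sequence of (jug1, jug2) states reaching target."""
    start = (0, 0)
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if target in (x, y):
            return path
        t12 = min(x, cap2 - y)   # amount pourable from jug 1 into jug 2
        t21 = min(y, cap1 - x)   # amount pourable from jug 2 into jug 1
        successors = [
            (cap1, y), (x, cap2),          # fill a jug
            (0, y), (x, 0),                # empty a jug
            (x - t12, y + t12),            # pour jug 1 -> jug 2
            (x + t21, y - t21),            # pour jug 2 -> jug 1
        ]
        for state in successors:
            if state not in visited:
                visited.add(state)
                frontier.append(path + [state])
    return None

print(water_jug_bfs(4, 3, 2))
```

For capacities 4 and 3 with target 2, BFS finds a four-move solution: fill the 3-liter jug, pour it into the 4-liter jug, fill the 3-liter jug again, and top up the 4-liter jug, leaving 2 liters in the 3-liter jug.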
153.
Example Solution (Jug 1: 4 liters, Jug 2: 3
liters, Goal: 2 liters):
• Starting with both jugs empty (0, 0):
• Fill Jug 1: (0, 0) → (4, 0)
• Pour from Jug 1 into Jug 2: (4, 0) → (1, 3) (Jug 2 is now full)
• Empty Jug 2: (1, 3) → (1, 0)
• Pour from Jug 1 into Jug 2: (1, 0) → (0, 1)
• Fill Jug 1: (0, 1) → (4, 1)
• Pour from Jug 1 into Jug 2: (4, 1) → (2, 3) (Now, we have exactly 2 liters in Jug 1!)
• So, we have successfully measured 2 liters in Jug 1.
154.
Key Considerations
• Greatest Common Divisor (GCD):
• A key observation is that you can measure a target amount of water Z if and only if Z
is a multiple of the greatest common divisor (GCD) of the two jug capacities X and Y.
• That is, if Z is greater than the larger jug's capacity or if Z is not divisible by the GCD
of X and Y, the problem has no solution.
• For example,
• if you have a 4-liter jug and a 3-liter jug, the GCD is 1 (since 4 and 3 are coprime).
• You can measure any amount of water, as long as it is less than or equal to the larger
jug’s capacity (4 liters).
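This solvability condition is easy to check directly; the function name below is ours, chosen for illustration.

```python
# Solvability check for the water jug problem: a target Z is measurable
# if and only if Z does not exceed the larger jug's capacity and Z is a
# multiple of gcd(X, Y).
from math import gcd

def solvable(x, y, z):
    return z <= max(x, y) and z % gcd(x, y) == 0

print(solvable(4, 3, 2))   # True: gcd(4, 3) = 1, so any amount up to 4 works
print(solvable(6, 4, 3))   # False: gcd(6, 4) = 2, and 3 is not a multiple of 2
```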
155.
Summary
• The Water Jug Problem is a typical example of a state space
search problem, and can be solved using search techniques such
as BFS or DFS.
• It teaches important lessons about how to model a problem in AI,
define operations, and search through possible states to find a
solution.
156.
Water Jug Problem
Given a full 5-gallon jug and an empty 2-gallon jug, the goal is
to fill the 2-gallon jug with exactly one gallon of water.
157.
Cont.
• Possible actions:
• Empty the 5-gallon jug (pour contents down the drain)
• Empty the 2-gallon jug
• Pour the contents of the 2-gallon jug into the 5-gallon jug (only if there is enough room)
• Fill the 2-gallon jug from the 5-gallon jug
• Case 1: at least 2 gallons in the 5-gallon jug
• Case 2: less than 2 gallons in the 5-gallon jug
• What are the states?
• What are the state transitions?
• What does the state space look like?
159.
Production systems
• In artificial intelligence (AI), a production system is a framework that
automates decision-making and problem-solving through a set of
predefined rules.
• These systems consist of three primary components:
160.
Cont.
• Production Rules: Conditional statements in the form of "if-then"
clauses that define the system's behavior.
• Working Memory: A global database that holds facts or conditions
relevant to the problem-solving process.
• Control System: Manages the application of production rules
based on the current state of the working memory.
161.
Cont.
• When the conditions specified in a production rule are met, the rule
is triggered, leading to actions that modify the working memory.
• This process continues iteratively until a solution is reached or no
applicable rules remain.
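This rule-firing loop can be sketched in a few lines: working memory is a set of facts, production rules are if-then pairs, and the control system fires any applicable rule until nothing new can be added. The facts and rules below are invented for illustration.

```python
# Minimal production system: "if all condition facts are in working
# memory, then add the conclusion fact". The control loop repeats until
# no rule can modify working memory any further.

RULES = [
    ({"raining"}, "have_umbrella"),             # if raining then have_umbrella
    ({"raining", "have_umbrella"}, "stay_dry"), # if both then stay_dry
]

def run(working_memory):
    """Fire applicable rules until quiescence; return the final memory."""
    memory = set(working_memory)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= memory and conclusion not in memory:
                memory.add(conclusion)   # rule triggers, memory is modified
                changed = True
    return memory

print(sorted(run({"raining"})))  # -> ['have_umbrella', 'raining', 'stay_dry']
```

Note the chaining: the second rule only becomes applicable after the first rule has modified working memory, which is exactly the iterative behavior described above.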
162.
Types of Production Systems:
• Monotonic: Rules and facts remain constant throughout the process.
• Partially Commutative: Rules can be applied flexibly within certain constraints.
• Non-monotonic: Rules can be added, modified, or retracted during execution.
• Commutative: Rules can be applied in any sequence without changing the result.
Production systems are foundational in AI, particularly in expert systems and automated
planning, as they emulate human problem-solving by applying logical rules to data.
These four types are described in detail below:
163.
Monotonic Production Systems
• In monotonic production systems, once a fact is established, it remains constant
throughout the process.
• The application of one rule does not prevent the later application of another rule that
could have been applied earlier.
• This stability ensures predictability but may limit adaptability in dynamic environments.
• Example:
• Consider a system designed to solve a mathematical theorem.
• Once a fact is deduced, it and the corresponding rule stay fixed throughout the process.
164.
Partially Commutative Production
Systems
• Partially commutative production systems allow for some flexibility in the
order of rule application.
• If a sequence of rules transforms state X into state Y, then any
permissible permutation of those rules also transforms state X into state
Y.
• This flexibility strikes a balance between stability and adaptability.
• Example:
• In the 8-puzzle problem, the order of moves can vary, but the final configuration
remains the same.
165.
Non-Monotonic Production Systems
• Non-monotonic production systems allow facts or conclusions to be
retracted if they conflict with new information.
• This adaptability is crucial in dynamic environments where knowledge is
incomplete or subject to change.
• Example:
• In playing the game of bridge, strategies may change based on new information,
requiring the retraction of previous conclusions.
166.
Commutative Production Systems
• Commutative production systems are both monotonic and partially commutative.
• In these systems, the order of rule application does not affect the final outcome, and
once a fact is established, it remains constant.
• This property is useful for problems where the sequence of operations is not critical.
• Example:
• Theorem proving in mathematics often involves commutative production systems, where the
order of applying logical rules does not affect the final proof.
167.
Search Space Control in Artificial Intelligence:
Depth-First Search (DFS)
• In Artificial Intelligence (AI), search space control refers to the
techniques and methods used to explore and navigate through the
search space to find solutions to problems.
• The search space can be thought of as a large set of possible
states or configurations that a problem can be in, and the search
algorithm systematically explores this space to find a solution.
168.
Cont.
• One of the foundational search strategies in AI is Depth-First
Search (DFS), which is a way to traverse or explore the search
space.
• DFS is particularly important in AI because of its use in various
domains like game AI, pathfinding algorithms, puzzle solving, and
even in optimization problems.
169.
Depth-First Search (DFS)
Concept:
• Depth-First Search (DFS) is an algorithm that starts at the root of
the search tree and explores as far as possible along each branch
before backtracking.
• It is a recursive algorithm that uses a stack data structure to
remember the path it is currently exploring.
170.
Working of DFS:
• Start at the root: DFS begins at the initial state (root) of the search
space.
• Explore deeper: DFS explores one branch of the search tree as
deeply as possible. It moves from one node to its child, and from
there to its child’s child, and so on.
• Backtrack: If a node does not have any unvisited children or it
reaches a dead end, DFS backtracks to the most recent node with
unexplored children and continues from there.
• Repeat until the goal is found or all possibilities are explored.
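The steps above can be sketched as a short Python function; the graph and node names below are illustrative, not from the slides:

```python
def dfs(graph, node, goal, visited=None):
    """Recursive depth-first search: return a path from node to goal, or None."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for child in graph.get(node, []):
        if child not in visited:
            path = dfs(graph, child, goal, visited)
            if path is not None:
                return [node] + path
    return None  # dead end: the caller backtracks to the next child

# Illustrative search space
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(dfs(graph, "A", "E"))  # → ['A', 'C', 'E']
```

Note how the branch through B is explored fully (and abandoned) before the branch through C is tried.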
171.
Cont.
• In DFS, the goal state is not necessarily found immediately;
instead, the algorithm explores each branch of the search
space completely before returning to explore other branches.
172.
Examples of DFS in AI Applications
• Puzzle Solving (e.g., 8-puzzle problem)
• Problem: In the 8-puzzle problem, you have a 3x3 grid where one
tile is missing, and the objective is to move tiles around to reach a
goal state.
• The initial state is given, and the goal is to move the tiles to a
specific configuration.
173.
DFS in 8-Puzzle:
• The search space is the different configurations of the tiles.
• DFS would start at the initial configuration and explore the space by
moving tiles around, diving deeper into each possible configuration,
until it finds the solution or exhausts all possible configurations.
174.
For example
• Start with the initial tile arrangement.
• DFS moves the tiles into an adjacent empty space, creating a new
state.
• If a state leads to a dead-end (i.e., a configuration already visited or
one with no further legal moves), the algorithm backtracks.
175.
Maze Solving
• Problem: Imagine you are given a maze with walls and paths, and the
task is to find a way out of the maze from a start point to an exit point.
• DFS in Maze Solving:
• The search space consists of all possible paths in the maze.
• DFS starts at the entrance of the maze.
• It explores one path until it hits a dead-end (a wall or an already-visited
node) and backtracks to the previous junction to explore other routes.
176.
Example:
• Start at the maze’s entry point.
• DFS explores a path step by step (going deeper along the path).
• If it hits a dead end (like a wall or already-visited area), it
backtracks and explores the next possible path.
• This continues until it either finds the exit or exhausts all paths.
177.
Game Tree Search (e.g., Chess or Tic-Tac-Toe)
• Problem: In strategic board games like Chess or Tic-Tac-Toe, the problem is to
explore possible moves and find the best strategy.
• DFS in Game Tree Search:
• The search space in a game is typically represented by a game tree where each
node represents a game state, and the edges represent valid moves.
• DFS explores each move deeply before returning and exploring alternative moves.
• In games like Tic-Tac-Toe, DFS would recursively explore all possible states of the
game, backtracking whenever a move doesn't lead to a solution.
178.
Example
• Start from the current board configuration.
• Explore all possible moves for one player.
• Then, recursively explore the opponent’s possible moves.
• If a winning configuration is found (i.e., a goal state), the search
stops and the solution is returned.
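A full Tic-Tac-Toe tree is too large to list here, so the same recursive idea is shown on a much smaller game, a toy Nim variant chosen purely for illustration: DFS explores every reachable state and backtracks whenever a move does not force a win.

```python
def can_win(stones):
    """Exhaustive game-tree DFS for a toy Nim variant: players alternately
    take 1-3 stones, and whoever takes the last stone wins.
    Returns True if the player to move can force a win."""
    for take in (1, 2, 3):
        if take == stones:
            return True  # taking the last stone wins immediately
        if take < stones and not can_win(stones - take):
            return True  # this move leaves the opponent in a losing state
    return False  # every move leads to a winning state for the opponent

# Positions where the player to move loses with perfect play
print([n for n in range(1, 10) if not can_win(n)])  # → [4, 8]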
179.
Pathfinding in Graphs
• In graph theory, DFS can be used to find a path between two nodes in a graph.
• For instance, if you have a graph of cities connected by roads, you can use DFS to
determine if there exists a path between two cities.
• Example:
• Consider a graph where cities are represented as nodes and roads between cities as edges.
• DFS would start at one city and explore all reachable cities by following one path at a time.
• It backtracks when it reaches a dead end, eventually finding all possible paths between the start
and end city.
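The same idea, written iteratively with an explicit stack as mentioned earlier; the road map below is made up for illustration:

```python
def path_exists(graph, start, goal):
    """Iterative DFS with an explicit stack: report whether a path exists."""
    stack, visited = [start], set()
    while stack:
        city = stack.pop()  # LIFO order: dive deep before backtracking
        if city == goal:
            return True
        if city not in visited:
            visited.add(city)
            stack.extend(graph.get(city, []))
    return False

# Hypothetical one-way road map between cities
roads = {"Delhi": ["Agra", "Jaipur"], "Agra": ["Kanpur"],
         "Jaipur": ["Udaipur"], "Kanpur": [], "Udaipur": []}
print(path_exists(roads, "Delhi", "Kanpur"))   # → True
print(path_exists(roads, "Udaipur", "Delhi"))  # → False
```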
180.
Applications of DFS in AI
AI Planning and Search Algorithms
• In AI planning, DFS is often used to explore the various sequences of actions (a plan) that can lead to achieving a
goal. For instance, in a robot navigation problem, DFS could help generate the sequence of moves to reach the
destination.
Web Crawling
• In web crawling, DFS is used to explore hyperlinks from one page to another, visiting all reachable web pages. It
dives deep into each page (branch) before backtracking and exploring other pages.
Constraint Satisfaction Problems
• DFS can be applied to solve constraint satisfaction problems (CSPs) by trying different assignments to variables
and backtracking when a conflict arises (i.e., constraints are violated).
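As a tiny concrete illustration of that last point, the backtracking sketch below colors three mutually adjacent regions so that neighbors differ; the region names and colors are invented for the example:

```python
def solve_csp(variables, domains, conflicts, assignment=None):
    """Backtracking (DFS) for a CSP: try values, backtrack on conflict."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment  # every variable assigned without conflict
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(other) != value for other in conflicts[var]):
            assignment[var] = value
            result = solve_csp(variables, domains, conflicts, assignment)
            if result is not None:
                return result
            del assignment[var]  # backtrack: this value led to a dead end
    return None

# Toy map coloring: three mutually adjacent regions, three colors
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
conflicts = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(solve_csp(variables, domains, conflicts))
# → {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```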
181.
Advantages of DFS
• Memory Efficiency: DFS uses less memory compared to other
search strategies like Breadth-First Search (BFS) because it only
needs to store a single path from the root to a leaf.
• Simple to Implement: DFS is simple to code and can be easily
implemented using recursion or a stack.
• Works Well for Deep Solutions: DFS is effective when the
solution is deep in the search space.
182.
Disadvantages of DFS
• Not Guaranteed to Find the Shortest Path: DFS does not
guarantee the shortest solution, especially in unweighted graphs or
search spaces.
• Can Get Stuck in Infinite Loops: In cyclic graphs, DFS might fall
into infinite loops unless there is a mechanism to avoid revisiting
already visited nodes.
• High Time Complexity: DFS might explore many unnecessary
paths, especially in large search spaces, making it computationally
expensive.
183.
Conclusion
• DFS is a fundamental algorithm in AI for exploring search spaces,
especially when dealing with large, complex, or deep problems.
• Though not always the most optimal search strategy, it is
particularly useful in scenarios where memory efficiency is critical
or the solution is deep in the search space.
• However, its drawbacks, such as the potential for getting stuck in
infinite loops or not finding the shortest path, make it unsuitable for
all applications, especially in large or cyclic search spaces.
Cont.
• Overview of Python
• Setting up Python environment
• Python Syntax and Basics (Variables, Data Types, Functions)
• Control Flow (Conditionals, Loops)
• Data Structures (Lists, Tuples, Dictionaries, Sets)
• Object-Oriented Programming (Classes, Objects, Inheritance, Polymorphism)
• Why Python is popular for AI
• Key features for AI applications
186.
Libraries for AI
• NumPy
• Array manipulations
• Mathematical functions
• Pandas
• Data handling and analysis
• DataFrames
• Matplotlib & Seaborn
• Data visualization (plots, histograms, heatmaps)
187.
Cont.
• SciPy
• Scientific computing (optimizations, linear algebra)
• Scikit-learn
• Machine learning algorithms (classification, regression)
• Model evaluation and metrics
• TensorFlow
• Deep learning framework
• Neural networks and Tensor operations
188.
Cont.
• Keras
• High-level neural network API
• Building and training deep learning models
• PyTorch
• Deep learning framework
• Autograd, dynamic computation graphs
• NLTK (Natural Language Toolkit)
• Natural language processing (text tokenization, stemming)
AI Techniques in Python
• Machine Learning
• Supervised and Unsupervised Learning
• Deep Learning
• Convolutional Neural Networks (CNN)
• Recurrent Neural Networks (RNN)
• Natural Language Processing (NLP)
• Computer Vision
191.
Additional Tools and Libraries
• Jupyter Notebooks
• Interactive coding environment for data analysis
• Colab
• Cloud-based Python environment with GPU support
• Flask/Django
• Web frameworks for deploying AI models
192.
Breadth-First Search (BFS)
• Breadth-First Search (BFS) is a fundamental search algorithm used
in artificial intelligence (AI) and computer science, particularly in
scenarios where you need to explore a problem space (like a graph
or a tree) systematically.
• The algorithm is used to find the shortest path in an unweighted
graph or explore all the nodes level by level.
• It's often used in AI applications such as solving puzzles,
pathfinding, or exploring state spaces.
193.
How BFS Works
• Start at the Root Node (or Initial State): BFS begins at the root
node (or the initial state) and explores all its neighbors at the
present depth level before moving on to nodes at the next depth
level.
• Queue Structure: BFS uses a queue (FIFO - First In, First Out) to
keep track of the nodes to be explored next. This ensures that
nodes are explored level by level.
194.
Cont.
• Exploration:
• The algorithm dequeues a node, processes it, and then enqueues all of its
unvisited neighbors.
• This process repeats until the queue is empty or the goal node is found.
• Termination: The search continues until:
• The goal state is found, or
• All nodes have been explored.
195.
Key Characteristics of BFS
• Completeness: BFS is guaranteed to find a solution (if one exists)
because it explores all possibilities level by level.
• Optimality: In an unweighted graph, BFS is optimal. It will always
find the shortest path from the start node to the goal node.
• Time Complexity: O(V + E), where V is the number of vertices and
E is the number of edges in the graph.
• Space Complexity: O(V), since in the worst case, the algorithm may
store all nodes in memory (for example, in a fully connected graph).
196.
Applications of BFS in AI:
• Pathfinding: BFS is commonly used for finding the shortest path in grid-based games,
robotics, or navigation systems (where all moves have the same cost).
• Puzzle Solving: In problems like the 8-puzzle or sliding tile puzzles, BFS can be used
to find the optimal solution.
• Web Crawling: BFS is often used in web crawlers to explore websites level by level.
• State Space Exploration: BFS can be used to explore all possible states in search
problems, such as the configuration space in robotics.
197.
Example: BFS in a Graph
• Consider the following graph:
A -- B -- D
| |
C -- E
198.
Cont.
• If we start at node A and want to find the shortest path to node E, BFS explores the nodes level by level:
• Start with node A: enqueue A (visited = {A}).
• Process A: enqueue its neighbors B and C (visited = {A, B, C}).
• Process B: enqueue its neighbors D and E (visited = {A, B, C, D, E}).
• Process C: no new nodes to enqueue.
• Process D: no new nodes to enqueue.
• Process E: goal node found!
• The shortest path is A -> B -> E.
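The trace above corresponds to the following sketch, which stores whole paths in the FIFO queue (a common textbook formulation):

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """BFS with a FIFO queue: the first path reaching the goal is a shortest one."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)  # mark when enqueued, level by level
                queue.append(path + [neighbor])
    return None

# The graph from the slide: A--B, A--C, B--D, B--E, C--E
graph = {"A": ["B", "C"], "B": ["A", "D", "E"], "C": ["A", "E"],
         "D": ["B"], "E": ["B", "C"]}
print(bfs_shortest_path(graph, "A", "E"))  # → ['A', 'B', 'E']
```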
199.
Limitations
• BFS can be memory-intensive, especially in large graphs, because
it needs to store all nodes at the current depth level.
• BFS is not efficient for graphs with high branching factors or deep
goal nodes unless additional techniques (like pruning or heuristics)
are used.
200.
Summary
• In summary, BFS is a powerful and simple algorithm for exploring
graphs and finding the shortest path in unweighted spaces, but its
memory usage and performance can be a challenge in large-scale
applications.
201.
Heuristic Search
• Heuristic Search in Artificial Intelligence refers to using strategies to
guide the search process toward finding a solution more efficiently.
• A heuristic is a rule of thumb or a strategy that helps make
decisions about which paths or states to explore first based on
experience, prior knowledge, or some estimation of how "good" a
particular state is.
• One common heuristic search algorithm is Hill Climbing, which is a
simple and intuitive local search technique.
202.
Hill Climbing Overview
• Hill Climbing is a heuristic search algorithm that continuously moves
towards the "best" state by selecting the neighbor that appears to lead to
the most optimal solution, according to a given evaluation function (often
called the heuristic function).
• The algorithm takes steps "uphill" toward better states, akin to climbing a
hill.
• Hill Climbing is often used when the problem space is large and can be
thought of as a search landscape where the goal is to reach the peak (the
best solution).
203.
Hill Climbing Algorithm
• Start with an initial state.
• Evaluate the neighbors of the current state using a heuristic
function.
• Select the neighbor that appears to be the best (according to the
heuristic).
• Move to the selected neighbor and repeat the process from step 2.
• Stop if no better neighbor can be found or a goal state is reached.
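The loop above, in steepest-ascent form, can be sketched on a one-dimensional toy landscape; the function f below is invented so that it has both a local and a global maximum:

```python
def hill_climb(start, value, neighbors):
    """Steepest-ascent hill climbing: move to the best neighbor until stuck."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current  # no better neighbor: a (possibly local) maximum
        current = best

# Toy landscape: local maximum at x=2 (value 0), global maximum at x=8 (value 9)
f = lambda x: -(x - 2) ** 2 if x < 5 else 9 - (x - 8) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, f, step))  # → 2 (stuck on the local maximum)
print(hill_climb(6, f, step))  # → 8 (reaches the global maximum)
```

The starting point matters: from x = 0 the climb halts on the local maximum, which is exactly the weakness discussed in the slides that follow.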
204.
Types of Hill Climbing
Simple Hill Climbing:
• This version evaluates the neighboring states one by one and
moves to the first neighbor that is better than the current state.
• It stops when no better neighbors are available.
• Pros: Simple and easy to implement.
• Cons: May get stuck in local maxima, plateau (flat regions), or
loops.
205.
Steepest-Ascent Hill Climbing:
• In this version, all neighbors are evaluated, and the one with the
highest evaluation (best heuristic value) is selected.
• It considers all options before making a move.
• Pros: More thorough than simple hill climbing.
• Cons: More computationally expensive because it evaluates all
neighbors.
206.
Stochastic Hill Climbing
• This is a randomized version of hill climbing where neighbors are
selected randomly, and if the chosen neighbor is better than the
current state, the algorithm moves to it.
• Pros: Introduces randomness to avoid getting stuck in local maxima.
• Cons: Can be inefficient or not find the best solution.
207.
First-Choice Hill Climbing:
• This is a variation where neighbors are generated randomly, and
the first one that improves the current state is chosen.
• Pros: It can quickly find a solution by avoiding a full evaluation of all
neighbors.
• Cons: The randomness can lead to suboptimal solutions.
208.
Example:
• Consider asimple optimization problem where the goal is to find
the highest point (maximum value) on a graph.
• If we start at a random point, the algorithm will evaluate its
neighbors and always move to the neighbor with the highest value,
continuing until no better neighbor can be found.
209.
Properties of Hill Climbing
• Complete: Hill Climbing is not guaranteed to find a solution unless the
problem space is structured such that it always climbs towards the goal.
• Optimal: Hill Climbing is not guaranteed to find the global optimum. It can
easily get stuck in a local maximum, minimum, or plateau.
• Time Complexity: The time complexity depends on how many states we
need to evaluate in each step, typically O(n) where n is the number of
neighbors at each state.
• Space Complexity: Hill Climbing is usually space efficient (O(n)), as it
only needs to store the current state and its neighbors.
210.
Issues with Hill Climbing
• Local Maxima/Minima: The algorithm might get stuck at a local
maximum (or minimum), where all neighboring states are worse
than the current state, even though a better global maximum exists.
• Plateau: If the search space has a flat region where all neighboring
states have the same value, the algorithm can fail to progress. This
is called a plateau.
• Ridges: If the problem space contains steep ridges, hill climbing
might not make good progress as the algorithm might not be able to
follow the ridge properly.
211.
Solutions to Limitations
• Random Restarts: Restart the search from a different random initial
state to avoid getting stuck in local optima.
• Simulated Annealing: This technique introduces randomness,
allowing the algorithm to sometimes move to worse states to
escape local maxima, improving the chance of finding the global
optimum.
• Tabu Search: This method keeps track of recently visited states to
avoid revisiting them, improving the ability to escape local optima.
212.
Applications of Hill Climbing
• Game AI: Hill climbing is often used for simple game strategies like
decision-making in a game of chess or puzzles like the 8-puzzle.
• Optimization Problems: In engineering design, machine learning,
and other optimization problems, hill climbing can be used to find
near-optimal solutions.
• Pathfinding: It can be used for heuristic-based search in pathfinding
algorithms, especially in continuous state spaces.
213.
Summary
• In summary, Hill Climbing is an effective and simple approach to
heuristic search but is limited by the potential to get stuck in local
optima or plateaus.
• More advanced techniques, like simulated annealing, are often
used to mitigate these issues.
214.
Best-First Search
• Best-First Search is a type of search algorithm used to explore a
graph or search space where nodes are expanded based on a
heuristic function.
• The goal is to move towards the most promising node based on the
evaluation provided by the heuristic, rather than exploring nodes at
random or in a fixed order like in Breadth-First Search (BFS) or
Depth-First Search (DFS).
215.
Key Concept
• In Best-First Search, the algorithm evaluates nodes based on their
estimated cost to reach the goal.
• The node that appears to be the most promising (according to the
heuristic) is expanded next.
• It uses a priority queue to always expand the node with the best
(lowest) heuristic value.
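A minimal sketch of that priority-queue loop, using Python's heapq; the graph and heuristic values are made up for illustration:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the lowest h(n)."""
    frontier = [(h(start), start, [start])]  # priority queue keyed on h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Illustrative graph; h(n) is a hand-picked estimate of distance to G
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 1, "B": 3, "G": 0}.get
print(best_first_search(graph, h, "S", "G"))  # → ['S', 'A', 'G']
```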
216.
Heuristic Function
• The heuristic function, h(n), is used to estimate how close a node is
to the goal.
• The heuristic is problem-specific and can be designed based on the
context of the problem.
• For example, in a pathfinding problem, the heuristic might represent
the straight-line distance (Euclidean distance) from the current
node to the goal node.
217.
Understanding Heuristic Search
• Heuristics operate on the search space of a problem to find the best or closest-to-optimal
solution via systematic algorithms.
• In contrast to a brute-force approach, which checks all possible solutions exhaustively, a
heuristic search method uses heuristic information to choose the route that seems more promising
than the rest.
• Heuristics, in this case, refer to a set of criteria or rules of thumb that offer an estimate of
how close a given state is to the goal.
• Guided by the heuristic, these algorithms balance exploration and exploitation, and can
therefore tackle demanding problems.
• This enables an efficient solution-finding process.
218.
Significance of HeuristicSearch in AI
• The primary benefit of using heuristic search techniques in AI is
their ability to handle large search spaces.
• Heuristics help to prioritize which paths are most likely to lead to a
solution, significantly reducing the number of paths that must be
explored.
• This not only speeds up the search process but also makes it
feasible to solve problems that are otherwise too complex to handle
with exact algorithms.
219.
Cont.
• Example: A* Search (a form of Best-First Search)
• Best-First Search is an informed search algorithm that explores the
search space based on a heuristic function.
• It selects the node that appears most promising (i.e., closest to the
goal according to the heuristic) and expands it.
• It’s widely used in AI for pathfinding and solving optimization
problems but may not always find the optimal solution or be
complete, depending on the heuristic.
220.
A* Search Algorithm
• A* Search Algorithm is perhaps the most well-known heuristic search
algorithm.
• It uses a best-first search and finds the least-cost path from a given initial
node to a target node.
• It uses an evaluation function, often denoted as f(n) = g(n) + h(n),
• where g(n) is the cost from the start node to n, and h(n) is a heuristic that
estimates the cost of the cheapest path from n to the goal.
• A* is widely used in pathfinding and graph traversal.
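A compact sketch of A* on a small hand-made weighted graph; the edge costs and heuristic values below are invented, with h kept admissible (it never overestimates the true cost to G):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Illustrative weighted graph: S-A-B-G costs 1+2+3=6, the cheapest route
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 5, "A": 4, "B": 2, "G": 0}.get
print(a_star(graph, h, "S", "G"))  # → (['S', 'A', 'B', 'G'], 6)
```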
221.
Greedy best-first search
• Greedy best-first search expands the node that is closest to the
goal, as estimated by a heuristic function.
• Unlike A*, which takes into account the cost of the path from the
start node to the current node, the greedy best-first search only
prioritizes the estimated cost from the current node to the goal.
• This makes it faster but less optimal than A*.
222.
Branch and Bound (B&B)
• Branch and Bound is a general algorithm for solving combinatorial
optimization problems.
• It systematically explores the solution space by dividing the problem into
subproblems (branching), calculating bounds to estimate the best
possible solution (bounding), and pruning (discarding) subproblems that
cannot lead to a better solution than the best one found so far.
• Branch and Bound is particularly useful for problems where an exhaustive
search is infeasible due to the size of the search space, but we still need
to find an optimal solution.
223.
Key Concepts
• Branching: The process of dividing the problem into smaller subproblems (branches).
• Each branch corresponds to a partial solution that can be further explored.
• Bounding: A method for calculating upper or lower bounds for the objective function at each
node (subproblem).
• These bounds help to determine whether a branch can potentially lead to an optimal solution.
• Pruning: The process of eliminating branches (subproblems) that cannot possibly lead to a
better solution than the current best known solution.
• This is based on the bounds calculated.
224.
Example
0/1 Knapsack Problem:
• In the 0/1 Knapsack Problem, the goal is to maximize the value of
items placed in a knapsack, given weight constraints.
• Each item has a value and a weight, and the knapsack has a
maximum weight capacity.
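A sketch of branch and bound for this problem, using the standard fractional-knapsack bound; the item values and weights below are a common textbook instance, used here only as an example:

```python
def knapsack_bb(items, capacity):
    """Branch and bound for the 0/1 knapsack problem.
    items: list of (value, weight), sorted by value/weight in descending order."""
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room with fractional items
        for v, w in items[i:]:
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        if i == len(items):
            best = max(best, value)
            return
        if bound(i, value, room) <= best:
            return  # prune: this subtree cannot beat the best known solution
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # branch 1: take item i
        branch(i + 1, value, room)              # branch 2: skip item i

    branch(0, 0, capacity)
    return best

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight), density-sorted
print(knapsack_bb(items, 50))  # → 220
```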
225.
Branch and Bound in Practice:
• Knapsack Problem: Maximizing the total value of items while
respecting the weight constraint.
• Traveling Salesman Problem (TSP): Finding the shortest possible
route that visits each city exactly once and returns to the origin city.
• Integer Linear Programming: Solving optimization problems where
some variables are constrained to be integers.
• Job Scheduling Problems: Optimally scheduling jobs with certain
constraints on resources, deadlines, etc.
226.
Pros of Branch and Bound
• Exact Solution: Branch and Bound guarantees an optimal solution if
it completes the search, as it exhaustively explores the solution
space while pruning unpromising branches.
• Efficiency: It can be more efficient than exhaustive search methods
(e.g., brute force), especially if the problem has a good bounding
function that allows effective pruning.
227.
Cons of Branch and Bound
• Memory Usage: The algorithm stores a lot of intermediate
subproblems in memory, making it memory-intensive.
• Complexity: The time complexity can still be exponential in the
worst case, especially if the problem has many possible branches
and poor bounds.
• No Guarantees for Speed: The performance depends heavily on
the quality of the bounding function and the branching strategy. If
the bounds are weak, the algorithm may explore many
unnecessary branches.
228.
Summary
• Branch and Bound is an exact optimization algorithm for solving
combinatorial problems by dividing the problem into subproblems and
systematically exploring them.
• It uses branching (splitting the problem), bounding (estimating the best
possible solution), and pruning (discarding branches that cannot improve
the current best solution).
• It’s commonly used for problems like the knapsack problem, traveling
salesman problem, and other combinatorial optimization problems.
• The efficiency of Branch and Bound depends on the bounding function
and how well it can prune suboptimal branches.
229.
Problem Reduction
• Problem reduction is a technique in Artificial Intelligence (AI) used to
simplify complex problems by breaking them down into smaller, more
manageable subproblems.
• By reducing the complexity of a problem, AI systems can focus on solving
simpler instances of the problem, ultimately contributing to the solution of
the original, more complicated one.
• Problem reduction can be applied in various ways depending on the
nature of the problem, and it forms the basis of many algorithms used for
planning, search, and optimization in AI.
230.
Key Concepts of Problem Reduction
• Divide-and-Conquer: The problem is divided into smaller
subproblems, which are easier to solve. The solutions to the
subproblems are then combined to form the solution to the original
problem.
• Transformation: A complex problem is transformed into a simpler
one by applying a rule or mapping that simplifies the original task,
without losing the essential aspects needed to find a solution.
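The divide-and-conquer idea can be made concrete with a deliberately tiny sketch that finds the maximum of a list by splitting it:

```python
def divide_and_conquer_max(values):
    """Find the maximum by dividing the list and combining sub-solutions."""
    if len(values) == 1:
        return values[0]  # base case: a single element is its own maximum
    mid = len(values) // 2
    left = divide_and_conquer_max(values[:mid])   # solve each subproblem
    right = divide_and_conquer_max(values[mid:])
    return max(left, right)                       # combine the sub-solutions

print(divide_and_conquer_max([3, 8, 1, 9, 4]))  # → 9
```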
231.
Cont.
• Decomposition: Complex tasks are broken down into simpler
components or subgoals.
• These subgoals can then be tackled independently, and their
solutions combined to address the overall goal.
• Hierarchical Planning: The problem-solving process is structured in
layers, with high-level decisions made first and lower-level
decisions made afterward.
• Each layer might reduce the scope of the problem further.
Advantages of Heuristic Search
Techniques
• Efficiency: By quickly discarding large unpromising regions of the search
space, heuristic methods can devote their time and resources to the more
promising lines.
• Optimality: If the heuristic it uses is admissible, A* guarantees an
optimal result.
• Versatility: Heuristic search methods apply to a wide spectrum of
problems across many different domains.
234.
Limitations of Heuristic Search
Techniques
• Heuristic Quality:
• The power of heuristic search depends strongly on the quality of the heuristic function.
• Poorly designed heuristics can make the search slow or ineffective.
• Space Complexity:
• Some heuristic search algorithms require far more memory than others, especially when
the search space grows considerably.
• Domain-Specificity:
• Devising effective heuristics usually depends on the specifics of the domain, which makes
it hard to develop generic approaches.