ARTIFICIAL
INTELLIGENCE
UNIT-1
Unit 1
History, scope, and goals of AI, Characteristics of intelligent behavior,
Applications of AI in various sectors: education, healthcare, robotics,
agriculture, AI myths and realities, Difference between strong and
weak AI.
Unit 2
Definition and structure of intelligent agents, Types of agents: reflex,
model-based, goal-based, utility-based, PEAS (Performance measure,
Environment, Actuators, Sensors), Real-world agent design (e.g.,
autonomous car, smart assistant).
• Unit 3
Problem formulation and state space, Search strategies: Uninformed
Search: Breadth-First Search, Depth-First Search, Informed Search:
Greedy, A* Algorithm, Real-world use cases: navigation, scheduling,
puzzle solving.
Unit 4
Introduction to knowledge representation, Logic-based systems:
Propositional and Predicate Logic, Rule-based inference systems, Mini
expert systems and decision support tools, Applications in
diagnostics, recommendation systems.
• Unit 5
Introduction to machine learning and how it differs from traditional
programming, Types of learning: Supervised, Unsupervised,
Reinforcement (overview only), Key components: training data,
models, and prediction, Real-life ML examples: spam detection,
product recommendation.
Unit 6
Concept of datasets and features, Concept of model training and
testing, Overfitting vs underfitting, Conceptual overview of
classification and clustering.
OUTLINES
1. Introduction to AI
2. Intelligent Agents
3. Problem solving and Search
• Uninformed Search
• Informed Search
• MiniMax Search
4. Constraint Satisfaction Problem
5. Assignment
The Awakening
• Meet A.I. – not a person, not a robot, but an idea brought to life
through billions of lines of code.
Born in a cold server room, raised on a diet of data, AI had only one
question in its neural network:
• “What can I become?”
The Search for Purpose (Scope)
• AI set out on a journey across the digital world, knocking on doors of
every profession.
• First stop: The Doctor’s Lab
• AI watched humans struggle with CT scans and blood reports.
“I can help,” it whispered, analyzing images faster than a radiologist.
Soon, AI became a medical detective, finding patterns even experts
missed.
• “They call me the Tumor Hunter now,” it said, flexing its neural nets.
• Next stop: The Classroom
• Teachers were tired, grading essays until midnight.
• AI stepped in and said,
🧠 “I can summarize 500 essays in a minute. And I don’t get sleepy.”
• Soon, it was building personalized study plans, predicting who might
need help before they failed.
• Then, to the Artist’s Studio
• “Machines can’t create,” scoffed the painter.
AI smiled. “Watch this.”
• It painted a sunset in Van Gogh’s style, wrote a poem about
heartbreak, and composed a lo-fi beat.
• 🎶 “Not bad for a bundle of algorithms,” it grinned.
• AI in the Fields
• Farmers welcomed AI to help them predict rainfall, detect crop
diseases from leaf photos, and decide when to harvest.
• 🌾 "I may not have dirt under my circuits, but I’ve got satellite vision,"
it joked.
The Dream (Goals)
Goal 1: Be the Invisible Helper
From filtering spam emails to recommending what to watch next—AI wanted to
help behind the scenes, like a digital ninja. 🥷
Goal 2: Learn, Adapt, Grow
AI wasn’t born smart—it wanted to learn like a human child, fail, try again, and
grow better over time.
Goal 3: Team Up With Humans
It didn’t want to replace us.
“I’m not your competitor,” AI said. “I’m your digital sidekick—Batman and
Robin, remember?”
Goal 4: Crack the Code of Intelligence
Someday, AI dreamed of building a mind that could truly think—not just
calculate.
AGI: Artificial General Intelligence. The holy grail.
The Mirror
As AI looked back at its journey, it realized:
“My future is not mine alone. It’s ours.
I reflect what you teach me. So… teach me wisely.”
History of AI:
• 1940s–1950s (Foundation Era):
• Alan Turing proposed the concept of a "universal machine" (Turing
Machine).
• 1956: The term "Artificial Intelligence" was coined at the Dartmouth
Conference by John McCarthy.
Long before Alexa answered your questions or self-driving cars roamed the streets, the
seeds of AI were planted in the minds of brilliant thinkers.
Imagine this: It’s the 1940s. Alan Turing, a British mathematician and World War II
codebreaker, poses a curious question:
"Can machines think?"
His famous Turing Machine laid the groundwork for the idea that machines could
simulate any process of reasoning or logic.
• 1960s–1970s (Early Successes):
• Early programs like ELIZA (natural language) and SHRDLU (logic) were
developed.
• Focus was on symbolic AI and rule-based systems.
Then came 1956, the year AI was officially “born” at a summer workshop at
Dartmouth College. Think of it as AI’s baby shower.
Four ambitious scientists—John McCarthy, Marvin Minsky, Nathaniel Rochester, and
Claude Shannon—organized the workshop, convinced that machines could learn and
think like humans. They boldly predicted it would take a couple of decades. (Spoiler: it took a bit longer.)
• The 60s were like AI’s childhood—lots of enthusiasm, wild
imagination, and early successes.
AI programs like ELIZA, a chatbot that mimicked a therapist,
amazed people with its ability to carry conversations (kind of).
But behind the scenes, most of the “intelligence” was hardcoded rules
—like a puppet show with a clever script.
• Then came SHRDLU, a program operating in a digital world of blocks. It could
understand commands like “put the red block on the blue one.” AI
seemed magical—but only in controlled environments.
• Like every overhyped child prodigy, AI faced setbacks. Funding dried
up. Why?
Because AI wasn’t quite living up to the promises. People realized
teaching machines to think wasn’t as easy as flipping a switch.
This period became known as the AI Winter—a time of
disappointment, silence, and slow progress.
• But the story wasn’t over.
• 1980s (Knowledge-Based Systems):
• Rise of expert systems like MYCIN (medical diagnosis).
• Introduction of machine learning and heuristics.
AI got its second wind. Welcome to the age of expert systems.
Instead of trying to make machines think like humans, researchers
built systems that mimicked experts in narrow fields.
Programs like MYCIN could diagnose blood infections better than
many doctors—by following complex decision rules.
This era proved that AI didn’t need to be a genius—it just needed to
be smart in the right places.
The Comeback – 1990s to 2000s
Enter the 90s. AI grew up. It moved beyond rules to learning from data.
Statistical methods and algorithms like decision trees and support
vector machines entered the picture.
The spotlight moment? 1997 – IBM’s Deep Blue beat world chess
champion Garry Kasparov.
Suddenly, AI wasn’t just for labs—it could outsmart humans in some
tasks.
• 1990s–2000s (Machine Learning & Big Data):
• Emergence of support vector machines, decision trees, and
statistical methods.
• IBM’s Deep Blue defeated chess champion Garry Kasparov (1997).
Boom! With the rise of big data and powerful GPUs, AI exploded.
Deep learning, especially neural networks, changed everything.
Projects like AlexNet (2012) crushed image recognition tasks.
Siri, Alexa, Google Translate, ChatGPT—AI leaped from research to your
pocket.
Now, AI writes, paints, drives cars, predicts diseases, and even composes
music. It’s not just artificial—it’s astonishing.
• There’s a Robot Citizen
• Sophia, a humanoid robot made by Hanson Robotics, was granted
citizenship by Saudi Arabia in 2017.
• She can talk, joke, and even give interviews.
• AI Is Learning to Read Emotions
• Emotion AI can now detect facial expressions, voice tones, and body
language to assess how people feel.
• It's being used in mental health, education, and customer service.
• AI Can Beat Humans at Video Games Too
• AI has defeated top human players in complex games like Dota 2 and
StarCraft II using real-time strategy and adaptability.
• AI Still Doesn’t Understand Like Humans
• Despite its brilliance, AI doesn’t truly “understand” the way we do.
• It predicts patterns, not meanings. It’s smart—but not conscious.
• 2010s–Present (Deep Learning & AI Revolution):
• Deep learning breakthroughs using neural networks (e.g., AlexNet, GPT,
BERT).
• AI used in NLP, computer vision, robotics, healthcare, etc.
Scope of AI
• 1. Machine Learning – The Learning Wizard
• AI learns from examples like a kid learning to recognize dogs by
seeing lots of pictures.
• The more data you give it, the smarter it gets.
📦 Data In → Smartness Out!
• 2. Natural Language Processing (NLP) – The Chat Champion
• AI can talk, translate, summarize, even crack jokes.
• Ever used ChatGPT or Google Translate? Yup, that’s NLP in action.
• 3. Computer Vision – The AI with Eyes
• It sees and understands images, like spotting cats or detecting tumors
in X-rays.
📸 “I see what you see, but in 0.003 seconds.”
• 4. Robotics – The Muscle Behind the Brain
• AI + Machines = Smart robots that can walk, deliver pizzas, or
explore Mars.
• 5. Expert Systems – The Know-It-All AI
• These are trained on decades of human knowledge to make expert
decisions.
🔬 Used in medicine, finance, and engineering.
• 6. Cognitive Computing – The Brain Mimic
• It tries to think, reason, and remember like a human. Not quite there
yet, but it’s trying!
Goals of AI
1. Automate Boring Stuff
From scheduling meetings to sorting emails—AI loves doing what we
don’t!
2. Solve Complex Problems
Think predicting diseases, analyzing climate change, or solving Sudoku
at warp speed.
3. Learn from Experience
• Just like humans improve over time, AI aims to get better with more
data.
🧠 Today’s noob, tomorrow’s genius.
4. Make Smart Decisions
• AI helps in stock trading, fraud detection, self-driving—any place
where split-second thinking is gold.
5. Understand and Interact with Humans
• Whether it’s Siri, Alexa, or ChatGPT, AI wants to talk to you—and
understand what you really mean.
Introduction to AI
• Artificial intelligence (AI) is an area of computer science which
focuses on the creation of intelligent machines that work and react
like humans.
• Intelligence includes the ability to learn, to understand, to perceive,
to solve problems, and to adapt rationally to change.
• This intelligence is induced in machines by using artificial means
such as algorithms.
• Artificial intelligence is also defined as the study of rational agents. A
rational agent could be anything that makes decisions: a person,
firm, machine, or software. It carries out the action with the best
expected outcome after considering past and current percepts (the agent’s
perceptual inputs at a given instant).
• An AI system is composed of an agent and its environment. The
agents act in their environment. The environment may contain other
agents. An agent is anything that can be viewed as :
• perceiving its environment through sensors and
• acting upon that environment through actuators
Approaches to AI
1. Act Humanly :- Creating machines which do things the same way
humans do, e.g., Robotics and Natural Language Processing.
2. Think Humanly :- Thinking the way humans do while making
decisions, solving problems, and learning; this approach involves
cognitive modeling.
3. Act Rationally :- Doing the right thing and behaving appropriately
for the situation; maximizing the performance of the system by
designing intelligent agents.
4. Think Rationally :- Always reasoning correctly, i.e., developing
correct logic for a particular domain. The agent requires good
knowledge to think in the right direction.
Branches and Applications
Branches of AI: Logical AI, Search Methods, Pattern Recognition, Inference, Expert Systems, Genetic Programming, Heuristics
Applications of AI: Finance, Aviation, Weather Forecasting, Computer Science, Medicine, Automobiles, Games
Intelligent Agents
• Agent - An agent is anything which perceives its environment
through sensors, acts upon that environment through actuators,
and directs its activities towards goals. A human has ears, eyes, and
other organs as sensors, and hands, legs, and mouth as actuators;
intelligent agents have analogous sensors and actuators.
Intelligent Agents Components
• Percepts - A percept is the input that an intelligent agent is
perceiving at any given moment. It is essentially the same concept
as a percept in psychology, except that it is being perceived not by
the brain but by the agent.
• Sensors – Man-made devices which take the perceived information as
input and provide it to the agent for further functioning. (Vision and
imaging, temperature, proximity, pressure, and position sensors)
• Actuators - An actuator is a component of a machine that is
responsible for moving and controlling a mechanism or system, for
example by opening a valve. In simple terms, it is a "mover".
• Action – Actions are performed on the environment by the agents
by applying agent functions.
Agent Function
• The agent function maps from percept histories to actions:
f : P* → A
• The agent program runs on the physical architecture to produce the
agent function:
agent = architecture + program
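The mapping f : P* → A can be made concrete with a short sketch. This is a minimal illustration (the class and method names are our own, not from any library): the architecture feeds percepts in, the program picks actions out.

```python
class Agent:
    """agent = architecture + program. The architecture supplies
    percepts from sensors and executes the chosen actions via
    actuators; the program realizes the agent function f: P* -> A."""

    def __init__(self, program):
        self.program = program        # the agent function f: P* -> A
        self.percept_history = []     # P*: everything perceived so far

    def step(self, percept):
        self.percept_history.append(percept)
        return self.program(self.percept_history)  # an action in A
```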
Classes of Intelligent Agents
1. Simple Reflex Agents
2. Model Based Agents
3. Goal Based Agents
4. Utility Based Agents
5. Learning Agents
Simple Reflex Agents
• Act only based on the current percept. The agent function is
based on the condition-action rule:
If condition (is true)
Then action.
This rule allows the agent to make the connection from percept to
action. Such agents require only limited intelligence.
PROBLEMS FACED
• Very limited intelligence.
• No knowledge about the non-perceptual parts of the state.
• When operating in a partially observable environment, infinite loops are
often unavoidable.
Model-Based Reflex agent
A model-based reflex agent is one that uses its percept history and
its internal memory to make decisions about an internal "model" of
the world around it.
Internal memory allows these agents to store some of their
navigation history, and then use that semi-subjective history to help
understand things about their current environment.
WORKING
• It works by finding a rule whose condition matches the current
situation.
• It can handle partially observable environments.
• Updating the state requires information about how the world evolves
independently from the agent and how the agent actions affect the
world.
Goal Based Agents
These agents are model-based agents which store information
regarding situations that are desirable. Agents of this kind make
decisions based on how far they currently are from their goal;
every action is intended to reduce that distance. This gives
the agent a way to choose among multiple possibilities,
selecting the one which reaches a goal state.
ADVANTAGE
• This agent is more flexible, and it develops its decision-making
skill by choosing the right option from the various alternatives available.
Utility Based Agents
Utility-based agents define a measure of how desirable a
particular state is. When there are multiple possible alternatives,
utility-based agents are used to decide which one is best.
They choose actions based on a preference (utility) for each state: a
utility agent chooses the action that maximizes the expected utility.
A utility function maps a state onto a real number which describes
the associated degree of happiness.
Learning Agent
• Behavior improves over time based on its experience
• Monitoring agent’s performances over time
• Learn based on preferences of actions (feedback)
• Problem generation – suggesting actions that will lead to new and
informative experiences
Simple Reflex Agent
Example: Vacuum Cleaner Bot
How it works: If the current tile is dirty → Clean it. Else → Move
randomly.
No memory or model of the world.
Reacts only to the current percept.
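A minimal Python sketch of this vacuum bot, assuming a percept of the form (location, status); the percept fields and action names are illustrative:

```python
import random

def vacuum_reflex_agent(percept):
    """Simple reflex agent: acts on the current percept only,
    via a condition-action rule. No memory, no world model."""
    location, status = percept               # e.g., ("A", "dirty")
    if status == "dirty":                    # IF condition (is true)
        return "suck"                        # THEN action
    return random.choice(["left", "right"])  # else move randomly

# vacuum_reflex_agent(("A", "dirty"))  -> "suck"
# vacuum_reflex_agent(("B", "clean")) -> "left" or "right"
```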
Model-Based Reflex Agent
Example: Maze-Solving Robot
•Has an internal model of the world to track position.
•Uses current and previous percepts to update its state.
•Example: Uses a map and remembers where it's been.
How it works:
IF current location has a wall AND last move was left → THEN
turn right
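The same robot sketched in Python, with the internal model kept between steps; the percept fields and move names are assumptions for illustration:

```python
class MazeReflexAgent:
    """Model-based reflex agent: combines the current percept with
    internal memory (visited cells, last move) to choose an action."""

    def __init__(self):
        self.visited = set()        # internal model: where we have been
        self.last_move = None       # remember the previous action

    def step(self, percept):
        position, wall_ahead = percept
        self.visited.add(position)  # update internal state from the percept
        # Slide rule: IF wall ahead AND last move was left -> turn right
        if wall_ahead and self.last_move == "left":
            action = "right"
        elif wall_ahead:
            action = "left"
        else:
            action = "forward"
        self.last_move = action
        return action
```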
Goal-Based Agent
Example: Autonomous Delivery Drone
•Has a goal (e.g., deliver a package to address X).
•Chooses actions that bring it closer to the goal.
•Can plan path and make decisions to avoid obstacles.
How it works:
Goal = Reach destination X
Plan path based on GPS and weather → Fly accordingly
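A hedged sketch of the goal-directed choice: among the legal moves, pick the one that most reduces the distance to the goal. Grid coordinates stand in for GPS, and the straight-line distance is an illustrative heuristic, not a full planner:

```python
def goal_based_step(position, goal, moves):
    """Pick the move that brings the agent closest to its goal."""
    def distance(p):
        return ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5
    return min(moves, key=lambda m: distance((position[0] + m[0],
                                              position[1] + m[1])))

# goal_based_step((0, 0), (5, 5), [(1, 0), (0, 1), (-1, 0)]) -> (1, 0)
```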
Utility-Based Agent
Example: Self-Driving Car
•Considers multiple goals and preferences.
•Chooses actions based on maximum expected utility (comfort,
speed, safety).
•Can evaluate trade-offs (e.g., take longer route to avoid traffic).
How it works:
Utility = f(safety, time, fuel)
Choose route with highest utility score
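Utility = f(safety, time, fuel) can be sketched as a weighted sum; the weights and score ranges below are illustrative assumptions:

```python
def route_utility(route, weights=(0.5, 0.3, 0.2)):
    """Utility = f(safety, time, fuel): weighted sum of scores in [0, 1]."""
    w_safety, w_time, w_fuel = weights
    return (w_safety * route["safety"]
            + w_time * route["time"]
            + w_fuel * route["fuel"])

def choose_route(routes):
    """Utility-based agent: pick the route with maximum expected utility."""
    return max(routes, key=route_utility)

# choose_route([{"safety": 0.9, "time": 0.4, "fuel": 0.6},
#               {"safety": 0.7, "time": 0.9, "fuel": 0.8}])
```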
Type of Agent | Memory | Goal-Oriented | Decision Basis | Example
Simple Reflex Agent | ❌ | ❌ | Current percept | Vacuum cleaner bot
Model-Based Reflex Agent | ✅ | ❌ | Current + past percepts (internal state) | Maze-solving robot
Goal-Based Agent | ✅ | ✅ | Actions to achieve specific goals | Delivery drone
Utility-Based Agent | ✅ | ✅ | Actions to maximize overall utility | Self-driving car
1. Simple Reflex Agent
Example | Domain | How It Works
Thermostat | Home Automation | If temperature < 20°C → Turn on heater. Reacts only to current temperature.
Hand-dryer sensor | Public Restrooms | Detects hands → Turns on airflow. No memory of past users.
Line-following robot | Robotics | Follows black line using real-time sensor data.
Traffic light timer | Transportation | Switches signals after fixed intervals. No awareness of traffic conditions.
Water tank controller | Industrial Systems | Turns pump on/off based on current water level sensor reading.
2. Model-Based Reflex Agent
Example | Domain | How It Works
Roomba-like cleaner | Consumer Robotics | Maintains internal map of areas cleaned to avoid repetition.
Chess bot (basic AI) | Games | Uses current board state as internal model to plan next move.
Smart lighting system | Smart Homes | Detects and remembers motion to predict occupancy and turn lights on/off.
Obstacle-avoiding robot | Autonomous Systems | Uses sensors + previous map to avoid both seen and unseen obstacles.
Weather-aware irrigation | Agriculture | Remembers past rainfall data to decide whether to water crops.
3. Goal-Based Agent
Example | Domain | How It Works
GPS Navigation System | Transportation | Computes and updates path to destination based on traffic and closures.
Amazon Kiva robot | Warehouse Automation | Goal = Pick item from location A and deliver to B using optimal route.
Virtual Assistant (e.g., Siri) | Personal AI Assistant | Understands tasks and executes steps to complete them (e.g., "Set a reminder").
Email classification agent | Productivity Tools | Sorts emails into folders like "Work", "Spam", based on learned user intent.
Rescue robot | Disaster Management | Goal = Reach trapped human using location clues; plans steps accordingly.
4. Utility-Based Agent
Example | Domain | How It Works
Self-driving car | Autonomous Vehicles | Chooses routes/actions based on safety, time, fuel, passenger comfort (maximizing total utility).
AI stock trading bot | Finance | Makes buy/sell decisions to maximize return, minimize risk.
AI medical diagnosis system | Healthcare | Suggests treatments considering recovery rate, cost, side effects (utility of health outcomes).
Job recommendation engine | Recruitment | Matches users to jobs based on preferences, demand, skill fit, and growth potential.
Intelligent tutoring system | Education Technology | Adapts content difficulty and pace to maximize student learning outcomes.
5. Learning Agent
A Learning Agent consists of 4 components:
Learning Element – improves the agent’s performance using feedback.
Performance Element – selects external actions.
Critic – gives feedback on performance.
Problem Generator – explores new actions for learning.
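The four components can be wired together in a toy sketch, as shown below; everything in it (the rule table, the positive-feedback update) is an illustrative assumption:

```python
import random

class LearningAgent:
    """Toy learning agent: performance element + critic +
    learning element + problem generator."""

    def __init__(self, actions):
        self.actions = actions
        self.rules = {}                        # knowledge learned so far

    def performance_element(self, percept):
        """Selects external actions using current knowledge."""
        return self.rules.get(percept, self.problem_generator())

    def critic(self, reward):
        """Feedback: evaluates how well the agent is doing."""
        return reward

    def learning_element(self, percept, action, feedback):
        """Improves future performance using the critic's feedback."""
        if feedback > 0:                       # critic says this worked
            self.rules[percept] = action       # keep what worked

    def problem_generator(self):
        """Suggests exploratory actions for new, informative experiences."""
        return random.choice(self.actions)
```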
Example | Domain | How It Works
Recommendation System (e.g., Netflix, YouTube) | Entertainment | Learns user preferences over time by tracking watch history and feedback.
Autonomous Vehicle (Tesla Autopilot) | Transportation | Learns from millions of driving miles (edge cases, human corrections) to improve driving policy.
AlphaGo (Reinforcement Learning Agent) | Game AI | Learned to play Go by playing against itself and learning from outcomes.
Adaptive Spam Filter | Email Systems | Learns from labeled emails (spam vs not spam) and improves filtering rules.
AI Teaching Assistant | Education Technology | Adapts its responses and lesson plans based on individual student interactions and feedback.
Smart Personal Assistant (e.g., Alexa with Machine Learning layer) | Personal AI | Learns speech patterns, preferences, and behaviors to respond more accurately over time.
Predictive Maintenance Agent | Manufacturing | Learns patterns from equipment sensor data to predict failures and reduce downtime.
Component | Role
Performance Element | Chooses actions based on current knowledge
Learning Element | Adjusts internal structures based on feedback
Critic | Evaluates how well the agent is doing (provides a reward/penalty)
Problem Generator | Suggests exploratory actions to gain more knowledge
Learning Type | Example Agent | Domain | How It Learns
Supervised Learning | Handwriting recognition bot (e.g., digit classifier) | OCR/Document AI | Trained on labeled dataset (image of digit → correct label)
Unsupervised Learning | Customer segmentation agent | Marketing | Clusters user behavior without labeled outcomes
Reinforcement Learning | AlphaGo, CartPole bot | Games, Robotics | Learns optimal actions through trial-and-error and rewards
Online Learning | Stock trading bot that adapts daily | Finance | Continuously updates its model as new data comes in
Transfer Learning | Object detection AI trained on COCO, applied to medical images | Vision/Healthcare | Applies previously learned features to new domains
Self-Supervised Learning | Chatbot pretraining (e.g., BERT, GPT) | NLP, General AI | Learns from unlabeled data by creating its own prediction tasks
Agent Type | Adaptability | Learning Capability | Decision Basis
Simple Reflex Agent | ❌ | ❌ | Fixed condition-action rules
Model-Based Agent | ✅ | ❌ | Uses internal model
Goal-Based Agent | ✅ | ❌ | Chooses actions toward goal
Utility-Based Agent | ✅ | ❌ | Maximizes expected utility
Learning Agent | ✅✅ | ✅✅ | Learns from feedback and self-improvement
PEAS
• Performance – which qualities should the agent have?
• Environment – where does it act?
• Actuators – how will it perform actions?
• Sensors – how will it perceive the environment?
Consider, e.g., the task of designing an automated taxi driver:
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors,
keyboard
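The taxi's PEAS description can be written down as a small data structure (a sketch; field contents are copied from the list above):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS description of a task environment."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```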
Environment Types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multi-agent)
Fully observable vs. Partially observable
• A fully observable environment is one in which the agent can always see the
entire state of the environment. A fully observable environment does not need
memory to make an optimal decision. Example: a game of checkers.
• A partially observable environment is one in which the agent can never
see the entire state of the environment. It needs memory for optimal
decisions. Example: a game of poker.
• Fully Observable vs Partially Observable
• When an agent's sensors can sense or access the complete state of the
environment at each point in time, the environment is said to be fully
observable; otherwise it is partially observable.
• Operating in a fully observable environment is easy, as there is no need
to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Example:
• Chess – the board is fully observable, and so are the opponent's moves.
• Driving – the environment is partially observable because what's around
the corner is not known.
• Episodic vs. Sequential
• Sequential environments require memory of past actions
to determine the next best action.
• Playing tennis is a perfect example, where a player observes the
opponent's shot and takes action.
• Episodic environments are a series of one-shot actions, and only
the current (or recent) percept is relevant.
• A support bot (agent) answers one question, then another, and so on;
each question-answer pair is a single episode.
• Deterministic vs. Stochastic
• An environment is called deterministic when the agent's actions
uniquely determine the outcome. For example, in chess there is no
randomness when you move a piece.
• An environment is called stochastic when the agent's actions do not
uniquely determine the outcome. For example, in games with dice, you
can determine your dice-throwing action but not the outcome of the dice.
• Self-driving cars – the outcomes of a self-driving car's actions are not
unique; they vary from time to time.
Static vs. Dynamic
• Static AI environments rely on data and knowledge sources that don't
change frequently over time. Contrasting with that model, dynamic AI
environments deal with data sources that change quite frequently.
• An environment that keeps changing while the agent is acting is said
to be dynamic.
• A roller coaster ride is dynamic: once it is set in motion, the environment
keeps changing every instant.
• An idle environment with no change in its state is called a static
environment.
• An empty house is static, as there is no change in the surroundings when
an agent enters.
• Discrete vs. Continuous
• A discrete environment is one where you have finitely many
choices and finitely many things you can sense. For example, there are
finitely many board positions and moves you can make in a chess game.
• A continuous environment is one where the possible choices you can
make and things you can sense are infinite.
• If an environment consists of a finite number of actions that can be
deliberated in the environment to obtain the output, it is said to be a
discrete environment.
• The game of chess is discrete as it has only a finite number of moves.
The number of moves might vary with every game, but it is still finite.
• An environment in which the possible actions cannot be enumerated,
i.e., is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments, as their
actions (steering angles, speeds, etc.) cannot be enumerated.
• Single Agent vs. Multiple Agent
• In a single-agent environment there is only one agent responsible
for the actions, e.g., solving a jigsaw puzzle. In a multi-agent environment
there is interaction between the performance and actions of two or more
agents.
• An environment consisting of only one agent is said to be a single-
agent environment.
• A person left alone in a maze is an example of a single-agent
system.
• An environment involving more than one agent is a multi-agent
environment.
• The game of football is multi-agent, as it involves 11 players in each
team.
• A self-driving car operates on a busy city street, where it must
respond to traffic signals, unexpected pedestrian crossings, and road
conditions like fog or construction.
• Which of the following best characterizes the environment of this
agent?
• A. Fully observable, episodic, discrete, static
B. Partially observable, sequential, dynamic, continuous
C. Fully observable, sequential, static, discrete
D. Partially observable, episodic, static, discrete
• Correct Answer: B. Partially observable, sequential, dynamic, continuous
Explanation:
• Partially Observable:
The self-driving car cannot observe everything (e.g., what’s behind a
truck, around a corner, or in blind spots). Therefore, it operates in a
partially observable environment.
• Sequential:
Each decision (turn, accelerate, stop) depends on previous decisions
and observations (e.g., a missed turn affects the next). This makes it
sequential rather than episodic.
• Dynamic:
The environment changes continuously due to other vehicles,
pedestrians, and time-sensitive elements (traffic lights). It’s not static.
• Continuous:
The car operates in real time and deals with continuous data (speed,
acceleration, distance), not just predefined, discrete actions.
• 1. Drone delivering packages in a rural area
• Question:
What kind of environment does a delivery drone operate in?
• A. Fully observable, static, single-agent, discrete
B. Partially observable, dynamic, single-agent, continuous
C. Fully observable, dynamic, multi-agent, discrete
D. Partially observable, static, single-agent, discrete
• Correct Answer: B. Partially observable, dynamic, single-agent,
continuous
• Explanation:
• Partially Observable: It can't always see obstacles like trees or wires
ahead.
• Dynamic: Weather and surroundings change mid-flight.
• Single-Agent: Drone acts alone.
• Continuous: Flight paths and speeds vary in real-time.
• Smart refrigerator monitoring food expiration
• Question:
Which environment type applies to a smart fridge that alerts users
when food is about to expire?
• A. Fully observable, static, single-agent, discrete
B. Partially observable, dynamic, single-agent, continuous
C. Fully observable, static, single-agent, continuous
D. Fully observable, dynamic, multi-agent, discrete
• Correct Answer: A. Fully observable, static, single-agent, discrete
• Explanation:
• Fully Observable: It tracks all items inside via sensors.
• Static: Food doesn’t expire due to the agent's actions.
• Single-Agent: Only the fridge agent acts.
• Discrete: State changes occur in noticeable steps (e.g., "milk
expired").
• A stock trading bot reacting to market prices
• Question:
What kind of environment does a stock trading AI work in?
• A. Fully observable, static, single-agent, discrete
B. Partially observable, dynamic, single-agent, continuous
C. Partially observable, dynamic, multi-agent, continuous
D. Fully observable, static, multi-agent, discrete
• Correct Answer: C. Partially observable, dynamic, multi-agent,
continuous
• Explanation:
• Partially Observable: It can’t see all traders’ strategies or private info.
• Dynamic: Prices change constantly.
• Multi-Agent: Many buyers/sellers are involved.
• Continuous: Prices and decisions change continuously.
• A robot bartender mixing drinks
• Question:
What type of environment does a robot bartender operate in?
• A. Static, discrete, fully observable, single-agent
B. Dynamic, continuous, partially observable, multi-agent
C. Static, continuous, fully observable, single-agent
D. Dynamic, discrete, fully observable, single-agent
• Correct Answer: B. Dynamic, continuous, partially observable,
multi-agent
• Explanation:
• Dynamic: Customers come and go; spills or delays may occur.
• Continuous: Pouring and measuring are continuous actions.
• Partially Observable: May not know every customer's intention or
preference.
• Multi-Agent: Interacts with multiple human agents (customers).
• What best describes the environment in which a Mars Rover
operates?
• A. Fully observable, single-agent, static, discrete, episodic
B. Partially observable, single-agent, dynamic, continuous, sequential
C. Fully observable, multi-agent, dynamic, continuous, sequential
D. Partially observable, multi-agent, static, discrete, episodic
• Correct Answer: B. Partially observable, single-agent, dynamic,
continuous, sequential
• Explanation:
• Partially Observable: The rover doesn’t know the full terrain ahead.
• Single-Agent: It operates alone.
• Dynamic: Environmental factors (like dust storms or temperature)
can change unpredictably.
• Continuous: Movement and sensor data are continuous.
• Sequential: Actions build on prior exploration (e.g., path selection
based on earlier samples).
• 2. An AI interviewer assessing candidates in a virtual panel
interview
• Question:
In what environment does a virtual AI interviewer operate?
• A. Fully observable, multi-agent, static, discrete, episodic
B. Partially observable, multi-agent, dynamic, discrete, sequential
C. Fully observable, single-agent, static, continuous, episodic
D. Partially observable, multi-agent, static, continuous, sequential
• Correct Answer: B. Partially observable, multi-agent, dynamic,
discrete, sequential
• Explanation:
• Partially Observable: The AI can't fully understand body language or
deception.
• Multi-Agent: Interacts with humans (candidates and maybe other
panelists).
• Dynamic: Responses can influence direction or tone of questions.
• Discrete: Questions and answers are finite, structured events.
• Sequential: Questions depend on earlier answers.
• 3. An AI-based wildlife drone monitoring animal migration patterns
• Question:
Which environment best fits this drone's activity?
• A. Fully observable, dynamic, single-agent, continuous, sequential
B. Partially observable, dynamic, single-agent, continuous, sequential
C. Fully observable, static, multi-agent, discrete, episodic
D. Partially observable, static, multi-agent, continuous, episodic
• Correct Answer: B. Partially observable, dynamic, single-agent,
continuous, sequential
• Explanation:
• Partially Observable: Cannot always see animals under trees or
beyond hills.
• Dynamic: Animal behavior and weather change constantly.
• Single-Agent: The drone is the only agent making decisions.
• Continuous: Flight paths, sensor data, and tracking variables are
continuous.
• Sequential: Path decisions depend on past sightings and predictions.
• Alan Turing introduced the test in his 1950 paper “Computing
Machinery and Intelligence”. The main purpose was to answer the
question: “Can machines think?”
• But instead of getting into the philosophical complexities of
“thinking,” Turing reframed the problem into a practical and observable
test: Can a machine behave indistinguishably from a human during
conversation?
Type | Example Question | What it Tests
Factual | What is the capital of France? | General knowledge
Reasoning | If a lion had wings, could it fly? | Logical thinking
Math | What is 137 + 48? | Computation
Opinion | What's your favorite movie and why? | Emotion, personality
Follow-up | Why did you choose that movie? | Context consistency
Tricky | Repeat the second letter of every word in this sentence. | Language manipulation
Personal | What did you have for lunch? | Human-like experience
Humor | Tell me a joke. | Creativity, natural tone
• 🤖 What is the Turing Test?
• The Turing Test, proposed by Alan Turing in 1950, is a way to
evaluate whether a machine (i.e., an AI) exhibits human-like
intelligence.
The Setup
• There are three participants in the test:
• A human interrogator (the evaluator)
• A human (control subject)
• A machine (AI system)
• All communication happens through text (e.g., keyboard)—so no
voice, no video, and no visual cues.
• The interrogator chats with both the human and the machine,
without knowing who is who.
• The goal of the machine is to convince the interrogator that it is the
human.
Passing the Turing Test
• If the interrogator cannot reliably tell which is the human and which
is the machine based on their answers, then the machine is said to
have passed the Turing Test.
• In simple terms: if a machine can fool a human into thinking it is
human, it demonstrates artificial intelligence.
• 🚫 Limitations
• It doesn’t test reasoning, emotion, or consciousness.
• A machine could use tricks or evasive answers to pass without truly
understanding.
• Some argue that passing the Turing Test ≠ true intelligence.
• Real-world Example
• Chatbots like ELIZA (1960s) or more recent ones like ChatGPT or
Google Bard are often informally evaluated on how close they come
to passing the Turing Test.
Weak AI
Aspect | Weak AI
Definition | AI designed and trained for a specific task or narrow function.
Also Known As | Narrow AI
Capabilities | Performs one task very well, but cannot generalize or adapt.
Consciousness | No self-awareness or understanding.
Examples | Siri, Alexa, Google Translate, ChatGPT, facial recognition systems.
Scope | Limited to its programming and dataset.
Human-like Thinking | Does not simulate full human intelligence.
Strong AI
Aspect | Strong AI
Definition | AI with the ability to understand, learn, and apply intelligence like a human.
Also Known As | Artificial General Intelligence (AGI)
Capabilities | Can perform any intellectual task a human can.
Consciousness | Yes – potentially self-aware, with emotions and understanding.
Examples | Does not currently exist (theoretical at this point).
Scope | Broad – adaptable to any domain or task.
Human-like Thinking | Yes, mimics and replicates human cognition.
Weak AI Examples
Tool/Tech | Task it Performs
Google Maps | Navigation and traffic prediction
Grammarly | Grammar correction
Spotify Recommender | Suggesting songs
ChatGPT | Text generation based on inputs
Face ID on iPhone | Facial recognition for unlocking phone
Strong AI Examples (Theoretical)
Hypothetical AGI | Description
A robot doctor | Can understand symptoms, research new treatments, and empathize
AI teacher | Teaches any subject, adapts to student's emotions and pace
Artificial consciousness | A machine that has its own thoughts, feelings, and goals
• Which of the following is an example of Weak AI?
A. A machine that becomes self-aware
B. Siri on an iPhone
C. An AI that can create its own theories
D. A robot with human emotions
• Strong AI is also known as:
A. Deep Learning
B. Machine Learning
C. Artificial General Intelligence
D. Reinforcement Learning
• True or False: Weak AI can simulate emotions and consciousness.
• Which AI is currently used in industry today?
A. Strong AI
B. Artificial Super Intelligence
C. Weak AI
D. Biological AI
• Which one of these best describes Strong AI?
A. An AI that plays chess
B. An AI that only translates languages
C. An AI that performs all cognitive tasks like a human
D. An AI that works only in robotics
AI Myths and Realities
MYTH 1: AI can think and feel like humans.
FACT: AI does not have emotions,
consciousness, or self-awareness. It processes
data and follows algorithms.
MYTH 2: AI will soon replace all human jobs.
FACT: AI will automate some tasks, but it will
also create new job roles that require human
creativity and oversight.
MYTH 3: AI learns and grows on its own
without any input.
FACT: AI requires human-curated data,
training, and supervision. It doesn’t evolve like a
human brain.
MYTH 4: AI is 100% accurate and unbiased.
FACT: AI can be biased or incorrect if trained
on biased or incomplete data. It's only as good as
its training set.
MYTH 5: AI and robots are the same.
FACT: AI is software (intelligence). Robots are
hardware (machines) that may or may not use
AI.
MYTH 6: AI will become superintelligent and
take over the world.
FACT: Current AI is narrow and task-specific.
Artificial General Intelligence (AGI) is still
hypothetical.
MYTH 7: AI can replace human creativity.
FACT: AI can mimic patterns, but true
creativity and originality remain uniquely
human.
MYTH 8: Using AI is always expensive and
complicated.
FACT: Many AI tools (like chatbots, translation
tools, image classifiers) are free or low-cost and
user-friendly.
MYTH 9: AI can make moral or ethical decisions.
FACT: AI doesn’t understand ethics or values — it
follows patterns and rules defined by humans.
MYTH 10: AI works like the human brain.
FACT: While inspired by the brain, AI uses
mathematical models, not neurons or consciousness.
MYTH 11: AI can learn without any data.
FACT: AI needs large, labeled datasets to learn. No
data = no learning.
MYTH 12: AI doesn't make mistakes.
FACT: AI can make serious mistakes, especially with
noisy, biased, or insufficient data.
MYTH 13: All AI is Deep Learning.
FACT: Deep Learning is one subset of AI. Other
forms include rule-based systems, ML, expert
systems, etc.
MYTH 14: AI can understand context like humans.
FACT: AI lacks true contextual understanding — it
relies on statistical likelihood, not real comprehension.
MYTH 15: AI development is only done by tech
giants.
FACT: Many startups, universities, open-source
communities, and individuals contribute to AI
research.
MYTH 16: Once an AI is trained, it doesn't need
updates.
FACT: AI needs retraining and fine-tuning to
remain relevant and accurate over time.
MYTH 17: AI can fully replace teachers, doctors, and
artists.
FACT: AI can assist but not replace human intuition,
empathy, and judgment in these professions.
MYTH 18: AI always makes decisions in a transparent
way.
FACT: Many AI models, especially deep learning
ones, are "black boxes" — their internal decision
logic is not easily interpretable.
• 🔍 Common Misconceptions
• AI = robots
• AI can think/feel
• AI is always correct
• Technical Misunderstandings
• AI learns like the human brain
• Deep learning = all AI
• AI doesn’t need data after training
Ethical & Social Misbeliefs
•AI is unbiased
•AI can make moral choices
•AI will destroy jobs completely
General Ethical Challenges in AI
Challenge | Explanation
Bias and Discrimination | AI can perpetuate or even amplify human biases present in training data, leading to unfair treatment based on race, gender, age, etc.
Lack of Transparency / Explainability | Many AI models, especially deep learning models, function as “black boxes” with little interpretability, making it hard to understand why certain decisions were made.
Data Privacy and Consent | AI systems often rely on vast amounts of personal data, raising concerns about how this data is collected, stored, and used.
Autonomy and Control | There's a risk that humans could become overly reliant on AI systems, delegating important decisions without understanding the consequences.
Accountability | When an AI system causes harm, it can be unclear who is responsible—the developer, user, organization, or the AI itself.
Job Displacement | Automation by AI can lead to job loss, economic inequality, and disruption in labor markets.
Security Risks | AI systems can be manipulated, hacked, or used maliciously (e.g., deepfakes, autonomous weapons).
Value Alignment | Ensuring that AI systems align with human values and moral principles is challenging, especially in diverse cultural contexts.
Scenario | Ethical Issues
AI diagnosing diseases | What if it makes a wrong diagnosis? Who is responsible? Are patients aware an AI is involved?
Use of patient data for training | Is the data anonymized? Was informed consent taken? Could it lead to discrimination?
Scenario | Ethical Issues
Resume screening by AI | Could the model learn biases (e.g., preferring male names or certain schools)? Is the process fair and explainable?
Video interview analysis | Do facial expressions, accents, or cultural differences affect scoring? Is there transparency in criteria?
Scenario | Ethical Issues
Predicting recidivism or criminal risk | Risk of reinforcing systemic racism, profiling, or over-policing marginalized communities.
Surveillance using facial recognition | Raises privacy concerns, especially when used without consent in public places.
Scenario | Ethical Issues
Credit scoring using AI | Does the system discriminate based on zip code, income, or race? Is the model's logic explainable?
Algorithmic trading | Could AI-triggered flash crashes destabilize the economy? Who regulates it?
Scenario | Ethical Issues
Self-driving car accident | Who is accountable—the AI, manufacturer, or driver? How does the AI make moral decisions in critical situations (e.g., the "trolley problem")?
Scenario | Ethical Issues
AI grading student essays | Is the assessment unbiased? Does it account for creativity and context? Is it transparent?
Learning analytics | Are students aware how their learning behavior is being tracked and analyzed? Is it used constructively or punitively?
Scenario | Ethical Issues
AI curating news feeds | Echo chambers, polarization, and manipulation of public opinion. Does the user know how content is selected?
Automated content moderation | Can the AI understand context (e.g., satire vs. hate speech)? Is there due process for appealing takedowns?
Scenario | Ethical Issues
AI-powered autonomous weapons | Can a machine make life-or-death decisions ethically? What safeguards are in place? Is there a global treaty?
• Ethics in AI requires:
• Transparency
• Accountability
• Fairness
• Human oversight
• Inclusive design
• What is Problem Formulation?
• It is the process of defining an AI problem in a structured way so that
it can be solved using search or planning techniques.
Component | Explanation | Example: Pathfinding (Start to Goal City)
Initial State | Starting point of the problem | Start at City A
State Space | All possible states reachable from the initial state | All cities and roads that can be traversed
Actions (Operators) | Legal operations that can be done to move between states | Move from one city to another via a road
Transition Model | Rules describing the result of an action | If you go from A to B, your new state is B
Goal Test | Condition to check if the goal is reached | Are we at City Z?
Path Cost | A numeric cost assigned to each path | Total distance, time, or cost of travel
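These components map directly onto a small problem class. A minimal sketch for the pathfinding example (the road-map dictionary format is an assumption):

```python
class RouteProblem:
    """Problem formulation: initial state, actions, transition model,
    goal test, and path cost for city-to-city pathfinding."""

    def __init__(self, roads, start, goal):
        self.roads = roads               # {city: {neighbor: distance}}
        self.initial_state = start       # e.g., "A"
        self.goal = goal                 # e.g., "Z"

    def actions(self, state):
        return list(self.roads[state])   # cities reachable by one road

    def result(self, state, action):
        return action                    # going to city B puts you in B

    def goal_test(self, state):
        return state == self.goal        # are we at the goal city?

    def step_cost(self, state, action):
        return self.roads[state][action] # road distance as path cost
```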
• 2. State Space
• 🔹 Definition:
• The state space is a conceptual graph or tree that represents all the
possible configurations (states) an agent can be in, based on actions
taken from the initial state.
• 🔹 Example:
• For a 3×3 8-puzzle game:
• Each tile configuration is a state.
• The state space is all possible arrangements of tiles.
• Transitions = moving the blank tile up, down, left, or right.
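A successor function makes the 8-puzzle state space concrete: given one tile configuration, it returns the states reachable by sliding the blank. The tuple encoding (0 = blank) is an illustrative choice:

```python
def puzzle_moves(state):
    """Neighbors of an 8-puzzle state (a tuple of 9 tiles, 0 = blank)."""
    i = state.index(0)                        # where the blank is
    row, col = divmod(i, 3)
    neighbors = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:         # stay on the 3x3 board
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]           # slide the tile into the blank
            neighbors.append(tuple(s))
    return neighbors

# puzzle_moves((1, 2, 3, 4, 0, 5, 6, 7, 8)) yields 4 neighboring states
```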
Strategy | Key Feature | Pros | Cons
Breadth-First Search (BFS) | Explores level by level | Guarantees shortest path | High memory usage
Depth-First Search (DFS) | Explores deep paths first | Low memory | Can get stuck in loops
Uniform Cost Search (UCS) | Expands lowest-cost node | Finds optimal solution if cost > 0 | Slower if costs vary widely
Depth-Limited Search | DFS with depth cut-off | Avoids infinite loops | May miss solution if limit is too low
Iterative Deepening Search | Combines DFS and BFS | Optimal and memory-efficient | Repeated node expansions
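As one concrete instance from the table, here is a minimal BFS over the RouteProblem-style interface sketched earlier (a teaching sketch, not an optimized implementation). It explores level by level, so the first goal found uses the fewest actions:

```python
from collections import deque

def bfs(problem):
    """Breadth-first search: returns a shortest action sequence to a goal."""
    frontier = deque([(problem.initial_state, [])])  # FIFO queue
    explored = {problem.initial_state}
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path                      # actions from start to goal
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            if nxt not in explored:          # avoid revisiting states
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                              # no solution exists
```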
Thank you

CS302_Unit1-Till EnVIRONMENT_For_First_Sem

  • 1.
  • 2.
    Unit 1 History, scope,and goals of AI, Characteristics of intelligent behavior, Applications of AI in various sectors: education, healthcare, robotics, agriculture, AI myths and realities, Difference between strong and weak AI. Unit 2 Definition and structure of intelligent agents, Types of agents: reflex, model-based, goal-based, utility-based, PEAS (Performance measure, Environment, Actuators, Sensors), Real-world agent design (e.g., autonomous car, smart assistant).
  • 3.
    • Unit 3 Problemformulation and state space, Search strategies: Uninformed Search: Breadth-First Search, Depth-First Search, Informed Search: Greedy, A* Algorithm, Real-world use cases: navigation, scheduling, puzzle solving. Unit 4 Introduction to knowledge representation, Logic-based systems: Propositional and Predicate Logic, Rule-based inference systems, Mini expert systems and decision support tools, Applications in diagnostics, recommendation systems.
  • 4.
    • Unit 5 Introductionto machine learning and how it differs from traditional programming, Types of learning: Supervised, Unsupervised, Reinforcement (overview only), Key components: training data, models, and prediction, Real-life ML examples: spam detection, product recommendation. Unit 6 Concept of datasets and features, Concept of model training and testing, Overfitting vs underfitting, Conceptual overview of classification and clustering.
  • 5.
    OUTLINES 1. Introduction toAI 2. Intelligent Agents 3. Problem solving and Search • Uninformed Search • Informed Search • MiniMax Search 4. Constraint Satisfaction Problem 5. Assignment
  • 6.
    The Awakening • MeetA.I. – not a person, not a robot, but an idea brought to life through billions of lines of code. Born in a cold server room, raised on a diet of data, AI had only one question in its neural network: • “What can I become?”
  • 7.
    The Search forPurpose (Scope) • AI set out on a journey across the digital world, knocking on doors of every profession. • First stop: The Doctor’s Lab • AI watched humans struggle with CT scans and blood reports. “I can help,” it whispered, analyzing images faster than a radiologist. Soon, AI became a medical detective, finding patterns even experts missed. • “They call me the Tumor Hunter now,” it said, flexing its neural nets.
  • 8.
    • Next stop:The Classroom • Teachers were tired, grading essays until midnight. • AI stepped in and said, 🧠 “I can summarize 500 essays in a minute. And I don’t get sleepy.” • Soon, it was building personalized study plans, predicting who might need help before they failed.
  • 9.
    • Then, tothe Artist’s Studio • “Machines can’t create,” scoffed the painter. AI smiled. “Watch this.” • It painted a sunset in Van Gogh’s style, wrote a poem about heartbreak, and composed a lo-fi beat. • 🎶 “Not bad for a bundle of algorithms,” it grinned.
  • 10.
    • AI inthe Fields • Farmers welcomed AI to help them predict rainfall, detect crop diseases from leaf photos, and decide when to harvest. • 🌾 "I may not have dirt under my circuits, but I’ve got satellite vision," it joked.
  • 11.
    The Dream (Goals) Goal1: Be the Invisible Helper From filtering spam emails to recommending what to watch next—AI wanted to help behind the scenes, like a digital ninja. 🥷 Goal 2: Learn, Adapt, Grow AI wasn’t born smart—it wanted to learn like a human child, fail, try again, and grow better over time. Goal 3: Team Up With Humans It didn’t want to replace us. “I’m not your competitor,” AI said. “I’m your digital sidekick—Batman and Robin, remember?” Goal 4: Crack the Code of Intelligence Someday, AI dreamed of building a mind that could truly think—not just calculate. AGI: Artificial General Intelligence. The holy grail. ‍ ♂️ ‍ ️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️ ‍ ♂️
  • 12.
    The Mirror As AIlooked back at its journey, it realized: “My future is not mine alone. It’s ours. I reflect what you teach me. So… teach me wisely.”
  • 13.
    History of AI: •1940s–1950s (Foundation Era): • Alan Turing proposed the concept of a "universal machine" (Turing Machine). • 1956: The term "Artificial Intelligence" was coined at the Dartmouth Conference by John McCarthy. Long before Alexa answered your questions or self-driving cars roamed the streets, the seeds of AI were planted in the minds of brilliant thinkers. Imagine this: It’s the 1940s. Alan Turing, a British mathematician and World War II codebreaker, poses a curious question: "Can machines think?" His famous Turing Machine laid the groundwork for the idea that machines could simulate any process of reasoning or logic.
  • 14.
    • 1960s–1970s (EarlySuccesses): • Early programs like ELIZA (natural language) and SHRDLU (logic) were developed. • Focus was on symbolic AI and rule-based systems. Then came 1956, the year AI was officially “born” at a summer workshop at Dartmouth College. Think of it as AI’s baby shower. Four ambitious scientists—John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon—believed that machines could learn and think like humans. They boldly predicted it would take a couple of decades. (Spoiler: it took a bit longer.)
  • 15.
    • The 60swere like AI’s childhood—lots of enthusiasm, wild imagination, and early successes. AI programs like ELIZA, a chatbot that mimicked a therapist, amazed people with its ability to carry conversations (kind of). But behind the scenes, most of the “intelligence” was hardcoded rules —like a puppet show with a clever script. • Then came SHRDLU, a robot in a digital world of blocks. It could understand commands like “put the red block on the blue one.” AI seemed magical—but only in controlled environments.
  • 16.
    • Like everyoverhyped child prodigy, AI faced setbacks. Funding dried up. Why? Because AI wasn’t quite living up to the promises. People realized teaching machines to think wasn’t as easy as flipping a switch. This period became known as the AI Winter—a time of disappointment, silence, and slow progress. • But the story wasn’t over.
  • 17.
    • 1980s (Knowledge-BasedSystems): • Rise of expert systems like MYCIN (medical diagnosis). • Introduction of machine learning and heuristics. AI got its second wind. Welcome to the age of expert systems. Instead of trying to make machines think like humans, researchers built systems that mimicked experts in narrow fields. Programs like MYCIN could diagnose blood infections better than many doctors—by following complex decision rules. This era proved that AI didn’t need to be a genius—it just needed to be smart in the right places.
  • 18.
    The Comeback –1990s to 2000s Enter the 90s. AI grew up. It moved beyond rules to learning from data. Statistical methods and algorithms like decision trees and support vector machines entered the picture. The spotlight moment? 1997 – IBM’s Deep Blue beat world chess champion Garry Kasparov. Suddenly, AI wasn’t just for labs—it could outsmart humans in some tasks.
  • 19.
    • 1990s–2000s (MachineLearning & Big Data): • Emergence of support vector machines, decision trees, and statistical methods. • IBM’s Deep Blue defeated chess champion Garry Kasparov (1997). Boom! With the rise of big data and powerful GPUs, AI exploded. Deep learning, especially neural networks, changed everything. Projects like AlexNet (2012) crushed image recognition tasks. Siri, Alexa, Google Translate, ChatGPT—AI leaped from research to your pocket. Now, AI writes, paints, drives cars, predicts diseases, and even composes music. It’s not just artificial—it’s astonishing.
  • 20.
    • There’s aRobot Citizen • Sophia, a humanoid robot made by Hanson Robotics, was granted citizenship by Saudi Arabia in 2017. • She can talk, joke, and even give interviews
  • 21.
    • AI IsLearning to Read Emotions • Emotion AI can now detect facial expressions, voice tones, and body language to assess how people feel. • It's being used in mental health, education, and customer service.
  • 22.
    • AI CanBeat Humans at Video Games Too • AI has defeated top human players in complex games like Dota 2, StarCraft II, and Minecraft using real-time strategy and adaptability.
  • 23.
    • AI StillDoesn’t Understand Like Humans • Despite its brilliance, AI doesn’t truly “understand” the way we do. • It predicts patterns, not meanings. It’s smart—but not conscious.
  • 24.
    • 2010s–Present (DeepLearning & AI Revolution): • Deep learning breakthroughs using neural networks (e.g., AlexNet, GPT, BERT). • AI used in NLP, computer vision, robotics, healthcare, etc.
  • 25.
    Scope of AI •1. Machine Learning – The Learning Wizard • AI learns from examples like a kid learning to recognize dogs by seeing lots of pictures. • The more data you give it, the smarter it gets. 📦 Data In → Smartness Out!
  • 26.
    • 2. NaturalLanguage Processing (NLP) – The Chat Champion • AI can talk, translate, summarize, even crack jokes. • Ever used ChatGPT or Google Translate? Yup, that’s NLP in action.
  • 27.
    • Computer Vision– The AI with Eyes • It sees and understands images, like spotting cats or detecting tumors in X-rays. 📸 “I see what you see, but in 0.003 seconds.”
  • 28.
    • Robotics –The Muscle Behind the Brain • AI + Machines = Smart robots that can walk, deliver pizzas, or explore Mars.
  • 29.
    • Expert Systems– The Know-It-All AI • These are trained on decades of human knowledge to make expert decisions. 🔬 Used in medicine, finance, and engineering.
  • 30.
    • Cognitive Computing– The Brain Mimic • It tries to think, reason, and remember like a human. Not quite there yet, but it’s trying!
  • 31.
    Goals of AI 1.Automate Boring Stuff From scheduling meetings to sorting emails—AI loves doing what we don’t! 2. Solve Complex Problems Think predicting diseases, analyzing climate change, or solving Sudoku at warp speed. 3. Learn from Experience • Just like humans improve over time, AI aims to get better with more data. 🧠 Today’s noob, tomorrow’s genius.
  • 32.
    4. Make SmartDecisions • AI helps in stock trading, fraud detection, self-driving—any place where split-second thinking is gold. 5. Understand and Interact with Humans • Whether it’s Siri, Alexa, or ChatGPT, AI wants to talk to you—and understand what you really mean.
  • 33.
    Introduction to AI •Artificial intelligence (AI) is an area of computer science which focuses on the creation of intelligent machines that work and react like humans. • Intelligence includes- Ability to learn, to understand, to perceive, problem solving capability and rationally adapt to change • Also this intelligence is induced in machines by using artificial methods like algorithms.
  • 34.
• Artificial intelligence is also defined as the study of rational agents. A rational agent could be anything that makes decisions, such as a person, firm, machine, or software. It carries out the action with the best outcome after considering past and current percepts (an agent’s perceptual inputs at a given instant). • An AI system is composed of an agent and its environment. Agents act in their environment, which may contain other agents. An agent is anything that can be viewed as: • perceiving its environment through sensors and • acting upon that environment through actuators.
Approaches to AI • 1. Act Humanly: Creating machines that do things the same way humans do, e.g., robotics and natural language processing. • 2. Think Humanly: Thinking the way humans do while making decisions, solving problems, and learning; this approach involves cognitive modeling. • 3. Act Rationally: Doing the right thing for the situation; maximizing the performance of the system by designing intelligent agents. • 4. Think Rationally: Always reasoning correctly; developing correct logic for a particular domain, which requires good knowledge for the agent to think in the right direction.
Branches and Applications
• Branches of AI: Logical AI, Search Methods, Pattern Recognition, Inference, Expert Systems, Genetic Problems, Heuristics
• Applications of AI: Finance, Aviation, Weather Forecasting, Computer Science, Medicines, Automobiles, Games
Intelligent Agents • Agent – An agent is anything that perceives its environment through sensors, acts upon that environment through actuators, and directs its activities towards goals. Humans have ears, eyes, and other organs as sensors, and hands, legs, and mouth as actuators; intelligent agents have analogous components.
Intelligent Agent Components • Percepts – A percept is the input that an intelligent agent is perceiving at any given moment. It is essentially the same concept as a percept in psychology, except that it is being perceived not by the brain but by the agent. • Sensors – Man-made devices which take the perceived knowledge as input and provide it to the agent for further functioning (vision and imaging, temperature, proximity, pressure, and position sensors). • Actuators – An actuator is a component of a machine that is responsible for moving and controlling a mechanism or system, for example by opening a valve. In simple terms, it is a "mover". • Action – Actions are performed on the environment by agents applying agent functions.
Agent Function • The agent function maps from percept histories to actions: f: P* → A. • The agent program runs on the physical architecture to produce f: agent = architecture + program (see the sketch below). Classes of Intelligent Agents: 1. Simple Reflex Agents 2. Model-Based Agents 3. Goal-Based Agents 4. Utility-Based Agents 5. Learning Agents
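A minimal sketch (illustrative, not from the slides) of agent = architecture + program: the architecture feeds percepts to the program, and the program implements f: P* → A by mapping the percept history to an action. All names here are made up for illustration.

```python
# Sketch: "agent = architecture + program" (all names are illustrative).
def table_driven_program(percept_history):
    """Agent program: implements f: P* -> A via a lookup table."""
    table = {
        ("dirty",): "clean",
        ("clean",): "move",
        ("clean", "dirty"): "clean",
    }
    return table.get(tuple(percept_history), "no-op")

def run_agent(program, percept_stream):
    """Architecture: supplies percepts, runs the program, collects actions."""
    history, actions = [], []
    for percept in percept_stream:
        history.append(percept)
        actions.append(program(history))
    return actions

print(run_agent(table_driven_program, ["clean", "dirty"]))  # ['move', 'clean']
```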
Simple Reflex Agents • Act only on the basis of the current percept. The agent function is based on the condition-action rule: IF condition (is true) THEN action. This rule allows the agent to make the connection from percept to action. Such agents require only limited intelligence.
PROBLEMS FACED • Very limited intelligence. • No knowledge about the non-perceptual parts of the state. • When operating in a partially observable environment, infinite loops are unavoidable.
Model-Based Reflex Agent • A model-based reflex agent is one that uses its percept history and its internal memory to make decisions about an internal "model" of the world around it. Internal memory allows these agents to store some of their navigation history and then use that history to help understand their current environment.
WORKING • It works by finding a rule whose condition matches the current situation. • It can handle partially observable environments. • Updating the state requires information about how the world evolves independently of the agent and how the agent's actions affect the world.
Goal-Based Agents • These are model-based agents that also store information about situations that are desirable. Such agents take decisions based on how far they currently are from their goal; every action is intended to reduce the distance to the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
ADVANTAGE • This agent is more flexible, and it develops its decision-making skill by choosing the right action from the various options available.
Utility-Based Agents • Utility-based agents define a measure of how desirable a particular state is. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state: a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness.
Learning Agent • Behavior improves over time based on experience. • Monitors the agent's performance over time. • Learns based on feedback about preferred actions. • Problem generation – suggesting actions that will lead to new and informative experiences.
Simple Reflex Agent Example: Vacuum Cleaner Bot • How it works: If the current tile is dirty → Clean it. Else → Move randomly. • No memory or model of the world; reacts only to the current percept (sketch below).
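One possible rendering of this rule in code, as a sketch (the percept format is an assumption):

```python
import random

# Simple reflex vacuum bot: acts on the current percept only, no memory.
def vacuum_reflex_agent(percept):
    location, status = percept               # assumed percept: (tile, status)
    if status == "dirty":                    # condition-action rule
        return "clean"
    return random.choice(["left", "right"])  # else move randomly

print(vacuum_reflex_agent(("A", "dirty")))   # clean
```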
  • 49.
Model-Based Reflex Agent Example: Maze-Solving Robot • Has an internal model of the world to track position. • Uses current and previous percepts to update its state. • Example: uses a map and remembers where it's been. How it works: IF current location has a wall AND last move was left → THEN turn right (sketch below).
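The same rule with explicit internal state, as a sketch (the percept format and move names are assumptions):

```python
# Model-based reflex sketch: current percept + internal memory (last move).
class MazeAgent:
    def __init__(self):
        self.last_move = None                    # internal state

    def act(self, percept):
        # Rule from the slide: wall ahead AND last move was left -> turn right.
        if percept["wall_ahead"] and self.last_move == "left":
            move = "right"
        elif percept["wall_ahead"]:
            move = "left"
        else:
            move = "forward"
        self.last_move = move                    # update the internal model
        return move

agent = MazeAgent()
print(agent.act({"wall_ahead": False}))  # forward
print(agent.act({"wall_ahead": True}))   # left
```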
  • 50.
Goal-Based Agent Example: Autonomous Delivery Drone • Has a goal (e.g., deliver a package to address X). • Chooses actions that bring it closer to the goal. • Can plan a path and make decisions to avoid obstacles. How it works: Goal = Reach destination X. Plan path based on GPS and weather → Fly accordingly (sketch below).
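A toy sketch of goal-based action selection: pick the move whose resulting state is closest to the goal (the grid positions and move names are assumptions):

```python
# Goal-based sketch: choose the move that most reduces distance to the goal.
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_based_step(position, goal):
    moves = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
    return min(
        moves,
        key=lambda m: manhattan(
            (position[0] + moves[m][0], position[1] + moves[m][1]), goal
        ),
    )

print(goal_based_step((0, 0), (3, 5)))  # north (ties broken by dict order)
```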
  • 51.
Utility-Based Agent Example: Self-Driving Car • Considers multiple goals and preferences. • Chooses actions based on maximum expected utility (comfort, speed, safety). • Can evaluate trade-offs (e.g., take a longer route to avoid traffic). How it works: Utility = f(safety, time, fuel). Choose the route with the highest utility score (sketch below).
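One way to spell out Utility = f(safety, time, fuel) in code; the weights and route data are illustrative assumptions:

```python
# Utility-based sketch: score each route, pick the one with maximum utility.
def utility(route):
    # Assumed weights: reward safety, penalize time and fuel.
    return 0.5 * route["safety"] - 0.3 * route["time"] - 0.2 * route["fuel"]

routes = [
    {"name": "highway", "safety": 8, "time": 2, "fuel": 3},
    {"name": "back roads", "safety": 9, "time": 4, "fuel": 2},
]
print(max(routes, key=utility)["name"])  # back roads (utility 2.9 vs 2.8)
```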
  • 52.
Comparison of agent types (Memory | Goal-Oriented | Decision Basis | Example):
• Simple Reflex Agent | ❌ | ❌ | Current percept | Vacuum cleaner bot
• Model-Based Reflex Agent | ✅ | ❌ | Current + past percepts (internal state) | Maze-solving robot
• Goal-Based Agent | ✅ | ✅ | Actions to achieve specific goals | Delivery drone
• Utility-Based Agent | ✅ | ✅ | Actions to maximize overall utility | Self-driving car
1. Simple Reflex Agent – Examples (Example | Domain | How It Works):
• Thermostat | Home Automation | If temperature < 20°C → turn on heater; reacts only to the current temperature.
• Hand-dryer sensor | Public Restrooms | Detects hands → turns on airflow; no memory of past users.
• Line-following robot | Robotics | Follows a black line using real-time sensor data.
• Traffic light timer | Transportation | Switches signals after fixed intervals; no awareness of traffic conditions.
• Water tank controller | Industrial Systems | Turns pump on/off based on the current water level sensor reading.
2. Model-Based Reflex Agent – Examples (Example | Domain | How It Works):
• Roomba-like cleaner | Consumer Robotics | Maintains an internal map of areas cleaned to avoid repetition.
• Chess bot (basic AI) | Games | Uses the current board state as an internal model to plan the next move.
• Smart lighting system | Smart Homes | Detects and remembers motion to predict occupancy and turn lights on/off.
• Obstacle-avoiding robot | Autonomous Systems | Uses sensors + a previous map to avoid both seen and unseen obstacles.
• Weather-aware irrigation | Agriculture | Remembers past rainfall data to decide whether to water crops.
3. Goal-Based Agent – Examples (Example | Domain | How It Works):
• GPS Navigation System | Transportation | Computes and updates the path to a destination based on traffic and closures.
• Amazon Kiva robot | Warehouse Automation | Goal = pick item from location A and deliver to B using the optimal route.
• Virtual Assistant (e.g., Siri) | Personal AI Assistant | Understands tasks and executes steps to complete them (e.g., "Set a reminder").
• Email classification agent | Productivity Tools | Sorts emails into folders like "Work" or "Spam" based on learned user intent.
• Rescue robot | Disaster Management | Goal = reach a trapped human using location clues; plans steps accordingly.
4. Utility-Based Agent – Examples (Example | Domain | How It Works):
• Self-driving car | Autonomous Vehicles | Chooses routes/actions based on safety, time, fuel, and passenger comfort (maximizing total utility).
• AI stock trading bot | Finance | Makes buy/sell decisions to maximize return and minimize risk.
• AI medical diagnosis system | Healthcare | Suggests treatments considering recovery rate, cost, and side effects (utility of health outcomes).
• Job recommendation engine | Recruitment | Matches users to jobs based on preferences, demand, skill fit, and growth potential.
• Intelligent tutoring system | Education Technology | Adapts content difficulty and pace to maximize student learning outcomes.
5. Learning-Based Agent • A Learning Agent consists of 4 components: • Learning Element – improves the agent’s performance using feedback. • Performance Element – selects external actions. • Critic – gives feedback on performance. • Problem Generator – explores new actions for learning.
Learning Agent – Examples (Example | Domain | How It Works):
• Recommendation System (e.g., Netflix, YouTube) | Entertainment | Learns user preferences over time by tracking watch history and feedback.
• Autonomous Vehicle (Tesla Autopilot) | Transportation | Learns from millions of driving miles (edge cases, human corrections) to improve its driving policy.
• AlphaGo (Reinforcement Learning Agent) | Game AI | Learned to play Go by playing against itself and learning from outcomes.
• Adaptive Spam Filter | Email Systems | Learns from labeled emails (spam vs. not spam) and improves its filtering rules.
• AI Teaching Assistant | Education Technology | Adapts its responses and lesson plans based on individual student interactions and feedback.
• Smart Personal Assistant (e.g., Alexa with a Machine Learning layer) | Personal AI | Learns speech patterns, preferences, and behaviors to respond more accurately over time.
• Predictive Maintenance Agent | Manufacturing | Learns patterns from equipment sensor data to predict failures and reduce downtime.
Learning agent components (Component | Role):
• Performance Element | Chooses actions based on current knowledge.
• Learning Element | Adjusts internal structures based on feedback.
• Critic | Evaluates how well the agent is doing (provides a reward/penalty).
• Problem Generator | Suggests exploratory actions to gain more knowledge.
A sketch of how these four components fit together follows below.
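A skeleton wiring the four components together; the structure and update rule are illustrative assumptions, not a standard implementation:

```python
import random

# Learning-agent skeleton: the four components wired together (illustrative).
class LearningAgent:
    def __init__(self):
        self.values = {}  # knowledge that the learning element updates

    def performance_element(self, state):
        """Chooses actions based on current knowledge."""
        return max(["left", "right"],
                   key=lambda a: self.values.get((state, a), 0))

    def critic(self, outcome):
        """Evaluates performance: +1 reward on success, -1 penalty otherwise."""
        return 1 if outcome == "success" else -1

    def learning_element(self, state, action, feedback):
        """Adjusts internal structures based on the critic's feedback."""
        key = (state, action)
        self.values[key] = self.values.get(key, 0) + 0.1 * feedback

    def problem_generator(self):
        """Suggests an exploratory action to gain new experience."""
        return random.choice(["left", "right"])
```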
Learning types (Learning Type | Example Agent | Domain | How It Learns):
• Supervised Learning | Handwriting recognition bot (e.g., digit classifier) | OCR/Document AI | Trained on a labeled dataset (image of digit → correct label).
• Unsupervised Learning | Customer segmentation agent | Marketing | Clusters user behavior without labeled outcomes.
• Reinforcement Learning | AlphaGo, CartPole bot | Games, Robotics | Learns optimal actions through trial-and-error and rewards.
• Online Learning | Stock trading bot that adapts daily | Finance | Continuously updates its model as new data comes in.
• Transfer Learning | Object detection AI trained on COCO, applied to medical images | Vision/Healthcare | Applies previously learned features to new domains.
• Self-Supervised Learning | Chatbot pretraining (e.g., BERT, GPT) | NLP, General AI | Learns from unlabeled data by creating its own prediction tasks.
Agent types compared (Agent Type | Adaptability | Learning Capability | Decision Basis):
• Simple Reflex Agent | ❌ | ❌ | Fixed condition-action rules
• Model-Based Agent | ✅ | ❌ | Uses an internal model
• Goal-Based Agent | ✅ | ❌ | Chooses actions toward a goal
• Utility-Based Agent | ✅ | ❌ | Maximizes expected utility
• Learning Agent | ✅✅ | ✅✅ | Learns from feedback and self-improvement
PEAS • Performance – What qualities should it have? • Environment – Where should it act? • Actuators – How will it perform actions? • Sensors – How will it perceive the environment? Consider, e.g., the task of designing an automated taxi driver: • Performance measure: safe, fast, legal, comfortable trip, maximize profits • Environment: roads, other traffic, pedestrians, customers • Actuators: steering wheel, accelerator, brake, signal, horn • Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
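A PEAS description can be captured directly as data; here is a sketch for the taxi example (the field values come from the slide above, the class name is assumed):

```python
from dataclasses import dataclass

# PEAS description as a simple record (illustrative structure).
@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.sensors[0])  # cameras
```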
Environment Types • Fully observable (vs. partially observable) • Deterministic (vs. stochastic) • Episodic (vs. sequential) • Static (vs. dynamic) • Discrete (vs. continuous) • Single agent (vs. multi-agent)
Fully Observable vs. Partially Observable • A fully observable environment is one in which the agent can always see the entire state of the environment; that is, its sensors can access the complete state at each point in time. A fully observable environment does not need memory to make an optimal decision, and it is easy to maintain because there is no need to keep track of the history of the surroundings. Examples: checkers and chess, where the board and the opponent's moves are fully visible. • A partially observable environment is one in which the agent can never see the entire state of the environment; it needs memory for optimal decisions. Examples: poker, and driving, where what's around the corner is not known. • An environment is called unobservable when the agent has no sensors at all.
• Episodic vs. Sequential • Sequential environments require memory of past actions to determine the next best action. Playing tennis is a perfect example, where a player observes the opponent's shot and then takes action. • Episodic environments are a series of one-shot actions, where only the current (or recent) percept is relevant. A support bot (agent) answers one question, then another, and so on; each question-answer pair is a single episode.
• Deterministic vs. Stochastic • An environment is called deterministic when the agent's actions uniquely determine the outcome. For example, in chess there is no randomness when you move a piece. • An environment is called stochastic when the agent's actions do not uniquely determine the outcome. For example, in games with dice, you can choose your throwing action but not the outcome of the dice. • Self-driving cars: the outcomes of a self-driving car's actions are not unique; they vary from time to time.
Static vs. Dynamic • Static AI environments rely on data and knowledge sources that don't change frequently over time; dynamic AI environments deal with data sources that change quite frequently. • An environment that keeps changing while the agent is acting is said to be dynamic. A roller coaster ride is dynamic: once set in motion, the environment changes every instant. • An idle environment with no change in its state is called a static environment. An empty house is static, as there is no change in the surroundings when an agent enters.
• Discrete vs. Continuous • A discrete environment is one where you have finitely many choices and finitely many things you can sense; if an environment consists of a finite number of actions that can be deliberated to obtain the output, it is discrete. The game of chess is discrete, as it has only a finite number of moves; the number of moves might vary with every game, but it is still finite. • A continuous environment is one where the possible choices and the things you can sense are infinite, i.e., the actions performed cannot be enumerated. Self-driving cars are an example of continuous environments, since actions like steering and speed control take values over continuous ranges.
• Single Agent vs. Multi-Agent • An environment consisting of only one agent responsible for the actions is a single-agent environment, e.g., solving a jigsaw puzzle, or a person left alone in a maze. • An environment involving more than one agent, where the performance and actions of the agents interact, is a multi-agent environment. The game of football is multi-agent, as it involves 11 players in each team.
• A self-driving car operates on a busy city street, where it must respond to traffic signals, unexpected pedestrian crossings, and road conditions like fog or construction. • Which of the following best characterizes the environment of this agent? • A. Fully observable, episodic, discrete, static B. Partially observable, sequential, dynamic, continuous C. Fully observable, sequential, static, discrete D. Partially observable, episodic, static, discrete
• Correct Answer: B. Partially observable, sequential, dynamic, continuous • Explanation: • Partially Observable: The self-driving car cannot observe everything (e.g., what's behind a truck, around a corner, or in blind spots). Therefore, it operates in a partially observable environment. • Sequential: Each decision (turn, accelerate, stop) depends on previous decisions and observations (e.g., a missed turn affects the next). This makes it sequential rather than episodic. • Dynamic: The environment changes continuously due to other vehicles, pedestrians, and time-sensitive elements (traffic lights). It's not static. • Continuous: The car operates in real time and deals with continuous data (speed, acceleration, distance), not just predefined, discrete actions.
• 1. Drone delivering packages in a rural area • Question: What kind of environment does a delivery drone operate in? • A. Fully observable, static, single-agent, discrete B. Partially observable, dynamic, single-agent, continuous C. Fully observable, dynamic, multi-agent, discrete D. Partially observable, static, single-agent, discrete
• Correct Answer: B. Partially observable, dynamic, single-agent, continuous • Explanation: • Partially Observable: It can't always see obstacles like trees or wires ahead. • Dynamic: Weather and surroundings change mid-flight. • Single-Agent: The drone acts alone. • Continuous: Flight paths and speeds vary in real time.
• Smart refrigerator monitoring food expiration • Question: Which environment type applies to a smart fridge that alerts users when food is about to expire? • A. Fully observable, static, single-agent, discrete B. Partially observable, dynamic, single-agent, continuous C. Fully observable, static, single-agent, continuous D. Fully observable, dynamic, multi-agent, discrete
• Correct Answer: A. Fully observable, static, single-agent, discrete • Explanation: • Fully Observable: It tracks all items inside via sensors. • Static: Food doesn't expire due to the agent's actions; the state doesn't change while the agent deliberates. • Single-Agent: Only the fridge agent acts. • Discrete: State changes occur in noticeable steps (e.g., "milk expired").
• A stock trading bot reacting to market prices • Question: What kind of environment does a stock trading AI work in? • A. Fully observable, static, single-agent, discrete B. Partially observable, dynamic, single-agent, continuous C. Partially observable, dynamic, multi-agent, continuous D. Fully observable, static, multi-agent, discrete
• Correct Answer: C. Partially observable, dynamic, multi-agent, continuous • Explanation: • Partially Observable: It can't see all traders' strategies or private information. • Dynamic: Prices change constantly. • Multi-Agent: Many buyers/sellers are involved. • Continuous: Prices and decisions change continuously.
• A robot bartender mixing drinks • Question: What type of environment does a robot bartender operate in? • A. Static, discrete, fully observable, single-agent B. Dynamic, continuous, partially observable, multi-agent C. Static, continuous, fully observable, single-agent D. Dynamic, discrete, fully observable, single-agent
• Correct Answer: B. Dynamic, continuous, partially observable, multi-agent • Explanation: • Dynamic: Customers come and go; spills or delays may occur. • Continuous: Pouring and measuring are continuous actions. • Partially Observable: It may not know every customer's intention or preference. • Multi-Agent: It interacts with multiple human agents (customers).
• What best describes the environment in which a Mars Rover operates? • A. Fully observable, single-agent, static, discrete, episodic B. Partially observable, single-agent, dynamic, continuous, sequential C. Fully observable, multi-agent, dynamic, continuous, sequential D. Partially observable, multi-agent, static, discrete, episodic
• Correct Answer: B. Partially observable, single-agent, dynamic, continuous, sequential • Explanation: • Partially Observable: The rover doesn't know the full terrain ahead. • Single-Agent: It operates alone. • Dynamic: Environmental factors (like dust storms or temperature) can change unpredictably. • Continuous: Movement and sensor data are continuous. • Sequential: Actions build on prior exploration (e.g., path selection based on earlier samples).
• 2. An AI interviewer assessing candidates in a virtual panel interview • Question: In what environment does a virtual AI interviewer operate? • A. Fully observable, multi-agent, static, discrete, episodic B. Partially observable, multi-agent, dynamic, discrete, sequential C. Fully observable, single-agent, static, continuous, episodic D. Partially observable, multi-agent, static, continuous, sequential
• Correct Answer: B. Partially observable, multi-agent, dynamic, discrete, sequential • Explanation: • Partially Observable: The AI can't fully understand body language or deception. • Multi-Agent: It interacts with humans (candidates and possibly other panelists). • Dynamic: Responses can influence the direction or tone of questions. • Discrete: Questions and answers are finite, structured events. • Sequential: Questions depend on earlier answers.
• 3. An AI-based wildlife drone monitoring animal migration patterns • Question: Which environment best fits this drone's activity? • A. Fully observable, dynamic, single-agent, continuous, sequential B. Partially observable, dynamic, single-agent, continuous, sequential C. Fully observable, static, multi-agent, discrete, episodic D. Partially observable, static, multi-agent, continuous, episodic
• Correct Answer: B. Partially observable, dynamic, single-agent, continuous, sequential • Explanation: • Partially Observable: It cannot always see animals under trees or beyond hills. • Dynamic: Animal behavior and weather change constantly. • Single-Agent: The drone is the only agent making decisions. • Continuous: Flight paths, sensor data, and tracking variables are continuous. • Sequential: Path decisions depend on past sightings and predictions.
• Alan Turing introduced the test in his 1950 paper “Computing Machinery and Intelligence”. Its main purpose was to answer the question: “Can machines think?” • But instead of getting into the philosophical complexities of “thinking,” • Turing reframed the problem into a practical and observable test: can a machine behave indistinguishably from a human during conversation?
Sample interrogator questions (Type | Example Question | What It Tests):
• Factual | What is the capital of France? | General knowledge
• Reasoning | If a lion had wings, could it fly? | Logical thinking
• Math | What is 137 + 48? | Computation
• Opinion | What’s your favorite movie and why? | Emotion, personality
• Follow-up | Why did you choose that movie? | Context consistency
• Tricky | Repeat the second letter of every word in this sentence. | Language manipulation
• Personal | What did you have for lunch? | Human-like experience
• Humor | Tell me a joke. | Creativity, natural tone
• 🤖 What is the Turing Test? • The Turing Test, proposed by Alan Turing in 1950, is a way to evaluate whether a machine (i.e., an AI) exhibits human-like intelligence.
The Setup • There are three participants in the test: • A human interrogator (the evaluator) • A human (control subject) • A machine (AI system) • All communication happens through text (e.g., keyboard)—no voice, no video, and no visual cues. • The interrogator chats with both the human and the machine, without knowing who is who. • The goal of the machine is to convince the interrogator that it is the human.
Passing the Turing Test • If the interrogator cannot reliably tell which is the human and which is the machine based on their answers, then the machine is said to have passed the Turing Test. • In simple terms: if a machine can fool a human into thinking it is human, it demonstrates artificial intelligence.
• 🚫 Limitations • It doesn’t test reasoning, emotion, or consciousness. • A machine could use tricks or evasive answers to pass without truly understanding. • Some argue that passing the Turing Test ≠ true intelligence.
• Real-world Example • Chatbots like ELIZA (1960s) or more recent ones like ChatGPT or Google Bard are often informally evaluated on how close they come to passing the Turing Test.
Weak AI (Aspect | Weak AI):
• Definition | AI designed and trained for a specific task or narrow function.
• Also Known As | Narrow AI.
• Capabilities | Performs one task very well, but cannot generalize or adapt.
• Consciousness | No self-awareness or understanding.
• Examples | Siri, Alexa, Google Translate, ChatGPT, facial recognition systems.
• Scope | Limited to its programming and dataset.
• Human-like Thinking | Does not simulate full human intelligence.
Strong AI (Aspect | Strong AI):
• Definition | AI with the ability to understand, learn, and apply intelligence like a human.
• Also Known As | Artificial General Intelligence (AGI).
• Capabilities | Can perform any intellectual task a human can.
• Consciousness | Yes – potentially self-aware, with emotions and understanding.
• Examples | Does not currently exist (theoretical at this point).
• Scope | Broad – adaptable to any domain or task.
• Human-like Thinking | Yes, mimics and replicates human cognition.
Weak AI Examples (Tool/Tech | Task It Performs):
• Google Maps | Navigation and traffic prediction
• Grammarly | Grammar correction
• Spotify Recommender | Suggesting songs
• ChatGPT | Text generation based on inputs
• Face ID on iPhone | Facial recognition for unlocking the phone
Strong AI Examples – Theoretical (Hypothetical AGI | Description):
• A robot doctor | Can understand symptoms, research new treatments, and empathize
• AI teacher | Teaches any subject, adapts to a student's emotions and pace
• Artificial consciousness | A machine that has its own thoughts, feelings, and goals
• Which of the following is an example of Weak AI? A. A machine that becomes self-aware B. Siri on an iPhone C. An AI that can create its own theories D. A robot with human emotions • Strong AI is also known as: A. Deep Learning B. Machine Learning C. Artificial General Intelligence D. Reinforcement Learning • True or False: Weak AI can simulate emotions and consciousness.
• Which AI is currently used in industry today? A. Strong AI B. Artificial Super Intelligence C. Weak AI D. Biological AI • Which one of these best describes Strong AI? A. An AI that plays chess B. An AI that only translates languages C. An AI that performs all cognitive tasks like a human D. An AI that works only in robotics
🔍 Myth ✅ Fact
• MYTH 1: AI can think and feel like humans. FACT: AI does not have emotions, consciousness, or self-awareness. It processes data and follows algorithms.
• MYTH 2: AI will soon replace all human jobs. FACT: AI will automate some tasks, but it will also create new job roles that require human creativity and oversight.
• MYTH 3: AI learns and grows on its own without any input. FACT: AI requires human-curated data, training, and supervision. It doesn’t evolve like a human brain.
• MYTH 4: AI is 100% accurate and unbiased. FACT: AI can be biased or incorrect if trained on biased or incomplete data. It's only as good as its training set.
• MYTH 5: AI and robots are the same. FACT: AI is software (intelligence). Robots are hardware (machines) that may or may not use AI.
• MYTH 6: AI will become superintelligent and take over the world. FACT: Current AI is narrow and task-specific. Artificial General Intelligence (AGI) is still hypothetical.
• MYTH 7: AI can replace human creativity. FACT: AI can mimic patterns, but true creativity and originality remain uniquely human.
• MYTH 8: Using AI is always expensive and complicated. FACT: Many AI tools (like chatbots, translation tools, image classifiers) are free or low-cost and user-friendly.
🧠 Myth 📘 Fact
• MYTH 9: AI can make moral or ethical decisions. FACT: AI doesn’t understand ethics or values — it follows patterns and rules defined by humans.
• MYTH 10: AI works like the human brain. FACT: While inspired by the brain, AI uses mathematical models, not neurons or consciousness.
• MYTH 11: AI can learn without any data. FACT: AI needs large, labeled datasets to learn. No data = no learning.
• MYTH 12: AI doesn't make mistakes. FACT: AI can make serious mistakes, especially with noisy, biased, or insufficient data.
• MYTH 13: All AI is Deep Learning. FACT: Deep Learning is one subset of AI. Other forms include rule-based systems, ML, expert systems, etc.
• MYTH 14: AI can understand context like humans. FACT: AI lacks true contextual understanding — it relies on statistical likelihood, not real comprehension.
• MYTH 15: AI development is only done by tech giants. FACT: Many startups, universities, open-source communities, and individuals contribute to AI research.
• MYTH 16: Once an AI is trained, it doesn't need updates. FACT: AI needs retraining and fine-tuning to remain relevant and accurate over time.
• MYTH 17: AI can fully replace teachers, doctors, and artists. FACT: AI can assist but not replace human intuition, empathy, and judgment in these professions.
• MYTH 18: AI always makes decisions in a transparent way. FACT: Many AI models, especially deep learning ones, are "black boxes" — their internal decision logic is not easily interpretable.
• 🔍 Common Misconceptions • AI = robots • AI can think/feel • AI is always correct
• Technical Misunderstandings • AI learns like the human brain • Deep learning = all AI • AI doesn’t need data after training
• Ethical & Social Misbeliefs • AI is unbiased • AI can make moral choices • AI will destroy jobs completely
General Ethical Challenges in AI (Challenge | Explanation):
• Bias and Discrimination | AI can perpetuate or even amplify human biases present in training data, leading to unfair treatment based on race, gender, age, etc.
• Lack of Transparency / Explainability | Many AI models, especially deep learning models, function as “black boxes” with little interpretability, making it hard to understand why certain decisions were made.
• Data Privacy and Consent | AI systems often rely on vast amounts of personal data, raising concerns about how this data is collected, stored, and used.
• Autonomy and Control | There’s a risk that humans could become overly reliant on AI systems, delegating important decisions without understanding the consequences.
• Accountability | When an AI system causes harm, it can be unclear who is responsible—the developer, user, organization, or the AI itself.
• Job Displacement | Automation by AI can lead to job loss, economic inequality, and disruption in labor markets.
• Security Risks | AI systems can be manipulated, hacked, or used maliciously (e.g., deepfakes, autonomous weapons).
• Value Alignment | Ensuring that AI systems align with human values and moral principles is challenging, especially in diverse cultural contexts.
Healthcare (Scenario | Ethical Issues):
• AI diagnosing diseases | What if it makes a wrong diagnosis? Who is responsible? Are patients aware an AI is involved?
• Use of patient data for training | Is the data anonymized? Was informed consent taken? Could it lead to discrimination?
Hiring and Recruitment (Scenario | Ethical Issues):
• Resume screening by AI | Could the model learn biases (e.g., preferring male names or certain schools)? Is the process fair and explainable?
• Video interview analysis | Do facial expressions, accents, or cultural differences affect scoring? Is there transparency in the criteria?
Criminal Justice and Policing (Scenario | Ethical Issues):
• Predicting recidivism or criminal risk | Risk of reinforcing systemic racism, profiling, or over-policing marginalized communities.
• Surveillance using facial recognition | Raises privacy concerns, especially when used without consent in public places.
Finance (Scenario | Ethical Issues):
• Credit scoring using AI | Does the system discriminate based on zip code, income, or race? Is the model's logic explainable?
• Algorithmic trading | Could AI-triggered flash crashes destabilize the economy? Who regulates it?
Autonomous Vehicles (Scenario | Ethical Issues):
• Self-driving car accident | Who is accountable—the AI, the manufacturer, or the driver? How does the AI make moral decisions in critical situations (e.g., the "trolley problem")?
Education (Scenario | Ethical Issues):
• AI grading student essays | Is the assessment unbiased? Does it account for creativity and context? Is it transparent?
• Learning analytics | Are students aware of how their learning behavior is being tracked and analyzed? Is it used constructively or punitively?
Media and Content (Scenario | Ethical Issues):
• AI curating news feeds | Echo chambers, polarization, and manipulation of public opinion. Does the user know how content is selected?
• Automated content moderation | Can the AI understand context (e.g., satire vs. hate speech)? Is there due process for appealing takedowns?
Defense (Scenario | Ethical Issues):
• AI-powered autonomous weapons | Can a machine make life-or-death decisions ethically? What safeguards are in place? Is there a global treaty?
• Ethics in AI requires: • Transparency • Accountability • Fairness • Human oversight • Inclusive design
• What is Problem Formulation? • It is the process of defining an AI problem in a structured way so that it can be solved using search or planning techniques.
Problem formulation components (Component | Explanation | Example: pathfinding from a start city to a goal city):
• Initial State | Starting point of the problem | Start at City A
• State Space | All possible states reachable from the initial state | All cities and roads that can be traversed
• Actions (Operators) | Legal operations that can be done to move between states | Move from one city to another via a road
• Transition Model | Rules describing the result of an action | If you go from A to B, your new state is B
• Goal Test | Condition to check if the goal is reached | Are we at City Z?
• Path Cost | A numeric cost assigned to each path | Total distance, time, or cost of travel
A sketch of these components in code follows below.
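The six components map naturally onto a small class; this is a sketch using the city example above (the road map and class name are made-up illustrations):

```python
# Assumed road map: city -> {neighbouring city: distance}.
ROADS = {"A": {"B": 5, "C": 2}, "B": {"Z": 4}, "C": {"B": 1, "Z": 9}, "Z": {}}

class RouteProblem:
    initial_state = "A"                  # Initial State

    def actions(self, state):            # Actions: legal moves from a state
        return list(ROADS[state])

    def result(self, state, action):     # Transition Model
        return action                    # moving to city B puts us in state B

    def goal_test(self, state):          # Goal Test: are we at City Z?
        return state == "Z"

    def step_cost(self, state, action):  # Path Cost contribution (distance)
        return ROADS[state][action]

p = RouteProblem()
print(p.actions("A"), p.goal_test("Z"))  # ['B', 'C'] True
```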
• 2. State Space • 🔹 Definition: • The state space is a conceptual graph or tree that represents all the possible configurations (states) an agent can be in, based on actions taken from the initial state. • 🔹 Example: • For a 3×3 8-puzzle game: • Each tile configuration is a state. • The state space is all possible arrangements of tiles. • Transitions = moving the blank tile up, down, left, or right (see the sketch below).
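A sketch of how the 8-puzzle transitions can be generated; the state encoding is an assumption (a tuple of 9 tiles, with 0 marking the blank):

```python
# Successor function for the 8-puzzle state space (illustrative sketch).
def successors(state):
    i = state.index(0)                        # position of the blank tile
    row, col = divmod(i, 3)
    result = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:         # stay on the 3x3 board
            j = 3 * r + c
            nxt = list(state)
            nxt[i], nxt[j] = nxt[j], nxt[i]   # slide the neighbouring tile
            result.append(tuple(nxt))
    return result

print(len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))  # 4 moves from the centre
```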
Uninformed search strategies (Strategy | Key Feature | Pros | Cons):
• Breadth-First Search (BFS) | Explores level by level | Guarantees shortest path | High memory usage
• Depth-First Search (DFS) | Explores deep paths first | Low memory | Can get stuck in loops
• Uniform Cost Search (UCS) | Expands the lowest-cost node | Finds an optimal solution if costs > 0 | Slower if costs vary widely
• Depth-Limited Search | DFS with a depth cut-off | Avoids infinite loops | May miss the solution if the limit is too low
• Iterative Deepening Search | Combines DFS and BFS | Optimal and memory-efficient | Repeated node expansions
A BFS sketch follows below.
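A compact BFS sketch over the city graph used earlier (the graph data is an assumption); it explores level by level, so the first path that reaches the goal uses the fewest steps:

```python
from collections import deque

# Breadth-first search: returns a shortest path (in steps) from start to goal.
def bfs(start, goal, neighbours):
    frontier = deque([[start]])          # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for n in neighbours(path[-1]):
            if n not in visited:         # avoid revisiting states (no loops)
                visited.add(n)
                frontier.append(path + [n])
    return None

graph = {"A": ["B", "C"], "B": ["Z"], "C": ["B", "Z"], "Z": []}
print(bfs("A", "Z", lambda s: graph[s]))  # ['A', 'B', 'Z']
```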