Nadar Saraswathi College of Arts & Science, Theni.
Department of CS & IT
ARTIFICIAL INTELLIGENCE
PRESENTED BY
G. KAVIYA
M.Sc. (IT)
TOPIC: LEARNING FROM OBSERVATION
LEARNING
LEARNING FROM OBSERVATION
 Forms of learning
 Ensemble learning
 Computational learning theory
LEARNING:
Learning means that an agent's percepts should be used not only for
acting, but also for improving the agent's ability to act in the future.
Learning takes place as the agent observes its interactions with the
world and its own decision-making processes.
FORMS OF LEARNING:
A learning agent can be thought of as containing a Performance
Element, which decides what actions to take, and a Learning Element,
which modifies the performance element so that it makes better decisions.
Three major issues in learning element design:
 Which components of the performance element are to be learned.
 What feedback is available to learn these components.
 What representation is used for the components.
Components of agents are:
 A direct mapping from conditions on the current state to actions.
 A means to infer relevant properties of the world from the percept
sequence.
 Information about the way the world evolves and about the results of
the possible actions the agent can take.
 Utility information indicating the desirability of world states.
 Action-value information indicating the desirability of actions.
 Goals that describe classes of states whose achievement maximizes the
agent's utility.
Learning is classified into three categories:
Supervised Learning.
Unsupervised Learning.
Reinforcement Learning.
Supervised Learning:
Learning here is performed with the help of a teacher. Take the
example of the learning process of a small child.
The child does not know how to read or write. He or she is taught by
the parents at home and by the teacher at school.
The child learns to recognize the alphabet, numerals, etc., and each
and every action is supervised by a teacher.
Continued:
A child actually works on the basis of the output that he or she has
to produce. All these real-time events involve the supervised learning
methodology.
Similarly, in ANNs that follow supervised learning, each input vector
requires a corresponding target vector, which represents the desired
output.
The input vector together with the target vector is called a training
pair.
Continued:
 In this type of training, a supervisor or teacher is required for error
minimization. Hence, a network trained by this method is said to be using
the supervised training methodology.
 In supervised learning, it is assumed that the correct "target" output
values are known for each input pattern.
 The input vector is presented to the network, which produces an output
vector; this is the actual output vector. The actual output vector is then
compared with the desired output vector.
 If there is a difference between the two output vectors, the network
generates an error signal. This error signal is used to adjust the weights
until the actual output matches the desired output.
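
To make this error-driven weight adjustment concrete, here is a minimal sketch in Python (not from the slides): a single-layer network trained on hypothetical training pairs with a delta-rule style update. The data, learning rate, and number of epochs are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical training pairs: each input vector has a corresponding target vector.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # input vectors
T = np.array([[0.], [1.], [1.], [1.]])                    # desired (target) output vectors

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 1))   # weights to be adjusted from the error signal
b = np.zeros(1)
lr = 0.5                                 # learning rate (assumed)

for epoch in range(2000):
    actual = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # actual output vector of the network
    error = T - actual                            # error signal: desired minus actual output
    # Adjust the weights in proportion to the error signal (delta rule).
    grad = error * actual * (1.0 - actual)
    W += lr * X.T @ grad
    b += lr * grad.sum()

print(np.round(actual, 2))   # the actual outputs move toward the desired targets
```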
Unsupervised Learning:
 Learning here is performed without the help of a teacher. Consider the
learning process of a tadpole: it learns to swim by itself; it is not
taught by its mother.
 Thus, its learning process is independent and is not supervised by a
teacher.
 In ANNs that follow unsupervised learning, input vectors of similar
type are grouped together without the use of target vectors.
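
A minimal sketch (not from the slides) of grouping similar input vectors without any target vectors, using a simple k-means style procedure; the data and the number of groups are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical input vectors forming two loose groups; no target vectors are given.
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])

k = 2                                               # assumed number of groups
centers = X[rng.choice(len(X), k, replace=False)]   # initial group centres

for _ in range(10):
    # Assign each input vector to its nearest centre (grouping by similarity).
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    # Move each centre to the mean of the vectors assigned to it.
    centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                        for j in range(k)])

print(centers)   # the groups are discovered without any teacher signal
```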
Reinforcement Learning:
 The learning process is similar to supervised learning. In supervised
learning the correct target output values are known for each input
pattern.
 In some cases, however, less information may be available. For example,
the network might be told only that its actual output is "50% correct".
Thus only critic information is available here, not exact information.
 Learning based on this critic information is called reinforcement
learning, and the feedback sent is called the reinforcement signal.
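
As a small illustration (a sketch under assumptions, not from the slides) of learning from a critic signal only, here is a bandit-style learner: after each action it receives just a scalar reward saying roughly how good the action was, never the correct answer.

```python
import numpy as np

rng = np.random.default_rng(2)
true_payoffs = [0.2, 0.5, 0.8]   # hidden quality of each action (assumed for the example)
estimates = np.zeros(3)          # learner's value estimate for each action
counts = np.zeros(3)

for step in range(2000):
    # Mostly exploit the best-looking action, occasionally explore.
    a = int(rng.integers(3)) if rng.random() < 0.1 else int(np.argmax(estimates))
    # Critic feedback: only a scalar reinforcement signal, not the exact target output.
    reward = float(rng.random() < true_payoffs[a])
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]   # incremental average update

print(np.round(estimates, 2))   # value estimates learned purely from the reinforcement signal
```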
ENSEMBLE LEARNING:
 Learn multiple alternative definitions of a concept using different
training data or different learning algorithms.
 Combine the decisions of the multiple definitions, e.g. using weighted
voting.
VALUE OF ENSEMBLES
When combining multiple independent and diverse decisions, each of
which is at least more accurate than random guessing, random errors cancel
each other out and correct decisions are reinforced.
Generate a group of base learners which, when combined, has higher
accuracy than any single one (a weighted-voting combination of such
learners is sketched below).
Different learners may use different:
Algorithms.
Hyperparameters.
Representations/Modalities/Views.
Training sets.
Subproblems.
Ensembles:
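
Below is a minimal sketch (not taken from the slides) of combining the decisions of several diverse base learners by weighted voting; the base learners' predictions and their weights are assumed purely for illustration.

```python
import numpy as np

def combine(predictions, weights):
    """Weighted vote over base-learner predictions (each prediction is +1 or -1)."""
    return np.sign(np.dot(weights, predictions))

# Three hypothetical base learners voting on four instances.
predictions = np.array([
    [+1, -1, +1, +1],   # learner trained with a different algorithm
    [+1, +1, -1, +1],   # learner trained on a different training set
    [-1, +1, +1, +1],   # learner using a different representation
])
weights = np.array([0.5, 0.3, 0.4])   # e.g. proportional to each learner's accuracy (assumed)

print(combine(predictions, weights))   # the ensemble's decision for each instance
```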
BOOSTING:
Boosting also uses voting/averaging, but models are weighted
according to their performance.
It is an iterative procedure: new models are influenced by the
performance of previously built ones.
* A new model is encouraged to become an expert on the instances
classified incorrectly by earlier models.
* Intuitive justification: the models should be experts that
complement each other.
There are several variants of this algorithm.
Continued:
 STRONG LEARNER:
The objective of machine learning.
o Takes labeled data for training.
o Produces a classifier which can be arbitrarily accurate.
o Strong learners are very difficult to construct.
 WEAK LEARNER:
o Takes labeled data for training.
o Produces a classifier which is merely more accurate than random
guessing.
o Constructing weak learners is relatively easy.
ADAPTIVE BOOSTING:
Each rectangle corresponds to an example, with weight proportional
to its height.
Crosses correspond to misclassified examples.
The size of a decision tree indicates the weight of that classifier
in the final ensemble.
Using different data distributions:
* Start with uniform weighting.
* During each step of learning:
Increase the weights of the examples that are not correctly learned
by the weak learner.
Decrease the weights of the examples that are correctly learned by
the weak learner.
Continued:
IDEA:
Focus on the difficult examples which were not correctly classified
in the previous steps.
WEIGHTED VOTING:
Construct a strong classifier by weighted voting of the weak
classifiers.
IDEA:
A better weak classifier gets a larger weight.
Iteratively add weak classifiers, increasing the accuracy of the
combined classifier through minimization of a cost function (a sketch of
this reweighting and weighted voting follows).
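
The sketch below (with assumed data and simple decision-stump weak learners, not the slides' exact algorithm) puts the two ideas together: reweighting the examples a weak classifier gets wrong, and combining the weak classifiers by weighted voting, where a better weak classifier gets a larger weight.

```python
import numpy as np

def train_stump(X, y, w):
    """Choose the feature/threshold/sign with the lowest weighted error (a weak learner)."""
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (+1, -1):
                pred = np.where(X[:, f] <= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best

def predict_stump(stump, X):
    _, f, thr, sign = stump
    return np.where(X[:, f] <= thr, sign, -sign)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (40, 2))               # hypothetical instances
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)    # hypothetical labels

w = np.full(len(X), 1.0 / len(X))             # start with uniform weighting
ensemble = []
for _ in range(10):
    err, f, thr, sign = stump = train_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))   # better weak classifier -> larger weight
    pred = predict_stump(stump, X)
    w *= np.exp(-alpha * y * pred)            # increase weights of misclassified examples,
    w /= w.sum()                              # decrease weights of correctly classified ones
    ensemble.append((alpha, stump))

# Weighted voting of the weak classifiers gives the combined (strong) classifier.
votes = sum(alpha * predict_stump(s, X) for alpha, s in ensemble)
print("training accuracy:", np.mean(np.sign(votes) == y))
```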
COMPUTATIONAL LEARNING THEORY
 Computational learning theory characterizes the difficulty of several
types of machine learning problems and the capabilities of several types
of ML algorithms.
 CLT seeks to answer questions such as:
a) "Under what conditions is successful learning possible and
impossible?"
b) "Under what conditions is a particular learning algorithm assured of
learning successfully?" That is, what kinds of tasks are learnable, and
what kind of data is required for learnability.
The various issues are:
Sample complexity:
How many training examples are needed for a learner to converge
(with high probability) to a successful hypothesis?
Computational complexity:
How much computational effort is needed for a learner to converge to
a successful hypothesis?
Mistake bound:
How many training examples will the learner misclassify before
converging to a successful hypothesis?
Probably Learning an Approximately Correct (PAC) hypothesis:
A particular setting for the learning problem is called the probably
approximately correct (PAC) learning model.
This model of learning is based on the following points:
1. Specifying the problem setting that defines the PAC model.
2. How many training examples are required.
3. How much computation is required in order to learn various classes of
target functions within the PAC model.
Problem setting:
X : the set of all instances (e.g. a set of people), each described by
attributes such as <age, height>.
C : the target concept the learner has to learn, c : X → {0, 1}.
L : the learner, which has to learn, for example, "people who are
skiers".
c(x) = 1 : positive training example.
c(x) = 0 : negative training example.
Error of a hypothesis: the true error, denoted error_D(h), of a
hypothesis h with respect to target concept c and distribution D is the
probability that h will misclassify an instance drawn at random according
to D:
error_D(h) = Pr_{x ∈ D} [ c(x) ≠ h(x) ].
PAC Learnability:
We would like to characterize the number of training examples needed
to learn a hypothesis h for which error_D(h) = 0. This is futile for two
reasons: several hypotheses may be consistent with the training examples,
and the randomly drawn examples may happen to be misleading.
To accommodate these two difficulties, the following measures can be
taken:
1. A zero-error hypothesis is not required of learner L; instead the
error is bounded by a constant ε that can be made arbitrarily small.
2. The learner is not required to succeed for every sequence of randomly
drawn training examples; instead the learner must only probably learn a
hypothesis that is approximately correct, with the probability of failure
bounded by a constant δ that can also be made arbitrarily small.
Definition:
Consider a concept class C defined over a set of instances X of
length n, and a learner L using hypothesis space H.
C is PAC-learnable by L using H if, for every c ∈ C, every
distribution D over X, every ε with 0 < ε < 1/2, and every δ with
0 < δ < 1/2, learner L will, with probability at least (1 − δ), output a
hypothesis h ∈ H such that
error_D(h) ≤ ε.
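
For context (a standard bound for a consistent learner with a finite hypothesis space, stated as background rather than taken from the slides), the number of training examples m sufficient to meet the (ε, δ) guarantee above satisfies:

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\tfrac{1}{\delta}\right)
```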