Artificial Intelligence
Concept Learning as Search
• We assume that the concept lies in the
hypothesis space. So we search for a
hypothesis in this hypothesis space that
best fits the training examples, such that
the output given by the hypothesis is the
same as the true output of the concept
• The search has then achieved learning of
the actual concept from the given training
set
Concept Learning as Search
• In short:
Assume c ∈ H; search for an h ∈ H that best fits D, such
that ∀xi ∈ D, h(xi) = c(xi)
Where c is the concept we are trying to determine (the
output of the training set)
H is the hypothesis space
D is the training set
h is the hypothesis
xi is the ith instance of the instance space
Ordering of Hypothesis Space
• General to Specific Ordering of Hypothesis
Space
• Most General Hypothesis:
– hg = < ?, ? >
• Most Specific Hypothesis:
– hs = < Ø , Ø >
Ordering of Hypothesis Space
SK = < T, BP >, T = { H, N, L } and BP = { H, N, L }
< ?, ? >
< H, ? > < N, ? > < L, ? > < ?, H > < ?, N > < ?, L >
< H, H >< H, N >< H, L > < N, H >< N, N >< N, L > < L, H >< L, N >< L, L >
< Ø , Ø >
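This ordering can be checked mechanically. Below is a minimal Python sketch (our illustration; the representation of a hypothesis as a tuple of constraints is an assumption, not code from the slides) of the more-general-than-or-equal relation, where ? matches any value and Ø matches none:

def more_general_or_equal(h1, h2):
    """True if h1 >=g h2: every instance h2 labels positive, h1 does too."""
    if 'Ø' in h2:                      # h2 covers no instance at all
        return True
    return all(c1 == '?' or c1 == c2 for c1, c2 in zip(h1, h2))

# < ?, ? > sits at the top of the lattice, < Ø, Ø > at the bottom
assert more_general_or_equal(('?', '?'), ('H', 'H'))
assert more_general_or_equal(('H', 'H'), ('Ø', 'Ø'))
assert not more_general_or_equal(('H', '?'), ('?', 'H'))   # incomparable pair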
Find-S Algorithm
• FIND-S finds the most specific hypothesis
possible within the version space, given a
set of training data
• It uses the general-to-specific ordering to
search through the hypothesis space
Find-S Algorithm
Initialize hypothesis h to the most specific hypothesis in H
(the hypothesis space)
For each positive training instance x (i.e. output is 1)
For each attribute constraint ai in h
If the constraint ai is satisfied by x
Then do nothing
Else
Replace ai in h by the next more
general constraint that is satisfied by x
Output hypothesis h
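As a concrete rendering of this pseudocode, here is a minimal Python sketch (our own, assuming the tuple-of-constraints representation used throughout; not code from the slides):

def find_s(examples):
    """examples: list of (instance, label) pairs, where label 1 means positive."""
    h = ['Ø'] * len(examples[0][0])     # most specific hypothesis in H
    for x, label in examples:
        if label != 1:                  # FIND-S ignores negative examples
            continue
        for i, value in enumerate(x):
            if h[i] == 'Ø':             # first positive example: adopt its value
                h[i] = value
            elif h[i] != '?' and h[i] != value:
                h[i] = '?'              # next more general constraint
    return h

# The SICK training set used on the next slides: ((T, BP), SK)
D = [(('H', 'H'), 1), (('L', 'L'), 0), (('N', 'H'), 1)]
print(find_s(D))                        # ['?', 'H']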
Find-S Algorithm
To illustrate this algorithm, let us assume that the learner is given the
following sequence of training examples from the SICK domain:
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
The first step of FIND-S is to initialize hypothesis h to the most specific hypothesis in
H:
h = < Ø , Ø >
Find-S Algorithm
D T BP SK
x1 H H 1
The first training example is positive,
but h = < Ø , Ø > fails on this first instance,
because h(x1) = 0: Ø yields 0 for any attribute
value
Since h = < Ø , Ø > is so specific that it classifies not even a single instance
as positive, we change it to the next more general hypothesis that fits this
particular first instance x1 of the training set D:
h = < H , H >
Find-S Algorithm
< ?, ? >
< H, ? > < N, ? > < L, ? > < ?, H > < ?, N > < ?, L >
< H, H >< H, N >< H, L > < N, H >< N, N >< N, L > < L, H >< L, N >< L, L >
< Ø , Ø >
SK = < T, BP >, T = { H, N, L } and BP = { H, N, L }
Find-S Algorithm
D T BP SK
x1 H H 1
x2 L L 0
Upon encountering the second example, a negative one, the algorithm makes no
change to h. In fact, FIND-S simply ignores every negative example
So the hypothesis remains: h = < H , H >
Find-S Algorithm
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
Final Hypothesis:
h = < ?, H >
What does this hypothesis state?
It will classify every future patient with BP = H as SICK, regardless of the
value of T
Find-S Algorithm
< ?, ? >
< H, ? > < N, ? > < L, ? > < ?, H > < ?, N > < ?, L >
< H, H >< H, N >< H, L > < N, H >< N, N >< N, L > < L, H >< L, N >< L, L >
< Ø , Ø >
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
Candidate-Elimination Algorithm
• FIND-S does find a consistent hypothesis
• In general, however, there may be several
hypotheses consistent with D, of which
FIND-S finds only one
• Candidate-Elimination finds all the
hypotheses in the Version Space
Version Space (VS)
• The version space is the set of all
hypotheses that are consistent with all the
training examples
• By consistent we mean
h(xi) = c(xi) for every instance xi in the
training set D
Version Space
Let us take the following training set D:
D T BP SK
x1 H H 1
x2 L L 0
x3 N N 0
Another representation of this set D (rows: BP, columns: T; - means no
training example has that attribute combination):

        T=L  T=N  T=H
BP=H     -    -    1
BP=N     -    0    -
BP=L     0    -    -
Version Space
Is there a hypothesis that can generate this D?

        T=L  T=N  T=H
BP=H     -    -    1
BP=N     -    0    -
BP=L     0    -    -

One consistent hypothesis is h1 = < H, H >:

        T=L  T=N  T=H
BP=H     0    0    1
BP=N     0    0    0
BP=L     0    0    0
Version Space
There are other hypotheses consistent with D, such as h2 = < H, ? >:

        T=L  T=N  T=H
BP=H     0    0    1
BP=N     0    0    1
BP=L     0    0    1

There is another hypothesis, h3 = < ?, H >:

        T=L  T=N  T=H
BP=H     1    1    1
BP=N     0    0    0
BP=L     0    0    0
Version Space
• The version space here is denoted as
VS(H,D) = {h1, h2, h3}
• This reads: the version space is the subset
of hypothesis space H, composed of h1, h2
and h3, that is consistent with D
• In other words, the version space is the set
of all hypotheses consistent with D, not just
the single hypothesis we saw in the
previous case
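Since this hypothesis space has only 5 × 5 = 25 members (each attribute constraint is one of H, N, L, ?, Ø), the version space can be computed by brute force. A Python sketch under those assumptions (ours, not from the slides):

from itertools import product

CONSTRAINTS = ('H', 'N', 'L', '?', 'Ø')

def predict(h, x):
    # 'Ø' matches no value; '?' matches every value
    return int(all(c == '?' or c == v for c, v in zip(h, x)))

def version_space(examples):
    return [h for h in product(CONSTRAINTS, repeat=2)
            if all(predict(h, x) == label for x, label in examples)]

D = [(('H', 'H'), 1), (('L', 'L'), 0), (('N', 'N'), 0)]
print(version_space(D))   # [('H', 'H'), ('H', '?'), ('?', 'H')], i.e. {h1, h2, h3}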
Candidate-Elimination Algorithm
• Candidate Elimination works with two sets:
– Set G (General hypotheses)
– Set S (Specific hypotheses)
• Starts with:
– G0 = {< ? , ? >}, refined only by negative examples
– S0 = {< Ø , Ø >}, refined only by positive examples
• Within these two boundaries is the entire
Hypothesis space
Candidate-Elimination Algorithm
• Intuitively:
– As each training example is observed one by
one
• The S boundary is made more and more general
• The G boundary is made more and more specific
• This eliminates from the version space any hypothesis found
inconsistent with the new training example
– At the end, we are left with the VS
Candidate-Elimination Algorithm
Initialize G to the set of maximally general hypotheses in H
Initialize S to the set of maximally specific hypotheses in H
For each training example d, do
If d is a positive example
Remove from G any hypothesis inconsistent with d
For each hypothesis s in S that is inconsistent with d
Remove s from S
Add to S all minimal generalizations h of s, such that
h is consistent with d, and some member of G is more general than h
Remove from S any hypothesis that is more general than another one in S
If d is a negative example
Remove from S any hypothesis inconsistent with d
For each hypothesis g in G that is inconsistent with d
Remove g from G
Add to G all minimal specializations h of g, such that
h is consistent with d, and some member of S is more specific than h
Remove from G any hypothesis that is less general than another one in G
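The sketch below specializes this pseudocode to the two-attribute SICK domain (our own illustration; it omits the final non-minimal/non-maximal pruning steps, which never trigger on this small example):

VALUES = ('H', 'N', 'L')

def predict(h, x):
    return int(all(c == '?' or c == v for c, v in zip(h, x)))

def more_general_or_equal(h1, h2):
    if 'Ø' in h2:                       # h2 covers no instance
        return True
    return all(c1 == '?' or c1 == c2 for c1, c2 in zip(h1, h2))

def min_generalizations(s, x):
    # the single minimal generalization of a conjunctive s that covers x
    return [tuple(v if c == 'Ø' else (c if c == v else '?')
                  for c, v in zip(s, x))]

def min_specializations(g, x):
    # all one-step specializations of g that exclude x
    return [g[:i] + (v,) + g[i + 1:]
            for i, c in enumerate(g) if c == '?'
            for v in VALUES if v != x[i]]

def candidate_elimination(examples):
    G = [('?', '?')]                    # maximally general boundary
    S = [('Ø', 'Ø')]                    # maximally specific boundary
    for x, label in examples:
        if label == 1:                  # positive example
            G = [g for g in G if predict(g, x) == 1]
            new_S = []
            for s in S:
                if predict(s, x) == 1:
                    new_S.append(s)
                else:                   # generalize s just enough to cover x
                    new_S += [h for h in min_generalizations(s, x)
                              if any(more_general_or_equal(g, h) for g in G)]
            S = new_S
        else:                           # negative example
            S = [s for s in S if predict(s, x) == 0]
            new_G = []
            for g in G:
                if predict(g, x) == 0:
                    new_G.append(g)
                else:                   # specialize g just enough to exclude x
                    new_G += [h for h in min_specializations(g, x)
                              if any(more_general_or_equal(h, s) for s in S)]
            G = new_G
    return G, S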
Candidate-Elimination Algorithm
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
G0 = {< ?, ? >} most general
S0 = {< Ø, Ø >} most specific
Candidate-Elimination Algorithm
D T BP SK
x1 H H 1
First training example: d1 = (<H, H>, 1) [a positive example]
Starting from:
G0 = {< ?, ? >}
S0 = {< Ø, Ø >}
Remove < Ø, Ø > from S0, since it is not consistent with d1, and add the next
minimally general hypothesis from H to form S1
G1 = G0 = {< ?, ? >}, since < ?, ? > is consistent with d1: both give positive
outputs
Result:
G1 = {< ?, ? >}
S1 = {< H, H >}
Candidate-Elimination Algorithm
D T BP SK
x2 L L 0
Second training example: d2 = (<L, L>, 0) [a negative example]
Starting from:
G1 = {< ?, ? >}
S1 = {< H, H >}
S2 = S1 = {< H, H >}, since < H, H > is consistent with d2: both give negative
outputs for x2
Remove < ?, ? > from G1, since it is not consistent with d2, and add its
minimal specializations from H to form G2, keeping in mind one rule:
“Add to G all minimal specializations h of g, such that
h is consistent with d, and some member of S is more specific than h”
The immediate one-step specializations of < ?, ? > are:
{< H, ? >, < N, ? >, < L, ? >, < ?, H >, < ?, N >, < ?, L >}
Of these, < L, ? > and < ?, L > still cover the negative instance, and
< N, ? > and < ?, N > are not more general than the member of S, leaving:
G2 = {< H, ? >, < ?, H >}
S2 = {< H, H >}
Candidate-Elimination Algorithm
D T BP SK
x3 N H 1
Third and final training example: d3 = (<N, H>, 1) [a positive example]
Starting from:
G2 = {< H, ? >, < ?, H >}
S2 = {< H, H >}
In S2, < H, H > is not consistent with d3, so we remove it and add hypotheses
minimally more general than < H, H >. The two candidates are < H, ? > and
< ?, H >; we keep only < ?, H >, since the other is not consistent with d3
In G2, < H, ? > is not consistent with d3, so we remove it; < ?, H > is
consistent and hence retained
Result:
G3 = {< ?, H >}
S3 = {< ?, H >}
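Running the candidate_elimination sketch from the earlier code block on this training set reproduces the trace:

D = [(('H', 'H'), 1), (('L', 'L'), 0), (('N', 'H'), 1)]
G, S = candidate_elimination(D)
print(G)    # [('?', 'H')]  -- G3
print(S)    # [('?', 'H')]  -- S3: both boundaries converge on < ?, H >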
Conjunctive vs Disjunctive
Conjunctive Rule (ANDing)
h = < T=H AND BP=? >

        T=L  T=N  T=H
BP=H     0    0    1
BP=N     0    0    1
BP=L     0    0    1

Disjunctive Rule (ORing)
h = < T=H AND BP=? >
OR
< T=? AND BP=H >

        T=L  T=N  T=H
BP=H     1    1    1
BP=N     0    0    1
BP=L     0    0    1
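To make the distinction concrete, both rules can be written as predicates (a sketch; note the disjunctive rule lies outside the conjunctive hypothesis space H used above, which is why no single < _, _ > pair can express it):

def h_conjunctive(t, bp):
    return t == 'H'                     # < T=H AND BP=? >

def h_disjunctive(t, bp):
    return t == 'H' or bp == 'H'        # < T=H AND BP=? > OR < T=? AND BP=H >

print(h_conjunctive('H', 'L'))          # True: the T=H column is all 1s
print(h_disjunctive('L', 'H'))          # True: the BP=H row is also all 1s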