Artificial Intelligence
Concept Learning as Search
• We assume that the concept lies in the
hypothesis space. So we search for a
hypothesis belonging to this hypothesis
space that best fits the training examples,
such that the output given by the hypothesis
is the same as the true output of the concept
• When such a hypothesis is found, the search
has achieved learning of the actual concept
from the given training set
Concept Learning as Search
• In short:
Assume c ∈ H; search for an h ∈ H that best fits D,
such that ∀ xi ∈ D, h(xi) = c(xi)
Where c is the concept we are trying to determine (the
output of the training set)
H is the hypothesis space
D is the training set
h is the hypothesis
xi is the ith instance of the instance space
Ordering of Hypothesis Space
• General to Specific Ordering of Hypothesis
Space
• Most General Hypothesis:
– hg = < ?, ? >
• Most Specific Hypothesis:
– hs = < Ø , Ø >
Ordering of Hypothesis Space
SK = < T, BP >, T = { H, N, L } and BP = { H, N, L }
(most general)   < ?, ? >
< H, ? >  < N, ? >  < L, ? >  < ?, H >  < ?, N >  < ?, L >
< H, H >  < H, N >  < H, L >  < N, H >  < N, N >  < N, L >  < L, H >  < L, N >  < L, L >
(most specific)  < Ø , Ø >
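This ordering is easy to make concrete in code. The following is a minimal Python sketch, with our own encoding and helper names (satisfies, matches, more_general_or_equal, none of which come from the slides): a hypothesis is a pair of constraints over { H, N, L, ?, Ø }, and more_general_or_equal tests the ordering drawn above.

    # A hypothesis over SK = < T, BP > is a pair of attribute constraints.
    # '?' accepts any value; 'Ø' (the empty constraint) accepts none.
    NONE, ANY = 'Ø', '?'

    def satisfies(constraint, value):
        # One attribute constraint against one attribute value.
        return constraint != NONE and (constraint == ANY or constraint == value)

    def matches(h, x):
        # h(x) = 1 iff every attribute constraint in h is satisfied by x.
        return all(satisfies(c, v) for c, v in zip(h, x))

    def more_general_or_equal(g, h):
        # Every instance matched by h is also matched by g.
        def covers(cg, ch):
            return cg == ANY or ch == NONE or cg == ch
        return all(covers(cg, ch) for cg, ch in zip(g, h))

    print(matches((ANY, 'H'), ('N', 'H')))                # True
    print(more_general_or_equal((ANY, ANY), ('H', 'H')))  # True
    print(more_general_or_equal(('H', ANY), (ANY, 'H')))  # False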
Find-S Algorithm
• FIND-S finds the most specific hypothesis
possible within the version space given a
set of training data
• Uses the general-to-specific ordering to
organize its search through the hypothesis
space, moving from specific toward general
Find-S Algorithm
Initialize hypothesis h to the most specific hypothesis in H
(the hypothesis space)
For each positive training instance x (i.e. output is 1)
For each attribute constraint ai in h
If the constraint ai is satisfied by x
Then do nothing
Else
Replace ai in h by the next more
general constraint that is satisfied by x
Output hypothesis h
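The pseudocode translates almost line for line into a short runnable Python sketch, hard-coded to the two-attribute SICK domain; the names (find_s, NONE, ANY) are ours:

    # A runnable sketch of FIND-S for the SICK domain; the logic
    # follows the pseudocode above.
    NONE, ANY = 'Ø', '?'

    def find_s(examples):
        # examples: list of ((T, BP), label) pairs, label 1 = positive.
        h = [NONE, NONE]                   # most specific hypothesis in H
        for x, label in examples:
            if label != 1:                 # FIND-S ignores negative examples
                continue
            for i, (ci, vi) in enumerate(zip(h, x)):
                if ci == ANY or ci == vi:  # constraint already satisfied by x
                    continue
                # next more general constraint satisfied by x:
                h[i] = vi if ci == NONE else ANY
            print(f"after {x}: h = {tuple(h)}")
        return tuple(h)

    D = [(('H', 'H'), 1), (('L', 'L'), 0), (('N', 'H'), 1)]
    print("final h =", find_s(D))          # final h = ('?', 'H')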
Find-S Algorithm
To illustrate this algorithm, let us assume that the learner is given the
following sequence of training examples from the SICK domain:
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
The first step of FIND-S is to initialize hypothesis h to the most specific hypothesis in
H:
h = < Ø , Ø >
Find-S Algorithm
D T BP SK
x1 H H 1
The first training example is positive,
but h = < Ø , Ø > fails on this instance,
because h(x1) = 0: Ø rejects every attribute value
Since h = < Ø , Ø > is so specific that it classifies no instance as
positive, we replace it with the next more general hypothesis that fits
this first instance x1 of the training set D:
h = < H , H >
Find-S Algorithm
(most general)   < ?, ? >
< H, ? >  < N, ? >  < L, ? >  < ?, H >  < ?, N >  < ?, L >
< H, H >  < H, N >  < H, L >  < N, H >  < N, N >  < N, L >  < L, H >  < L, N >  < L, L >
(most specific)  < Ø , Ø >
SK = < T, BP >, T = { H, N, L } and BP = { H, N, L }
Find-S Algorithm
D T BP SK
x1 H H 1
x2 L L 0
Upon encountering the second example, a negative one, the algorithm makes
no change to h; FIND-S simply ignores every negative example
So the hypothesis remains: h = < H , H >
Find-S Algorithm
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
The third example is positive, but h = < H , H > fails on it, since T = N
does not satisfy the constraint T = H. Generalizing the T constraint to ?
gives the final hypothesis:
h = < ?, H >
What does this hypothesis state?
It will classify every future patient with BP = H as SICK, regardless of
the value of T
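Running the find_s sketch given earlier on this training set reproduces the trace above: h starts at < Ø, Ø >, becomes < H, H > after x1, is left untouched by the negative example x2, and is generalized to < ?, H > by x3.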
Find-S Algorithm
(most general)   < ?, ? >
< H, ? >  < N, ? >  < L, ? >  < ?, H >  < ?, N >  < ?, L >
< H, H >  < H, N >  < H, L >  < N, H >  < N, N >  < N, L >  < L, H >  < L, N >  < L, L >
(most specific)  < Ø , Ø >
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
Candidate-Elimination Algorithm
• FIND-S does find a consistent
hypothesis
• In general, however, there may be many
hypotheses consistent with D, of which
FIND-S finds only one
• Candidate-Elimination finds all the
hypotheses in the Version Space
Version Space (VS)
• The version space is the set of all
hypotheses that are consistent with all the
training examples
• By consistent we mean
h(xi) = c(xi) for all instances xi belonging to
the training set D
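Since this hypothesis space has only 5 × 5 = 25 conjunctive hypotheses, the version space can be computed by brute-force enumeration. A minimal Python sketch (the encoding and names are ours):

    from itertools import product

    # Enumerate the conjunctive hypothesis space over SK = < T, BP > and
    # keep every hypothesis consistent with the whole training set D.
    NONE, ANY = 'Ø', '?'
    CONSTRAINTS = ['H', 'N', 'L', ANY, NONE]

    def matches(h, x):
        return all(c != NONE and (c == ANY or c == v) for c, v in zip(h, x))

    def version_space(examples):
        return [h for h in product(CONSTRAINTS, repeat=2)
                if all(matches(h, x) == bool(label) for x, label in examples)]

    D = [(('H', 'H'), 1), (('L', 'L'), 0), (('N', 'N'), 0)]
    print(version_space(D))   # [('H', 'H'), ('H', '?'), ('?', 'H')]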
Version Space
Let us take the following training set D:
D T BP SK
x1 H H 1
x2 L L 0
x3 N N 0
Another representation of this set D (rows: BP, columns: T; a "-" marks
an instance not observed in D):
         T=L  T=N  T=H
BP = H    -    -    1
BP = N    -    0    -
BP = L    0    -    -
Version Space
Is there a hypothesis that can generate this D?
         T=L  T=N  T=H
BP = H    -    -    1
BP = N    -    0    -
BP = L    0    -    -
One of the consistent hypotheses is h1 = < H, H >:
         T=L  T=N  T=H
BP = H    0    0    1
BP = N    0    0    0
BP = L    0    0    0
Version Space
There are other hypotheses consistent with D, such as h2 = < H, ? >:
         T=L  T=N  T=H
BP = H    0    0    1
BP = N    0    0    1
BP = L    0    0    1
There is another hypothesis, h3 = < ?, H >:
         T=L  T=N  T=H
BP = H    1    1    1
BP = N    0    0    0
BP = L    0    0    0
Version Space
• The version space is denoted
VS_H,D = { h1, h2, h3 }
• This reads: the version space with respect
to hypothesis space H and training set D is
the subset of H, here composed of h1, h2
and h3, that is consistent with D
• In other words, the version space is the set of
all hypotheses consistent with D, not just the
single hypothesis FIND-S gave us in the
previous case
Candidate-Elimination Algorithm
• Candidate Elimination works with two sets:
– Set G (General hypotheses)
– Set S (Specific hypotheses)
• Starts with:
– G0 = {< ? , ? >}, specialized in response to negative examples
– S0 = {< Ø , Ø >}, generalized in response to positive examples
• Within these two boundaries is the entire
Hypothesis space
Candidate-Elimination Algorithm
• Intuitively:
– As each training example is observed one by
one
• The S boundary is made more and more general
• The G boundary set is made more and more specific
• This eliminates from the version space any hypotheses found
inconsistent with the new training example
– At the end, we are left with VS
Candidate-Elimination Algorithm
Initialize G to the set of maximally general hypotheses in H
Initialize S to the set of maximally specific hypotheses in H
For each training example d, do
If d is a positive example
Remove from G any hypothesis inconsistent with d
For each hypothesis s in S that is inconsistent with d
Remove s from S
Add to S all minimal generalizations h of s, such that
h is consistent with d, and some member of G is more general than h
Remove from S any hypothesis that is more general than another one in S
If d is a negative example
Remove from S any hypothesis inconsistent with d
For each hypothesis g in G that is inconsistent with d
Remove g from G
Add to G all minimal specializations h of g, such that
h is consistent with d, and some member of S is more specific than h
Remove from G any hypothesis that is less general than another one in G
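As a runnable illustration, here is a Python sketch of this loop, hard-coded to the two-attribute conjunctive hypotheses of the SICK domain. The helper names (consistent, more_general, min_generalizations, min_specializations) are ours; the control flow follows the pseudocode above:

    # Candidate-Elimination for conjunctive hypotheses over SK = < T, BP >.
    NONE, ANY = 'Ø', '?'
    VALUES = ['H', 'N', 'L']

    def matches(h, x):
        return all(c != NONE and (c == ANY or c == v) for c, v in zip(h, x))

    def consistent(h, x, label):
        return matches(h, x) == bool(label)

    def more_general(g, h):
        # Strictly more general: g covers everything h covers, and g != h.
        def covers(cg, ch):
            return cg == ANY or ch == NONE or cg == ch
        return g != h and all(covers(cg, ch) for cg, ch in zip(g, h))

    def min_generalizations(s, x):
        # The single minimal generalization of s that matches the positive x.
        return [tuple(v if c in (NONE, v) else ANY for c, v in zip(s, x))]

    def min_specializations(g, x):
        # One-step specializations of g that reject the negative x.
        out = []
        for i, c in enumerate(g):
            if c == ANY:
                out += [g[:i] + (v,) + g[i + 1:] for v in VALUES if v != x[i]]
        return out

    def candidate_elimination(examples):
        G, S = [(ANY, ANY)], [(NONE, NONE)]
        for x, label in examples:
            if label:                                  # positive example
                G = [g for g in G if consistent(g, x, label)]
                for s in [s for s in S if not consistent(s, x, label)]:
                    S.remove(s)
                    S += [h for h in min_generalizations(s, x)
                          if any(g == h or more_general(g, h) for g in G)]
                S = [s for s in S if not any(more_general(s, t) for t in S)]
            else:                                      # negative example
                S = [s for s in S if consistent(s, x, label)]
                for g in [g for g in G if not consistent(g, x, label)]:
                    G.remove(g)
                    G += [h for h in min_specializations(g, x)
                          if any(s == h or more_general(h, s) for s in S)]
                G = [g for g in G if not any(more_general(t, g) for t in G)]
            print(f"after {(x, label)}: G = {G}, S = {S}")
        return G, S

    D = [(('H', 'H'), 1), (('L', 'L'), 0), (('N', 'H'), 1)]
    candidate_elimination(D)               # ends with G = S = [('?', 'H')]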
Candidate-Elimination Algorithm
D T BP SK
x1 H H 1
x2 L L 0
x3 N H 1
G0 = {< ?, ? >} most general
S0 = {< Ø, Ø >} most specific
Candidate-Elimination Algorithm
D T BP SK
x1 H H 1
Before: G0 = {< ?, ? >}, S0 = {< Ø, Ø >}
First training example: d1 = (<H, H>, 1) [a positive example]
Remove < Ø, Ø > from S0, since it is not consistent with d1, and add the
next minimally general hypothesis from H to form S1
G1 = G0 = {< ?, ? >}, since < ?, ? > is consistent with d1; both give
positive outputs
G1 = {< ?, ? >}
S1 = {< H, H >}
Candidate-Elimination Algorithm
D T BP SK
x2 L L 0
Before: G1 = {< ?, ? >}, S1 = {< H, H >}
Second training example: d2 = (<L, L>, 0) [a negative example]
S2 = S1 = {< H, H >}, since < H, H > is consistent with d2: both give
negative outputs for x2
Remove < ?, ? > from G1, since it is not consistent with d2, and add the
next minimally specialized hypotheses from H to form G2, keeping in mind
one rule:
"Add to G all minimal specializations h of g, such that
h is consistent with d, and some member of S is more specific than h"
The immediate one-step specializations of < ?, ? > are:
{< H, ? >, < N, ? >, < L, ? >, < ?, H >, < ?, N >, < ?, L >}
All six reject x2, but only < H, ? > and < ?, H > are more general than a
member of S (here < H, H >), so:
G2 = {< H, ? >, < ?, H >}
S2 = {< H, H >}
Candidate-Elimination Algorithm
D T BP SK
x3 N H 1
Before: G2 = {< H, ? >, < ?, H >}, S2 = {< H, H >}
Third and final training example: d3 = (<N, H>, 1) [a positive example]
In S2, < H, H > is not consistent with d3, so we remove it and add its
minimal generalizations. The two one-step choices are < H, ? > and
< ?, H >; we keep only < ?, H >, since the other is not consistent
with d3
In G2, < H, ? > is not consistent with d3, so we remove it. However,
< ?, H > is consistent and hence retained
G3 = {< ?, H >}
S3 = {< ?, H >}
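Running the candidate_elimination sketch given earlier on this training set prints exactly these boundary sets after each example: G and S meet at {< ?, H >}, which is the entire version space for this D.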
Conjunctive vs Disjunctive
Conjunctive Rule (ANDing)
h = < T = H AND BP = ? >
         T=L  T=N  T=H
BP = H    0    0    1
BP = N    0    0    1
BP = L    0    0    1
Disjunctive Rule (ORing)
h = < (T = H AND BP = ?) OR (T = ? AND BP = H) >
         T=L  T=N  T=H
BP = H    1    1    1
BP = N    0    0    1
BP = L    0    0    1
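A few lines of Python make the contrast concrete: evaluating each rule over all nine instances reprints the two grids (the rule names are ours). Note that the disjunctive rule cannot be written as a single conjunctive pair < T, BP >, which is exactly why it needs the OR.

    # Evaluate each rule over all nine instances to reproduce the grids.
    VALUES = ['L', 'N', 'H']            # column order T = L, N, H

    def conjunctive(t, bp):
        return t == 'H'                 # T = H AND BP = ?

    def disjunctive(t, bp):
        return t == 'H' or bp == 'H'    # (T = H) OR (BP = H)

    for name, rule in [("ANDing", conjunctive), ("ORing", disjunctive)]:
        print(name)
        for bp in ['H', 'N', 'L']:      # row order BP = H, N, L
            print('BP =', bp, [int(rule(t, bp)) for t in VALUES])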