INDUCTIVE-ANALYTICAL
APPROACHES TO LEARNING
Swapna.C
2.1 The Learning Problem
• Given:
• A set of training examples D, possibly
containing errors.
• A domain theory B, possibly containing errors.
• A space of candidate hypotheses H.
• Determine:
• A hypothesis that best fits the training
examples and domain theory.
• There are two approaches:
• 1. Minimize a combined measure of the errors errorD(h) and errorB(h).
errorD(h) is defined to be the proportion of examples from D that are misclassified by h.
errorB(h) of h with respect to a domain theory B is defined to be the probability that h will disagree with B on the classification of a randomly drawn instance.
We could then require the hypothesis that minimizes some combined measure of these errors, such as

   argmin over h in H of   kD errorD(h) + kB errorB(h)
• It is not clear what values to assign to kB and kD
to specify the relative importance of fitting the
data versus fitting the theory. If we have a very
poor theory and a great deal of reliable data, it
will be best to weight errorD(h) more heavily.
• Given a strong theory and a small sample of very
noisy data, the best results would be obtained by
weighting errorB(h) more heavily. Of course, if the
learner does not know in advance the quality of
the domain theory or training data, it will be
unclear how it should weight these two error
components.
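
A minimal sketch of approach 1, assuming hypothetical hypotheses, training data, and domain theory (all names below are illustrative): estimate errorD(h) and errorB(h) empirically and return the hypothesis minimizing the weighted combination.

```python
# Sketch only: the hypotheses, examples, and theory are hypothetical stand-ins.

def error_D(h, examples):
    """Proportion of training examples (x, y) in D misclassified by h."""
    return sum(1 for x, y in examples if h(x) != y) / len(examples)

def error_B(h, theory, instances):
    """Estimated probability that h disagrees with the domain theory B,
    approximated over a sample of randomly drawn instances."""
    return sum(1 for x in instances if h(x) != theory(x)) / len(instances)

def best_fit(hypotheses, examples, theory, instances, k_D=1.0, k_B=1.0):
    """Return argmin over h of  k_D * error_D(h) + k_B * error_B(h)."""
    return min(hypotheses,
               key=lambda h: k_D * error_D(h, examples)
                           + k_B * error_B(h, theory, instances))
```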
• 2. The Bayes theorem perspective.
• Bayes theorem computes the posterior probability of each candidate hypothesis based on the observed data D, together with prior knowledge in the form of P(h), P(D), and P(D|h).
• The Bayesian view is that one should simply choose the hypothesis whose posterior probability is greatest, and that Bayes theorem provides the proper method for weighting the contribution of this prior knowledge and observed data.
• Unfortunately, Bayes theorem implicitly assumes perfect knowledge about the probability distributions P(h), P(D), and P(D|h). When these quantities are only imperfectly known, Bayes theorem alone does not prescribe how to combine them with the observed data.
• (One possible approach in such cases is to assume prior probability distributions over P(h), P(D), and P(D|h) themselves, then calculate the expected value of the posterior P(h|D).)
• Here we will simply say that the learning problem is to minimize some combined measure of the error of the hypothesis over the data and the domain theory.
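
The Bayesian prescription above can be summarized in a few lines. A minimal sketch, assuming illustrative toy priors and likelihoods (not values from the slides): score each hypothesis by P(D|h) P(h), which is proportional to the posterior P(h|D) since P(D) is constant across hypotheses, and choose the maximum.

```python
# Sketch only: prior() and likelihood() are hypothetical inputs.

def map_hypothesis(hypotheses, prior, likelihood, data):
    """Return the hypothesis with maximum posterior probability.

    prior(h)            -> P(h)
    likelihood(data, h) -> P(D|h)
    P(D) is the same for every hypothesis, so it can be ignored for argmax.
    """
    return max(hypotheses, key=lambda h: likelihood(data, h) * prior(h))

# Toy usage with two hypotheses:
hypotheses = ["h1", "h2"]
prior = {"h1": 0.7, "h2": 0.3}.get                   # P(h) from prior knowledge
likelihood = lambda D, h: 0.2 if h == "h1" else 0.6  # P(D|h) from the data
print(map_hypothesis(hypotheses, prior, likelihood, data=None))  # -> "h2"
```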
2.2 Hypothesis Space Search
• We can characterize most learning methods as
search algorithms by describing the hypothesis
space H they search, the initial hypothesis h0 at
which they begin their search, the set of search
operators O that define individual search steps,
and the goal criterion G that specifies the
search objective.
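
To make this characterization concrete, here is a minimal sketch of learning as greedy search, assuming the slides' terms: an initial hypothesis h0, operators O that generate neighboring hypotheses, and a goal criterion G used as the score to maximize. The helper names are illustrative.

```python
# Sketch only: h0, operators, and goal_score are hypothetical inputs.

def search_learn(h0, operators, goal_score, max_steps=100):
    """Greedy hill-climbing over hypotheses guided by the goal criterion G."""
    h = h0
    for _ in range(max_steps):
        neighbors = [op(h) for op in operators]          # one step per operator in O
        best = max(neighbors, key=goal_score, default=h)
        if goal_score(best) <= goal_score(h):
            return h                                     # no operator improves G
        h = best
    return h
```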
Three different methods for using prior
knowledge to alter the search performed
by purely inductive methods.
• Use prior knowledge to derive an initial hypothesis from which to begin the search:
• In this approach the domain theory B is used to construct an initial hypothesis h0 that is consistent with B. A standard inductive method is then applied, starting from this h0.
• For example, the KBANN system learns artificial neural networks in this way. It uses prior knowledge to design the interconnections and weights for an initial network, so that this initial network is perfectly consistent with the given domain theory. This initial network hypothesis is then refined inductively using the BACKPROPAGATION algorithm and the available data. Beginning the search at a hypothesis consistent with the domain theory makes it more likely that the final output hypothesis will better fit this theory, as sketched below.
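
As one concrete illustration of constructing a theory-consistent initial hypothesis, the sketch below converts a single propositional Horn clause into the weights and bias of a sigmoid unit, in the spirit of KBANN's clause-to-unit mapping. The clause, feature names, and weight constant are hypothetical, and the full KBANN network construction is not shown.

```python
import numpy as np

W = 8.0  # large weight so the sigmoid unit behaves almost like a logic gate

def clause_to_unit(features, positive_literals, negated_literals):
    """Weights/bias for a unit that fires iff every positive literal is
    present and no negated literal is present (one Horn-clause body)."""
    w = np.zeros(len(features))
    for lit in positive_literals:
        w[features.index(lit)] = W
    for lit in negated_literals:
        w[features.index(lit)] = -W
    bias = -W * (len(positive_literals) - 0.5)
    return w, bias

def unit_output(x, w, bias):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + bias)))

# Hypothetical clause:  Cup <- Liftable AND NOT Heavy
features = ["Liftable", "Heavy", "HasHandle"]
w, b = clause_to_unit(features, ["Liftable"], ["Heavy"])
print(unit_output(np.array([1, 0, 1]), w, b))  # ~1: clause satisfied
print(unit_output(np.array([1, 1, 1]), w, b))  # ~0: Heavy violates the clause
```

The weights would then be refined by backpropagation on the training data, so the final network need not remain perfectly consistent with the theory.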
• Use prior knowledge to alter the objective of the
hypothesis space search:
• The goal criterion G is modified to require that
the output hypothesis fits the domain theory as
well as the training examples.
• For example, the EBNN system learns neural networks in this way. Whereas inductive learning of neural networks performs gradient descent search to minimize the squared error of the network over the training data, EBNN performs gradient descent to optimize a different criterion.
• This modified criterion includes an additional
term that measures the error of the learned
network relative to the domain theory.
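
A hedged sketch of such a modified objective is shown below: ordinary squared error over the training data plus a term penalizing disagreement with the domain theory. EBNN's actual criterion fits the domain theory's derivatives; the simpler prediction-matching term here, and the function names used, are only illustrative assumptions.

```python
import numpy as np

def combined_loss(net, params, X, y, theory_predict, mu=0.5):
    """Criterion minimized by gradient descent in this sketch."""
    pred = net(params, X)                   # network outputs on the training data
    data_error = np.mean((pred - y) ** 2)   # the usual inductive objective
    theory_error = np.mean((pred - theory_predict(X)) ** 2)  # fit to the theory B
    return data_error + mu * theory_error   # mu trades data fit against theory fit
```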
• Use prior knowledge to alter the available search
steps.
• In this approach, the set of search operators O is altered by
the domain theory.
• For example, the FOCL system learns sets of Horn clauses.
It is based on the inductive system FOIL, which conducts a
greedy search through the space of possible Horn clauses, at
each step revising its current hypothesis by adding a single
new literal.
• FOCL uses the domain theory to expand the set of
alternatives available when revising the hypothesis, allowing
the addition of multiple literals in a single search step when
warranted by the domain theory.
• FOCL allows single-step moves through the hypothesis space
that would correspond to many steps using the original
inductive algorithm. These "macro-moves" can dramatically
alter the course of the search, so that the final hypothesis
found consistent with the data is different from the one that
would be found using only the inductive search steps.
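
The sketch below illustrates, with a hypothetical clause representation rather than FOCL's actual code, how a domain theory can enlarge the operator set: alongside FOIL's single-literal refinements, whole clause bodies taken from the theory can be added to the current hypothesis in one macro-move.

```python
# Sketch only: literals are represented as plain strings for illustration.

def candidate_refinements(hypothesis, candidate_literals, domain_theory):
    """hypothesis: tuple of literals forming the current Horn-clause body."""
    single_steps = [hypothesis + (lit,) for lit in candidate_literals]   # FOIL-style
    macro_moves = [hypothesis + tuple(clause_body)                       # several literals at once
                   for clause_body in domain_theory]
    return single_steps + macro_moves

# Hypothetical example: revising the body of  PlayTennis(x) <- ...
current = ("Outlook(x, sunny)",)
literals = ["Humidity(x, normal)", "Wind(x, weak)"]
theory = [["Humidity(x, normal)", "Wind(x, weak)"]]   # one clause body from B
for h in candidate_refinements(current, literals, theory):
    print(h)
```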
