Learning in AI is also called machine learning or pattern recognition.
The basic objective is to allow an intelligent agent to discover knowledge autonomously from experience.
Let’s examine the definition more closely:
“an intelligent agent”: The ability to learn requires a prior level of intelligence and knowledge. Learning has to start from an existing level of capability.
“to discover autonomously”: Learning is fundamentally about an agent recognizing new facts for its own use and acquiring new abilities that reinforce its existing abilities. Literal programming, i.e., rote learning from instruction, is not useful.
“knowledge”: Whatever is learned has to be represented in some way that the agent can use. “If you can't represent it, you can't learn it” is a corollary of the slogan “Knowledge is power”.
“from experience”: Experience is typically a set of so-called training examples; examples may be categorized or not. They may be random or selected by a teacher. They may include explanations or not.
inputs: set theory (union, intersection, etc.); heuristics for “how to do mathematics” (based on a book by Polya), e.g., if f is an interesting function of two arguments, then f(x, x) is an interesting function of one argument, etc.
AM speculated about what was interesting and made conjectures, etc.
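The Polya-style heuristic above can be illustrated concretely (this is only a toy sketch, not AM itself): collapsing an interesting two-argument function into a one-argument function by repeating the argument.

```python
# Toy illustration (not AM itself) of one Polya-style heuristic AM used:
# if f is an interesting function of two arguments, then g(x) = f(x, x)
# may be an interesting function of one argument.

def specialize(f):
    """Collapse a binary function into a unary one by repeating the argument."""
    return lambda x: f(x, x)

add = lambda a, b: a + b
mul = lambda a, b: a * b

double = specialize(add)   # f(x, x) for addition -> doubling
square = specialize(mul)   # f(x, x) for multiplication -> squaring

print(double(5))  # 10
print(square(5))  # 25
```

Applied to addition the heuristic yields doubling; applied to multiplication it yields squaring — both plausibly “interesting” unary functions.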
What AM discovered
integers (as equivalence relation on cardinality of sets)
addition (using disjoint union of sets)
primes: 1 was interesting, the function returning the cardinality of the set of divisors was interesting, etc.
Goldbach’s conjecture: “every even number greater than 2 is the sum of two prime numbers” (note that AM did not prove it, just discovered that it was interesting)
Why was AM so successful?
Connection between LISP and mathematics (mutations of small bits of LISP code are likely to be interesting)
A particular instance in the training set might be: <overcast, hot, normal, false>: play. In this case, the target class is a binary attribute, so each instance represents a positive or a negative example.
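A minimal sketch of how such labeled instances might be stored. The attribute names (outlook, temperature, humidity, windy) are assumed from the classic “play tennis” weather dataset; the source gives only the attribute values, and the extra instances are illustrative.

```python
# Training instances as (attribute-tuple, class-label) pairs.
# Attribute order assumed: (outlook, temperature, humidity, windy).
# Instances other than the first are made up for illustration.

training_set = [
    (("overcast", "hot", "normal", "false"), "play"),      # positive example
    (("sunny", "hot", "high", "false"), "no play"),        # negative example
    (("rainy", "cool", "normal", "true"), "play"),         # positive example
]

for attributes, label in training_set:
    print(attributes, "->", label)
```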
Basic Idea: classify new instances based on their similarity to instances we have seen before
also called “instance-based learning”
Simplest form of MBR: Rote Learning
learning by memorization
save all previously encountered instances; given a new instance, find the one from the memorized set that most closely “resembles” the new one; assign the new instance to the same class as its “nearest neighbor”
more general methods try to find k nearest neighbors rather than just one
but, how do we define “resembles”?
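The procedure above can be sketched as follows. Here “resembles” is defined with the simple overlap measure — the number of attribute values two instances share — which is one common choice for categorical attributes, not the only one. The training instances are illustrative.

```python
from collections import Counter

def overlap(a, b):
    """Similarity = number of matching attribute values (overlap measure)."""
    return sum(1 for x, y in zip(a, b) if x == y)

def classify(training_set, new_instance, k=1):
    """Assign the majority class among the k most similar stored instances."""
    neighbors = sorted(training_set,
                       key=lambda pair: overlap(pair[0], new_instance),
                       reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Illustrative instances; attribute order assumed:
# (outlook, temperature, humidity, windy).
training_set = [
    (("overcast", "hot", "normal", "false"), "play"),
    (("sunny", "hot", "high", "false"), "no play"),
    (("rainy", "cool", "normal", "true"), "play"),
]

print(classify(training_set, ("overcast", "mild", "normal", "false"), k=1))
# -> play  (3 matching attributes with the first stored instance)
```

With k=1 this is rote learning; raising k trades sensitivity to noisy instances for smoother decision boundaries.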
MBR is “lazy”
defers all of the real work until a new instance is obtained; no attempt is made to learn a generalized model from the training set
less data preprocessing and model evaluation, but more work has to be done at classification time