AI: Learning in AI

Presentation Transcript

  • Learning in AI Systems and Neural Networks
  • Learning from Observation
    Components of a general model of learning agents (a code sketch follows this slide):
    Critic, guided by a performance standard
    Learning element
    Problem generator
    Performance element
    Sensors to receive input from the environment
    Effectors to carry out actions in the environment
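A minimal Python sketch of how these components might fit together; the class and method names (LearningAgent, choose_action, evaluate, suggest) are illustrative, not part of the slides.

```python
# Illustrative skeleton of a general learning agent (all names are hypothetical).
class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # maps percepts to actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # judges outcomes against a performance standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        """One sense-act cycle: sensors feed a percept in, an action goes to the effectors."""
        feedback = self.critic.evaluate(percept)                  # how well are we doing?
        self.learning_element.update(self.performance_element, feedback)
        exploratory = self.problem_generator.suggest(percept)     # occasional exploration
        return exploratory or self.performance_element.choose_action(percept)
```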
  • Components of the performance element
    A direct mapping from conditions on the current state to actions.
    A means to infer relevant properties of the world from the percept sequence.
    Information about the way the world evolves.
    Information about the results of possible actions the agent can take.
    Utility information indicating the desirability of world states.
    Action-value information indicating the desirability of particular actions in particular states.
    Goals that describe classes of states whose achievement maximizes the agent's utility.
  • Types of learning
    Any situation in which both the inputs and outputs of a component can be perceived is called supervised learning.
    In learning the condition-action component, the agent receives some evaluation of its action but is not told the correct action. This is called reinforcement learning;
    Learning when there is no hint at all about the correct outputs is called unsupervised learning.
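A concrete illustration of the three feedback settings; the data below are invented purely to show the shape of each kind of feedback.

```python
# Supervised: every input is paired with the correct output.
supervised_examples = [((2.0, 1.0), "spam"), ((0.1, 0.3), "ham")]

# Reinforcement: the agent only gets an evaluation (a reward) of the action it took,
# never the correct action itself.
reinforcement_feedback = {"state": "s1", "action_taken": "left", "reward": -1.0}

# Unsupervised: inputs only, with no hint about the correct outputs.
unsupervised_inputs = [(2.0, 1.0), (0.1, 0.3), (1.9, 1.1)]
```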
  • What is Inductive Learning?
    In supervised learning, the learning element is given the correct value of the function for particular inputs, and changes its representation of the function to try to match the information provided by the feedback.
    More formally, we say an example is a pair (x, f(x)), where x is the input and f(x) is the output of the function applied to x.
    The task of pure inductive inference (or induction) is this: given a collection of examples of f, return a function h that approximates f. The function h is called a hypothesis (a minimal sketch follows this slide).
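A minimal sketch of induction with a deliberately tiny hypothesis space (straight lines); the function name and the example data are made up for illustration.

```python
# Given examples (x, f(x)), return a hypothesis h that approximates f
# by fitting a least-squares line h(x) = a*x + b.
def induce_linear_hypothesis(examples):
    n = len(examples)
    xs = [x for x, _ in examples]
    ys = [y for _, y in examples]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return lambda x: a * x + b                  # the hypothesis h

examples = [(1, 2.1), (2, 3.9), (3, 6.2)]       # pairs (x, f(x))
h = induce_linear_hypothesis(examples)
print(h(4))                                     # h's guess at f(4)
```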
  • Measuring the performance of the learning algorithm
    1. Collect a large set of examples.
    2. Divide it into two disjoint sets: the training set and the test set.
    3. Use the learning algorithm with the training set as examples to generate a hypothesis H.
    4. Measure the percentage of examples in the test set that are correctly classified by H.
    5. Repeat steps 1 to 4 for different sizes of training sets and different randomly selected training sets of each size.
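A hedged sketch of steps 1 to 5 in plain Python; the majority-class learner and the toy data set exist only so the procedure can run end to end.

```python
import random

def average_test_accuracy(learn, examples, train_fraction, trials=10):
    """Steps 2-5: split, train, score on the held-out test set, repeat."""
    scores = []
    for _ in range(trials):
        data = examples[:]
        random.shuffle(data)                          # a different random training set each trial
        cut = int(len(data) * train_fraction)
        train, test = data[:cut], data[cut:]          # two disjoint sets
        h = learn(train)                              # generate a hypothesis H
        correct = sum(1 for x, y in test if h(x) == y)
        scores.append(correct / len(test))            # fraction correctly classified by H
    return sum(scores) / len(scores)

def majority_learner(train):                          # placeholder learning algorithm
    labels = [y for _, y in train]
    top = max(set(labels), key=labels.count)
    return lambda x: top

data = [(i, i % 2) for i in range(100)]               # toy labelled examples
for fraction in (0.1, 0.3, 0.5, 0.7):                 # different training-set sizes
    print(fraction, average_test_accuracy(majority_learner, data, fraction))
```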
  • Broadening the applicability of decision trees
    The following issues must be addressed in order to broaden their applicability:
    Missing data
    Multi-valued attributes
    Continuous-valued attributes (a threshold-splitting sketch follows this slide)
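For example, a continuous-valued attribute is typically discretized by choosing a split threshold. The rough sketch below scores candidate thresholds by plain misclassification error rather than information gain, and the data are invented.

```python
def best_threshold(values, labels):
    """Pick t so that the test 'value < t' separates the labels as cleanly as possible."""
    def misclassified(side):
        if not side:
            return 0
        majority = max(set(side), key=side.count)
        return sum(1 for label in side if label != majority)

    candidates = sorted(set(values))
    best_t, best_err = None, float("inf")
    for lo, hi in zip(candidates, candidates[1:]):
        t = (lo + hi) / 2                                   # midpoints between observed values
        left = [l for v, l in zip(values, labels) if v < t]
        right = [l for v, l in zip(values, labels) if v >= t]
        err = misclassified(left) + misclassified(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

ages = [22, 25, 30, 35, 40, 52]
buys = ["no", "no", "no", "yes", "yes", "yes"]
print(best_threshold(ages, buys))                           # 32.5 splits the labels perfectly
```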
  • General logical descriptions
    A hypothesis proposes an expression, which we call a candidate definition of the goal predicate.
    An example is a false negative for the hypothesis if the hypothesis says it should be negative but in fact it is positive.
    An example is a false positive for the hypothesis if the hypothesis says it should be positive but in fact it is negative (both cases are checked in the sketch after this slide).
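Both failure modes can be checked mechanically. In the sketch below the candidate hypothesis ("x is large" means x > 10) and the labelled examples are made up.

```python
def false_positives_and_negatives(hypothesis, examples):
    """Sort labelled examples into false positives and false negatives for a hypothesis."""
    false_pos, false_neg = [], []
    for x, actually_positive in examples:
        predicted_positive = hypothesis(x)
        if predicted_positive and not actually_positive:
            false_pos.append(x)      # hypothesis says positive, example is really negative
        elif not predicted_positive and actually_positive:
            false_neg.append(x)      # hypothesis says negative, example is really positive
    return false_pos, false_neg

hypothesis = lambda x: x > 10        # candidate definition of the goal predicate
examples = [(12, True), (15, False), (8, True), (3, False)]
print(false_positives_and_negatives(hypothesis, examples))   # ([15], [8])
```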
  • Learning in Neural and Belief Networks
    Neural network research studies how to train complex networks of simple computing elements, thereby perhaps shedding some light on the workings of the brain.
    The simple arithmetic computing elements correspond to neurons, and the network as a whole corresponds to a collection of interconnected neurons. For this reason, such networks are called neural networks.
  • Comparing brains with digital computers
  • What is a Neural network?
    A neural network is composed of a number of nodes, or units, connected by links.
    Each link has a numeric weight associated with it.
    Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights (a one-unit sketch follows this slide).
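A one-unit sketch of weighted links and learning by weight updates, using a perceptron-style rule; the training task (logical AND) and all parameter values are illustrative.

```python
def train_unit(examples, epochs=20, learning_rate=0.1):
    """A single threshold unit: weighted input links plus a bias weight."""
    n_inputs = len(examples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = bias + sum(w * x for w, x in zip(weights, inputs))
            output = 1 if activation > 0 else 0
            error = target - output
            # learning = updating the weights on the links (and the bias)
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # logical AND
print(train_unit(examples))
```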
  • Applications of neural networks
    Pronunciation: Pronunciation of written English text by a computer is a fascinating problem in linguistics, as well as a task with high commercial payoff.
    Handwritten character recognition: In one of the largest applications of neural networks to date, Le Cun et al. (1989) implemented a network designed to read zip codes on hand-addressed envelopes.
  • Applications of neural networks
    ALVINN (Autonomous Land Vehicle In a Neural Network) (Pomerleau, 1993) is a neural network that has performed quite well in a domain where some other approaches have failed. It learns to steer a vehicle along a single lane on a highway by observing the performance of a human driver.
  • Visit more self-help tutorials
    Pick a tutorial of your choice and browse through it at your own pace.
    The tutorials section is free, self-guiding and will not involve any additional support.
    Visit us at www.dataminingtools.net