AI: Learning in AI
Transcript

  • 1. Learning in AI Systems and Neural Networks
  • 2. Learning from Observation
    Components of a general model of learning agents:
    Critic guided by performance standard
    Learning element
    Problem generator
    Performance element
    Sensors to give input from environment
    Effectors to carry out actions in environment
  • 3. Components of the performance element
    A direct mapping from conditions on the current state to actions.
    A means to infer relevant properties of the world from the percept sequence.
    Information about the way the world evolves.
    Information about the results of possible actions the agent can take.
    Utility information indicating the desirability of world states.
    Action-value information indicating the desirability of particular actions in particular states.
    Goals that describe classes of states whose achievement maximizes the agent's utility.
  • 4. Types of learning
    Any situation in which both the inputs and outputs of a component can be perceived is called supervised learning.
    In learning the condition-action component, the agent receives some evaluation of its action but is not told the correct action. This is called reinforcement learning.
    Learning when there is no hint at all about the correct outputs is called unsupervised learning.
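A minimal sketch of the three feedback regimes, shown as the data each learner perceives (all values here are illustrative assumptions, not from the slides):

```python
# Supervised: both the input and the correct output are perceived.
supervised_examples = [(x, 2 * x) for x in range(3)]   # (input, correct output)

# Reinforcement: the agent perceives an evaluation (reward) of its
# action, but is never told the correct action.
reinforcement_trace = [("state0", "left", -1.0),
                       ("state1", "right", +1.0)]      # (state, action, reward)

# Unsupervised: inputs only, with no hint about the correct outputs.
unsupervised_inputs = [0.9, 1.1, 4.8, 5.2]             # e.g. two natural clusters

print(supervised_examples)  # [(0, 0), (1, 2), (2, 4)]
```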
  • 5. What is Inductive Learning?
    In supervised learning, the learning element is given the correct value of the function for particular inputs, and changes its representation of the function to try to match the information provided by the feedback.
    More formally, we say an example is a pair (x, f(x)), where x is the input and f(x) is the output of the function applied to x.
    The task of pure inductive inference (or induction) is this: given a collection of examples of f, return a function h that approximates f. The function h is called a hypothesis.
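The (x, f(x)) formulation can be sketched in a few lines. The target function f and the trivially simple two-point line-fitting hypothesis space below are illustrative assumptions:

```python
def f(x):
    """Hidden target function; the learner only ever sees example pairs."""
    return 2 * x + 1

# A collection of examples, each a pair (x, f(x)).
examples = [(x, f(x)) for x in range(5)]

def induce(examples):
    """Return a hypothesis h: fit a line a*x + b through the first two
    examples (an assumed, deliberately tiny hypothesis space)."""
    (x0, y0), (x1, y1) = examples[0], examples[1]
    a = (y1 - y0) / (x1 - x0)
    b = y0 - a * x0
    return lambda x: a * x + b

h = induce(examples)
consistent = all(h(x) == y for x, y in examples)
print(consistent)  # True: h agrees with f on every example
```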
  • 6. Measuring the performance of the learning algorithm
    Collect a large set of examples.
    Divide it into two disjoint sets: the training set and the test set.
    Use the learning algorithm with the training set as examples to generate a hypothesis H.
    Measure the percentage of examples in the test set that are correctly classified by H.
    Repeat steps 1 to 4 for different sizes of training sets and different randomly selected training sets of each size.
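The five steps above can be sketched as follows, assuming a toy labelled dataset and a simple nearest-neighbour learner (both illustrative, not part of the slides):

```python
import random

def nearest_neighbour(training_set):
    """Return a hypothesis H that labels x like its closest training example."""
    def H(x):
        return min(training_set, key=lambda ex: abs(ex[0] - x))[1]
    return H

random.seed(0)

# Step 1: collect a large set of labelled examples (x, class).
examples = [(x, int(x >= 50)) for x in range(100)]

# Step 2: divide it into two disjoint sets, training and test.
random.shuffle(examples)
training_set, test_set = examples[:70], examples[70:]

# Step 3: run the learning algorithm on the training set to get H.
H = nearest_neighbour(training_set)

# Step 4: measure the fraction of test examples correctly classified by H.
accuracy = sum(H(x) == y for x, y in test_set) / len(test_set)
print(f"accuracy = {accuracy:.0%}")

# Step 5 would repeat the above for different training-set sizes and
# different random splits, yielding a learning curve.
```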
  • 7. Broadening the applicability of decision trees
    The following issues must be addressed to broaden the applicability of decision trees:
    Missing data
    Multi-valued attributes
    Continuous-valued attributes
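For the continuous-valued case, one common approach (a sketch of a standard technique, not necessarily the one the slides intend) is to turn the attribute into a boolean test by picking a split threshold from the midpoints between sorted values:

```python
def candidate_thresholds(values):
    """Midpoints between consecutive distinct sorted attribute values;
    each midpoint t yields a boolean test 'attribute < t'."""
    vs = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(vs, vs[1:])]

heights = [150, 170, 160, 180]        # a continuous-valued attribute
print(candidate_thresholds(heights))  # [155.0, 165.0, 175.0]
```

A decision-tree learner would then score each candidate test (e.g. by information gain) exactly as it scores a discrete attribute.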
  • 8. General logical descriptions
    A hypothesis proposes an expression, which we call a candidate definition of the goal predicate.
    An example can be a false negative for the hypothesis, if the hypothesis says it should be negative but in fact it is positive.
    An example can be a false positive for the hypothesis, if the hypothesis says it should be positive but in fact it is negative.
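The two error types can be checked mechanically. The goal predicate ("is even") and the candidate definition ("divisible by 4") below are illustrative assumptions:

```python
goal = lambda x: x % 2 == 0          # true classification: "is even"
hypothesis = lambda x: x % 4 == 0    # candidate definition: "divisible by 4"

examples = range(10)

# False negative: the hypothesis says negative, but the example is positive.
false_negatives = [x for x in examples if not hypothesis(x) and goal(x)]
# False positive: the hypothesis says positive, but the example is negative.
false_positives = [x for x in examples if hypothesis(x) and not goal(x)]

print(false_negatives)  # [2, 6] - even numbers the hypothesis rejects
print(false_positives)  # [] - this hypothesis is too specific, never too general
```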
  • 9. Learning in Neural and belief Networks
    The study of neural networks considers how to train complex networks of simple computing elements, thereby perhaps shedding some light on the workings of the brain.
    The simple arithmetic computing elements correspond to neurons, and the network as a whole corresponds to a collection of interconnected neurons; for this reason, such networks are called neural networks.
  • 10. Comparing brains with digital computers
  • 11. What is a Neural network?
    A neural network is composed of a number of nodes, or units, connected by links.
    Each link has a numeric weight associated with it.
    Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights. 
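A single unit with weighted links can be sketched as follows. The sigmoid activation and the one-step delta-rule weight update are common choices assumed here, not taken from the slides:

```python
import math

def unit(inputs, weights, bias):
    """One node: weighted sum of inputs through a sigmoid activation."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 / (1 + math.exp(-total))

# Learning takes place by updating the weights; here, one step of a
# simple delta rule nudges them toward the target output.
inputs, target = [1.0, 0.0], 1.0
weights, bias, rate = [0.1, -0.2], 0.0, 0.5

out = unit(inputs, weights, bias)                 # output before learning
error = target - out
weights = [w + rate * error * x for w, x in zip(weights, inputs)]
bias += rate * error

print(unit(inputs, weights, bias))                # closer to the target of 1.0
```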
  • 12. Applications of neural networks
    Pronunciation: Pronunciation of written English text by a computer is a fascinating problem in linguistics, as well as a task with high commercial payoff.
    Handwritten character recognition: In one of the largest applications of neural networks to date, Le Cun et al. (1989) implemented a network designed to read zip codes on hand-addressed envelopes.
  • 13. Applications of neural networks
    ALVINN (Autonomous Land Vehicle In a Neural Network) (Pomerleau, 1993) is a neural network that has performed quite well in a domain where some other approaches have failed. It learns to steer a vehicle along a single lane on a highway by observing the performance of a human driver.
  • 14. Visit more self help tutorials
    Pick a tutorial of your choice and browse through it at your own pace.
    The tutorials section is free, self-guiding and will not involve any additional support.
    Visit us at www.dataminingtools.net
