AI: Learning in AI 2
    Presentation Transcript

    • Learning In AI 2
    • Bayesian learning
      Bayesian learning views the problem of constructing hypotheses from data as a subproblem of the more fundamental problem of making predictions.
      The idea is to use hypotheses as intermediaries between data and predictions.
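The use of hypotheses as intermediaries can be sketched with a toy coin-bias example (the three candidate biases and uniform prior are illustrative assumptions, not from the slides): each hypothesis is updated by Bayes' rule, and the prediction is a posterior-weighted average over all hypotheses.

```python
# Toy Bayesian prediction: hypotheses are candidate coin biases P(heads),
# each starting with a uniform prior.
hypotheses = {0.25: 1/3, 0.50: 1/3, 0.75: 1/3}

def posterior(hypotheses, observations):
    """Update P(h | data) by Bayes' rule for a sequence of 'H'/'T' flips."""
    post = {}
    for bias, prior in hypotheses.items():
        likelihood = 1.0
        for obs in observations:
            likelihood *= bias if obs == 'H' else (1 - bias)
        post[bias] = prior * likelihood
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

def predict_heads(hypotheses, observations):
    """P(next is heads) = sum over hypotheses of P(heads | h) * P(h | data)."""
    post = posterior(hypotheses, observations)
    return sum(bias * p for bias, p in post.items())

p = predict_heads(hypotheses, "HHH")  # prediction after seeing three heads
```

After three heads the posterior shifts toward the 0.75-bias hypothesis, so the prediction rises above the prior mean of 0.5.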
    • Belief network learning problems
      Known structure, fully observable
      Unknown structure, fully observable
      Known structure, hidden variables
      Unknown structure, hidden variables
    • A comparison of belief networks and neural networks
      The principal difference is that belief networks are localized representations, whereas neural networks are distributed representations
      Another representational difference is that belief network variables have two dimensions of "activation"—the range of values for the proposition, and the probability assigned to each of those values.
    • What is Reinforcement learning?
      The task of reinforcement learning is to use rewards to learn a successful agent function.
    • What is Q-Learning?
      The agent learns an action-value function giving the expected utility of taking a given action in a given state. This is called Q-learning.
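The Q-learning update can be sketched on a hypothetical two-state chain (the world, rewards, and parameters below are illustrative, not from the slides): the agent repeatedly nudges Q(s, a) toward the observed reward plus the discounted value of the best next action.

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain: state 0 --'right'--> terminal
    state 1 (reward 1.0); 'stay' keeps the agent in state 0 (reward 0)."""
    rng = random.Random(seed)
    actions = ('stay', 'right')
    Q = {(s, a): 0.0 for s in (0, 1) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != 1:  # episode ends at terminal state 1
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda a: Q[(s, a)])
            s2, r = (1, 1.0) if a == 'right' else (0, 0.0)
            best_next = max(Q[(s2, b)] for b in actions)
            # the Q-learning update rule
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

With enough episodes, Q(0, 'right') approaches the true value 1.0, while Q(0, 'stay') stays lower because it only defers the reward.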
    • Passive learning in an unknown environment
      The prioritized-sweeping heuristic prefers to make adjustments to states whose likely successors have just undergone a large adjustment in their own utility estimates.
    • Active learning in an unknown environment
      An active agent must consider what actions to take, what their outcomes may be, and how they will affect the rewards received.
    • Design for an active adaptive dynamic programming (ADP) agent
      The agent learns an environment model M by observing the results of its actions, and uses the model to calculate the utility function U using a dynamic programming algorithm.
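The two halves of the ADP idea can be sketched separately (the tiny two-state world, action names, and rewards are illustrative assumptions): first estimate a transition model from observed triples, then solve for the utilities by dynamic programming.

```python
from collections import Counter, defaultdict

def learn_model(transitions):
    """Estimate P(s' | s, a) from observed (state, action, next_state) triples."""
    counts = defaultdict(Counter)
    for s, a, s2 in transitions:
        counts[(s, a)][s2] += 1
    return {sa: {s2: n / sum(c.values()) for s2, n in c.items()}
            for sa, c in counts.items()}

def value_iteration(model, rewards, gamma=0.9, iters=100):
    """Dynamic-programming solution for the utility U of each state."""
    states = {s for (s, _) in model} | {s2 for p in model.values() for s2 in p}
    U = {s: 0.0 for s in states}
    for _ in range(iters):
        U = {s: rewards.get(s, 0.0) + gamma * max(
                 (sum(p * U[s2] for s2, p in model[(s, a)].items())
                  for a in ('stay', 'right') if (s, a) in model),
                 default=0.0)
             for s in states}
    return U

# Observed experience: 'right' reliably moves 0 -> 1; 'stay' loops at 0.
model = learn_model([(0, 'right', 1), (0, 'stay', 0)])
U = value_iteration(model, rewards={1: 1.0})
```

Here the learned model is exact because the toy world is deterministic; with noisy transitions the frequency estimates simply converge more slowly.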
    • Exploration of Actions
      An action has two kinds of outcome:
      It gains rewards on the current sequence.
      It affects the percepts received, and hence the ability of the agent to learn—and receive rewards in future sequences.
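One simple way to balance these two kinds of outcome is an optimistic exploration function: act greedily, but pretend that rarely tried actions have a high utility until each has been tried enough times. The constants `R_PLUS` and `N_E` below are illustrative assumptions.

```python
R_PLUS = 2.0   # optimistic utility assumed for under-explored actions
N_E = 5        # try each action at least this many times

def explore_value(utility, tries):
    """Return an optimistic value until the action has been tried N_E times."""
    return R_PLUS if tries < N_E else utility

def choose_action(actions, utility, tries):
    """Pick the action that maximizes the optimistic value estimate."""
    return max(actions, key=lambda a: explore_value(utility[a], tries[a]))
```

With this rule the agent is drawn toward under-explored actions early on, then settles into exploiting the best-known action.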
    • Genetic Algorithm
      It starts with a set of one or more individuals and applies selection and reproduction operators to "evolve" an individual that is successful, as measured by a fitness function.
      The genetic algorithm finds a fit individual using simulated evolution.
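A minimal version of this loop can be sketched with the stand-in "OneMax" fitness function, which simply counts 1 bits (the fitness function, population size, and mutation rate are all illustrative choices):

```python
import random

def genetic_algorithm(length=20, pop_size=30, generations=60, seed=0):
    """Evolve a bit string toward all ones via selection, crossover, mutation."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # OneMax: count the 1 bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # reproduction: one-point crossover plus occasional mutation
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:          # mutate one random bit
                i = rng.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because the fitter half is carried over unchanged, the best fitness never decreases from one generation to the next.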
    • Knowledge from learning
      If we use Descriptions to denote the conjunction of all the example descriptions, and Classifications to denote the conjunction of all the example classifications, then the Hypothesis must satisfy the following property:
      Hypothesis ∧ Descriptions |= Classifications
      We call this kind of relationship an entailment constraint.
      A cumulative learning process uses, and adds to, its stock of background knowledge over time.
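For a propositional toy case, the constraint that Hypothesis ∧ Descriptions entails Classifications can be checked by enumerating truth assignments. The symbols and the rule below (birds fly) are illustrative, not from the slides.

```python
from itertools import product

# Toy instance of the entailment constraint:
#   Descriptions:    bird          Hypothesis: bird -> flies
#   Classifications: flies
hypothesis      = lambda m: (not m['bird']) or m['flies']
descriptions    = lambda m: m['bird']
classifications = lambda m: m['flies']

def entails(premises, conclusion, symbols=('bird', 'flies')):
    """True iff every model of all premises is also a model of the conclusion."""
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if all(p(m) for p in premises) and not conclusion(m):
            return False  # found a counterexample model
    return True
```

Dropping the hypothesis breaks the entailment, which is exactly why the learner must construct one.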
    • Prior knowledge in learning
      Is useful in following ways
      Because any hypothesis generated must be consistent with the prior knowledge as well as with the new observations, the effective hypothesis space size is reduced to include only those theories that are consistent with what is already known.
      For any given set of observations, the size of the hypothesis required to construct an explanation for the observations can be much reduced, because the prior knowledge will be available to help out the new rules in explaining the observations. The smaller the hypothesis, the easier it is to find.
    • Explanation-based learning
      The technique of memoization has long been used in computer science to speed up programs by saving the results of computation. Explanation-based learning (EBL) generalizes this idea: it extracts a general rule from a single example by explaining why the example works.
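The memoization idea itself takes only a few lines; the Fibonacci function below is a standard illustrative example, not from the slides.

```python
import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    """Naive recursion made fast by caching previously computed results."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache this recursion takes exponential time; with it, each subproblem is computed exactly once.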
    • Improving efficiency in Learning
      A common approach to ensuring that derived rules are efficient is to insist on the operationality of each subgoal in the rule. EBL makes the knowledge base more efficient for the kinds of problems it is reasonable to expect.
    • Visit more self help tutorials
      Pick a tutorial of your choice and browse through it at your own pace.
      The tutorials section is free, self-guiding and will not involve any additional support.
      Visit us at www.dataminingtools.net