AI: Learning in AI 2

Transcript

  • 1. Learning In AI 2
  • 2. Bayesian learning
    Bayesian learning views the problem of constructing hypotheses from data as a subproblem of the more fundamental problem of making predictions.
    The idea is to use hypotheses as intermediaries between data and predictions.
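    A minimal sketch of this idea in Python follows (the coin-bias setting is an invented illustration, not from the original slides): the posterior over hypotheses is computed from priors and likelihoods, and the prediction for the next observation is a posterior-weighted average of each hypothesis's prediction.

      def posterior(priors, likelihoods):
          # P(h_i | data) is proportional to P(data | h_i) * P(h_i).
          unnorm = [p * l for p, l in zip(priors, likelihoods)]
          total = sum(unnorm)
          return [u / total for u in unnorm]

      # Three hypotheses about a coin's probability of heads, with equal priors.
      biases = [0.3, 0.5, 0.8]
      priors = [1 / 3] * 3

      # Data: 4 heads in 5 flips; likelihood of that data under each hypothesis.
      heads, flips = 4, 5
      likelihoods = [b ** heads * (1 - b) ** (flips - heads) for b in biases]
      post = posterior(priors, likelihoods)

      # Prediction: each hypothesis acts as an intermediary between data and
      # prediction, and its vote is weighted by its posterior probability.
      p_next_heads = sum(w * b for w, b in zip(post, biases))
      print(round(p_next_heads, 3))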
  • 3. Belief network learning problems
    Known structure, fully observable
    Unknown structure, fully observable
    Known structure, hidden variables
    Unknown structure, hidden variables
  • 4. A comparison of belief networks and neural networks
    The principal difference is that belief networks are localized representations, whereas neural networks are distributed representations.
    Another representational difference is that belief network variables have two dimensions of "activation": the range of values for the proposition, and the probability assigned to each of those values.
  • 5. What is Reinforcement learning?
    The task of reinforcement learning is to use rewards to learn a successful agent function.
  • 6. What is Q-Learning?
    The agent learns an action-value function giving the expected utility of taking a given action in a given state. This is called Q-learning.
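    A minimal tabular Q-learning sketch is shown below. The environment interface (env.reset, env.step, env.actions) and the hyperparameters are assumptions made for illustration, not part of the original slides.

      import random

      def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
          Q = {}  # Q[(state, action)] -> learned expected utility

          def q(s, a):
              return Q.get((s, a), 0.0)

          for _ in range(episodes):
              s = env.reset()
              done = False
              while not done:
                  # Epsilon-greedy choice between exploring and exploiting.
                  if random.random() < epsilon:
                      a = random.choice(env.actions(s))
                  else:
                      a = max(env.actions(s), key=lambda act: q(s, act))
                  s2, reward, done = env.step(a)
                  # Core update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
                  best_next = 0.0 if done else max(q(s2, act) for act in env.actions(s2))
                  Q[(s, a)] = q(s, a) + alpha * (reward + gamma * best_next - q(s, a))
                  s = s2
          return Q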
  • 7. Passive learning in an unknown environment
    The prioritized-sweeping heuristic prefers to make adjustments to states whose likely successors have just undergone a large adjustment in their own utility estimates.
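    A rough sketch of the heuristic, assuming a model (transition probabilities P, rewards R, and a predecessor map) has already been learned; the names, the priority scheme, and the threshold are simplified illustrative assumptions.

      import heapq

      def prioritized_sweep(U, P, R, predecessors, start_state,
                            gamma=0.9, theta=1e-4, max_updates=100):
          # Propagate a utility change outward from start_state, updating
          # the states whose successors changed the most first.
          pq = [(0.0, start_state)]   # min-heap on negated change magnitude
          queued = {start_state}
          updates = 0
          while pq and updates < max_updates:
              _, s = heapq.heappop(pq)
              queued.discard(s)
              # Bellman backup: U(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) U(s')
              new_u = R[s] + gamma * max(
                  sum(p * U[s2] for s2, p in P[s][a].items()) for a in P[s])
              change, U[s] = abs(new_u - U[s]), new_u
              updates += 1
              if change > theta:
                  # This state's estimate moved a lot, so its predecessors'
                  # utility estimates are likely stale too; queue them.
                  for sp in predecessors.get(s, ()):
                      if sp not in queued:
                          heapq.heappush(pq, (-change, sp))
                          queued.add(sp)
          return U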
  • 8. Active learning in an unknown environment
    An active agent must consider what actions to take, what their outcomes may be, and how they will affect the rewards received.
  • 9. Design for an active adaptive dynamic programming (ADP) agent
    The agent learns an environment model M by observing the results of its actions, and uses the model to calculate the utility function U using a dynamic programming algorithm.
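    A minimal sketch of that design, assuming a tabular environment: the model is a maximum-likelihood estimate built from observed transitions, and the utilities are then computed by value iteration. All names here are illustrative assumptions.

      from collections import defaultdict

      class ADPModel:
          def __init__(self):
              # counts[(s, a)][s'] = times action a in state s led to s'
              self.counts = defaultdict(lambda: defaultdict(int))
              self.rewards = {}  # observed reward on entering each state

          def observe(self, s, a, s2, reward):
              self.counts[(s, a)][s2] += 1
              self.rewards[s2] = reward

          def prob(self, s, a):
              # Maximum-likelihood estimate of P(s' | s, a) from the counts.
              total = sum(self.counts[(s, a)].values())
              return {s2: n / total for s2, n in self.counts[(s, a)].items()}

      def value_iteration(model, states, actions, gamma=0.9, eps=1e-4):
          # Standard dynamic-programming solution for U given the learned model.
          U = {s: 0.0 for s in states}
          while True:
              delta = 0.0
              for s in states:
                  qs = [sum(p * (model.rewards.get(s2, 0.0) + gamma * U[s2])
                            for s2, p in model.prob(s, a).items())
                        for a in actions(s) if model.counts[(s, a)]]
                  if qs:
                      new_u = max(qs)
                      delta = max(delta, abs(new_u - U[s]))
                      U[s] = new_u
              if delta < eps:
                  return U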
  • 10. Exploration of Action
    An action has two kinds of outcome:
    It gains rewards on the current sequence.
    It affects the percepts received, and hence the agent's ability to learn, and to receive rewards, in future sequences.
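    One common way to balance these two outcomes is an epsilon-greedy rule, sketched below (the names are illustrative assumptions): with probability epsilon the agent tries something new to improve future learning, and otherwise it takes the action currently believed best.

      import random

      def choose_action(state, actions, Q, epsilon=0.1):
          if random.random() < epsilon:
              return random.choice(actions)  # explore: improve future learning
          # Exploit: take the action currently believed best.
          return max(actions, key=lambda a: Q.get((state, a), 0.0))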
  • 11. Genetic Algorithm
    It starts with a set of one or more individuals and applies selection and reproduction operators to "evolve" an individual that is successful, as measured by a fitness function.
    The genetic algorithm finds a fit individual using simulated evolution.
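    A minimal sketch, assuming a toy fitness function (the count of 1-bits in a bit string) and simple selection, crossover, and mutation operators; the rates and operators are illustrative choices.

      import random

      def fitness(individual):
          return sum(individual)  # count of 1-bits; higher is fitter

      def evolve(pop_size=20, length=16, generations=100, mutation_rate=0.05):
          population = [[random.randint(0, 1) for _ in range(length)]
                        for _ in range(pop_size)]
          for _ in range(generations):
              # Selection: keep the fitter half of the population.
              population.sort(key=fitness, reverse=True)
              parents = population[:pop_size // 2]
              # Reproduction: single-point crossover between random parents.
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randint(1, length - 1)
                  child = a[:cut] + b[cut:]
                  # Mutation: flip each bit with a small probability.
                  child = [bit ^ (random.random() < mutation_rate) for bit in child]
                  children.append(child)
              population = parents + children
          return max(population, key=fitness)

      print(evolve())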
  • 12. Knowledge from learning
    Let Descriptions denote the conjunction of all the example descriptions, and Classifications denote the conjunction of all the example classifications.
    The hypothesis must then satisfy the following property:
    Hypothesis ∧ Descriptions ⊨ Classifications
    We call this kind of relationship an entailment constraint.
    A cumulative learning process uses, and adds to, its stock of background knowledge over time.
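    As a simple illustration of the constraint (an invented example, not from the original slides): if Hypothesis is ∀x Swan(x) ⇒ White(x), Descriptions includes Swan(a), and Classifications includes White(a), then Hypothesis ∧ Descriptions ⊨ Classifications, because applying the rule to Swan(a) yields White(a).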
  • 13. Prior knowledge in learning
    Prior knowledge is useful in the following ways:
    Because any hypothesis generated must be consistent with the prior knowledge as well as with the new observations, the effective hypothesis space size is reduced to include only those theories that are consistent with what is already known.
    For any given set of observations, the size of the hypothesis required to construct an explanation for the observations can be much reduced, because the prior knowledge will be available to help out the new rules in explaining the observations. The smaller the hypothesis, the easier it is to find.
  • 14. Explanation-based learning
    The technique of memoization has long been used in computer science to speed up programs by saving the results of computation. Explanation-based learning extends this idea by extracting general rules from individual examples, so that a whole class of similar cases can be handled without repeating the work.
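    A minimal memoization sketch in Python (the Fibonacci example is illustrative, not from the slides):

      # A dict-based memo: results of earlier computations are saved
      # and reused instead of being recomputed.
      memo = {}

      def fib(n):
          if n in memo:
              return memo[n]          # reuse a saved result
          result = n if n < 2 else fib(n - 1) + fib(n - 2)
          memo[n] = result            # save for later calls
          return result

      print(fib(40))  # fast, because intermediate results are memoized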
  • 15. Improving efficiency in Learning
    A common approach to ensuring that derived rules are efficient is to insist on the operationality of each subgoal in the rule. EBL makes the knowledge base more efficient for the kinds of problems it is reasonable to expect.
  • 16. Visit more self-help tutorials
    Pick a tutorial of your choice and browse through it at your own pace.
    The tutorials section is free, self-guiding and will not involve any additional support.
    Visit us at www.dataminingtools.net
