
1. Learning in AI 2
2. Bayesian learning
Bayesian learning views the problem of constructing hypotheses from data as a subproblem of the more fundamental problem of making predictions. The idea is to use hypotheses as intermediaries between the data and the predictions.
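As a minimal sketch of this idea: assuming a finite set of hypotheses with known priors, the prediction for a new observation is the posterior-weighted average of each hypothesis's prediction. The function name `bayesian_prediction` and the dictionary-based inputs are illustrative, not from the slides.

```python
def bayesian_prediction(priors, likelihoods, next_probs):
    """P(X | data) = sum_h P(X | h) * P(h | data),
    where P(h | data) is proportional to P(data | h) * P(h).

    priors:      {h: P(h)}        prior over hypotheses
    likelihoods: {h: P(data | h)} likelihood of the observed data
    next_probs:  {h: P(X | h)}    each hypothesis's prediction for X
    """
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    posterior = {h: p / z for h, p in unnorm.items()}
    # The hypotheses act as intermediaries: data -> posterior -> prediction.
    return sum(posterior[h] * next_probs[h] for h in priors)
```

For example, if the data rules out one of two equally likely hypotheses, the prediction collapses to the surviving hypothesis's prediction.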
3. Belief network learning problems
- Known structure, fully observable
- Unknown structure, fully observable
- Known structure, hidden variables
- Unknown structure, hidden variables
4. A comparison of belief networks and neural networks
The principal difference is that belief networks are localized representations, whereas neural networks are distributed representations. Another representational difference is that belief network variables have two dimensions of "activation": the range of values for the proposition, and the probability assigned to each of those values.
5. What is reinforcement learning?
The task of reinforcement learning is to use rewards to learn a successful agent function.
6. What is Q-learning?
In Q-learning, the agent learns an action-value function giving the expected utility of taking a given action in a given state.
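A single tabular Q-learning update can be sketched as follows; the helper name `q_update` and the dictionary-based Q table are assumptions for illustration.

```python
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    Q is a dict mapping (state, action) pairs to estimated utilities;
    unseen pairs default to 0. alpha is the learning rate, gamma the
    discount factor.
    """
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q
```

Calling this once per observed transition (s, a, r, s') gradually shifts each estimate toward the reward plus the discounted value of the best successor action.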
7. Passive learning in an unknown environment
The prioritized-sweeping heuristic prefers to make adjustments to states whose likely successors have just undergone a large adjustment in their own utility estimates.
8. Active learning in an unknown environment
An active agent must consider what actions to take, what their outcomes may be, and how they will affect the rewards received.
9. Design for an active adaptive dynamic programming (ADP) agent
The agent learns an environment model M by observing the results of its actions, and uses the model to calculate the utility function U with a dynamic programming algorithm.
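The model-to-utilities step can be sketched as value iteration over a learned transition model. This is a minimal illustration, assuming rewards depend only on the state; the name `adp_utilities` and the dictionary layout of the model are hypothetical.

```python
def adp_utilities(model, rewards, gamma=0.9, iters=100):
    """Compute utilities U from a learned model by value iteration.

    model:   {s: {action: {next_state: prob}}} -- learned transition model M;
             terminal states are simply absent from model.
    rewards: {s: R(s)} -- learned reward for each state.
    """
    U = {s: 0.0 for s in rewards}
    for _ in range(iters):
        new_U = {}
        for s in rewards:
            if s in model:
                # Best expected utility over the available actions.
                best = max(sum(p * U[t] for t, p in dist.items())
                           for dist in model[s].values())
            else:
                best = 0.0  # terminal state: no successors
            new_U[s] = rewards[s] + gamma * best
        U = new_U
    return U
```

In a full ADP agent the model and rewards are re-estimated from observed transition counts after each action, and the utilities are recomputed.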
10. Exploration of actions
An action has two kinds of outcome:
- It gains rewards on the current sequence.
- It affects the percepts received, and hence the agent's ability to learn, and to receive rewards, in future sequences.
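One common way to trade off these two outcomes is epsilon-greedy action selection: usually exploit the best-known action, but occasionally explore a random one. A minimal sketch, assuming a tabular Q function; the name `epsilon_greedy` is illustrative.

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1, rng=random):
    """With probability epsilon, explore (pick a random action);
    otherwise exploit the action with the highest current Q estimate."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```

Setting epsilon to 0 gives a purely greedy agent that may never discover better actions; a small positive epsilon keeps some exploration going.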
11. Genetic algorithms
A genetic algorithm starts with a set of one or more individuals and applies selection and reproduction operators to "evolve" an individual that is successful, as measured by a fitness function. In other words, it finds a fit individual by simulated evolution.
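A toy version of this loop, assuming bit-string individuals, fitness-proportional selection, single-point crossover, and point mutation; all parameter names and defaults here are illustrative.

```python
import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=50,
                      mutation_rate=0.1, seed=0):
    """Evolve bit-string individuals toward high fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: parents picked with probability proportional to fitness
        # (+1 so an all-zero population still has positive total weight).
        weights = [fitness(ind) + 1 for ind in pop]
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = rng.choices(pop, weights=weights, k=2)
            cut = rng.randrange(1, length)        # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < mutation_rate:      # point mutation
                i = rng.randrange(length)
                child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax problem: fitness is simply the number of 1 bits.
best = genetic_algorithm(sum)
```

On the OneMax problem the population quickly concentrates on individuals that are nearly all ones, illustrating how selection and reproduction alone can drive up fitness.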
12. Knowledge from learning
Let Descriptions denote the conjunction of all the example descriptions, and Classifications the conjunction of all the example classifications. The hypothesis must then satisfy the following property:
Hypothesis ∧ Descriptions ⊨ Classifications
We call this kind of relationship an entailment constraint. A cumulative learning process uses, and adds to, its stock of background knowledge over time.
13. Prior knowledge in learning
Prior knowledge is useful in the following ways:
- Because any hypothesis generated must be consistent with the prior knowledge as well as with the new observations, the effective hypothesis space is reduced to only those theories that are consistent with what is already known.
- For any given set of observations, the hypothesis required to explain them can be much smaller, because the prior knowledge helps the new rules explain the observations. The smaller the hypothesis, the easier it is to find.
14. Explanation-based learning
The technique of memoization has long been used in computer science to speed up programs by saving the results of computation. Explanation-based learning (EBL) generalizes this idea: instead of caching a single result, it extracts a general rule from the explanation of one solved instance.
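A standard illustration of the memoization technique mentioned above: caching Fibonacci results turns an exponential-time recursion into a linear-time one, using Python's built-in `functools.lru_cache`.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naively exponential recursion, made fast by caching each result
    the first time it is computed."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(30)` would recompute the same subproblems millions of times; with it, each `fib(k)` is computed exactly once.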
15. Improving efficiency in learning
A common approach to ensuring that derived rules are efficient is to insist on the operationality of each subgoal in the rule. EBL makes the knowledge base more efficient for the kinds of problems it is reasonable to expect.
16. Visit more self-help tutorials
Pick a tutorial of your choice and browse through it at your own pace. The tutorials section is free and self-guiding, and does not involve any additional support. Visit us at www.dataminingtools.net
