
# AI: Learning in AI



#### Learning in AI Systems and Neural Networks
#### Learning from Observation

Components of a general model of learning agents:

- Critic, guided by a performance standard
- Learning element
- Problem generator
- Performance element
- Sensors to receive input from the environment
- Effectors to carry out actions in the environment
#### Components of the Performance Element

- A direct mapping from conditions on the current state to actions.
- A means to infer relevant properties of the world from the percept sequence.
- Information about the way the world evolves.
- Information about the results of possible actions the agent can take.
- Utility information indicating the desirability of world states.
- Action-value information indicating the desirability of particular actions in particular states.
- Goals that describe classes of states whose achievement maximizes the agent's utility.
#### Types of Learning

- Supervised learning: any situation in which both the inputs and outputs of a component can be perceived.
- Reinforcement learning: in learning the condition-action component, the agent receives some evaluation of its actions but is not told the correct action.
- Unsupervised learning: learning when there is no hint at all about the correct outputs.
#### What Is Inductive Learning?

- In supervised learning, the learning element is given the correct value of the function for particular inputs, and changes its representation of the function to try to match the information provided by the feedback.
- More formally, an example is a pair (x, f(x)), where x is the input and f(x) is the output of the function applied to x.
- The task of pure inductive inference (or induction) is this: given a collection of examples of f, return a function h that approximates f. The function h is called a hypothesis.
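The idea can be sketched in a few lines of Python. The target function `f` and the choice of straight-line hypotheses are illustrative assumptions, not part of the original slides: the learner sees only the (x, f(x)) pairs and returns a hypothesis h fitted to them.

```python
# A minimal sketch of pure inductive inference: given example pairs
# (x, f(x)) for an unknown target function f, return a hypothesis h
# that approximates f. Here h is restricted to straight lines
# h(x) = a*x + b, fitted by ordinary least squares.

def induce_line(examples):
    """Return a hypothesis h(x) = a*x + b fitted to (x, f(x)) pairs."""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return lambda x: a * x + b

# The unknown target f; the learner only ever sees its input/output pairs.
f = lambda x: 2 * x + 1
examples = [(x, f(x)) for x in range(5)]
h = induce_line(examples)
print(h(10))  # the target lies in the hypothesis space, so h recovers f: 21.0
```

Because the true function happens to be a line, h matches f exactly; with a richer target, h would only approximate it.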
#### Measuring the Performance of the Learning Algorithm

1. Collect a large set of examples.
2. Divide it into two disjoint sets: the training set and the test set.
3. Use the learning algorithm, with the training set as examples, to generate a hypothesis H.
4. Measure the percentage of examples in the test set that are correctly classified by H.
5. Repeat steps 1 to 4 for different sizes of training sets and different randomly selected training sets of each size.
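The steps above can be sketched as a short evaluation loop. The majority-class "learner" and the parity-labelled examples are stand-ins chosen only to make the loop runnable; any learning algorithm could be plugged in.

```python
import random

# A sketch of the evaluation procedure: split the examples into a
# training set and a test set, learn a hypothesis H from the training
# set, and score H on the held-out test set. Repeating this for
# several training-set sizes traces out a learning curve.

def majority_learner(training_set):
    """A stand-in learner: H always predicts the most common label seen."""
    labels = [label for _, label in training_set]
    most_common = max(set(labels), key=labels.count)
    return lambda x: most_common

def accuracy(H, test_set):
    """Step 4: fraction of test examples correctly classified by H."""
    correct = sum(1 for x, label in test_set if H(x) == label)
    return correct / len(test_set)

# Step 1: collect a large set of labelled examples (hypothetical data).
examples = [(x, x % 2 == 0) for x in range(100)]

# Step 5: repeat for different training-set sizes and random splits.
for size in (10, 30, 50):
    random.shuffle(examples)
    training_set, test_set = examples[:size], examples[size:]  # step 2
    H = majority_learner(training_set)                          # step 3
    print(size, accuracy(H, test_set))                          # step 4
```

Keeping the test set disjoint from the training set is the crucial point: measuring H on its own training examples would overstate its performance.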
#### Broadening the Applicability of Decision Trees

The following issues must be addressed in order to enhance their applicability:

- Missing data
- Multi-valued attributes
- Continuous-valued attributes
#### General Logical Descriptions

- The hypothesis proposes an expression, which we call a candidate definition of the goal predicate.
- An example is a false negative for the hypothesis if the hypothesis says it should be negative but in fact it is positive.
- An example is a false positive for the hypothesis if the hypothesis says it should be positive but in fact it is negative.
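These two kinds of error can be counted directly when a candidate definition is checked against labelled examples. The "even numbers are positive" hypothesis and the four examples below are illustrative assumptions, not from the slides.

```python
# A small sketch of testing a candidate definition of a goal
# predicate: count the examples it misclassifies in each direction.

def false_positives_and_negatives(hypothesis, examples):
    """examples is a list of (instance, actual_label) pairs."""
    fp = sum(1 for x, actual in examples if hypothesis(x) and not actual)
    fn = sum(1 for x, actual in examples if not hypothesis(x) and actual)
    return fp, fn

hypothesis = lambda x: x % 2 == 0          # candidate definition (assumed)
examples = [(2, True), (4, False), (3, True), (5, False)]
print(false_positives_and_negatives(hypothesis, examples))  # (1, 1)
```

Here 4 is a false positive (the hypothesis says positive, the label says negative) and 3 is a false negative, matching the definitions above.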
#### Learning in Neural and Belief Networks

- Neural-network research studies how to train complex networks of simple computing elements, thereby perhaps shedding some light on the workings of the brain.
- The simple arithmetic computing elements correspond to neurons, and the network as a whole corresponds to a collection of interconnected neurons. For this reason, these networks are called neural networks.
#### Comparing Brains with Digital Computers
#### What Is a Neural Network?

- A neural network is composed of a number of nodes, or units, connected by links.
- Each link has a numeric weight associated with it.
- Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights.
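A single unit makes these ideas concrete: the weights on its input links are its long-term storage, and learning updates them. The step activation, the perceptron update rule, and the AND task below are illustrative assumptions chosen for a minimal runnable sketch.

```python
# A minimal sketch of one unit in a neural network: it computes a
# weighted sum of its inputs and passes it through an activation
# (a step function here). Learning means adjusting the weights —
# shown with the classic perceptron rule on a tiny AND task.

def unit_output(weights, bias, inputs):
    """One node: step activation applied to the weighted sum of inputs."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Training examples for logical AND (assumed task for illustration).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                         # a few passes over the data
    for inputs, target in examples:
        error = target - unit_output(weights, bias, inputs)
        # Weight update: nudge each weight in proportion to its input.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([unit_output(weights, bias, inp) for inp, _ in examples])  # [0, 0, 0, 1]
```

After training, the learned weights encode the AND function; nothing else about the unit changed, which is the sense in which weights are the network's long-term storage.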
#### Applications of Neural Networks

- Pronunciation: pronunciation of written English text by a computer is a fascinating problem in linguistics, as well as a task with high commercial payoff.
- Handwritten character recognition: in one of the largest applications of neural networks to date, Le Cun et al. (1989) implemented a network designed to read zip codes on hand-addressed envelopes.
#### Applications of Neural Networks (continued)

- ALVINN (Autonomous Land Vehicle In a Neural Network) (Pomerleau, 1993) is a neural network that has performed quite well in a domain where some other approaches have failed. It learns to steer a vehicle along a single lane on a highway by observing the performance of a human driver.
#### Visit More Self-Help Tutorials

- Pick a tutorial of your choice and browse through it at your own pace.
- The tutorials section is free, self-guiding, and does not involve any additional support.
- Visit us at www.dataminingtools.net