Neural network



  1. Artificial Neural Network
     - 1 Brief Introduction
     - 2 Backpropagation Algorithm
     - 3 A Simple Illustration
  2. Chapter 1 Brief Introduction
     - 1.2 Review of Decision Trees
       - The learning process aims to reduce the error, which can be understood as the difference between the target values and the output values produced by the learned structure.
       - The ID3 algorithm can be applied only to discrete-valued attributes.
       - An Artificial Neural Network (ANN) can describe arbitrary functions.
     - History
  3. 1.3 Basic Structure
     - An example of ANN learning is provided by Pomerleau's (1993) system ALVINN, which uses a learned ANN to steer an autonomous vehicle driving at normal speeds. The input of the ANN is a 30x32 grid of pixel intensities obtained from a forward-facing camera mounted on the vehicle. The output is the direction in which the vehicle steers.
     - As can be seen, four units receive inputs directly from all of the 30x32 camera pixels. These are called "hidden" units because their outputs are available only to the subsequent units in the network, not as part of the global network output.
  4. 1.4 Ability
     - Instances are represented by many attribute-value pairs. The target function to be learned is defined over instances that can be described by a vector of predefined features, such as the pixel values in the ALVINN example.
     - The training examples may contain errors. As the following sections show, ANN learning methods are quite robust to noise in the training data.
     - Long training times are acceptable. Compared to decision tree learning, network training algorithms require longer training times, depending on factors such as the number of weights in the network.
  5. Chapter 2 Backpropagation Algorithm
     - 2.1 Sigmoid
       - Like the perceptron, the sigmoid unit first computes a linear combination of its inputs.
       - The sigmoid unit then computes its output with the squashing function (both formulas are reconstructed below, since the equation images are not preserved in this transcript).
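     The two formulas on this slide are images in the original; a standard reconstruction of the linear combination (equation 1) and the sigmoid output (equation 2) is:

         net = \sum_{i=0}^{n} w_i x_i

         o = \sigma(net) = \frac{1}{1 + e^{-net}}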
  6. - Equation 2 is often referred to as the squashing function, since it maps a very large input domain onto a small range of outputs.
     - The sigmoid function has a useful property: its derivative is easily expressed in terms of its output (see the identity below). As the following description of backpropagation shows, the algorithm makes use of this derivative.
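     The derivative property referred to here is the standard identity for the logistic function:

         \frac{d\sigma(net)}{d\,net} = \sigma(net)\,\bigl(1 - \sigma(net)\bigr) = o\,(1 - o)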
  7. 2.2 Function
     - The sigmoid is only one unit in the network; now we look at the whole function that the neural network computes. Consider figure 2.2 and an example (x, t), where x is the input attribute and t is the target attribute; the network output is then computed layer by layer (a sketch of this computation follows below).
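     Figure 2.2 and its formula are images in the original. A minimal sketch of the whole network function, assuming one hidden layer of sigmoid units and hypothetical weight-matrix names W_hidden and W_output (bias terms omitted for brevity):

         import numpy as np

         def sigmoid(z):
             # Logistic squashing function: sigma(z) = 1 / (1 + e^(-z))
             return 1.0 / (1.0 + np.exp(-z))

         def forward(x, W_hidden, W_output):
             """Compute the network output o = f(x) layer by layer.
             W_hidden: (n_hidden, n_inputs), W_output: (n_outputs, n_hidden)."""
             h = sigmoid(W_hidden @ x)   # hidden-unit activations
             o = sigmoid(W_output @ h)   # network outputs
             return h, o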
  8. 2.3 Squared Error
     - As mentioned above, the whole learning process serves to reduce the error, but how can one describe the error? Generally the squared-error function is used (equation 3, reconstructed below).
     - Notice: this function (3) sums the error over all of the network's output units, after the whole set of training examples has been processed.
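     The formula image is not preserved; the standard squared error over the whole training set D, matching the slide's description, is:

         E(\vec{w}) = \frac{1}{2} \sum_{d \in D} \sum_{k \in outputs} (t_{kd} - o_{kd})^2

     where t_{kd} and o_{kd} are the target and output values of output unit k for training example d.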
  9. - The weight vector can then be updated by the gradient-descent rule, where ∇E(w⃗) is the gradient of E, so that each weight w_k is adjusted against its partial derivative of the error (the three update formulas are reconstructed below).
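     The update formulas are images in the original; a standard reconstruction of gradient descent with learning rate η is:

         \vec{w} \leftarrow \vec{w} + \Delta\vec{w}, \qquad \Delta\vec{w} = -\eta\,\nabla E(\vec{w})

         \nabla E(\vec{w}) = \left[ \frac{\partial E}{\partial w_0}, \frac{\partial E}{\partial w_1}, \dots, \frac{\partial E}{\partial w_n} \right]

         \Delta w_k = -\eta\,\frac{\partial E}{\partial w_k}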
  10. - In practice, because function (3) sums the error over the whole set of training data, an algorithm based on it needs more time for each update and can easily be trapped in a local minimum. One therefore constructs a new function, the stochastic squared error (reconstructed below).
      - As can be seen, this function computes the error for only a single example d. The gradient of E_d(w⃗) is then easily derived.
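     A standard reconstruction of the stochastic (per-example) error and, for a single sigmoid unit with inputs x_i, its easily derived gradient:

         E_d(\vec{w}) = \frac{1}{2} \sum_{k \in outputs} (t_k - o_k)^2

         \frac{\partial E_d}{\partial w_i} = -(t - o)\,o\,(1 - o)\,x_i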
  11. 2.4 Backpropagation Algorithm
      - The learning problem faced by backpropagation is to search a large hypothesis space defined by all possible weight values for all the units in the network. The diagram of the algorithm is an image in the original (a code sketch follows below).
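     A minimal sketch of the stochastic backpropagation update for a two-layer sigmoid network, assuming the hypothetical names W_hidden, W_output, and eta (the learning rate) from the earlier sketch:

         import numpy as np

         def sigmoid(z):
             return 1.0 / (1.0 + np.exp(-z))

         def backprop_step(x, t, W_hidden, W_output, eta=0.05):
             """One stochastic backpropagation update for one example (x, t).
             W_hidden: (n_hidden, n_inputs), W_output: (n_outputs, n_hidden)."""
             # Forward pass: propagate the input through the network.
             h = sigmoid(W_hidden @ x)
             o = sigmoid(W_output @ h)
             # Output error terms: delta_k = o_k (1 - o_k) (t_k - o_k).
             delta_out = o * (1.0 - o) * (t - o)
             # Hidden error terms: delta_h = h (1 - h) * sum_k w_kh delta_k.
             delta_hid = h * (1.0 - h) * (W_output.T @ delta_out)
             # Weight updates: Delta w_ji = eta * delta_j * input_ji.
             W_output = W_output + eta * np.outer(delta_out, h)
             W_hidden = W_hidden + eta * np.outer(delta_hid, x)
             return W_hidden, W_output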
  12. - Notice: the error term for hidden unit h is calculated by summing the error terms δ_k of each output unit influenced by unit h, weighting each δ_k by w_kh, the weight from hidden unit h to output unit k. This weight characterizes the degree to which hidden unit h is "responsible for" the error in output unit k.
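     In formula form (reconstructed, using the usual δ notation for error terms):

         \delta_h = o_h\,(1 - o_h) \sum_{k \in outputs} w_{kh}\,\delta_k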
  13. Chapter 3 A Simple Illustration
      - Now we work through an example to build more intuition. How does an ANN learn the simplest function, the identity function id? We construct the network shown in the figure: eight network input units are connected to three hidden units, which are in turn connected to eight output units. Because of this structure, the three hidden units are forced to represent the eight input values in some way that captures their relevant features, so that this hidden-layer representation can be used by the output units to compute the correct target values.
  14. - This 8 x 3 x 8 network was trained to learn the identity function. After 5,000 training iterations, the three hidden-unit values encode the eight distinct inputs using the encoding shown in the table. Notice that if the encoded values are rounded to zero or one, the result is the standard binary encoding for eight distinct values (a training sketch follows below).
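     A minimal end-to-end sketch of this 8 x 3 x 8 experiment. The learning rate, weight-initialization range, and random seed are assumptions, and bias units are omitted for brevity, so the learned code may be less clean than the table on the original slide:

         import numpy as np

         def sigmoid(z):
             return 1.0 / (1.0 + np.exp(-z))

         rng = np.random.default_rng(0)
         examples = np.eye(8)                       # one-hot inputs; each is its own target

         W_hidden = rng.uniform(-0.1, 0.1, (3, 8))  # 8 inputs -> 3 hidden units
         W_output = rng.uniform(-0.1, 0.1, (8, 3))  # 3 hidden units -> 8 outputs
         eta = 0.3                                  # assumed learning rate

         for _ in range(5000):                      # 5,000 passes over the eight examples
             for x in examples:
                 h = sigmoid(W_hidden @ x)
                 o = sigmoid(W_output @ h)
                 delta_out = o * (1 - o) * (x - o)                   # output error terms
                 delta_hid = h * (1 - h) * (W_output.T @ delta_out)  # hidden error terms
                 W_output += eta * np.outer(delta_out, h)
                 W_hidden += eta * np.outer(delta_hid, x)

         # Inspect the learned hidden encodings; rounded to 0/1 they
         # approximate a 3-bit binary code for the eight inputs.
         for x in examples:
             print(x.argmax(), np.round(sigmoid(W_hidden @ x), 2))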