Perceptron working

PERCEPTRON INTRODUCTION

The perceptron is a type of artificial neural network invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt. It can be seen as the simplest kind of feed-forward neural network: a linear classifier.

Definition

The perceptron is a binary classifier which maps its input x (a real-valued vector) to an output value f(x) (a single binary value):

    f(x) = 1 if w · x + b > 0, and f(x) = 0 otherwise,

where w is a vector of real-valued weights, w · x is the dot product (which here computes a weighted sum), and b is the bias, a constant term that does not depend on any input value.

The value of f(x) (0 or 1) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable.

Learning algorithm

Below is an example of a learning algorithm for a single-layer (no hidden layer) perceptron. For multilayer perceptrons, more complicated algorithms such as backpropagation must be used. Alternatively, methods such as the delta rule can be used if the function is non-linear and differentiable, although the one below will work as well.

The learning algorithm we demonstrate is the same across all the output neurons, therefore everything that follows is applied to a single neuron in isolation. We first define some variables:

  • y = f(x) denotes the output from the perceptron for an input vector x.
  • b is the bias term, which in the example below we take to be 0.
  • D = {(x_1, d_1), ..., (x_s, d_s)} is the training set of s samples, where:
      o x_j is the n-dimensional input vector.
      o d_j is the desired output value of the perceptron for that input.

We show the values of the nodes as follows:

  • x_{j,i} is the value of the ith node of the jth training input vector.
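
Since the text above states the classifier and its update rule only in words, here is a minimal sketch in Python of the decision function f(x) and the single-neuron learning rule it describes. The names (predict, train), the learning_rate parameter, the epoch cap, and the toy data set are illustrative assumptions, not part of the original slides; the bias is fixed at 0 as in the text, so the toy data is chosen to be separable by a boundary through the origin.

    def predict(w, b, x):
        # Classifier output f(x): 1 if w . x + b > 0, else 0.
        s = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1 if s > 0 else 0

    def train(samples, n, learning_rate=0.1, epochs=100):
        # Perceptron learning rule over a training set D = [(x, d), ...],
        # where x is an n-dimensional input vector and d the desired output.
        # On each misclassified sample the weights move by
        # learning_rate * (d - y) * x_i. If D is linearly separable the
        # updates eventually stop; otherwise they never would, which is why
        # this sketch also caps the number of epochs (an assumption).
        w = [0.0] * n
        b = 0.0  # the text takes the bias to be 0
        for _ in range(epochs):
            errors = 0
            for x, d in samples:
                y = predict(w, b, x)
                if y != d:
                    errors += 1
                    for i in range(n):
                        w[i] += learning_rate * (d - y) * x[i]
            if errors == 0:  # every sample classified correctly: done
                break
        return w

    # Toy data, linearly separable through the origin (e.g. by w = (1, 0)):
    D = [((2, 1), 1), ((-1, -2), 0), ((1, 3), 1), ((-3, 1), 0)]
    w = train(D, n=2)
    print([predict(w, 0.0, x) for x, _ in D])  # prints [1, 0, 1, 0]

On this toy data the rule converges within two passes. With data that is not linearly separable (for example XOR), the error count never reaches zero, matching the non-termination caveat above.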
