2. Properties of the Brain
The brain has about ten billion (10^10) neurons.
On average, each neuron has several thousand connections.
Many neurons die as we progress through life and are not replaced, yet we continue to learn.
This loss is compensated for by massive parallelism.
3. Properties of the Brain
The interconnection of biological neurons is called a biological neural network.
A neural network allows a high degree of parallel computation.
4. Biological Neuron
The biological neuron is the fundamental processing unit of the brain.
It learns from experience (by example).
It consists of the following components:
1. Soma
2. Axon
3. Synapse
4. Dendrites
5. Nucleus
6. Axon hillock
7. Myelin sheath
8. Nodes of Ranvier
9. Terminal buttons
6. Components of a Biological Neuron
1. Nucleus: the control centre of the neuron; it contains the cell's genetic material.
2. Soma: the cell body that contains the nucleus.
- It supports chemical processing and the production of neurotransmitters.
3. Dendrites: the input component of a neuron, receiving connections from other neurons.
4. Axon: the output component that carries information away from the soma to the synaptic sites of other neurons. The axon splits into a number of strands, each of which connects to another neuron.
5. Axon hillock: the site where incoming information from other neurons is summed.
7. Components of a Biological Neuron
6. Myelin sheath: consists of fat-containing cells that insulate the axon from electrical activity. This insulation increases the rate of signal transmission.
7. Nodes of Ranvier: the gaps between myelin sheath cells along the axon.
8. Terminal buttons: the small knobs at the end of an axon that release chemicals called neurotransmitters.
9. Synapse: the point at which a neuron joins other neurons. A neuron may connect to as many as 100,000 other neurons.
Electrochemical communication between neurons takes place at these junctions.
8. Information Flow in a Biological Neuron
Input/output and the propagation of information proceed as follows:
9. Information Flow in a Biological Neuron
1. Dendrites receive activation from other neurons.
2. The soma processes the incoming activations by summing the inputs. Once a threshold level is reached, it converts the input activations into an output activation.
3. The output activation is sent down the axon as an electrical impulse.
10. Synapses
Synapses are the junctions that allow signal transmission between axons and dendrites. Sending activation to other neurons is known as firing.
Synapses vary in strength:
- Good connections allow a large signal.
- Slight connections allow only a weak signal.
12. Artificial Neural Networks
A network of processing units (programming constructs) that mimic the properties of biological neurons.
[Diagram: inputs → ANN (modelled on the brain) → outputs]
13. Parts of a Neural Network
A neural network has two main components:
1. Artificial neurons: individual processing units {u_j}, where each u_j has a certain activation level a_j(t) at any point in time.
2. Weighted interconnections between the various processing units, which determine how the activation of one unit leads to input for another unit.
14. General Structure of ANN
• The input layer: the set of neurons that introduces input values into the network.
– No activation function or other processing.
• The hidden layers: perform processing, e.g. classifying inputs.
– Consist of summation and activation functions.
– In theory, two hidden layers are sufficient to solve any problem, though more hidden layers may work better in practice.
– Outputs are passed on to the output layer.
• The output layer: performs processing, e.g. classifying inputs.
– Consists of summation and activation functions.
– Outputs are passed on to the world outside the neural network.
15. General Structure of ANN
Lecture Notes for data mining
[Diagram: inputs x1–x5 enter the input layer, pass through a hidden layer, and produce the output y]
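The layered structure above can be sketched in a few lines of Python. This is only an illustration: the weight values, layer sizes, and the choice of a sigmoid activation are made-up assumptions, not values from these notes.

```python
import math

def sigmoid(s):
    # Squashing (activation) function: limits output to the range (0, 1).
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weights):
    # Each row of `weights` feeds one neuron: weighted sum, then activation.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

x = [0.5, 0.1, 0.9, 0.0, 0.3]                 # input layer: x1..x5 (no processing)
hidden_w = [[0.2, -0.4, 0.1, 0.7, -0.1],      # hidden layer: 3 neurons
            [0.5, 0.3, -0.2, 0.0, 0.8],
            [-0.6, 0.1, 0.4, 0.2, 0.1]]
output_w = [[0.3, -0.5, 0.9]]                 # output layer: 1 neuron -> y

hidden = layer(x, hidden_w)
y = layer(hidden, output_w)[0]
print(y)  # a single value between 0 and 1
```

The input layer does no processing: the values x1–x5 are passed straight into the hidden layer, exactly as the slide describes.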
16. Benefits of Artificial Neural Networks
1. Solve complex problems: neural networks can perform tasks that a linear program cannot.
2. Parallelism: when an element of the neural network fails, the network can continue without any problem because of its parallel nature.
3. Learning capability: a neural network learns and does not need to be reprogrammed.
4. Wide applicability: neural networks can be applied to a wide range of applications.
17. Disadvantages of ANN
1. Training requirement: the neural network needs training before it can operate.
2. Resource intensive: large neural networks require long processing times.
3. Complexity: neural networks can be extremely hard to use.
4. Many parameters: many parameters must be set.
18. Artificial Neuron
An artificial neuron is a mathematical function that simulates the biological neuron.
It acts as the basic information-processing unit of an artificial neural network.
[Diagram: neuron i receives inputs I1, I2, I3 through weights wi1, wi2, wi3; their weighted sum Si is passed through the activation function g(Si), with threshold t, to produce the output Oi]
19. Flow of Information in an Artificial Neuron
A set of input connections brings in activations from other neurons, e.g. input1, input2, input3.
A processing unit sums the inputs and then applies an activation function, e.g. g((input1 × w1) + (input2 × w2) + (input3 × w3)).
An output line transmits the result to other neurons.
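This flow can be sketched directly as code. The inputs, weights, and threshold below are illustrative values, and a simple step function stands in for the activation function g.

```python
def neuron_output(inputs, weights, t):
    # Summation function: S_i = weighted sum of the inputs.
    s = sum(x * w for x, w in zip(inputs, weights))
    # Activation function g(S_i): here a threshold (step) function.
    return 1 if s >= t else 0

inputs = [1.0, 0.5, 0.2]           # I1, I2, I3
weights = [0.4, 0.6, -0.3]         # wi1, wi2, wi3
print(neuron_output(inputs, weights, t=0.5))  # -> 1 (sum is 0.64, above t)
```

With the same inputs and a higher threshold (say t = 0.7), the neuron would not fire.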
20. General Structure of an Artificial Neuron
An artificial neuron has two main components:
1. A summation function
2. An activation function
21. General Structure of an Artificial Neuron
1. Summation function (linear combiner): a function (rule) that computes the weighted sum of the inputs from other neurons. It is also known as the adder function:

u = Σ (j = 1 to m) w_j · x_j

where x_1 … x_m are the inputs and w_1 … w_m the corresponding weights.
22. General Structure of an Artificial Neuron
2. Activation function: a function that is applied to the weighted sum of the inputs (u) of a neuron to produce the output.
Activation refers to the output signal produced by this function when it acts on the set of input signals.
The output value is passed to other neurons in the network.
This function is also called a squashing function, since it limits the amplitude of the output of the neuron.
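The sigmoid is a common example of such a squashing function (the choice of sigmoid here is an illustration, not something these notes prescribe): whatever the weighted sum u, the output stays between 0 and 1.

```python
import math

def sigmoid(u):
    # Squashing function: maps any weighted sum u into the range (0, 1),
    # limiting the amplitude of the neuron's output.
    return 1.0 / (1.0 + math.exp(-u))

for u in (-10.0, 0.0, 10.0):
    # large negative -> near 0, zero -> 0.5, large positive -> near 1
    print(u, sigmoid(u))
```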
23. Neural Network Architectures
Neural network architectures are divided into two main categories:
1. Recurrent neural networks
2. Feed-forward neural networks
Artificial neural network architectures:
• Feed-forward: Hebbian, SOM, BP (back propagation), Perceptron
• Recurrent: ART, Elman, Jordan, Hopfield
24. 1) Feed-Forward Networks
This is a network that has no feedback (loops).
Signals travel one way only, from input to output (unidirectional).
There are two types of feed-forward networks:
1. Single-layer feed-forward networks
2. Multi-layer feed-forward networks
25. Single-Layer Feed-Forward Networks
This is a feed-forward network in which every output node is connected to every input node.
It has no hidden layer.
[Diagram: example single-layer network with an input layer connected directly to an output layer]
26. Multi-Layer Feed-Forward Networks
This is a feed-forward network with one or more hidden layers.
[Diagram: a fully connected 2-layer (1-hidden-layer) network: input layer → hidden layer → output layer]
27. Examples of Feed-Forward Networks
1. SOM (self-organising map)
2. Hebbian
3. Perceptron
4. Back propagation
28. 2) Recurrent Networks
This is a network in which information can travel back from the output to the input.
The connections between units form a directed cycle (loop).
29. Recurrent Networks (continued)
Recurrent networks consist of one or more feedback loops; the connections between units form a directed cycle.
[Diagram: a recurrent network with input, hidden, and output layers, where z^-1 delay units feed outputs back into the network]
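A minimal sketch of this feedback idea, loosely in the style of an Elman network (the weights, tanh activation, and layer sizes are illustrative assumptions): the hidden state from the previous time step, held in the z^-1 delay units, is fed back in alongside the current input.

```python
import math

def elman_step(x, h_prev, w):
    # The new hidden state h_t depends on the current input x_t AND the
    # previous hidden state h_{t-1}, fed back through the z^-1 delay units.
    combined = x + h_prev                       # concatenate input and feedback
    return [math.tanh(sum(wi * v for wi, v in zip(row, combined)))
            for row in w]

# Illustrative weights: 2 inputs + 2 fed-back values -> 2 hidden units.
w = [[0.5, -0.3, 0.2, 0.1],
     [0.1, 0.4, -0.2, 0.3]]
h = [0.0, 0.0]                                  # initial hidden state
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):  # a short input sequence
    h = elman_step(x, h, w)
print(h)  # final hidden state after processing the sequence
```

Because the hidden state carries over between steps, the same input can produce different outputs at different times, which is exactly what makes recurrent networks harder to analyse than feed-forward ones.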
30. Benefits of Recurrent Networks
They can implement more complex agent designs.
Examples: Hopfield networks and Boltzmann machines.
31. Limitations of Recurrent Networks
They can be unstable, oscillate, or exhibit chaotic behaviour; e.g., given some input values, they can take a long time to compute a stable output, and learning is made more difficult.
34. Learning in the Brain
Learning in the brain occurs when synapse strengths change.
Good connections allow a large signal; slight connections allow only a weak signal.
The amount of signal passing through a neuron depends on:
1. The intensity of the signal from feeding neurons
2. Their synaptic strengths
3. The threshold (activation level) of the receiving neuron
35. Learning in Artificial Neural Networks (ANN)
Training an ANN involves adjusting the weights of the neuron inputs.
[Diagrams: the artificial neuron (inputs, weights, summation, activation) and the layered network structure]
36. Learning in Neural Networks
Input weights represent the synapse strengths of biological neurons.
Weights are adjusted so that the output of the ANN is consistent with the class labels of the training examples, so as to reduce the learning error.
[Diagram: example neuron with inputs X1, X2, X3, illustrative weights, and output Y]
37. General Learning Algorithm in ANN
Learning in ANN involves the following steps:
1. Introduce inputs and guess initial weight values (initialisation).
2. Compute an output and compare it with the desired output to determine the error.
3. Determine the direction of weight adjustment (whether positive or negative).
4. Adjust the weights of the output layer accordingly in order to reduce the error.
5. Adjust the weights of the hidden layers.
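The steps above can be sketched for the simplest case: a single output neuron with no hidden layer (so step 5 does not apply), learning the logical AND function. The learning rate c, the threshold, and the zero initial weights are illustrative choices.

```python
def step(s, t=0.5):
    # Threshold activation: fire only if the summed input reaches t.
    return 1 if s >= t else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table
w = [0.0, 0.0]          # step 1: guess initial weights
c = 0.1                 # learning rate

for epoch in range(20):
    for x, desired in data:
        actual = step(sum(wi * xi for wi, xi in zip(w, x)))  # step 2: output
        error = desired - actual                             # step 2: error
        # steps 3-4: the sign of the error gives the direction of adjustment
        w = [wi + c * error * xi for wi, xi in zip(w, x)]

print(w)
print([step(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data])  # -> [0, 0, 0, 1]
```

After a few epochs the weights settle at values where only the input (1, 1) clears the threshold, reproducing AND.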
39. Learning Error
The learning error is the difference between the actual and desired output.
Weights are adjusted relative to the error size.
The error is propagated to the previous layer if it is not equal to zero.
Over time this leads to improved performance.
40. Neural Network Parameters
Before learning starts, the following parameters need to be specified:
1. Threshold
2. Learning rate
3. Learning rule
4. Learning algorithm
41. Neural Network Parameters
1. Threshold: the lowest input value (potential) required for the neuron to activate (fire).
• Generally, neurons do not fire (produce an output) unless their total input is equal to or above the threshold value.
42. Neural Network Parameters
2. Learning rate: a value that determines the speed at which the network learns.
It is often denoted by 'c' (which stands for 'constant').
The larger c is, the faster the learning.
If the learning rate (c) is very small, the neural network will not correct its mistakes immediately, i.e. it will take longer to learn.
It ranges between 0 and 1.
Commonly used values are 0.1 and 0.25.
43. Neural Network Parameters
3. Learning rules (learning functions): functions that specify how to adjust the weights.
They include:
• The delta rule
• The Hebbian rule
• The gradient descent rule
44. Delta Rule
The delta learning rule states: 'if it's not broken, don't fix it'.
That is, there is no need to change any of the weights if there is no learning error:
if (desired output − actual output) = 0, then do not adjust the weights.
'Delta' refers to the difference between the desired and actual output (the learning error).
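A sketch of the delta rule for one neuron (the weights, inputs, and learning rate c = 0.25 are illustrative values): the weight change is proportional to the error, so a zero error leaves the weights untouched.

```python
def delta_update(w, x, desired, actual, c=0.25):
    # "Delta" is the learning error; each weight changes by c * delta * input.
    delta = desired - actual
    return [wi + c * delta * xi for wi, xi in zip(w, x)]

w = [0.5, -0.2]
print(delta_update(w, [1.0, 1.0], desired=1, actual=1))  # error 0 -> unchanged
print(delta_update(w, [1.0, 1.0], desired=1, actual=0))  # error 1 -> weights grow
```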
45. Hebbian Learning Rule
The Hebbian rule states that "neurons that fire together, wire together."
I.e., when two connected neurons fire at the same time, the strength of the synapse between them increases.
The rule builds on Hebb's 1949 learning principle, which states that the connection between two neurons might be strengthened if the neurons fire simultaneously.
The rule specifies how much the weight of the connection between two units should be increased or decreased, in proportion to the product of their activations.
46. Hebbian Learning Rule
The Hebb rule determines the change in the weight of the connection from unit i to unit j by:

Δw_ij = r · a_i · a_j

where r is the learning rate and a_i, a_j represent the activations of u_i and u_j respectively.
Thus, if both u_i and u_j are activated, the weight of the connection from u_i to u_j is adjusted.
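The Hebb rule is a one-liner in code; the learning rate and activation values below are illustrative.

```python
def hebbian_dw(r, a_i, a_j):
    # Weight change is proportional to the product of the two activations:
    # neurons that fire together, wire together.
    return r * a_i * a_j

print(hebbian_dw(0.1, 1.0, 1.0))   # both units active  -> weight increases
print(hebbian_dw(0.1, 1.0, 0.0))   # one unit inactive  -> no change
```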
47. Gradient Descent Learning Rule
This rule states that the minimum of a function is found by following the (negative) slope of the function.
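A minimal sketch of this idea on a function chosen for illustration, f(w) = (w − 3)^2, whose minimum is at w = 3; the starting point and learning rate are also illustrative.

```python
def grad(w):
    # Derivative (slope) of f(w) = (w - 3)^2.
    return 2 * (w - 3)

w = 0.0                       # arbitrary starting point
c = 0.1                       # learning rate (step size)
for _ in range(100):
    w = w - c * grad(w)       # step against the slope
print(round(w, 4))            # converges to 3.0, the minimum of f
```

In a neural network the same idea is applied to the learning error as a function of the weights: each weight is nudged against the slope of the error.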
48. Learning Algorithms
The most popular learning algorithms include:
• Perceptron learning
• The back propagation algorithm
Back propagation: this method has proven highly successful in training multilayered neural nets. The network is not just given reinforcement for how it is doing on a task; information about errors is also filtered back through the system and used to adjust the connections between the layers, thus improving performance. It is a form of supervised learning.
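As a rough sketch of back propagation, the following trains a tiny 2-2-1 sigmoid network. Everything here is an illustrative assumption: the XOR training data, the learning rate, the random initialisation, and the number of epochs. The point is only the shape of the algorithm: the output error is filtered back to adjust first the output-layer weights, then the hidden-layer weights.

```python
import math
import random

random.seed(0)  # deterministic illustrative initialisation

def sig(s):
    return 1.0 / (1.0 + math.exp(-s))

# 2 inputs -> 2 hidden units -> 1 output; last weight in each row is a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
c = 0.5  # learning rate

def forward(x):
    h = [sig(row[0] * x[0] + row[1] * x[1] + row[2]) for row in w_h]
    o = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def total_error():
    return sum((d - forward(x)[1]) ** 2 for x, d in data)

err_start = total_error()
for _ in range(5000):
    for x, d in data:
        h, o = forward(x)
        # Output error, filtered back through the network (supervised learning).
        delta_o = (d - o) * o * (1 - o)
        delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Adjust output-layer weights, then hidden-layer weights.
        for j in range(2):
            w_o[j] += c * delta_o * h[j]
        w_o[2] += c * delta_o
        for j in range(2):
            w_h[j][0] += c * delta_h[j] * x[0]
            w_h[j][1] += c * delta_h[j] * x[1]
            w_h[j][2] += c * delta_h[j]
err_end = total_error()
print(err_start, err_end)  # the error after training is lower than before
```

Note that XOR is a problem a single-layer network cannot solve, which is why the hidden layer, and hence back propagation, is needed here.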