Lecture 11
Artificial Neural Networks
Data mining and Warehousing
Properties of the Brain
It has about ten billion (10^10) neurons.
On average, each neuron has several thousand connections.
Many neurons die as we progress through life and are not
replaced, yet we continue to learn.
The brain compensates for such losses through massive
parallelism.
Properties of the Brain
 The interconnections of biological neurons are called a
biological neural network.
 A neural network allows a high degree of parallel
computation.
Biological Neuron
 The biological neuron is the fundamental processing unit
of the brain.
 It learns from experience (by example).
 It consists of the following components:
 1. Soma
 2. Axon
 3. Synapse
 4. Dendrites
 5. Nucleus
 6. Axon hillock
 7. Myelin sheath
 8. Nodes of Ranvier
 9. Terminal buttons
Components of a biological Neuron
Components of a biological Neuron
1. Nucleus: the smallest unit of a neuron.
2. Soma: the cell body that contains the nucleus.
- It supports chemical processing and the production of
neurotransmitters.
3. Dendrites: the input component of a neuron, which
receives connections from other neurons.
4. Axon: the output component that carries information away
from the soma to the synaptic sites of other neurons. The
axon splits into a number of strands, each of which connects
to another neuron.
5. Axon hillock: the site where incoming information from
other neurons is summed.
Components of a biological Neuron
6. Myelin sheath: consists of fat-containing cells that
insulate the axon from electrical activity. This insulation
increases the rate at which signals are transmitted.
7. Nodes of Ranvier: gaps between myelin sheath cells
along the axon.
8. Terminal buttons: small knobs at the end of an axon
that release chemicals; these chemicals are called
neurotransmitters.
9. Synapse: the point at which a neuron joins other
neurons. A neuron may connect to as many as 100,000
other neurons.
Electrochemical communication between neurons takes
place at these junctions.
Information flow in a Biological Neuron
Input/output and the propagation of information are as follows:
Information flow in a Biological Neuron
1. Dendrites receive activation from other neurons.
2. The soma processes the incoming activations by summing
the inputs. Once a threshold level is reached, it converts
the input activations into output activations.
3. Output activations are sent down the axon as an
electrical impulse.
Synapses
Synapses are the junctions that allow signal transmission
between axons and dendrites. Sending activation to other
neurons is known as firing.
Synapses vary in strength:
Good connections allow a large signal.
Slight connections allow only a weak signal.
 Artificial neural networks
Artificial Neural Networks
 A network of processing units (programming constructs)
that mimics the properties of biological neurons.
[Diagram: inputs → ANN / brain → outputs]
Parts of a Neural Network
A neural network has two main components:
1. Artificial neurons: individual processing units {uj},
where each uj has a certain activation level aj(t) at any
point in time.
2. Weighted interconnections between the various processing
units, which determine how the activation of one unit
becomes input for another unit.
General Structure of ANN
• The input layer: the set of neurons that introduces input
values into the network.
– No activation function or other processing.
• The hidden layers: perform processing, e.g. classifying
inputs.
– Consist of summation and activation functions.
– Two hidden layers are, in principle, sufficient to solve
any problem, though more hidden layers may sometimes
work better.
– Outputs are passed on to the output layer.
• The output layer: performs processing, e.g. classifying
inputs.
- Consists of summation and activation functions.
- Outputs are passed on to the world outside the neural
network.
Lecture Notes for data mining
15
General Structure of ANN
[Diagram: inputs x1–x5 feed an input layer, then a hidden
layer, then an output layer producing y]
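The layer structure described above can be sketched as a minimal forward pass. This is an illustrative sketch only: the sigmoid activation and all weight values below are assumptions, not taken from the lecture.

```python
import math

def forward(x, hidden_w, output_w):
    """Pass input vector x through one hidden layer and one output layer.
    Each neuron computes a weighted sum followed by a sigmoid activation."""
    def sigmoid(s):
        return 1.0 / (1.0 + math.exp(-s))
    # Hidden layer: one weighted sum + activation per hidden neuron.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_w]
    # Output layer: the same computation over the hidden activations.
    return [sigmoid(sum(w * hi for w, hi in zip(ws, hidden))) for ws in output_w]

# Five inputs (x1..x5), two hidden neurons, one output y, as in the figure.
x = [1.0, 0.0, 1.0, 0.5, 0.0]
hidden_w = [[0.2, -0.1, 0.4, 0.3, 0.1],
            [-0.3, 0.2, 0.1, 0.0, 0.5]]
output_w = [[0.6, -0.4]]
y = forward(x, hidden_w, output_w)
```

Note that the input layer does no processing; only the hidden and output layers apply summation and activation.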
Benefits of artificial neural networks:
1. Solving complex problems: neural networks can perform
tasks that a linear program cannot.
2. Parallelism: when an element of the neural network fails,
the network can continue without problems because of its
parallel nature.
3. Learning capability: a neural network learns and does not
need to be reprogrammed.
4. Wide applicability: neural networks can be applied across
a wide range of domains.
Disadvantages of ANN
1. Training requirement: the neural network needs training
before it can operate.
2. Resource intensive: large neural networks require a lot of
processing time.
3. Complexity: neural networks can be extremely hard to use.
4. Many parameters: many parameters must be set before
training.
Artificial neuron
 An artificial neuron is a mathematical function which
simulates the biological neuron.
 It acts as the basic information processing unit of an
artificial neural network.
[Diagram: neuron i — inputs I1, I2, I3 with weights wi1, wi2,
wi3 are summed into Si, passed through activation function
g(Si) with threshold t, producing output Oi]
Flow of information in Artificial Neuron
 A set of input connections brings in activations from
other neurons, e.g. input1, input2, input3.
 A processing unit sums the weighted inputs, e.g.
(input1·w1) + (input2·w2) + (input3·w3), and then applies an
activation function.
 An output line transmits the result to other neurons.
[Diagram: neuron i, as above — weighted inputs summed into
Si, activation g(Si), threshold t, output Oi]
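This flow can be sketched for a single neuron with a simple threshold activation. The weights and threshold below are illustrative values, not from the slides.

```python
def neuron_output(inputs, weights, threshold):
    """Sum the weighted inputs; fire (output 1) only if the sum
    reaches the threshold, otherwise stay silent (output 0)."""
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# Three inputs I1..I3 with weights wi1..wi3 and threshold t = 0.8.
fired = neuron_output([1, 1, 0], [0.5, 0.5, 0.5], 0.8)   # sum = 1.0 -> fires
silent = neuron_output([1, 0, 0], [0.5, 0.5, 0.5], 0.8)  # sum = 0.5 -> silent
```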
General Structure of Artificial neuron
 Artificial neuron has two main components
 1. Summation function
 2. Activation function
General Structure of Artificial neuron
1. A summation function (linear combiner): a function (rule)
which computes the weighted sum of the inputs from other
neurons.
 It is also known as the adder function:

u = Σ (j = 1 … m) wj xj, i.e. u = w1x1 + w2x2 + … + wmxm
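A minimal sketch of this adder function in Python (the input and weight values are illustrative, not from the slides):

```python
def weighted_sum(x, w):
    """Adder function: u = sum over j of w_j * x_j."""
    return sum(wj * xj for wj, xj in zip(w, x))

# Three inputs with their weights: 0.5*1.0 - 1.0*2.0 + 0.25*3.0 = -0.75
u = weighted_sum([1.0, 2.0, 3.0], [0.5, -1.0, 0.25])
```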
General Structure of Artificial neuron
2. An activation function:
this is a function that is applied to the weighted sum of
the inputs (u) of a neuron to produce the output.
Activation refers to the output signal produced by this
function when it acts on the set of input signals.
 The output value is passed to other neurons in the
network.
 This function is also called a squashing function, since
it limits the amplitude of the neuron's output.
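One common squashing function is the sigmoid; the lecture does not name a specific activation function, so this is an illustrative example of the squashing behaviour:

```python
import math

def sigmoid(u):
    """Squashing function: limits the output amplitude to the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-u))

# Large positive/negative sums are squashed toward 1 and 0 respectively;
# a sum of 0 maps to exactly 0.5.
high = sigmoid(10)
low = sigmoid(-10)
mid = sigmoid(0)
```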
Neural network architectures
Neural network architectures are divided into two main
categories:
 1. Recurrent neural networks
 2. Feed-forward neural networks
Artificial neural network architectures:
• Feedforward: Hebbian, SOM, BP, Perceptron
• Recurrent: ART, Elman, Jordan, Hopfield
1) Feed-forward networks
 This is a network which has no feedback (loops).
 Signals travel one way only, from input to output
(unidirectional).
 There are two types of feed-forward networks:
 1. Single-layer feed-forward networks
 2. Multi-layer feed-forward networks
Single layer feed-forward networks
 This is a feed-forward network where every output node is
connected to every input node.
 It has no hidden layer.
[Diagram: example single-layer network — an input layer
connected directly to an output layer]
Multi-layer feed-forward networks
 This is a feed-forward network with one or more hidden
layers.
[Diagram: input, hidden, and output layers — a 2-layer
(1-hidden-layer) fully connected network]
Examples of feed forward networks
 1. SOM
 2. Hebbian
 3. Perceptron
 4. Back propagation
2. Recurrent networks
 This is a network where information can travel back from
the output to the input.
 Connections between units form a directed cycle (loop).
 Recurrent networks consist of one or more feedback loops.
 The connections between units form a directed cycle.
[Diagram: a recurrent network with input, hidden, and output
units and z⁻¹ (unit-delay) feedback connections]
Benefits of Recurrent networks
 They can implement more complex agent designs
 Examples
 Hopfield Networks and Boltzmann machines.
Recurrent network limitations:
They can be unstable, oscillate, or exhibit chaotic
behaviour; e.g., given some input values, a recurrent network
can take a long time to compute a stable output, and learning
is made more difficult.
Learning in Neural networks
Learning in brains
The brain learns by altering the strength of connections
between neurons (synapses).
Learning in the brain
 Learning in the brain occurs when synapse strengths change.
 Good connections allow a large signal.
 Slight connections allow only a weak signal.
 The amount of signal passing through a neuron depends on:
1. The intensity of the signal from feeding neurons
2. Their synaptic strengths
3. The threshold (activation level) of the receiving
neuron
Learning in artificial neural networks (ANN)
[Diagram: the artificial neuron (inputs I1–I3, weights
wi1–wi3, summation Si, activation g(Si), threshold t, output
Oi) and the layered network (inputs x1–x5, hidden layer,
output y) from the earlier slides]
Training an ANN involves adjusting the weights of the neuron
inputs.
Learning in neural networks
 Input weights represent the synapse strengths of
biological neurons.
 Weights are adjusted so that the output of the ANN is
consistent with the class labels of the training examples,
reducing the learning error.
[Diagram: example neuron with inputs X1, X2, X3, weights
-1, 2, 2, and output Y]
General Learning algorithm in ANN
Learning in an ANN involves five steps:
1. Introduce inputs and guess initial weight values
(initialization).
2. Compute an output and compare it with the desired output
to determine the error.
3. Determine the direction of the weight adjustments
(whether positive or negative).
4. Adjust the weights of the output layer according to these
calculations in order to reduce the error.
5. Adjust the weights of the hidden layer.
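The steps above can be sketched as a single-neuron (perceptron-style) training cycle. This is an illustrative sketch only: the threshold activation, weights, and learning rate are assumptions, not from the lecture.

```python
def train_step(x, target, w, c=0.25):
    """One cycle of the general algorithm: compute an output,
    compare with the desired output, adjust the weights.
    c is the learning rate."""
    s = sum(wi * xi for wi, xi in zip(w, x))   # summation function
    output = 1 if s >= 0 else 0                # threshold activation
    error = target - output                    # desired minus actual output
    # The sign of the error gives the direction of the adjustment.
    new_w = [wi + c * error * xi for wi, xi in zip(w, x)]
    return new_w, error

# Repeatedly present one training example until the error reaches zero.
w = [-0.5, 0.5]
for _ in range(10):
    w, err = train_step([1.0, 0.0], target=1, w=w, c=0.25)
    if err == 0:
        break
```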
Training the Artificial Neural Network
Learning error
 The learning error is the difference between the actual
and desired output.
 Each weight is adjusted relative to the size of the error.
 The error is propagated to the previous layer if it is not
equal to zero.
 Over time this leads to improved performance.
Neural network parameters
 Before learning starts, the following parameters need to be
specified:
1. Threshold
2. Learning rate
3. Learning rule
4. Learning algorithm
Neural network parameters:
1. Threshold: the lowest input value (potential) that is
required for the neuron to activate (fire).
• Generally, neurons do not fire (produce an output) unless
their total input is at or above the threshold value.
Neural network parameters:
 2. Learning rate: a value that determines the speed at
which the network learns.
 It is often abbreviated as 'c', which stands for
'constant'.
 The larger c is, the faster the learning.
 If the learning rate c is very small, the neural network
may not correct its mistakes immediately,
 i.e. it will take longer to learn.
 It ranges between 0 and 1.
 Commonly used values are 0.1 and 0.25.
Neural network parameters:
3. Learning rules (learning functions): functions that
specify how to adjust the weights.
They include:
• Delta rule
• Hebbian rule
• Gradient descent rule
Delta rule
 The delta learning rule states: 'if it's not broken,
don't fix it'.
 That is, there is no need to change any of the weights if
there is no learning error.
 I.e.:
if (desired output – actual output) = 0 then do not adjust the weights
 'Delta' refers to the difference between the desired and
actual output (the learning error).
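The delta rule can be sketched as follows; the weight values and learning rate here are made-up illustrations, not from the slides:

```python
def delta_update(w, x, desired, actual, c=0.1):
    """Delta rule sketch: adjust each weight in proportion to the
    learning error (desired - actual); zero error means no change."""
    error = desired - actual
    return [wi + c * error * xi for wi, xi in zip(w, x)]

# Zero learning error: "if it's not broken, don't fix it" -- no change.
unchanged = delta_update([0.5, -0.2], [1.0, 1.0], desired=1, actual=1)
# Nonzero error: weights move in the direction that reduces it.
adjusted = delta_update([0.5, -0.2], [1.0, 1.0], desired=1, actual=0)
```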
Hebbian Learning Rule
 The Hebbian rule states:
 "Neurons that fire together, wire together."
 I.e.:
 When two connected neurons are firing at the same time,
the strength of the synapse between them increases.
 The rule builds on Hebb's 1949 learning postulate, which
states that the connection between two neurons may be
strengthened if the neurons fire simultaneously.
 It specifies how much the weight of the connection between
two units should be increased or decreased, in proportion to
the product of their activations.
Hebbian Learning Rule
 The Hebb rule determines the change in the weight of the
connection from unit i to unit j by:
 Δwij = r · ai · aj
 where r is the learning rate and ai, aj represent the
activations of ui and uj respectively.
 Thus, if both ui and uj are activated, the weight of the
connection from ui to uj is adjusted.
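A sketch of this update rule (the activations, starting weight, and learning rate r are illustrative values):

```python
def hebb_update(w_ij, a_i, a_j, r=0.5):
    """Hebb rule: Dw_ij = r * a_i * a_j -- the weight grows only
    when both units are active at the same time."""
    return w_ij + r * a_i * a_j

both_firing = hebb_update(0.25, 1.0, 1.0)  # both active: weight strengthened
one_silent = hebb_update(0.25, 1.0, 0.0)   # one silent: weight unchanged
```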
Gradient descent learning rule
 This rule states that the minimum of a function is found
by following the slope of the function.
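A minimal illustration of following the slope downhill; the function f(x) = (x − 3)² and its gradient 2(x − 3) are my own example, not from the lecture:

```python
def gradient_descent(grad, x0, c=0.1, steps=100):
    """Repeatedly step against the gradient, scaled by the
    learning rate c, to move toward the function's minimum."""
    x = x0
    for _ in range(steps):
        x -= c * grad(x)
    return x

# Minimum of f(x) = (x - 3)^2 lies at x = 3; its slope is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```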
Learning algorithms
 The most popular learning algorithms include:
 Perceptron learning
 Back propagation algorithm
Editor's Notes
Back propagation: this method has proven highly successful in
training multilayered neural nets. The network is not just
given reinforcement for how it is doing on a task;
information about errors is also filtered back through the
system and used to adjust the connections between the layers,
thus improving performance. It is a form of supervised
learning.