Neural Networks
Adri Jovin J J, M.Tech., Ph.D.
UITE221- SOFT COMPUTING
McCulloch-Pitts Neuron
• Proposed by McCulloch and Pitts in 1943
• Usually called M-P Neuron
• Connected by directed weighted paths
• Binary
• Has both excitatory connections and inhibitory connections
𝑓(𝑦𝑖𝑛) =
1 if 𝑦𝑖𝑛 ≥ 𝜃
0 if 𝑦𝑖𝑛 < 𝜃
UITE221 SOFT COMPUTING 2
[Figure: M-P neuron. Output unit Y receives n excitatory inputs x1 … xn, each over weight w, and m inhibitory inputs xn+1 … xn+m, each over weight −p, producing output y]
McCulloch-Pitts Neuron
• For inhibition
𝜃 > 𝑛𝑤 − 𝑝
• For firing
𝑘𝑤 ≥ 𝜃 > (𝑘 − 1)𝑤
• No particular training algorithm
• Analysis has to be done to determine the values of the weights and the threshold
• Used to make a neuron perform a simple logic function
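Since the M-P neuron carries out a simple logic function once suitable values are found by analysis, it is easy to check in code. A minimal sketch, with AND weights and threshold chosen here for illustration (w = 1, θ = 2, so kw ≥ θ > (k − 1)w holds for k = 2):

```python
def mp_neuron(inputs, weights, theta):
    """M-P neuron: fires (1) only when the net input reaches the threshold."""
    y_in = sum(x * w for x, w in zip(inputs, weights))
    return 1 if y_in >= theta else 0

# AND with two excitatory inputs (w = 1 each) and threshold theta = 2
print(mp_neuron((1, 1), (1, 1), 2))   # fires
print(mp_neuron((1, 0), (1, 1), 2))   # does not fire

# an active inhibitory input with weight -p (p = 2) blocks firing,
# since theta = 2 > nw - p = 0
print(mp_neuron((1, 1, 1), (1, 1, -2), 2))
```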
Linear Separability
• An ANN does not give an exact solution for a nonlinear problem; rather, it provides possible approximate solutions
• Linear separability: separation of input space into regions based on whether the network response is positive or not
• Decision line (decision-making line, decision-support line or linearly separable line) separates the positive and negative responses
𝑦𝑖𝑛 = 𝑏 + ∑ 𝑥𝑖𝑤𝑖 (sum over 𝑖 = 1 to 𝑛)
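As a small illustration of the net input and the two response regions (the weights, bias and test points below are assumptions chosen for this example, not values from the slides):

```python
def net_input(x, w, b):
    """y_in = b + sum_i x_i * w_i"""
    return b + sum(xi * wi for xi, wi in zip(x, w))

def region(x, w, b):
    """Which side of the decision line b + sum_i x_i*w_i = 0 the point lies on."""
    return 1 if net_input(x, w, b) > 0 else -1

# hypothetical decision line x1 + x2 - 1.5 = 0: only (1, 1) lies on the positive side
w, b = (1.0, 1.0), -1.5
responses = [region(x, w, b) for x in ((0, 0), (0, 1), (1, 0), (1, 1))]
```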
Hebb Network
Hebb Rule: 𝑤𝑖(𝑛𝑒𝑤) = 𝑤𝑖(𝑜𝑙𝑑) + 𝑥𝑖𝑦
The Algorithm:
Step 0 : Initialize the weights. In this network they may simply be set to zero,
i.e., 𝑤𝑖 = 0 for 𝑖 = 1 to 𝑛
Step 1 : Perform Steps 2 – 4 for each input training vector and target output
pair 𝑠 : 𝑡
Step 2 : Set the activations of the input units. Generally, the activation function of the input layer is the
identity function: 𝑥𝑖 = 𝑠𝑖 for 𝑖 = 1 to 𝑛
Step 3 : Output units activations are set: 𝑦 = 𝑡
Step 4 : Weight adjustments and bias adjustments are performed:
𝑤𝑖(𝑛𝑒𝑤) = 𝑤𝑖(𝑜𝑙𝑑) + 𝑥𝑖𝑦
𝑏(𝑛𝑒𝑤) = 𝑏(𝑜𝑙𝑑) + 𝑦
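Steps 0–4 can be sketched as a single pass over the training pairs; the bipolar AND data below is a standard worked example chosen here for illustration, not given in the slides:

```python
def hebb_train(samples):
    """One pass of the Hebb rule: w_i(new) = w_i(old) + x_i*y, b(new) = b(old) + y."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for x, t in samples:
        y = t                      # output unit activation is set to the target
        for i in range(n):
            w[i] += x[i] * y
        b += y
    return w, b

# bipolar AND as a worked example
and_samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = hebb_train(and_samples)
```

One pass over the four bipolar AND pairs yields w = (2, 2) and b = −2, which separates the patterns correctly.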
[Flowchart: Start → initialize weights → for each s : t pair → activate input units (xi = si) → activate output unit (y = t) → update weights wi(new) = wi(old) + xiy and bias b(new) = b(old) + y → when all pairs are done, Stop]
Perceptron Networks
• Also known as simple perceptron
• Single-layer feed-forward network
• Consists of 3 units: sensory unit (input unit), associator unit (hidden unit) and response unit (output unit)
• Sensory units are connected to associator units by fixed weights taking values 1, 0 or -1, assigned at random
• A binary activation function is used in the sensory and associator units
• The response unit has an activation of 1, 0 or -1; the binary step with fixed threshold 𝜃 is used as the activation for the associator
• The output signals sent from the associator unit to the response unit are binary
Perceptron Networks
• The output of the perceptron network is given by
𝑦 = 𝑓(𝑦𝑖𝑛)
where 𝑓(𝑦𝑖𝑛) is the activation function which is defined as
𝑓(𝑦𝑖𝑛) =
1 if 𝑦𝑖𝑛 > 𝜃
0 if −𝜃 ≤ 𝑦𝑖𝑛 ≤ 𝜃
−1 if 𝑦𝑖𝑛 < −𝜃
• The perceptron learning rule is used in the weight update between the associator unit and the response unit
• For each training input, the network calculates the response and determines whether an error has occurred
• Error calculation is based on the comparison of the values of targets with those of the calculated outputs
• The weights on the connections from the units that send the nonzero signal will get adjusted suitably
Perceptron Networks
The weights will be adjusted based on the learning rule if an error has occurred for a particular training pattern
𝑤𝑖(𝑛𝑒𝑤) = 𝑤𝑖(𝑜𝑙𝑑) + 𝛼𝑡𝑥𝑖
𝑏(𝑛𝑒𝑤) = 𝑏(𝑜𝑙𝑑) + 𝛼𝑡
• If there is no error, no weight update takes place and hence the training process may be stopped
• In the above equation the target value 𝑡 is +1 or −1 and 𝛼 is the learning rate
• The associator unit is found to consist of a set of sub-circuits called feature predicates
• Feature predicates are hardwired to detect the specific feature of a pattern and are equivalent to the feature detectors
• The weights present in the input layers are all fixed, while the weights on the response unit are trainable
Original Perceptron Networks
[Figure: original perceptron network. A sensory grid representing a pattern feeds associator units through fixed weights with values 1, 0 or −1 assigned at random; associator outputs (0 or 1) feed response units Y1 … Ym through trainable weights wij with thresholds θ, and the outputs y1 … ym are compared with the desired outputs t1 … tm]
Perceptron Learning Rule
The activation function applied over the network input is as follows:
𝑓(𝑦𝑖𝑛) =
1 if 𝑦𝑖𝑛 > 𝜃
0 if −𝜃 ≤ 𝑦𝑖𝑛 ≤ 𝜃
−1 if 𝑦𝑖𝑛 < −𝜃
The update of weight in case of perceptron is as shown below:
If 𝑦 ≠ 𝑡, then
𝑤(𝑛𝑒𝑤) = 𝑤(𝑜𝑙𝑑) + 𝛼𝑡𝑥
else, we have
𝑤(𝑛𝑒𝑤) = 𝑤(𝑜𝑙𝑑)
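The rule above, together with the threshold activation, can be sketched as follows; the learning rate, threshold and the bipolar AND data are illustrative assumptions:

```python
def activation(y_in, theta):
    """Threshold activation: +1 above theta, -1 below -theta, 0 in between."""
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def perceptron_train(samples, alpha=1.0, theta=0.0, max_epochs=20):
    """Perceptron rule: update w and b only when y != t; stop after a clean epoch."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        changed = False
        for x, t in samples:
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))
            if activation(y_in, theta) != t:
                for i in range(n):
                    w[i] += alpha * t * x[i]
                b += alpha * t
                changed = True
        if not changed:        # no error occurred, so training stops
            break
    return w, b

# bipolar AND as a worked example
and_data = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = perceptron_train(and_data)
```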
Architecture
[Figure: perceptron architecture. Inputs x1 … xn plus a bias input x0 = 1 feed the output unit Y through weights w1 … wn and bias b, producing output y]
Flowchart
[Flowchart: Start → initialize weights and bias, set α (0 to 1) → for each s : t pair → activate input units (xi = si) → calculate net input yin → apply activation to obtain y = f(yin) → if y ≠ t, set wi(new) = wi(old) + αtxi and b(new) = b(old) + αt, otherwise keep the weights and bias → repeat until no weight changes, then Stop]
Perceptron Network Testing Algorithm
Step 0 : The initial weights to be used here are taken from the training algorithms (the final weights obtained during training).
Step 1 : For each input vector X to be classified, perform steps 2-3.
Step 2 : Set activations of the input unit.
Step 3 : Obtain the response of the output unit
𝑦𝑖𝑛 = ∑ 𝑥𝑖𝑤𝑖 (sum over 𝑖 = 1 to 𝑛)
𝑦 = 𝑓(𝑦𝑖𝑛) =
1 if 𝑦𝑖𝑛 > 𝜃
0 if −𝜃 ≤ 𝑦𝑖𝑛 ≤ 𝜃
−1 if 𝑦𝑖𝑛 < −𝜃
Adaptive Linear Neuron (ADALINE)
• Network with single linear unit
• The input-output relationship is linear
• Uses bipolar activation
• Trained using delta rule, also known as Least Mean Square (LMS) rule or Widrow-Hoff rule
• The learning rule minimizes the mean-squared error between the activation and the target value
• Delta rule for adjusting the weight of the ith pattern (i = 1 to n):
∆𝑤𝑖 = 𝛼(𝑡 − 𝑦𝑖𝑛)𝑥𝑖
(∆𝑤𝑖 – change of weight; 𝛼 – rate of learning; 𝑦𝑖𝑛 – net input to output unit; 𝑥𝑖 – activation of input unit)
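A sketch of delta-rule training; the learning rate, epoch count and the bipolar AND data are assumptions for illustration. Note the error is measured against the net input y_in, not a thresholded output:

```python
def adaline_train(samples, alpha=0.1, epochs=100):
    """Delta (LMS) rule: w_i += alpha*(t - y_in)*x_i, b += alpha*(t - y_in)."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))
            err = t - y_in          # error against the net input
            for i in range(n):
                w[i] += alpha * err * x[i]
            b += alpha * err
    return w, b

# bipolar AND as a worked example
and_data = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = adaline_train(and_data)
```

For bipolar AND the weights settle near (0.5, 0.5) with bias near −0.5, the least-squares solution for this data.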
Architecture
[Figure: ADALINE architecture. Inputs x1 … xn and bias input x0 = 1 feed a summing unit producing yin = b + Σ xiwi; the output error generator forms t − yin, which drives the adaptive (LMS) algorithm that adjusts the weights]
Flowchart
[Flowchart: Start → set initial weights, bias and learning rate (w, b, α) → input the specified tolerance error Es → for each s : t pair → activate input units (xi = si, i = 1 to n) → calculate net input yin = b + Σ xiwi → update wi(new) = wi(old) + α(t − yin)xi and b(new) = b(old) + α(t − yin) → calculate error Ei = (t − yin)² → if Ei ≤ Es, Stop; otherwise continue with the next epoch]
Training Algorithm
Testing Algorithm
Multiple Adaptive Linear Neuron
• Multiple Adaptive Linear Neurons (MADALINE) consist of many ADALINEs in parallel with a single output unit
• Output is based on certain selection rules:
• Majority vote rule – answer is true or false
• AND rule – true if and only if both inputs are true
• Weights from the Adaline layer to the Madaline output are fixed, whereas weights between the input and Adaline layers are adjusted during the training process
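The fixed vote-based output unit can be sketched as below. The Adaline weights here are hand-set hypothetical values, not from the slides, arranged so that the two Adalines together realise bipolar XOR; ties in the vote are resolved to +1, which makes two units behave like an OR:

```python
def adaline_out(x, w, b):
    """Bipolar output of one Adaline."""
    y_in = b + sum(xi * wi for xi, wi in zip(x, w))
    return 1 if y_in >= 0 else -1

def madaline_out(x, units):
    """Fixed output unit combining the Adaline outputs by vote
    (ties resolved to +1)."""
    votes = [adaline_out(x, w, b) for w, b in units]
    return 1 if sum(votes) >= 0 else -1

# hand-set Adalines: z1 fires only for (1, -1), z2 only for (-1, 1);
# OR of the two gives bipolar XOR
xor_units = [((1, -1), -1.0), ((-1, 1), -1.0)]
```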
Flow chart
Algorithm
Algorithm (Contd…)
Back-Propagation Network
Most important development in neural networks (Bryson and Ho, 1969; Werbos, 1974; LeCun, 1985; Parker, 1985; Rumelhart, 1986)
Applied to multilayer feed-forward networks consisting of processing elements with continuous, differentiable activation functions
Networks associated with back-propagation learning algorithm are called back-propagation networks (BPNs)
Provides procedure for change of weights in BPN to classify patterns correctly
Memorization and Generalization
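A minimal sketch of the back-propagation update for a 2-2-1 sigmoid network; the architecture, learning rate and the XOR task are illustrative assumptions, not taken from the slides:

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, V, W):
    """V: 3x2 input->hidden weights (row 0 holds the hidden biases);
    W: 3 hidden->output weights (W[0] is the output bias)."""
    z = [sigmoid(V[0][j] + x[0] * V[1][j] + x[1] * V[2][j]) for j in range(2)]
    y = sigmoid(W[0] + z[0] * W[1] + z[1] * W[2])
    return z, y

def backprop_step(x, t, V, W, alpha=0.5):
    """One update: output delta first, then the hidden-layer deltas."""
    z, y = forward(x, V, W)
    dk = (t - y) * y * (1.0 - y)                                  # output delta
    dj = [dk * W[j + 1] * z[j] * (1.0 - z[j]) for j in range(2)]  # hidden deltas
    W[0] += alpha * dk
    for j in range(2):
        W[j + 1] += alpha * dk * z[j]
        V[0][j] += alpha * dj[j]
        V[1][j] += alpha * dj[j] * x[0]
        V[2][j] += alpha * dj[j] * x[1]

# demo: the squared error on XOR typically falls as training proceeds
random.seed(1)
V = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(3)]
W = [random.uniform(-0.5, 0.5) for _ in range(3)]
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
for _ in range(3000):
    for x, t in data:
        backprop_step(x, t, V, W)
```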
Architecture
Flowchart
Training Algorithm
Training Algorithm (Contd…)
Training Algorithm (Contd…)
Learning Factors
• Initial Weights
• Learning Rate
• Momentum Factor
• Generalization
• Number of training data
• Number of hidden layers
Testing Algorithm
Radial Basis Function Network
• Radial Basis Function (RBF) network is a classification and function-approximation neural network developed by M.J.D. Powell
• Uses either sigmoidal or Gaussian kernel functions
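A sketch of a Gaussian-kernel RBF network. The two centres and the unit output weights below are hypothetical choices; with them, inputs near either centre score higher than inputs between the centres:

```python
import math

def gaussian_rbf(x, center, sigma=1.0):
    """Gaussian kernel: phi(x) = exp(-||x - c||^2 / (2*sigma^2))."""
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def rbf_net(x, centers, weights, sigma=1.0):
    """Output layer: weighted sum of the basis-function responses."""
    return sum(w * gaussian_rbf(x, c, sigma) for w, c in zip(weights, centers))

# hypothetical two-centre net
centers = [(0.0, 0.0), (1.0, 1.0)]
weights = [1.0, 1.0]
```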
Flowchart
Training Algorithm
Training Algorithm (Contd…)
Time Delay Neural Network
• Respond to sequence of patterns
• Produce a particular output sequence in response to particular sequence of inputs
[Figures: time delay neural network (FIR filter); TDNN with output feedback (IIR filter)]
Associative Memory Networks
• Can store a set of patterns as memories
• Content-addressable memories (CAM)
• Associates data to address
Training algorithms for pattern association
Hebb Rule
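For pattern association, the Hebb rule amounts to an outer-product weight matrix, W = Σp sᵀp tp, summed over the stored pairs. A sketch with two hand-chosen orthogonal bipolar pairs (an assumption for illustration):

```python
def hebb_associate(pairs):
    """Weight matrix by the Hebb (outer-product) rule: W[i][j] = sum_p s_p[i]*t_p[j]."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += s[i] * t[j]
    return W

def recall(x, W):
    """Bipolar recall: y_j = sign(sum_i x_i * W[i][j]), with ties to +1."""
    m = len(W[0])
    y_in = [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(m)]
    return [1 if v >= 0 else -1 for v in y_in]

# two orthogonal input patterns recall their targets exactly
pairs = [((1, 1, -1, -1), (1, -1)), ((1, -1, 1, -1), (-1, 1))]
W = hebb_associate(pairs)
```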
Auto-associative Memory Network
• Training input and the target output vectors are the same
• Determination of weights of the net is called storing of vectors
Flowchart
Training Algorithm
Testing Algorithm
Heteroassociative Memory Network
• The training input and the target output are different
• The weights are determined such that the net can store a set of pattern associations
• Determination of weight is done either by Hebb rule or delta rule
Testing Algorithm
Bidirectional Associative Memory (BAM)
• Developed by Kosko in 1988
• Performs forward and backward associative searches for stored stimulus responses
• Recurrent pattern-matching network that encodes binary or bipolar patterns using the Hebbian rule
• Two types: discrete and continuous
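A sketch of discrete BAM recall under the Hebbian (outer-product) encoding; the stored pairs and the tie-to-+1 convention are assumptions made here for the example:

```python
def bam_weights(pairs):
    """Hebbian encoding: W[i][j] = sum over stored pairs of s[i] * t[j]."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += s[i] * t[j]
    return W

def bam_recall(x, W, steps=10):
    """Alternate forward (X layer -> Y layer) and backward passes until the
    X pattern stops changing. Ties in the net input resolve to +1 here."""
    n, m = len(W), len(W[0])
    x = list(x)
    y = [1] * m
    for _ in range(steps):
        y = [1 if sum(x[i] * W[i][j] for i in range(n)) >= 0 else -1
             for j in range(m)]
        x_new = [1 if sum(y[j] * W[i][j] for j in range(m)) >= 0 else -1
                 for i in range(n)]
        if x_new == x:
            break
        x = x_new
    return x, y

pairs = [((1, 1, -1, -1), (1, 1)), ((1, -1, 1, -1), (1, -1))]
W = bam_weights(pairs)
# recall from an X pattern with one flipped bit
x_rec, y_rec = bam_recall([1, 1, -1, 1], W)
```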
Testing algorithm (Discrete BAM)
Hopfield Networks
Developed by John J. Hopfield in 1982
Conforms to the asynchronous nature of biological neurons
Promoted the design of the first analog VLSI neural chip
Two types: discrete and continuous
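A sketch of a discrete Hopfield net with Hebbian storage (zero self-connections) and asynchronous updates; the stored pattern and the tie convention are illustrative assumptions:

```python
def hopfield_weights(patterns):
    """W = sum_p s_p^T s_p with zero diagonal (no self-connections)."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += s[i] * s[j]
    return W

def hopfield_recall(x, W, sweeps=5):
    """Asynchronous updates: each unit in turn takes the sign of its net
    input (ties resolved to +1, a convention chosen here)."""
    x = list(x)
    n = len(x)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            net = sum(W[i][j] * x[j] for j in range(n))
            new = 1 if net >= 0 else -1
            if new != x[i]:
                x[i] = new
                changed = True
        if not changed:
            break
    return x

W = hopfield_weights([[1, 1, -1, -1]])
recovered = hopfield_recall([1, 1, -1, 1], W)  # start one bit away
```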
Architecture of Discrete Hopfield Net
Training algorithm
Testing Algorithm
Continuous Hopfield Network
Iterative Auto-associative Memory Networks
• Also known as recurrent auto-associative networks
• Developed by James Anderson in 1977
• Brain-in-the-box model, an extension described in 1972
• An activity pattern inside the box receives positive feedback on certain components, which has the effect of forcing it outward
Training Algorithm
Testing Algorithm
Unsupervised learning networks
Kohonen self-organizing feature maps
Learning Vector Quantization
Counter Propagation networks
Adaptive Resonance Theory network
Kohonen Self-organizing Feature Maps
Feature Mapping – converts patterns of arbitrary dimensionality into a response of a one- or two-dimensional array of neurons (conversion of a wide pattern space into a feature space)
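One competitive training step of a Kohonen map can be sketched as follows; the 1-D map, learning rate and toy data are assumptions chosen here:

```python
def som_step(x, weights, alpha=0.5, radius=0):
    """One Kohonen step on a 1-D map: the winner is the unit whose weight
    vector is closest to x; it and its neighbours within `radius` move
    toward the input by a fraction alpha."""
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]
    j = dists.index(min(dists))                       # winning unit
    for k in range(max(0, j - radius), min(len(weights), j + radius + 1)):
        weights[k] = [wi + alpha * (xi - wi) for wi, xi in zip(weights[k], x)]
    return j

# two map units; an input near the first unit pulls only that unit closer
units = [[0.0, 0.0], [1.0, 1.0]]
winner = som_step((0.2, 0.0), units)
```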
Architecture
Flowchart
Training Algorithm
Training Algorithm (Contd…)
Kohonen Self-organizing Motor Map
Learning Vector Quantization
Process of classifying the patterns where each output unit represents a particular class
For each class several units should be used
The output weight vector is called the reference vector or code book vector for the class which the unit represents
Special case of competitive net
Minimizes misclassification
Used in Optical Character Recognition, speech processing etc.
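A sketch of the LVQ1 update (the code-book vectors, labels and learning rate below are hypothetical): the winning reference vector moves toward the input when the classes match, and away from it otherwise:

```python
def lvq1_step(x, target, codebooks, labels, alpha=0.1):
    """LVQ1: find the nearest code-book vector; pull it toward x if its class
    matches the target class, push it away if not."""
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in codebooks]
    j = dists.index(min(dists))                        # winning reference vector
    step = alpha if labels[j] == target else -alpha
    codebooks[j] = [wi + step * (xi - wi) for wi, xi in zip(codebooks[j], x)]
    return j

# two 1-D reference vectors with hypothetical class labels
books = [[0.0], [1.0]]
labels = ["A", "B"]
j = lvq1_step([0.2], "A", books, labels)   # correct class: winner moves toward x
```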
Architecture
Flowchart
Training Algorithm
Variants
• LVQ 2
• LVQ 2.1
• LVQ 3
Counter Propagation Networks
Proposed by Hecht-Nielsen in 1987
Based on input, output and clustering layers
Used in data compression, function approximation and pattern association
Constructed from instar-outstar model
Full Counter Propagation Net
Phase-I of Full CPN
Phase-II of Full CPN
Flowchart
Training Algorithm
Training Algorithm (Contd…)
Testing Algorithm
Forward-only CP Net
Flowchart
Training Algorithm
Training Algorithm (Contd…)
Testing Algorithm
Adaptive Resonance Theory Network
Developed by Steven Grossberg and Gail Carpenter in 1987
Based on competition
Finds categories autonomously and learns new categories if needed
Two types: ART 1: designed for clustering binary vectors
ART 2: designed to accept continuous-valued vectors
Fundamental Algorithm
ART 1
Made up of two units:
1. Computational Units
2. Supplemental Units
Supplemental part
Flowchart
Training Algorithm
ART 2
Supplemental part
Flowchart
Training Algorithm
Training Algorithm (Contd…)
Training Algorithm (Contd…)
References
Rajasekaran, S., & Pai, G. V. (2017). Neural Networks, Fuzzy Systems and Evolutionary Algorithms: Synthesis and Applications. PHI Learning Pvt. Ltd.
Haykin, S. (2010). Neural Networks and Learning Machines, 3/E. Pearson Education India.
Sivanandam, S. N., & Deepa, S. N. (2007). Principles of soft computing. John Wiley & Sons.
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docx
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 

Neural Networks Explained

  • 1. Neural Networks Adri Jovin J J, M.Tech., Ph.D. UITE221- SOFT COMPUTING
  • 2. McCulloch-Pitts Neuron • Proposed in 1943 • Usually called the M-P neuron • Neurons are connected by directed weighted paths • Binary activation • Has both excitatory connections (weight w) and inhibitory connections (weight −p) f(y_in) = 1 if y_in ≥ θ; 0 if y_in < θ UITE221 SOFT COMPUTING 2 [Figure: excitatory inputs x1 … xn (weight w) and inhibitory inputs xn+1 … xn+m (weight −p) feeding neuron Y with output y]
  • 3. McCulloch-Pitts Neuron • Condition for inhibition: θ > nw − p • Condition for firing (k or more active excitatory inputs): kw ≥ θ > (k − 1)w • No particular training algorithm • Analysis has to be done to determine the values of the weights and the threshold • Used to make a neuron perform a simple logic function UITE221 SOFT COMPUTING 3
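As a minimal sketch of such a logic function, an M-P neuron with excitatory weight w = 1 and threshold θ = 2 (values chosen here for illustration) realizes logical AND:

```python
def mp_neuron(inputs, weights, theta):
    # Fires (outputs 1) only when the net input reaches the threshold.
    y_in = sum(x * w for x, w in zip(inputs, weights))
    return 1 if y_in >= theta else 0

# AND function: both excitatory inputs (w = 1) must be active to reach theta = 2.
truth_table = {(x1, x2): mp_neuron([x1, x2], [1, 1], theta=2)
               for x1 in (0, 1) for x2 in (0, 1)}
# truth_table == {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
```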
  • 4. Linear Separability • An ANN does not give an exact solution for nonlinear problems; rather, it provides possible approximate solutions • Linear separability: separation of the input space into regions based on whether the network response is positive or negative • The decision line (decision-making line, decision-support line or linearly separable line) separates the positive and negative responses • Net input: y_in = b + Σ_{i=1}^{n} x_i w_i UITE221 SOFT COMPUTING 4
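For instance, with illustrative weights w = [1, 1] and bias b = −1.5 (values chosen here, not from the slides), the sign of the net input places each binary point on one side of the decision line b + Σ x_i w_i = 0:

```python
def net_input(x, w, b):
    # Net input of a single-layer unit: b + sum of x_i * w_i.
    return b + sum(xi * wi for xi, wi in zip(x, w))

w, b = [1.0, 1.0], -1.5  # decision line: x1 + x2 = 1.5
positive = [p for p in [(0, 0), (0, 1), (1, 0), (1, 1)]
            if net_input(p, w, b) > 0]
# positive == [(1, 1)]: only (1, 1) lies on the positive side (an AND-like split)
```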
  • 5. Hebb Network Hebb Rule: w_i(new) = w_i(old) + x_i y The Algorithm: Step 0 : Initialize the weights. In this network they may be set to zero, i.e., w_i = 0 for i = 1 to n Step 1 : Perform Steps 2-4 for each input training vector and target output pair s : t Step 2 : Set the activations of the input units. Generally, the activation function of the input layer is the identity function: x_i = s_i for i = 1 to n Step 3 : Set the activation of the output unit: y = t Step 4 : Adjust the weights and bias: w_i(new) = w_i(old) + x_i y, b(new) = b(old) + y UITE221 SOFT COMPUTING 5 [Flowchart: Start → initialize weights → for each s : t → activate input units x_i = s_i → activate output unit y = t → update weights w_i(new) = w_i(old) + x_i y and bias b(new) = b(old) + y → Stop]
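The algorithm above can be sketched as follows, trained here on the bipolar AND function as an illustrative example:

```python
def hebb_train(samples, n):
    w, b = [0.0] * n, 0.0                 # Step 0: weights and bias set to zero
    for s, t in samples:                  # Steps 1-3: x_i = s_i, y = t
        for i in range(n):
            w[i] += s[i] * t              # Step 4: w_i(new) = w_i(old) + x_i * y
        b += t                            #         b(new) = b(old) + y
    return w, b

# Bipolar AND: target is +1 only when both inputs are +1.
and_data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = hebb_train(and_data, 2)
# w == [2.0, 2.0], b == -2.0; b + x.w is positive only for input (1, 1)
```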
  • 6. Perceptron Networks • Also known as the simple perceptron • Single-layer feed-forward network • Consists of 3 units: sensory unit (input unit), associator unit (hidden unit) and response unit (output unit) • Sensory units are connected to associator units with fixed weights of value 1, 0 or −1, assigned at random • A binary activation function is used in the sensory and associator units; the binary step function with fixed threshold θ serves as the associator activation • The response unit has an activation of 1, 0 or −1 • The output signal sent from the associator unit to the response unit is binary UITE221 SOFT COMPUTING 6
  • 7. Perceptron Networks • The output of the perceptron network is given by y = f(y_in), where the activation function f is defined as f(y_in) = 1 if y_in > θ; 0 if −θ ≤ y_in ≤ θ; −1 if y_in < −θ • The perceptron learning rule is used to update the weights between the associator unit and the response unit • For each training input the network calculates the response and determines whether an error has occurred • Error calculation is based on comparing the target values with the calculated outputs • The weights on the connections from the units that send a nonzero signal get adjusted suitably UITE221 SOFT COMPUTING 7
  • 8. Perceptron Networks • The weights are adjusted based on the learning rule if an error has occurred for a particular training pattern: w_i(new) = w_i(old) + αtx_i, b(new) = b(old) + αt • If there is no error, no weight update takes place and hence the training process may be stopped • In the above equations the target value t is +1 or −1 and α is the learning rate • The associator unit consists of a set of sub-circuits called feature predicates • Feature predicates are hardwired to detect specific features of a pattern and are equivalent to feature detectors • The weights in the input layer are all fixed, while the weights on the response unit are trainable UITE221 SOFT COMPUTING 8
  • 9. Original Perceptron Network [Figure: sensory unit (a sensor grid representing any pattern) connected through fixed random weights of 1, 0 or −1 to associator units x1 … xn with outputs 0 or 1; trainable weights w11 … wnm connect the associators to response units Y1 … Ym with thresholds θ1 … θn and desired outputs t1 … tn] UITE221 SOFT COMPUTING 9
  • 10. Perceptron Learning Rule The activation function applied over the net input is: f(y_in) = 1 if y_in > θ; 0 if −θ ≤ y_in ≤ θ; −1 if y_in < −θ The weight update in the perceptron is as follows: if y ≠ t, then w(new) = w(old) + αtx; else w(new) = w(old) UITE221 SOFT COMPUTING 10
  • 11. Architecture [Figure: single-layer perceptron with inputs x1 … xn, bias input x0 = 1 with weight b, weights w1 … wn, and output unit Y producing y] UITE221 SOFT COMPUTING 11
  • 12. Flowchart [Flowchart: Start → initialize weights and bias, set α (0 to 1) → for each s : t → activate input units x_i = s_i → calculate net input y_in → apply activation to obtain y = f(y_in) → if y ≠ t, update w_i(new) = w_i(old) + αtx_i and b(new) = b(old) + αt, else keep weights and bias unchanged → repeat while any weight changes, then Stop] UITE221 SOFT COMPUTING 12
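The training loop in the flowchart can be sketched as follows, shown here on the bipolar AND function with α = 1 and θ = 0.2 as illustrative values:

```python
def activation(y_in, theta):
    # Three-valued perceptron activation with threshold theta.
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def perceptron_train(samples, n, alpha=1.0, theta=0.2, max_epochs=100):
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        changed = False
        for x, t in samples:
            y = activation(b + sum(xi * wi for xi, wi in zip(x, w)), theta)
            if y != t:  # update only when an error occurs
                for i in range(n):
                    w[i] += alpha * t * x[i]
                b += alpha * t
                changed = True
        if not changed:  # no weight change over a full epoch: stop
            break
    return w, b

and_data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = perceptron_train(and_data, 2)
# converges to w == [1.0, 1.0], b == -1.0
```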
  • 13. Perceptron Network Testing Algorithm Step 0 : The initial weights used here are taken from the training algorithm (the final weights obtained during training). Step 1 : For each input vector X to be classified, perform Steps 2-3. Step 2 : Set the activations of the input units. Step 3 : Obtain the response of the output unit: y_in = Σ_{i=1}^{n} x_i w_i, y = f(y_in) = 1 if y_in > θ; 0 if −θ ≤ y_in ≤ θ; −1 if y_in < −θ UITE221 SOFT COMPUTING 13
  • 14. Adaptive Linear Neuron (ADALINE) • Network with a single linear unit • The input-output relationship is linear • Uses bipolar activation • Trained using the delta rule, also known as the Least Mean Square (LMS) rule or Widrow-Hoff rule • The learning rule minimizes the mean squared error between the activation and the target value • Delta rule for adjusting the weight of the ith input (i = 1 to n): Δw_i = α(t − y_in)x_i, where Δw_i is the change of weight, α the rate of learning, y_in the net input to the output unit, and x_i the activation of the input unit UITE221 SOFT COMPUTING 14
  • 15. Architecture [Figure: Adaline with inputs x1 … xn, bias input x0 = 1 with weight b, weights w1 … wn feeding a summing unit Σ with net input y_in = Σ x_i w_i; an output-error generator compares y_in with the target t to form the error t − y_in, which drives the adaptive (LMS) algorithm] UITE221 SOFT COMPUTING 15
  • 16. Flowchart [Flowchart: Start → set initial weights, bias and learning rate w, b, α → input the specified tolerance error E_s → for each s : t → activate input-layer units x_i = s_i (i = 1 to n) → calculate net input y_in = b + Σ x_i w_i → update weights w_i(new) = w_i(old) + α(t − y_in)x_i and bias b(new) = b(old) + α(t − y_in) → calculate error E_i = Σ(t − y_in)² → if E_i ≤ E_s, Stop; otherwise continue training] UITE221 SOFT COMPUTING 16
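The delta-rule loop in the flowchart can be sketched as follows (bipolar AND again as the illustrative task; α = 0.05 and the epoch count are choices made here). Unlike the perceptron, the Adaline keeps shrinking the error t − y_in toward the least-squares solution rather than stopping at correct classification:

```python
def adaline_train(samples, n, alpha=0.05, epochs=500):
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))
            err = t - y_in                   # delta rule uses the raw net input
            for i in range(n):
                w[i] += alpha * err * x[i]   # w_i(new) = w_i(old) + alpha*(t - y_in)*x_i
            b += alpha * err
    return w, b

and_data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = adaline_train(and_data, 2)
# weights approach the least-squares solution w ~ [0.5, 0.5], b ~ -0.5
```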
  • 19. Multiple Adaptive Linear Neuron • A Multiple Adaptive Linear Neuron (MADALINE) network consists of many Adalines in parallel with a single output unit • The output is based on certain selection rules • Majority-vote rule: the answer is true or false by majority • AND rule: true if and only if both inputs are true • Weights from the Adaline layer to the Madaline layer are fixed, whereas weights between the input and Adaline layers are adjusted during the training process UITE221 SOFT COMPUTING 19
  • 20. Flow chart UITE221 SOFT COMPUTING 20
  • 23. Back-Propagation Network One of the most important developments in neural networks (Bryson and Ho, 1969; Werbos, 1974; LeCun, 1985; Parker, 1985; Rumelhart, 1986) Applied to multilayer feed-forward networks consisting of processing elements with continuous differentiable activation functions Networks associated with the back-propagation learning algorithm are called back-propagation networks (BPNs) Provides a procedure for changing the weights in a BPN to classify patterns correctly Key concerns: memorization and generalization UITE221 SOFT COMPUTING 23
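A compact sketch of the back-propagation procedure for a 2-2-1 sigmoid network follows; the OR task, learning rate, epoch count, and initialization range are illustrative choices made here, not taken from the slides:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(v, w, x):
    # v: input-to-hidden weights (rows for x1, x2, bias); w: hidden-to-output.
    h = [sigmoid(v[0][j] * x[0] + v[1][j] * x[1] + v[2][j]) for j in range(2)]
    y = sigmoid(w[0] * h[0] + w[1] * h[1] + w[2])
    return h, y

def train(data, epochs=5000, alpha=0.5, seed=1):
    random.seed(seed)
    v = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(3)]
    w = [random.uniform(-0.5, 0.5) for _ in range(3)]
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(v, w, x)
            delta_out = (t - y) * y * (1 - y)          # output-layer error term
            delta_h = [delta_out * w[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):                         # propagate corrections back
                w[j] += alpha * delta_out * h[j]
                for i in range(2):
                    v[i][j] += alpha * delta_h[j] * x[i]
                v[2][j] += alpha * delta_h[j]
            w[2] += alpha * delta_out
    return v, w

or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
v, w = train(or_data)
```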
  • 29. Learning Factors • Initial Weights • Learning Rate • Momentum Factor • Generalization • Number of training data • Number of hidden layers UITE221 SOFT COMPUTING 29
  • 31. Radial Basis Function Network • The Radial Basis Function (RBF) network is a classification and function-approximation neural network developed by M.J.D. Powell • Uses either sigmoidal or Gaussian kernel functions UITE221 SOFT COMPUTING 31
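A sketch of the RBF forward pass with Gaussian kernels; the centers, widths, and output weights below are illustrative placeholders, not trained values:

```python
import math

def rbf_forward(x, centers, sigmas, out_w, out_b):
    # Hidden layer: Gaussian kernel response for each center.
    phi = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2.0 * s * s))
           for c, s in zip(centers, sigmas)]
    # Output layer: linear combination of the kernel responses.
    return out_b + sum(p * w for p, w in zip(phi, out_w))

centers, sigmas = [[0, 0], [1, 1]], [1.0, 1.0]
y = rbf_forward([0, 0], centers, sigmas, out_w=[1.0, 1.0], out_b=0.0)
# y == 1 + exp(-1): the first kernel responds fully at its own center
```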
  • 35. Time Delay Neural Network • Responds to a sequence of patterns • Produces a particular output sequence in response to a particular sequence of inputs UITE221 SOFT COMPUTING 35 [Figures: time delay neural network (FIR filter); TDNN with output feedback (IIR filter)]
  • 36. Associative Memory Networks • Can store a set of patterns as memories • Content-addressable memories (CAM) • Associates data to address UITE221 SOFT COMPUTING 36
  • 37. Training algorithms for pattern association Hebb Rule UITE221 SOFT COMPUTING 37
  • 38. Auto-associative Memory Network • Training input and the target output vectors are the same • Determination of weights of the net is called storing of vectors UITE221 SOFT COMPUTING 38
  • 42. Heteroassociative Memory Network • The training input and the target output vectors are different • The weights are determined such that the net can store a set of pattern associations • The weights are determined using either the Hebb rule or the delta rule UITE221 SOFT COMPUTING 42
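A sketch of Hebb-rule storage and recall for a heteroassociative net; the stored pair below is an illustrative example, not from the slides:

```python
def store(pairs, n, m):
    # Hebb rule: W[i][j] accumulates s_i * t_j over all training pairs.
    W = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += s[i] * t[j]
    return W

def recall(W, x):
    # Bipolar threshold on the net input to each output unit.
    return [1 if sum(W[i][j] * x[i] for i in range(len(x))) >= 0 else -1
            for j in range(len(W[0]))]

W = store([([1, 1, -1, -1], [1, -1])], n=4, m=2)
out = recall(W, [1, 1, 1, -1])
# out == [1, -1]: the stored response is recovered even with one flipped input bit
```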
  • 44. Bidirectional Associative Memory (BAM) • Developed by Kosko in 1988 • Performs forward and backward associative searches for stored stimulus-response pairs • Recurrent pattern-matching network that encodes binary or bipolar patterns using the Hebbian rule • Two types: discrete and continuous UITE221 SOFT COMPUTING 44
  • 45. Testing algorithm (Discrete BAM) UITE221 SOFT COMPUTING 45
  • 46. Testing algorithm (Discrete BAM) UITE221 SOFT COMPUTING 46
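Since the testing-algorithm details are not reproduced above, the following is only a sketch of discrete BAM storage and bidirectional recall; the stored pair and the convention of keeping the previous activation on a zero net input are assumptions:

```python
def bam_store(pairs, n, m):
    # Hebbian encoding: W[i][j] accumulates s_i * t_j over the stored pairs.
    W = [[0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += s[i] * t[j]
    return W

def bam_recall(W, x, steps=3):
    n, m = len(W), len(W[0])
    x, y = list(x), [0] * m
    sgn = lambda s, prev: 1 if s > 0 else (-1 if s < 0 else prev)
    for _ in range(steps):
        # Forward search: X layer -> Y layer.
        y = [sgn(sum(W[i][j] * x[i] for i in range(n)), y[j]) for j in range(m)]
        # Backward search: Y layer -> X layer.
        x = [sgn(sum(W[i][j] * y[j] for j in range(m)), x[i]) for i in range(n)]
    return x, y

W = bam_store([([1, -1, 1], [1, 1, -1, -1])], n=3, m=4)
x, y = bam_recall(W, [1, -1, -1])  # input with one corrupted bit
# the net settles back on the stored pair
```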
  • 47. Hopfield Networks Developed by John J. Hopfield in 1982 Conforms to the asynchronous nature of biological neurons Promoted the design of the first analog VLSI neural chip Two types: discrete and continuous UITE221 SOFT COMPUTING 47
  • 48. Architecture of Discrete Hopfield Net UITE221 SOFT COMPUTING 48
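A minimal sketch of the discrete Hopfield net shown above (zero self-connections, asynchronous unit-by-unit updates); the stored pattern is an illustrative choice:

```python
def hopfield_store(patterns):
    # Hebbian storage with a zero diagonal (no self-connections).
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def hopfield_recall(W, x, sweeps=5):
    x, n = list(x), len(x)
    for _ in range(sweeps):
        for i in range(n):  # asynchronous: one unit updates at a time
            s = sum(W[i][j] * x[j] for j in range(n))
            if s != 0:
                x[i] = 1 if s > 0 else -1
    return x

W = hopfield_store([[1, 1, -1, -1]])
x = hopfield_recall(W, [1, -1, -1, -1])  # probe with one corrupted bit
# x is restored to the stored pattern [1, 1, -1, -1]
```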
  • 52. Iterative Auto-associative Memory Networks • Also known as recurrent auto-associative networks • Developed by James Anderson in 1977 • The brain-in-the-box model extends the linear associator he described in 1972 • An activity pattern inside the box receives positive feedback on certain components, which has the effect of forcing it outward UITE221 SOFT COMPUTING 52
  • 55. Unsupervised learning networks Kohonen self-organizing feature maps Learning Vector Quantization Counter Propagation networks Adaptive Resonance Theory network
  • 56. Kohonen Self-organizing Feature Maps Feature mapping converts patterns of arbitrary dimensionality into the response of a one- or two-dimensional array of neurons (conversion of a wide pattern space into a feature space) UITE221 SOFT COMPUTING 56
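One training step of a Kohonen map can be sketched as follows; this uses a one-dimensional map with a simplified square neighborhood, and the grid, α, and radius values are illustrative choices:

```python
def som_step(weights, x, alpha, radius):
    # Competition: the winner is the unit whose weight vector is closest to x.
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    win = dists.index(min(dists))
    # Cooperation: the winner and its neighbours move toward the input.
    for k in range(max(0, win - radius), min(len(weights), win + radius + 1)):
        weights[k] = [wk + alpha * (xi - wk) for wk, xi in zip(weights[k], x)]
    return win

weights = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
win = som_step(weights, [0.9, 0.9], alpha=0.5, radius=1)
# win == 2; units 1 and 2 move toward the input, unit 0 is outside the radius
```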
  • 61. Kohonen Self-organizing Motor Map UITE221 SOFT COMPUTING 61
  • 62. Learning Vector Quantization Process of classifying patterns where each output unit represents a particular class Several units should be used for each class The output weight vector is called the reference vector or codebook vector for the class which the unit represents Special case of a competitive net Minimizes misclassification Used in optical character recognition, speech processing, etc. UITE221 SOFT COMPUTING 62
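One LVQ1 update step can be sketched as follows; the codebook vectors and class labels are illustrative placeholders:

```python
def lvq1_step(codebooks, labels, x, target, alpha):
    # Competition: find the closest reference (codebook) vector.
    dists = [sum((ci - xi) ** 2 for ci, xi in zip(c, x)) for c in codebooks]
    win = dists.index(min(dists))
    # Move the winner toward x if its class matches the target,
    # away from x otherwise; this is how misclassification is reduced.
    direction = 1 if labels[win] == target else -1
    codebooks[win] = [c + direction * alpha * (xi - c)
                      for c, xi in zip(codebooks[win], x)]
    return win

codebooks, labels = [[0.0, 0.0], [1.0, 1.0]], ["A", "B"]
win = lvq1_step(codebooks, labels, [0.2, 0.1], target="A", alpha=0.3)
# win == 0 and codebook 0 moves toward the input (matching class)
```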
  • 66. Variants • LVQ 2 • LVQ 2.1 • LVQ 3 UITE221 SOFT COMPUTING 66
  • 67. Counter Propagation Networks Proposed by Robert Hecht-Nielsen in 1987 Based on input, output and clustering layers Used in data compression, function approximation and pattern association Constructed from the instar-outstar model UITE221 SOFT COMPUTING 67
  • 68. Full Counter Propagation Net UITE221 SOFT COMPUTING 68
  • 69. Phase-I of Full CPN UITE221 SOFT COMPUTING 69
  • 70. Phase-II of Full CPN UITE221 SOFT COMPUTING 70
  • 75. Forward-only CP Net UITE221 SOFT COMPUTING 75
  • 80. Adaptive Resonance Theory Network Developed by Steven Grossberg and Gail Carpenter in 1987 Based on competition Finds categories autonomously and learns new categories if needed Two types: ART 1: designed for clustering binary vectors ART 2: designed to accept continuous-valued vectors UITE221 SOFT COMPUTING 80
  • 82. ART 1 Made up of two units: 1. Computational Units 2. Supplemental Units UITE221 SOFT COMPUTING 82 Supplemental part
  • 86. ART 2 UITE221 SOFT COMPUTING 86 Supplemental part
  • 91. References Rajasekaran, S., & Pai, G. V. (2017). Neural Networks, Fuzzy Systems and Evolutionary Algorithms: Synthesis and Applications. PHI Learning Pvt. Ltd. Haykin, S. (2010). Neural Networks and Learning Machines, 3/e. Pearson Education India. Sivanandam, S. N., & Deepa, S. N. (2007). Principles of Soft Computing. John Wiley & Sons. UITE221 SOFT COMPUTING 91