Neural Networks and
Fuzzy Systems
Single Layer Perceptron Classifier
Dr. Tamer Ahmed Farrag
Course No.: 803522-3
Course Outline
Part I : Neural Networks (11 weeks)
• Introduction to Machine Learning
• Fundamental Concepts of Artificial Neural Networks
(ANN)
• Single Layer Perceptron Classifier
• Multi-layer Feed forward Networks
• Single layer FeedBack Networks
• Unsupervised learning
Part II : Fuzzy Systems (4 weeks)
• Fuzzy set theory
• Fuzzy Systems
Building Neural Networks Strategy
• Formulating neural network solutions for particular
problems is a multi-stage process:
1. Understand and specify the problem in terms of inputs and
required outputs
2. Take the simplest form of network you think might be able to
solve your problem
3. Try to find the appropriate connection weights (including neuron
thresholds) so that the network produces the right outputs for
each input in its training data
4. Make sure that the network works on its training data and test its
generalization by checking its performance on new testing data
5. If the network doesn’t perform well enough, go back to stage 3
and try harder.
6. If the network still doesn’t perform well enough, go back to stage
2 and try harder.
7. If the network still doesn’t perform well enough, go back to stage
1 and try harder.
8. Problem solved – or not.
Decision Hyperplanes and Linear Separability
• If we have two inputs, the decision boundary is a one-dimensional straight line in the two-dimensional space of possible input values.
• In general, a set of points in n-dimensional space is linearly separable into two classes if there is a hyperplane of (n − 1) dimensions that separates the two sets.
• This hyperplane is clearly still linear (i.e., straight or flat or non-curved) and can still only divide the space into two regions.
• Problems with input patterns that can be classified using a single hyperplane are said to be linearly separable. Problems (such as XOR) which cannot be classified in this way are said to be non-linearly separable.
Linearly Separable vs. Non-linearly Separable Binary Classification Problems

(Figure: example scatter plots of Class 1 vs. Class 2 points; in one plot the classes are linearly separable, in the remaining plots they are non-linearly separable.)
Decision Boundaries for Some Logic Gates

AND gate (linearly separable):
x1 | x2 | out
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

OR gate (linearly separable):
x1 | x2 | out
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

XOR gate (non-linearly separable):
x1 | x2 | out
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

(Figure: the four input points of each gate plotted in the (x1, x2) plane, with annotations out = 2 on the AND plot and out = 1 on the OR plot; a single straight line separates the classes for AND and for OR, but no single straight line can separate the XOR classes.)
Implementation of Logical NOT, AND, and OR using
McCulloch-Pitts neuron
NOT:
x1 | out
0 | 1
1 | 0

AND:
x1 | x2 | out
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

OR:
x1 | x2 | out
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1
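These gates can be implemented directly in code. A minimal sketch in Python (the helper name mp_neuron and the particular weights/thresholds are illustrative choices, not taken from the slides):

```python
# McCulloch-Pitts neuron: fire (output 1) when the weighted input sum
# reaches the threshold, otherwise output 0.
def mp_neuron(inputs, weights, threshold):
    z = sum(w * x for w, x in zip(weights, inputs))
    return 1 if z >= threshold else 0

# One of many valid hand-picked parameter choices:
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=2)
OR  = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=1)
NOT = lambda x1:     mp_neuron([x1],     [-1],   threshold=0)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2))
print("NOT 0 =", NOT(0), ", NOT 1 =", NOT(1))
```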
How to Find Weights and Thresholds?
• Constructing simple networks by hand (e.g., by
trial and error) is one thing. But what about harder
problems?
• How long should we keep looking for a solution?
We need to be able to calculate appropriate
parameter values rather than searching for
solutions by trial and error.
• Each training pattern produces a linear inequality
for the output in terms of the inputs and the
network parameters. These can be used to
compute the weights and thresholds.
Finding Weights Analytically for the AND Network

(Figure: a single threshold unit with inputs x1, x2, weights w1, w2, threshold T, and output out.)

Activation function:
F(z) = 0 for z < T, 1 for z ≥ T, where z = x1·w1 + x2·w2

Truth table:
x1 | x2 | out
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

One inequality per training pattern:
0·w1 + 0·w2 < T
0·w1 + 1·w2 < T
1·w1 + 0·w2 < T
1·w1 + 1·w2 ≥ T

Simplified:
T > 0
w2 < T
w1 < T
w1 + w2 ≥ T

Easy to solve (infinite number of solutions). For example, assume T = 1.5, w1 = 1, w2 = 1.
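A quick way to convince yourself that this choice works is to evaluate all four patterns; a small sketch:

```python
# Verify that T = 1.5, w1 = w2 = 1 satisfies all four AND inequalities.
w1, w2, T = 1.0, 1.0, 1.5

def out(x1, x2):
    z = x1 * w1 + x2 * w2          # z = x1*w1 + x2*w2
    return 1 if z >= T else 0      # F(z): fire when z reaches the threshold

for x1, x2, desired in [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]:
    assert out(x1, x2) == desired
print("T=1.5, w1=w2=1 implements AND")
```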
Finding Weights Analytically for the XOR Network

(Figure: the same single threshold unit with inputs x1, x2, weights w1, w2, threshold T, and output out.)

Activation function:
F(z) = 0 for z < T, 1 for z ≥ T, where z = x1·w1 + x2·w2

Truth table:
x1 | x2 | out
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

One inequality per training pattern:
0·w1 + 0·w2 < T
0·w1 + 1·w2 ≥ T
1·w1 + 0·w2 ≥ T
1·w1 + 1·w2 < T

Simplified:
T > 0 (1)
w2 ≥ T (2)
w1 ≥ T (3)
w1 + w2 < T (4)

No solution. Why? How can we solve this problem?
Useful Notation
• We often need to deal with ordered sets of numbers, which we write as
vectors, e.g.
x = (x1, x2, x3, …, xn) , y = (y1, y2, y3, …, ym)
• The components xi can be added up to give a scalar (number), e.g.
s = x1 + x2 + x3 + … + xn = Σ_{i=1}^{n} x_i
• Two vectors of the same length may be added to give another vector, e.g.
z = x + y = (x1 + y1, x2 + y2, …, xn + yn)
• Two vectors of the same length may be multiplied to give a scalar (the dot product), e.g.
p = x·y = x1y1 + x2y2 + … + xnyn = Σ_{i=1}^{n} x_i y_i
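These operations map directly onto NumPy arrays; a quick illustration (assuming NumPy is available):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

s = x.sum()       # scalar sum of components: 6.0
z = x + y         # elementwise vector addition: [5. 7. 9.]
p = np.dot(x, y)  # dot product: 1*4 + 2*5 + 3*6 = 32.0
print(s, z, p)
```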
What is the problem of implementing XOR?
• On the previous slide, the first three inequalities are clearly incompatible with the fourth: adding (2) and (3) gives w1 + w2 ≥ 2T, and since T > 0 by (1), this contradicts w1 + w2 < T. So there is in fact no solution.
• We need more complex networks, e.g. ones that combine together many simple networks, or use different activation/thresholding/transfer functions.
• It then becomes much more difficult to determine all the weights and thresholds by hand. Next, we will see how a neural network can learn these parameters.
Perceptron
• In 1958, Frank Rosenblatt introduced a training algorithm that
provided the first procedure for training a simple ANN: a perceptron.
• Any number of McCulloch-Pitts neurons can be connected together in
any way we like.
• The arrangement that has one layer of input neurons feeding forward
to one output layer of McCulloch-Pitts neurons, with full connectivity, is
known as a Perceptron:
13
𝒊= 𝟎
𝒏
𝒘𝒊 𝒙𝒊 +𝒃
1
0
X1
X2
X3
w1
w2
w3
Output
𝑶𝒖𝒕𝒑𝒖𝒕 = 𝒉𝒂𝒓𝒅𝒍𝒊𝒎
𝒊=𝟎
𝒏
𝒘𝒊 𝒙𝒊 + 𝒃
• The perceptron uses a hard limiter as its activation function.
• There is a learning rule (algorithm) to adjust the weights for better results.
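Since the output is just a weighted sum passed through a hard limiter, the forward pass is short; a minimal sketch (the function names are mine, not from the slides):

```python
import numpy as np

def hardlim(z):
    # hard limiter: 1 if z >= 0, else 0
    return 1 if z >= 0 else 0

def perceptron_output(x, w, b):
    # Output = hardlim(sum_i w_i * x_i + b)
    return hardlim(np.dot(w, x) + b)

x = np.array([1, 0, 1])
w = np.array([0.5, -0.4, 0.9])
b = -1.0
print(perceptron_output(x, w, b))  # 1, since 0.5 + 0.9 - 1.0 = 0.4 >= 0
```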
Single Layer Feedforward Network (Perceptron)
• The following figure shows a perceptron network with n inputs (organized in one input layer) and m outputs (organized in one output layer), with no hidden layers:

(Figure: input nodes 1…n, indexed by i, fully connected to output nodes 1…m, indexed by j, through weights Wij.)
Perceptron
• Simple network: the following figure shows a perceptron network with 2 inputs (organized in one input layer) and 1 output (organized in one output layer):
• Important remark: for binary classification problems we always have a single neuron in the output layer.

(Figure: inputs X1, X2 with weights w1, w2, plus a bias b on a constant input of 1, feeding a hard limiter that outputs 1 or 0.)
Fixed Increment Perceptron Learning
Algorithm
• The problem: use the perceptron to perform a binary classification task.
• The goal: adjust the weights and bias to values for which the classification error decreases.
• How: by training the perceptron on pre-classified examples (supervised learning).
• The desired output (D) is the correct output that should be generated.
• The network calculates an output (Y), which may be wrong and needs to be recalculated after the network parameters are adjusted.
• Normally, we start with random initial weights and adjust them in small steps (using the learning algorithm) until the required outputs are produced.
Fixed Increment Perceptron Learning
Algorithm (cont.)
Calculating the new weights:
• Calculate the error (e_j):
e_j = D_j − Y_j
• The network changes its weights in proportion to the error:
Δw_ij = α · e_j · x_i
• where α is the learning rate (or step size):
1. Used to control the amount of weight adjustment at each step of training.
2. Ranges from 0 to 1.
3. Determines the rate of learning in each time step.
• The new (adjusted) weight:
w_ij(new) = w_ij(old) + Δw_ij, i.e. w_ij(new) = w_ij(old) + α · e_j · x_i
• This rule can be extended to train the bias by noting that a bias is simply a weight whose input is always 1:
b_j(new) = b_j(old) + α · e_j
Fixed Increment Perceptron Learning Algorithm flow chart

(Flow chart, summarized as steps:)
1. Start from random initial weights.
2. Calculate the output (Y_j) for the training examples.
3. Calculate the error: e_j = D_j − Y_j.
4. If the error is 0, end. Otherwise, update the weights and biases and go back to step 2.
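Putting the update rules and the flow chart together gives a short training loop; a minimal sketch for a single output neuron (the function name train_perceptron and its signature are my own):

```python
import numpy as np

def hardlim(z):
    return 1 if z >= 0 else 0

def train_perceptron(X, D, w, b, alpha=1.0, max_epochs=100):
    """Fixed-increment perceptron learning: loop until error is 0 on all examples."""
    for epoch in range(max_epochs):
        total_error = 0
        for x, d in zip(X, D):
            y = hardlim(np.dot(w, x) + b)  # actual output Y
            e = d - y                      # error e = D - Y
            w = w + alpha * e * x          # delta w_i = alpha * e * x_i
            b = b + alpha * e              # bias: a weight on a constant input of 1
            total_error += abs(e)
        if total_error == 0:               # all examples classified correctly
            break
    return w, b
```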
Perceptron
• The perceptron is a linear classifier. (Why?)
• The perceptron learning rule is guaranteed to converge to a weight vector that correctly classifies the examples, provided the training examples are linearly separable.
• To get the correct weights, many training epochs are used with a suitable learning rate α.
• So, it cannot be used to solve non-linearly separable problems (such as the XOR problem).
Example: the AND Problem
• The AND problem is a linearly separable problem. It has 2 inputs (x1, x2) and one output (out):

x1 | x2 | out
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

(Figure: x1 and x2 feeding a single unit with weights w1, w2 and a bias on a constant input of 1; the unit produces out.)
Example of perceptron learning: the logical operation AND

Assume α = 1 and initial values b = 1, w1 = 0.3, w2 = −1.

epoch | x1 | x2 | D | initial w1, w2, b | Y | e | final w1, w2, b
1 | 0 | 0 | 0 | 0.3, −1, 1 | 1 | −1 | 0.3, −1, 0
1 | 0 | 1 | 0 | 0.3, −1, 0 | 0 | 0 | 0.3, −1, 0
1 | 1 | 0 | 0 | 0.3, −1, 0 | 1 | −1 | −0.7, −1, −1
1 | 1 | 1 | 1 | −0.7, −1, −1 | 0 | 1 | 0.3, 0, 0
2 | 0 | 0 | 0 | 0.3, 0, 0 | 1 | −1 | 0.3, 0, −1
2 | 0 | 1 | 0 | 0.3, 0, −1 | 0 | 0 | 0.3, 0, −1
2 | 1 | 0 | 0 | 0.3, 0, −1 | 0 | 0 | 0.3, 0, −1
2 | 1 | 1 | 1 | 0.3, 0, −1 | 0 | 1 | 1.3, 1, 0
3 | 0 | 0 | 0 | 1.3, 1, 0 | 1 | −1 | 1.3, 1, −1
3 | 0 | 1 | 0 | 1.3, 1, −1 | 1 | −1 | 1.3, 0, −2
3 | 1 | 0 | 0 | 1.3, 0, −2 | 0 | 0 | 1.3, 0, −2
3 | 1 | 1 | 1 | 1.3, 0, −2 | 0 | 1 | 2.3, 1, −1
… continuing until the error is 0 for every pattern:
7 | 0 | 0 | 0 | 2.3, 1, −3 | 0 | 0 | 2.3, 1, −3
7 | 0 | 1 | 0 | 2.3, 1, −3 | 0 | 0 | 2.3, 1, −3
7 | 1 | 0 | 0 | 2.3, 1, −3 | 0 | 0 | 2.3, 1, −3
7 | 1 | 1 | 1 | 2.3, 1, −3 | 1 | 0 | 2.3, 1, −3
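This trace can be reproduced with the train_perceptron sketch from earlier (a usage example, not slide material):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0, 0, 0, 1])

# Same starting point as the table: alpha = 1, b = 1, w1 = 0.3, w2 = -1
w, b = train_perceptron(X, D, w=np.array([0.3, -1.0]), b=1.0, alpha=1.0)
print(w, b)  # settles at w = [2.3, 1.0], b = -3.0, matching the table
```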
More Examples
• Try other initial values and learning rates.
• Try other linearly separable functions such as OR, NAND, NOR.
• What do you notice?
Example: Truck Classification Problem
• Consider the simple example of classifying trucks given their masses and lengths.
• How do we construct a neural network that can classify any lorry and van?
Mass Length Class
10 6 Lorry
20 5 Lorry
5 4 Van
2 5 Van
2 5 Van
3 6 Lorry
10 7 Lorry
15 8 Lorry
5 9 Lorry
(Figure: scatter plot of the trucks above in the mass–length plane; the two classes can be separated by a straight line.)
Solution
• The truck classification problem has the following features:
• It is a binary classification problem (2 categories: lorry or van).
• As shown, it is a linearly separable problem.
• It has 2 inputs (mass, length).
• It has one output (class).
• So, it can be solved by a single layer perceptron, as shown:

(Figure: mass and length feeding a single unit with weights w1, w2 and a bias on a constant input of 1; the unit outputs the class.)

Check: TrunkExample.ipynb
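The referenced notebook is not reproduced here, but a minimal sketch of the same idea, reusing the hypothetical train_perceptron function from earlier (data taken from the table above; label encoding is my own):

```python
import numpy as np

# [mass, length] per truck; label 1 = lorry, 0 = van
X = np.array([[10, 6], [20, 5], [5, 4], [2, 5], [2, 5],
              [3, 6], [10, 7], [15, 8], [5, 9]], dtype=float)
D = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1])

w, b = train_perceptron(X, D, w=np.zeros(2), b=0.0, alpha=0.1, max_epochs=1000)
print("weights:", w, "bias:", b)

# Classify a new truck of mass 12 and length 6
print("class:", 1 if np.dot(w, [12, 6]) + b >= 0 else 0)
```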
Overcoming the Perceptron's Limitations
• To overcome the limitations of single layer networks, multi-layer feed-forward networks can be used, which not only have input and output units, but also have hidden units that are neither input nor output units.
• Use non-linear activation functions (e.g. sigmoid).
• Use advanced learning algorithms (e.g. gradient descent, backpropagation).