1. Introduction to Perceptrons
Perceptrons are a type of neural network used for supervised learning. They can be single-layered or multi-layered. Let's dive into their structure and functioning.
2. Single-Layer Perceptrons
Inputs: The input layer receives information from the environment.
Weights: Each input is assigned a weight value that is adjusted through training.
Output: The sum of the weighted inputs is passed through an activation function to generate an output.
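The three steps above can be sketched in a few lines of Python (a minimal illustration; the weights and threshold here are made-up values, not trained ones):

```python
def perceptron_output(inputs, weights, threshold=0.5):
    """Single-layer perceptron: weighted sum of inputs, then a step activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# With these arbitrary weights, only the first input is strong enough to fire.
print(perceptron_output([1, 0], [0.6, 0.4]))  # fires: 0.6 >= 0.5
print(perceptron_output([0, 1], [0.6, 0.4]))  # stays off: 0.4 < 0.5
```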
3. Multi-Layer Perceptrons (MLP)
Structure: Multiple layers of interconnected nodes allow for more complex data processing.
Training: The backpropagation algorithm adjusts the weights to minimize error and improve accuracy.
Applications: Multi-layer perceptrons are used in fields like image recognition, speech processing, and finance.
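As a rough sketch of the "multiple layers" idea, here is a forward pass through one hidden layer in plain Python (the weights are arbitrary illustrative values, not trained ones, and training via backpropagation is omitted):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, hidden_w, hidden_b, out_w, out_b):
    """Forward pass: input layer -> one hidden layer -> single output unit."""
    # Each hidden node computes a weighted sum of the inputs plus its bias.
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
              for w, b in zip(hidden_w, hidden_b)]
    # The output node combines the hidden activations the same way.
    return sigmoid(sum(h * w for h, w in zip(hidden, out_w)) + out_b)

# Two inputs, two hidden nodes, one output; all weights are made up.
y = mlp_forward([1.0, 0.0], [[2.0, -2.0], [-2.0, 2.0]], [0.0, 0.0], [1.0, 1.0], 0.0)
print(0.0 < y < 1.0)  # sigmoid keeps the output in (0, 1)
```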
4. Perceptron Learning Algorithm
1. Supervised Learning: The perceptron algorithm is a supervised learning method that learns from labeled data.
2. Error Correction: The algorithm iteratively adjusts the weights to correct errors and improve accuracy.
3. Convergence: With linearly separable data, the perceptron algorithm is guaranteed to converge to a solution.
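The three points above can be sketched as a small training loop (a toy implementation, here learning the AND function; a learning rate of 1.0 keeps the arithmetic exact):

```python
def train_perceptron(data, lr=1.0, epochs=20):
    """Perceptron learning rule on (inputs, label) pairs with 0/1 labels."""
    n_inputs = len(data[0][0])
    w, b = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0
            err = y - pred  # error correction: nonzero only on a misclassification
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the algorithm converges to a correct boundary.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
for x, y in and_data:
    assert (1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0) == y
```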
5. Activation Functions
1. Step Function: A binary activation function that outputs 0 or 1.
2. Sigmoid Function: Maps inputs to a range between 0 and 1, useful for probability calculations.
3. ReLU Function: Outputs 0 for negative inputs and the input value itself for positive inputs.
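The three functions can be written directly from their standard definitions (shown here in Python):

```python
import math

def step(z):
    # Step: binary output, 0 or 1.
    return 1 if z >= 0 else 0

def sigmoid(z):
    # Sigmoid: squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # ReLU: 0 for negative inputs, the input itself otherwise.
    return max(0.0, z)

print(step(-2), step(2))      # 0 1
print(sigmoid(0))             # 0.5
print(relu(-3.0), relu(3.0))  # 0.0 3.0
```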
6. Bias Nodes
Input Node → Weighted Input
Bias Node → 1 × Bias Weight
Output Node → Activation Function(Weighted Input + Bias Weight)
A bias node always outputs 1, and its weight shifts the activation threshold. This lets the network learn decision boundaries that do not pass through the origin, improving its ability to learn and solve complex problems.
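A minimal sketch of the diagram above, assuming a step activation (the weight values are illustrative):

```python
def perceptron_with_bias(inputs, weights, bias_weight):
    """The bias node always outputs 1, so it contributes 1 * bias_weight to the sum."""
    weighted_input = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_input + 1 * bias_weight >= 0 else 0

# With all-zero inputs the bias weight alone decides the output,
# showing how it shifts the decision threshold away from zero.
print(perceptron_with_bias([0, 0], [0.5, 0.5], -1.0))  # 0
print(perceptron_with_bias([0, 0], [0.5, 0.5], 1.0))   # 1
```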
7. Perceptron vs. Backpropagation
Perceptron
• Single-layered
• Binary classification
• Updates weights once per misclassification
Backpropagation (in multi-layer networks)
• Multi-layered
• Can handle multi-class classification
• Updates weights iteratively via gradient descent through the backpropagation algorithm
8. Real-World Applications
Facial Recognition: Uses multi-layer perceptrons to analyze and recognize facial features.
Stock Market Analysis: Uses neural networks to analyze patterns and make predictions about stock prices.
Natural Language Processing: Uses multi-layer perceptrons to process and analyze human language data.
9. Limitations
1. Requires Labeled Data: Perceptrons require labeled data for supervised learning, which can be time-consuming and expensive to obtain.
2. May Not Converge: When the data is not linearly separable (e.g., the XOR problem), the perceptron algorithm may never converge to a solution.
3. Overfitting Risk: Neural networks can overfit, memorizing the training data and generalizing poorly to new data.
10. Conclusion
1. Perceptrons are a type of neural network used for supervised learning.
2. They can be single-layered or multi-layered, with varying degrees of complexity and accuracy.
3. The perceptron learning algorithm is a supervised learning method that iteratively adjusts weights to minimize error and improve accuracy.