This document outlines the course objectives and syllabus for EEE52511: Neural Network & Fuzzy Systems taught by Dr. Hiba Hassan at the University of Khartoum. The course aims to introduce students to neural networks and fuzzy logic theory, and to familiarize them with developing neural networks and fuzzy systems to solve real-life problems. The syllabus covers topics such as the definition and classifications of neural networks, single and multilayer perceptrons, forward and backward propagation, fuzzy set theory, and applications of neural networks and fuzzy logic systems. References and a brief history of major advances in artificial neural network research are also provided.
1. EEE52511: NEURAL NETWORK & FUZZY SYSTEMS
By: Dr. Hiba Hassan
Lecture 1
University of Khartoum
Department of Electronics & Electrical Engineering
Software & Control Engineering
2. Course Objectives
To understand neural networks and fuzzy logic theory.
To gain knowledge of neural networks and fuzzy
system development.
To familiarize students with various concepts, hardware, and software used in neural and fuzzy system analysis and design.
To apply the techniques for solving real-life problems
using neural networks and fuzzy systems.
If time allows, to introduce hybrid systems such as
neuro-fuzzy systems.
3. Syllabus
• Neural Networks:
• definition, similarity with human brain,
• classifications,
• input/output set, learning,
• single-layer and multilayer perceptrons,
• forward and backward propagation,
• design of ANN model,
• training set for ANN, test for ANN,
• Application of ANN in Engineering.
4. Syllabus ( cont.)
• Fuzzy Logic:
• Fuzzy set theory,
• set theoretic operations,
• the law of contradiction and the law of the excluded middle,
• fuzzy operations,
• reasoning and implication,
• fuzzy logic system applications.
5. References
• Hagan, M. T., Demuth, H. B., Beale, M. H., & De Jesús, O. Neural Network Design (2nd ed.).
• Jang, J.-S. R., Sun, C.-T., & Mizutani, E. (1997). Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. Upper Saddle River, NJ: Prentice Hall.
6. History of ANN Research
• Major Leaps in ANN Research:
• McCulloch and Pitts … 1943 (1st Neuron Model)
• Donald Hebb …. 1949 (1st Learning Rule)
• Marvin Minsky …. 1951 (1st Neural Machine)
• Rosenblatt …. 1958 (Perceptron)
7. Introduction
• The term neural network comes from the biological neuron.
• An artificial neural network is thus a complex information processing model that tries to imitate the way the human brain functions.
• Its main objective is to find a suitable function that maps given inputs to expected outputs.
• Hence, it is generally described as a function approximator.
8. A Look into our Brain!
• Neurons are the core components of our nervous
system, and that includes the brain, spinal cord &
nerve cells.
• A typical neuron possesses a cell body (often
called the soma), dendrites, and an axon.
• Dendrites are thin structures that carry electrical
signals into the neuron body.
• An axon is a single long nerve fiber that carries
the signal from the neuron body to other neurons.
9. Cont.
• Synapses are specialized structures where
neurotransmitter chemicals are released to
communicate with target neurons.
• The cell body of a neuron frequently gives rise to
multiple dendrites, but only one axon.
• The axon may branch hundreds of times before it
terminates.
11. Cont.
• At the majority of synapses, signals are sent from
the axon of one neuron to a dendrite of another.
• However, there are exceptions, such as:
• neurons that lack dendrites,
• neurons that have no axon,
• synapses that connect an axon to another axon
or
• a dendrite to another dendrite, etc.
12. How the brain works!
• Each neuron receives inputs from other neurons
• The effect of each input line on the neuron is controlled
by a synaptic weight
• The weights can be positive or negative.
• The synaptic weights adapt so that the whole network
learns to perform useful computations
• Recognizing objects, understanding language,
making plans, controlling the body.
• Our brain has about 10^11 neurons, each with approximately 10^4 connections.
14. Back to Artificial Neural Networks
• Neural networks employ a huge interconnection of
simple computing cells (neurons or processing
units).
• The network performs its computations by learning: it acquires knowledge from the environment by means of a learning algorithm.
• This learning is used to adjust the interneuron connection strengths, known as synaptic weights.
15. When should we use it?
• When to Consider using Neural Networks:
• if the input is high-dimensional, discrete or real-valued (e.g., raw sensor input).
• if the output is discrete or real-valued.
• if the output is a vector of values.
• for possibly noisy data.
• when the form of the target function is unknown.
• when human readability of the result is unimportant.
16. Characteristics of NN
1) Learns from experience.
2) Generalizes from examples: Can interpolate from
previous learning and gives the correct response to new
data.
3) Rapid application development: NNs are generic machines and quite independent of domain
knowledge.
4) Adaptability: Adapts to a changing environment, if
properly designed.
5) Computational efficiency: Although the training of a
neural network demands a lot of computer power, a
trained network consumes low power.
6) Non-linearity: Not based on linear assumptions about the
real world.
17. A Model Neuron: Node or Unit
• An artificial neuron model is also called a node or a unit, and it is represented as follows:
• net_i denotes the net input to unit i and is given by net_i = ∑_j w_ij · y_j,
• where w_ij refers to the weight from unit j to unit i.
• A neural network node:
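To make the node model concrete, here is a minimal Python sketch of the computation above; the numeric weights, inputs, and the choice of tanh as the activation f are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Minimal sketch of the node model: net_i = sum_j w_ij * y_j,
# followed by an activation function f (tanh chosen here for illustration).
def unit_output(w_i, y, f=np.tanh):
    net_i = np.dot(w_i, y)   # weighted sum of the incoming signals
    return f(net_i)

# Example: unit i receives signals from three sending units j.
y = np.array([0.5, -1.0, 2.0])    # outputs y_j of the sending units
w_i = np.array([0.8, 0.2, -0.5])  # weights w_ij from units j to unit i
print(unit_output(w_i, y))        # activation of unit i
```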
19. The analogy between the Human and the Artificial Neural Networks:

Human        Artificial
Neuron       Processing Element
Dendrites    Combining Function
Cell Body    Transfer Function
Axons        Element Output
Synapses     Weights
20. Training a Neural Network
21. Some Applications of Artificial Neural Networks
• Classification
Marketing: consumer spending patterns.
Defence: radar and sonar images.
Agriculture & fishing: fruit and catch grading.
Medicine: medical diagnosis from ultrasound, ECG, etc.
• Recognition and Identification
General Computing & Telecommunications: speech,
vision and handwriting recognition.
Finance: signature verification and bank note
verification
22. Cont.
• Assessment
Engineering: product inspection monitoring and control.
Defence: target tracking.
Security: motion detection, surveillance image analysis
and fingerprint matching.
• Forecasting and Prediction
Finance: foreign exchange rate and stock market
forecasting.
Agriculture: crop yield forecasting.
Marketing: sales forecasting.
Meteorology: weather prediction.
24. Architecture
• Neural networks are designed in one of these two
types:
• Feedforward: information is transmitted in the
forward direction, i.e. from the input to the
output.
• Recurrent, or feedback: at least one path leads back to the starting neuron; this path is called a cycle.
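A minimal sketch contrasting the two design types above; the layer sizes, weight values, and the tanh activation are my own illustrative choices rather than material from the slides.

```python
import numpy as np

x = np.array([1.0, 0.5])   # example input

# Feedforward: information flows strictly from input to output.
W = np.array([[0.2, -0.4],
              [0.7,  0.1]])
y_feedforward = np.tanh(W @ x)

# Recurrent (feedback): the unit's previous output feeds back into it,
# forming a cycle, so its state is updated over several time steps.
w_in, w_back, h = 0.6, 0.3, 0.0
for t in range(3):
    h = np.tanh(w_in * x[0] + w_back * h)

print(y_feedforward, h)
```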
25. Feed-forward Neural Network
• The neurons are arranged in separate layers,
these layers are the input layer, the hidden layer
and the output layer.
• There may be a single hidden layer or several hidden layers; in the latter case the network is called a multi-layer feed-forward or a deep neural net.
• There are no connections between the neurons of
the same layer.
• The neurons in one layer receive inputs from the
previous layer.
• The neurons in one layer deliver their outputs to the
next layer.
• The connections are unidirectional.
26. 3-8-8-2 Neural Network
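As a rough illustration of a 3-8-8-2 architecture like the one in the figure, here is a sketch of a forward pass; the random weight initialization and tanh activation are assumptions for demonstration only, and the network is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes of a 3-8-8-2 feedforward network:
# 3 inputs, two hidden layers of 8 units each, and 2 outputs.
sizes = [3, 8, 8, 2]

# Randomly initialized weights and biases (illustrative, untrained).
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases  = [rng.standard_normal(m) for m in sizes[1:]]

def forward(x):
    """Propagate an input forward through every layer (no feedback paths)."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)   # each layer feeds only the next layer
    return a

print(forward(np.array([0.1, -0.5, 0.9])))   # two output values
```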
27. An example of a general feedforward neural net
29. Symmetrically connected networks
• These are like recurrent networks, but the
connections between units are symmetrical (they
have the same weight in both directions).
• John Hopfield (and others) realized that
symmetric networks are much easier to analyze
than recurrent networks.
• Symmetrically connected nets without hidden
units are called “Hopfield nets”.
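A minimal sketch of a Hopfield net with symmetric weights and no hidden units; the stored patterns, the Hebbian-style weight construction, and the update schedule are illustrative assumptions rather than material from the slides.

```python
import numpy as np

# Two example +1/-1 patterns to store (chosen for illustration).
patterns = np.array([[ 1, -1,  1, -1],
                     [ 1,  1, -1, -1]])

# Hebbian-style storage gives a symmetric weight matrix with zero diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted copy of the first pattern and update the units
# one at a time until the state settles.
state = np.array([-1, -1, 1, -1])
for _ in range(5):
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)   # recovers the first stored pattern (a fixed point of the net)
```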
30. Symmetrically connected networks
with hidden units
• These are called “Boltzmann machines”.
• They are much more powerful models than
Hopfield nets.
• They are less powerful than recurrent neural
networks.
• They have a simple learning algorithm.
31. Simple Artificial Neuron
32. Working with Simple Artificial Neuron
• The node receives input from some other units, or perhaps from an external source.
• Each input's associated weight w can be modified so as to model synaptic learning.
• The unit computes some function f of the weighted sum of its inputs: a_i = f(∑_j w_ij · y_j).
• Its output, in turn, can serve as input to other units.
33. Simple Artificial Neuron
• The weighted sum ∑_j w_ij · y_j is called the net input to unit i, hence it is often written as net_i.
• The function f is called the unit's activation function. In the simplest case, f is the identity function, and the unit's output is just its net input. This is called a linear unit.
34. Simple neuron models, with and without bias

35. Simple neuron models, with and without bias (cont.)
• The previous slide shows two neuron models, one with bias, b, and one without.
• The bias is like a weight, except that it has a constant input of 1.
• Here, the input p is a scalar and the weight w is a scalar as well, hence the product wp is a scalar.
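A minimal sketch of the scalar neuron just described; the hard-limit activation and the numeric values are illustrative assumptions.

```python
# Scalar neuron: output a = f(w*p + b); without a bias, a = f(w*p).
def neuron(p, w, b=0.0, f=lambda n: 1 if n >= 0 else 0):
    n = w * p + b        # the bias acts like a weight with constant input 1
    return f(n)

print(neuron(p=2.0, w=1.3))          # no bias: f(2.6)  -> 1
print(neuron(p=2.0, w=1.3, b=-3.0))  # with bias: f(-0.4) -> 0
```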
36. Cont.
• Suppose that the target is called t. If the output a is different from t, then the weights are changed according to the following equation:
w_i = w_i + η(t − a)·x_i
• where η is the learning rate (an attenuation factor that controls the size of each update).
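A minimal sketch of the update rule above, w_i = w_i + η(t − a)·x_i, applied to a small linearly separable problem; the AND-gate data, the hard-limit output, and η = 0.5 are my own illustrative choices, not the example from the slides.

```python
import numpy as np

# Training data for an AND gate (illustrative, linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs p
T = np.array([0, 0, 0, 1])                        # targets t

w = np.zeros(2)   # initial weights
b = 0.0           # bias, treated as a weight whose input is always 1
eta = 0.5         # learning rate

for epoch in range(10):
    for x, t in zip(X, T):
        a = 1 if w @ x + b >= 0 else 0   # neuron output
        w += eta * (t - a) * x           # w_i = w_i + eta*(t - a)*x_i
        b += eta * (t - a)               # same rule applied to the bias
print(w, b)   # weights and bias that separate the AND patterns
```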
37. Example
• Assuming p is the input and t is the target, develop a perceptron that can solve the following problem:
• Answer:
1. Use a graphical representation to check whether the problem is linearly separable.
38. Cont.
2. Develop the network architecture and choose
initial weights.
39. Solution (cont.)
3. Apply the learning rule (compute the output a for the given input p).
4. Calculate the error: e = t − a.
5. Then apply the weight update: w_i = w_i + η·e·x_i.