Artificial Neural Network
Renas R. Rekany-Nawroz University
Computer Science & I.T., 2016/2017
Artificial Neural Networks
An artificial neural network (ANN) is a system inspired by biological
neural networks such as the brain. The brain has approximately
100 billion neurons, which communicate through electrochemical
signals; the neurons are connected through junctions called synapses.
Each neuron receives thousands of connections from other neurons,
constantly receiving incoming signals that reach the cell body.
Biological Neuron
Information transmission happens at the synapses.
A biological neuron is the most basic information-processing unit in
the nervous system. It consists of the following parts:
1. Dendrites (input)
2. Cell body
3. Axon (output)
A biological neuron takes signals from its dendrites, processes them,
and outputs a signal from its axon based on the input.
Universal Properties of Neurons
Excitability
All cells are excitable; that is, they respond to environmental
changes. Neurons exhibit this property to the highest degree.
Conductivity
Neurons respond to stimuli by producing electrical signals that
are quickly conducted to other cells at distant locations.
Secretion
When the electrical signal reaches the end of a nerve fiber,
the neuron secretes a chemical neurotransmitter that crosses
the gap and stimulates the next cell.
Properties of Neural Systems
Parallel, distributed information processing.
High degree of connectivity between basic processing units.
Connections are modified based on experience.
Learning is a constant process.
Learning is based on local information.
Model of an ANN
1. x1, x2, …, xn are the inputs to the neuron.
2. w1, w2, …, wn are the weights applied to the inputs.
3. net = x1*w1 + x2*w2 + … + xn*wn is the weighted input sum.
4. f() is the activation function.
5. y = f(net) is the output of the neuron.
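The model above can be sketched in a few lines of Python (a minimal illustration; the function names are ours, and the step activation is just one possible choice of f()):

```python
def step(net):
    """A simple step activation: one possible choice of f()."""
    return 1 if net >= 0 else 0

def neuron_output(x, w, f=step):
    """net = x1*w1 + x2*w2 + ... + xn*wn, then y = f(net)."""
    net = sum(xi * wi for xi, wi in zip(x, w))
    return f(net)
```

For example, neuron_output([1, 0], [0.5, -0.5]) gives net = 0.5, so the output is 1.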
Artificial Neural Network Architecture
Single-layer neural networks are networks in which the signal passes
directly from the input neurons to the output neurons, without any
hidden processing neurons in between.
These are called perceptron networks, and they are usually used to
solve simple, linearly separable problems.
Artificial Neural Network Architecture
[Figure: non-linear activation functions, the step and sign functions]
Artificial Neural Network Architecture
Multi-layer neural networks are networks in which the input neurons
pass signals to processing elements in one or more hidden layers;
the information is then passed on to the output neurons.
These networks are usually trained with backpropagation and are used
to solve complex problems.
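As a sketch of why a hidden layer adds power, the network below solves XOR, which no single perceptron can. The weights and thresholds are picked by hand purely for illustration (step units rather than sigmoids, and no training involved):

```python
def step(net):
    return 1 if net >= 0 else 0

def unit(x, w, theta):
    """One threshold unit: fires when the weighted input sum reaches theta."""
    return step(sum(xi * wi for xi, wi in zip(x, w)) - theta)

def xor_net(x1, x2):
    h1 = unit((x1, x2), (1, 1), 0.5)      # hidden unit behaves like OR
    h2 = unit((x1, x2), (-1, -1), -1.5)   # hidden unit behaves like NAND
    return unit((h1, h2), (1, 1), 1.5)    # output unit behaves like AND
```

Checking all four inputs gives the XOR truth table 0, 1, 1, 0.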
Learning
Learning involves updating the network parameters so that the network
can perform a specific task as desired.
This means testing the network and applying procedures that update
the weights until the desired output is met.
There are two types of learning:
• Supervised learning
• Unsupervised learning
Supervised learning
In supervised learning, a well-defined set of inputs and outputs is
provided to the network; this enables the network to generalize its
process so that, when introduced to a new set of inputs, the desired
output is produced.
Unsupervised learning
In unsupervised learning, there is a set of inputs without a
well-defined set of outputs; such networks try to generalize certain
characteristics of the input data and classify the data accordingly.
ANN Applications
• Machine vision
• Pattern recognition
• Intelligent security systems
• Intelligent medical devices
• Intelligent control
• Advanced robotics
• Intelligent signal processing and data analysis
The Concept of Linear Separability
The concept of linear separability is based on mapping the outputs of
a function onto the axes of its inputs. For example, the output of an
"AND" gate can be mapped this way. When the outputs can be separated
by a single line, the problem can be solved with a perceptron
network; if it takes more than one line to separate the outputs, then
a backpropagation (multi-layer) network must be used.
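This can be checked mechanically. The sketch below searches a coarse grid of candidate lines w1*x + w2*y = 𝛳 and reports whether any single line separates the two output classes; the grid range and step size are arbitrary choices for illustration:

```python
import itertools

def linearly_separable(points, labels):
    """Brute-force search for a line w1*x + w2*y = theta that puts all
    1-labelled points on one side and all 0-labelled points on the other."""
    grid = [i / 2 for i in range(-4, 5)]   # candidate values -2.0 .. 2.0
    for w1, w2, theta in itertools.product(grid, repeat=3):
        preds = [1 if w1 * x + w2 * y - theta >= 0 else 0 for x, y in points]
        if preds == labels:
            return True
    return False

corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
```

linearly_separable(corners, [0, 0, 0, 1]) (the AND gate) finds a separating line, while linearly_separable(corners, [0, 1, 1, 0]) (the XOR gate) does not.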
Perceptron Neural Network
A perceptron neural network is a single-layer network in which the
input is passed to the activation function and an output is
generated. Perceptrons are used as linear classifiers, deciding
whether an input belongs to one class or another. These networks are
trained using supervised learning methods and usually use the
hard-limit (hardlim) activation function.
Perceptron Characteristics
• Single-layer network
• Supervised learning method
• Hardlim activation function
• X1, X2, …, Xn are inputs
• W1, W2, …, Wn are weights applied to the inputs
• Bias or threshold 𝛳 is the limit by which the output is decided
• Net = X1W1 + X2W2 + … + XnWn
• α (alpha) is the learning rate (speed)
• f(net - 𝛳) is the activation function
[Diagram: inputs X1, X2, …, Xn, each multiplied by weights W1, W2, …, Wn, summed into Net; with bias 𝛳 and activation f(net) producing the output y]
Artificial Neurons
One possible model: inputs x1, x2, …, xn are multiplied by weights
w1, w2, …, wn and summed, and the output y is obtained by applying a
threshold function H:

z = w1x1 + w2x2 + … + wnxn ; y = H(z)
From Logical Neurons to Finite Automata
Logical gates can be implemented by single threshold neurons with the
following weights and thresholds:
AND: weights (1, 1), threshold 1.5
NOT: weight -1, threshold 0
OR: weights (1, 1), threshold 0.5
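These three gates can be written directly as threshold units with the weights and thresholds above (a minimal sketch; the helper name fire is ours):

```python
def fire(inputs, weights, theta):
    """Threshold unit: outputs 1 when the weighted input sum reaches theta."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= theta else 0

def AND(a, b): return fire((a, b), (1, 1), 1.5)
def NOT(a):    return fire((a,), (-1,), 0)
def OR(a, b):  return fire((a, b), (1, 1), 0.5)
```

Each gate reproduces its truth table: AND fires only on (1, 1), OR fires on everything except (0, 0), and NOT inverts its input.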
Neural network mathematics
A neural network implements an input/output transformation:

y_out = F(x, W)

where W is the matrix of all weight vectors.
Perceptron Learning
Perceptron learning is based on calculating the error coefficient
using the equation below:
e = y_desired - y_actual
The weights are then updated using the delta weight, given by:
ΔW = (α)(Xi)(e)
The delta weight is added to each weight that contributes to the
output:
Wi = Wi + ΔW
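A single update step with these rules might look as follows (illustrative numbers: the initial settings match the AND example on the next slide, where input (1, 0) is the first misclassified pattern):

```python
alpha, theta = 0.1, 0.2        # learning rate and threshold (example values)
W = [0.3, -0.1]                # initial weights
x, y_desired = [1, 0], 0       # input pattern and its target

net = sum(wi * xi for wi, xi in zip(W, x))
y_actual = 1 if net - theta >= 0 else 0             # hardlim activation
e = y_desired - y_actual                            # e = y_desired - y_actual
W = [wi + alpha * xi * e for wi, xi in zip(W, x)]   # Wi = Wi + (alpha)(Xi)(e)
```

Here net = 0.3, so y_actual = 1, e = -1, and the weights become approximately [0.2, -0.1].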
Example
Building a perceptron neural network with one perceptron to perform
the action of the logical AND gate.
The input vectors are X1 = [0,1] and X2 = [0,1], the input set is
p = [0,0; 0,1; 1,0; 1,1], and the respective target vector is
t = [0;0;0;1]. Assume 𝛳 = 0.2, α = 0.1, W1 = 0.3, and W2 = -0.1.

epoch | X1 X2 | Yd | Yact | e  | ΔW1  | ΔW2 | W1  | W2
------+-------+----+------+----+------+-----+-----+-----
  1   | 0  0  | 0  | 0    | 0  | 0    | 0   | 0.3 | -0.1
      | 0  1  | 0  | 0    | 0  | 0    | 0   | 0.3 | -0.1
      | 1  0  | 0  | 1    | -1 | -0.1 | 0   | 0.2 | -0.1
      | 1  1  | 1  | 0    | 1  | 0.1  | 0.1 | 0.3 | 0
Example
The last epoch for this example, given the same input vectors
X1 = [0,1] and X2 = [0,1], input set p = [0,0; 0,1; 1,0; 1,1], and
target vector t = [0;0;0;1].
Hint: 𝛳 = 0.2, α = 0.1, W1 = 0.3, and W2 = -0.1.

epoch | X1 X2 | Yd | Yact | e | ΔW1 | ΔW2 | W1  | W2
------+-------+----+------+---+-----+-----+-----+-----
  6   | 0  0  | 0  | 0    | 0 | 0   | 0   | 0.1 | 0.1
      | 0  1  | 0  | 0    | 0 | 0   | 0   | 0.1 | 0.1
      | 1  0  | 0  | 0    | 0 | 0   | 0   | 0.1 | 0.1
      | 1  1  | 1  | 1    | 0 | 0   | 0   | 0.1 | 0.1
NOT Gate
The last epoch for this example, given the input vector X = [0,1],
input set p = [0;1], and target vector t = [1;0].
Hint: 𝛳 = 0, α = 0.1, W = -1.

epoch | x | Yd | Yact | e | ΔW | W
------+---+----+------+---+----+----
  1   | 0 | 1  | 1    | 0 | 0  | -1
      | 1 | 0  | 0    | 0 | 0  | -1
Multi-Layer Perceptron
• One or more hidden layers
• Sigmoid activation functions
[Diagram: input data feeding a 1st hidden layer, a 2nd hidden layer, and the output layer]
Multi-Layer Perceptron Application

Structure    | Types of Decision Regions
-------------+--------------------------------------------------
Single-Layer | Half plane bounded by a hyperplane
Two-Layer    | Convex open or closed regions
Three-Layer  | Arbitrary (complexity limited by number of nodes)

[Figure: example decision regions separating classes A and B for each structure]