Presented by Rauf Asadov
Neural Networks
The human brain is made up of billions of simple processing units – neurons.
NEURON
• Dendrites – Receive information
• Cell Body – Processes information
• Axon – Carries processed information to other neurons
• Synapse – Junction between an axon terminal and the dendrites of other neurons
[Figure: schematic of a biological neuron (dendrites, cell body, axon, synapse); photo of hippocampal neurons. Source: heart.cbl.utoronto.ca/~berj/projects.html]
Artificial Neuron
• Receives inputs x1, x2, …, xp from other neurons or the environment
• Inputs are fed in through connections with ‘weights’
• Total input = weighted sum of inputs from all sources
• Transfer function (activation function) converts the total input to the output
• Output goes to other neurons or the environment
Analogy between biological and artificial neural networks:

Biological Neural Network | Artificial Neural Network
Soma                      | Neuron
Dendrite                  | Input
Axon                      | Output
Synapse                   | Weight
How do ANNs work?
[Figure: inputs x1, x2, …, xm arrive on connections with weights w1, w2, …, wm; the processing unit sums them (∑) and passes the result through the transfer function (activation function) f(vk) to produce the output y.]
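In terms of code, the weighted-sum-plus-activation step can be sketched like this (a minimal illustration; the function names and the input/weight values are invented for the example):

```python
def neuron_output(inputs, weights, activation):
    """Output of one artificial neuron: activation of the weighted input sum."""
    v = sum(x * w for x, w in zip(inputs, weights))  # linear combiner
    return activation(v)

# A step activation and two made-up inputs/weights:
step = lambda v: 1 if v >= 0 else 0
y = neuron_output([1.0, 0.5], [0.4, -0.2], step)
print(y)  # weighted sum is 0.3 >= 0, so the neuron fires: 1
```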
Activation functions of a neuron
• Step function:    Y_step = 1 if X ≥ 0, 0 if X < 0
• Sign function:    Y_sign = +1 if X ≥ 0, −1 if X < 0
• Sigmoid function: Y_sigmoid = 1 / (1 + e^(−X))
• Linear function:  Y_linear = X
[Figure: each function plotted as output Y against net input X; step and sign saturate at {0, +1} and {−1, +1}, the sigmoid rises smoothly from 0 to +1, the linear function is the identity.]
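These four functions translate directly into code (a straightforward sketch of the formulas above):

```python
import math

def step(x):       # Y_step: 1 if X >= 0, else 0
    return 1 if x >= 0 else 0

def sign(x):       # Y_sign: +1 if X >= 0, else -1
    return 1 if x >= 0 else -1

def sigmoid(x):    # Y_sigmoid: squashes X smoothly into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):     # Y_linear: identity, output equals net input
    return x

print(step(-0.5), sign(-0.5), sigmoid(0.0), linear(2.5))  # 0 -1 0.5 2.5
```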
• The neuron computes the weighted sum of the input signals and compares the result with a threshold value, θ:

  X = Σ(i=1..n) x_i·w_i

  If the net input is less than the threshold, the neuron output is −1. But if the net input is greater than or equal to the threshold, the neuron becomes activated and its output attains the value +1.
• The neuron uses the following transfer or activation function:

  Y = +1 if X ≥ θ, −1 if X < θ

• This type of activation function is called a sign function.
Can a single neuron learn a task?
• In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a simple ANN: a perceptron.
• The perceptron is the simplest form of a neural network. It consists of a single neuron with adjustable synaptic weights and a hard limiter.
[Figure: single-layer two-input perceptron — inputs x1 and x2, with weights w1 and w2, feed a linear combiner; a hard limiter with threshold θ produces the output Y.]
Perceptron
• Is a network with all inputs connected directly to the output. This is called a single-layer NN (Neural Network) or a Perceptron Network.
• A perceptron is a single neuron that classifies a set of inputs into one of two categories (usually 1 or −1).
• If the inputs are in the form of a grid, a perceptron can be used to recognize visual images of shapes.
• The perceptron usually uses a step function, which returns 1 if the weighted sum of inputs exceeds a threshold, and −1 otherwise.
• The operation of Rosenblatt’s perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter.
• The weighted sum of the inputs is applied to the hard limiter, which produces an output equal to +1 if its input is positive and −1 if it is negative.
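As a sketch, the linear combiner plus hard limiter looks like this (the weights and threshold are invented for the example, not values from the slides):

```python
def perceptron(x1, x2, w1, w2, theta):
    """Linear combiner followed by a hard limiter with threshold theta."""
    net = x1 * w1 + x2 * w2 - theta   # weighted sum minus threshold
    return 1 if net >= 0 else -1      # hard limiter: +1 or -1

# Hypothetical weights that put (1, 1) on the positive side:
print(perceptron(1, 1, 0.5, 0.5, 0.7))   # 1
print(perceptron(1, 0, 0.5, 0.5, 0.7))   # -1
```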
An ANN can:
1. Compute any computable function, by the appropriate selection of the network topology and weight values.
2. Learn from experience!
   Specifically, by trial-and-error.
Learning by trial-and-error
A continuous process of:
• Trial: process an input to produce an output (in ANN terms: compute the output function of a given input).
• Evaluate: compare the actual output with the expected output.
• Adjust: adjust the weights.
Perceptron learns a linear separator
[Figure: points of two classes in the (x1, x2) plane separated by the line x2 = m·x1 + q — or a hyperplane in n-dimensional space.]
The decision boundary is a (hyper)plane in n-dimensional space; what is learnt are the coefficients w_i. Instances X = (x1, x2, …, xn) such that

  Σ(i=1..n) x_i·w_i ≥ θ

are classified as positive; otherwise they are classified as negative.
Perceptron Training – Preparation
• First, inputs are given random weights (usually between −0.5 and 0.5).
• In the case of an elementary perceptron, the n-dimensional space is divided by a hyperplane into two decision regions (i.e., if we have two classes we can separate them with a line, with each class on a different side of the line). The hyperplane is defined by the linearly separable function:

  Σ(i=1..n) x_i·w_i − θ = 0
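A point's decision region is simply which side of this hyperplane it falls on; as a sketch (the weights and threshold here are invented for the example):

```python
def decision_region(x, w, theta):
    """+1 if x lies on or above the hyperplane sum(x_i * w_i) - theta = 0, else -1."""
    net = sum(xi * wi for xi, wi in zip(x, w)) - theta
    return 1 if net >= 0 else -1

# Boundary x1 + x2 = 1.5 in the plane:
w, theta = [1.0, 1.0], 1.5
print(decision_region([1, 1], w, theta))   # 1: on the positive side
print(decision_region([1, 0], w, theta))   # -1: on the negative side
```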
• If at iteration p the actual output is Y(p) and the desired output is Yd(p), then the error is given by:

  e(p) = Yd(p) − Y(p),  where p = 1, 2, 3, …

  Iteration p here refers to the p-th training example presented to the perceptron.
• If the error, e(p), is positive, we need to increase the perceptron output Y(p), but if it is negative, we need to decrease Y(p).

The perceptron learning formula:

  w_i(p + 1) = w_i(p) + α·x_i(p)·e(p),  where p = 1, 2, 3, …

α is the learning rate, a positive constant less than unity.
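A single application of the learning formula, as a sketch:

```python
def updated_weight(w_i, x_i, e, alpha=0.1):
    """Perceptron learning formula: w_i(p+1) = w_i(p) + alpha * x_i(p) * e(p)."""
    return w_i + alpha * x_i * e

# Output too low (e = +1) on an active input: the weight grows.
print(round(updated_weight(0.3, 1.0, +1), 2))   # 0.4
# Output too high (e = -1): the weight shrinks.
print(round(updated_weight(0.3, 1.0, -1), 2))   # 0.2
```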
Step 1: Initialisation
Set initial weights w1, w2, …, wn and threshold θ to random numbers in the range [−0.5, 0.5].
Perceptron’s training algorithm

Step 2: Activation
Activate the perceptron by applying inputs x1(p), x2(p), …, xn(p) and desired output Yd(p). Calculate the actual output at iteration p = 1:

  Y(p) = step[ Σ(i=1..n) x_i(p)·w_i(p) − θ ]

where n is the number of the perceptron inputs, and step is a step activation function.
Perceptron’s training algorithm (continued)

Step 3: Weight training
Update the weights of the perceptron (to reduce the error):

  w_i(p + 1) = w_i(p) + Δw_i(p)

where Δw_i(p) is the weight correction at iteration p. The weight correction is computed by the delta rule:

  Δw_i(p) = α·x_i(p)·e(p)

Step 4: Iteration
Increase iteration p by one, go back to Step 2 and repeat the process until convergence.
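Steps 1–4 combine into a short training loop. The sketch below trains on the OR gate (an example chosen here for illustration; the threshold and learning-rate values are assumptions):

```python
def train_perceptron(samples, alpha=0.1, theta=0.2, w1=0.0, w2=0.0, max_epochs=100):
    """Activate (step), compute error e = Yd - Y, adjust weights by the delta rule."""
    for _ in range(max_epochs):
        converged = True
        for x1, x2, yd in samples:
            y = 1 if x1 * w1 + x2 * w2 - theta >= 0 else 0   # Step 2: activation
            e = yd - y                                        # error e(p)
            if e != 0:
                w1 += alpha * x1 * e                          # Step 3: delta rule
                w2 += alpha * x2 * e
                converged = False
        if converged:                                         # Step 4: iterate
            break
    return w1, w2

OR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
w1, w2 = train_perceptron(OR)
print(w1, w2)   # weights that realize OR with theta = 0.2
```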
Perceptron’s training for AND logic gate
[Figure: two-input perceptron — X1, X2 with weights W1, W2 feed the summation ∑ and an activation function.]

Truth table to train on:

x1  x2 | Y
 0   0 | 0
 0   1 | 0
 1   0 | 0
 1   1 | 1

Example of perceptron learning: the logical operation AND
Threshold: θ = 0.2; learning rate: α = 0.1

Epoch | Inputs x1 x2 | Desired Yd | Initial w1, w2 | Actual Y | Error e | Final w1, w2
  1   |     0  0     |     0      |   0.3  −0.1    |    0     |    0    |  0.3  −0.1
      |     0  1     |     0      |   0.3  −0.1    |    0     |    0    |  0.3  −0.1
      |     1  0     |     0      |   0.3  −0.1    |    1     |   −1    |  0.2  −0.1
      |     1  1     |     1      |   0.2  −0.1    |    0     |    1    |  0.3   0.0
  2   |     0  0     |     0      |   0.3   0.0    |    0     |    0    |  0.3   0.0
      |     0  1     |     0      |   0.3   0.0    |    0     |    0    |  0.3   0.0
      |     1  0     |     0      |   0.3   0.0    |    1     |   −1    |  0.2   0.0
      |     1  1     |     1      |   0.2   0.0    |    1     |    0    |  0.2   0.0
  3   |     0  0     |     0      |   0.2   0.0    |    0     |    0    |  0.2   0.0
      |     0  1     |     0      |   0.2   0.0    |    0     |    0    |  0.2   0.0
      |     1  0     |     0      |   0.2   0.0    |    1     |   −1    |  0.1   0.0
      |     1  1     |     1      |   0.1   0.0    |    0     |    1    |  0.2   0.1
  4   |     0  0     |     0      |   0.2   0.1    |    0     |    0    |  0.2   0.1
      |     0  1     |     0      |   0.2   0.1    |    0     |    0    |  0.2   0.1
      |     1  0     |     0      |   0.2   0.1    |    1     |   −1    |  0.1   0.1
      |     1  1     |     1      |   0.1   0.1    |    1     |    0    |  0.1   0.1
  5   |     0  0     |     0      |   0.1   0.1    |    0     |    0    |  0.1   0.1
      |     0  1     |     0      |   0.1   0.1    |    0     |    0    |  0.1   0.1
      |     1  0     |     0      |   0.1   0.1    |    0     |    0    |  0.1   0.1
      |     1  1     |     1      |   0.1   0.1    |    1     |    0    |  0.1   0.1
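The table can be reproduced programmatically. This sketch uses exact fractions to avoid floating-point edge cases when the net input lands exactly on the threshold (all values are taken from the table above):

```python
from fractions import Fraction as F

def train_and_gate():
    """Re-run the AND example: theta = 0.2, alpha = 0.1, initial w = (0.3, -0.1)."""
    w1, w2 = F(3, 10), F(-1, 10)
    theta, alpha = F(2, 10), F(1, 10)
    samples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
    for epoch in range(5):
        for x1, x2, yd in samples:
            y = 1 if x1 * w1 + x2 * w2 - theta >= 0 else 0   # step activation
            e = yd - y
            w1 += alpha * x1 * e                              # delta rule
            w2 += alpha * x2 * e
    return float(w1), float(w2)

print(train_and_gate())  # (0.1, 0.1) — the converged weights in the table
```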
Multilayer Perceptron
• A multilayer perceptron is a neural network with one or more hidden layers.
• Hierarchical structure.
• The network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons.
[Figure: input signals enter the input layer, pass through the first and second hidden layers, and leave the output layer as output signals.]
What does the middle layer hide?
• A hidden layer “hides” its desired output. Neurons in the hidden layer cannot be observed through the input/output behaviour of the network. There is no obvious way to know what the desired output of the hidden layer should be.
• Commercial ANNs incorporate three and sometimes four layers, including one or two hidden layers. Each layer can contain from 10 to 1000 neurons. Experimental neural networks may have five or even six layers, including three or four hidden layers, and utilise millions of neurons.
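A forward pass through such a layered network can be sketched like this (the layer sizes, weights, and biases are made up for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer of computational neurons: weighted sums plus sigmoid activation."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> 2 hidden neurons -> 1 output neuron (hypothetical weights):
hidden = layer([1.0, 0.0], [[0.5, -0.5], [-0.3, 0.8]], [0.1, 0.0])
output = layer(hidden, [[1.2, -0.7]], [0.05])
print(output)
```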
Learning Paradigms
Supervised learning
Unsupervised learning
Reinforcement learning
In artificial neural networks, learning refers to the
method of modifying the weights of connections
between the nodes of a specified network.
Supervised learning
• This is what we have seen so far!
• A network is fed with a set of training samples (inputs and corresponding outputs), and it uses these samples to learn the general relationship between the inputs and the outputs.
• This relationship is represented by the values of the weights of the trained network.
Unsupervised learning
• No desired output is associated with the training data!
• Faster than supervised learning
• Used to find structures within data:
  – Clustering
  – Compression
Reinforcement learning
• Like supervised learning, but:
  – Weight adjustment is not directly related to the error value.
  – The error value is used to randomly shuffle the weights.
• Relatively slow learning due to the ‘randomness’.

More Related Content

What's hot

Ann by rutul mehta
Ann by rutul mehtaAnn by rutul mehta
Ann by rutul mehta
Rutul Mehta
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
Databricks
 
Neural network and mlp
Neural network and mlpNeural network and mlp
Neural network and mlp
partha pratim deb
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Mohammed Bennamoun
 
(Artificial) Neural Network
(Artificial) Neural Network(Artificial) Neural Network
(Artificial) Neural NetworkPutri Wikie
 
071bct537 lab4
071bct537 lab4071bct537 lab4
071bct537 lab4
shailesh kandel
 
Unit+i
Unit+iUnit+i
Artificial Neuron network
Artificial Neuron network Artificial Neuron network
Artificial Neuron network
Smruti Ranjan Sahoo
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
Knoldus Inc.
 
Artificial neural network - Architectures
Artificial neural network - ArchitecturesArtificial neural network - Architectures
Artificial neural network - Architectures
Erin Brunston
 
Artificial Neural Networks for NIU
Artificial Neural Networks for NIUArtificial Neural Networks for NIU
Artificial Neural Networks for NIU
Prof. Neeta Awasthy
 
Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9
Randa Elanwar
 
Neural network
Neural networkNeural network
Neural network
Facebook
 
Artificial neural network for concrete mix design
Artificial neural network for concrete mix designArtificial neural network for concrete mix design
Artificial neural network for concrete mix design
Monjurul Shuvo
 
Convolution Neural Networks
Convolution Neural NetworksConvolution Neural Networks
Convolution Neural NetworksAhmedMahany
 
Artifical Neural Network
Artifical Neural NetworkArtifical Neural Network
Artifical Neural Network
mahalakshmimalini
 
Comparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuronComparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuron
Saransh Choudhary
 
Neural network
Neural networkNeural network
Neural network
Mahmoud Hussein
 

What's hot (20)

Ann by rutul mehta
Ann by rutul mehtaAnn by rutul mehta
Ann by rutul mehta
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
 
Neural network and mlp
Neural network and mlpNeural network and mlp
Neural network and mlp
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
 
(Artificial) Neural Network
(Artificial) Neural Network(Artificial) Neural Network
(Artificial) Neural Network
 
071bct537 lab4
071bct537 lab4071bct537 lab4
071bct537 lab4
 
Unit+i
Unit+iUnit+i
Unit+i
 
Artificial Neuron network
Artificial Neuron network Artificial Neuron network
Artificial Neuron network
 
hopfield neural network
hopfield neural networkhopfield neural network
hopfield neural network
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Artificial neural network - Architectures
Artificial neural network - ArchitecturesArtificial neural network - Architectures
Artificial neural network - Architectures
 
Artificial Neural Networks for NIU
Artificial Neural Networks for NIUArtificial Neural Networks for NIU
Artificial Neural Networks for NIU
 
Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9
 
Neural network
Neural networkNeural network
Neural network
 
Artificial neural network for concrete mix design
Artificial neural network for concrete mix designArtificial neural network for concrete mix design
Artificial neural network for concrete mix design
 
Convolution Neural Networks
Convolution Neural NetworksConvolution Neural Networks
Convolution Neural Networks
 
Artifical Neural Network
Artifical Neural NetworkArtifical Neural Network
Artifical Neural Network
 
Comparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuronComparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuron
 
Neural network
Neural networkNeural network
Neural network
 

Similar to Neural network

SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1sravanthi computers
 
10-Perceptron.pdf
10-Perceptron.pdf10-Perceptron.pdf
10-Perceptron.pdf
ESTIBALYZJIMENEZCAST
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)
EdutechLearners
 
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptxArtificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
MDYasin34
 
19_Learning.ppt
19_Learning.ppt19_Learning.ppt
19_Learning.ppt
gnans Kgnanshek
 
lecture07.ppt
lecture07.pptlecture07.ppt
lecture07.pptbutest
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
ssuserab4f3e
 
Deep learning simplified
Deep learning simplifiedDeep learning simplified
Deep learning simplified
Lovelyn Rose
 
Artificial neural networks - A gentle introduction to ANNS.pptx
Artificial neural networks - A gentle introduction to ANNS.pptxArtificial neural networks - A gentle introduction to ANNS.pptx
Artificial neural networks - A gentle introduction to ANNS.pptx
AttaNox1
 
Ann ics320 part4
Ann ics320 part4Ann ics320 part4
Ann ics320 part4
Hasan Suthar
 
Artificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptxArtificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptx
pratik610182
 
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)Ming-Chi Liu
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
Sagacious IT Solution
 
Perceptron
PerceptronPerceptron
Perceptron
Nagarajan
 
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…Dongseo University
 
Artificial Neural Network
Artificial Neural Network Artificial Neural Network
Artificial Neural Network
Iman Ardekani
 
Lec-02.pdf
Lec-02.pdfLec-02.pdf
Lec-02.pdf
MuhammadLatifZia
 
Single Layer Rosenblatt Perceptron
Single Layer Rosenblatt PerceptronSingle Layer Rosenblatt Perceptron
Single Layer Rosenblatt Perceptron
AndriyOleksiuk
 
Perceptron 2015.ppt
Perceptron 2015.pptPerceptron 2015.ppt
Perceptron 2015.ppt
SadafAyesha9
 

Similar to Neural network (20)

SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1
 
10-Perceptron.pdf
10-Perceptron.pdf10-Perceptron.pdf
10-Perceptron.pdf
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)
 
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptxArtificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
 
19_Learning.ppt
19_Learning.ppt19_Learning.ppt
19_Learning.ppt
 
lecture07.ppt
lecture07.pptlecture07.ppt
lecture07.ppt
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Deep learning simplified
Deep learning simplifiedDeep learning simplified
Deep learning simplified
 
Artificial neural networks - A gentle introduction to ANNS.pptx
Artificial neural networks - A gentle introduction to ANNS.pptxArtificial neural networks - A gentle introduction to ANNS.pptx
Artificial neural networks - A gentle introduction to ANNS.pptx
 
Ann ics320 part4
Ann ics320 part4Ann ics320 part4
Ann ics320 part4
 
Artificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptxArtificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptx
 
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
 
SOFTCOMPUTERING TECHNICS - Unit
SOFTCOMPUTERING TECHNICS - UnitSOFTCOMPUTERING TECHNICS - Unit
SOFTCOMPUTERING TECHNICS - Unit
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Perceptron
PerceptronPerceptron
Perceptron
 
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
2013-1 Machine Learning Lecture 04 - Michael Negnevitsky - Artificial neur…
 
Artificial Neural Network
Artificial Neural Network Artificial Neural Network
Artificial Neural Network
 
Lec-02.pdf
Lec-02.pdfLec-02.pdf
Lec-02.pdf
 
Single Layer Rosenblatt Perceptron
Single Layer Rosenblatt PerceptronSingle Layer Rosenblatt Perceptron
Single Layer Rosenblatt Perceptron
 
Perceptron 2015.ppt
Perceptron 2015.pptPerceptron 2015.ppt
Perceptron 2015.ppt
 

More from marada0033

Modern face recognition with deep learning Script
Modern face recognition with deep learning ScriptModern face recognition with deep learning Script
Modern face recognition with deep learning Script
marada0033
 
Intelligent Agents
Intelligent AgentsIntelligent Agents
Intelligent Agents
marada0033
 
Modern face recognition with deep learning
Modern face recognition with deep learningModern face recognition with deep learning
Modern face recognition with deep learning
marada0033
 
Introduction to Computer Engineering. Motherboard.
Introduction to Computer Engineering. Motherboard.Introduction to Computer Engineering. Motherboard.
Introduction to Computer Engineering. Motherboard.
marada0033
 
Protected addressing mode and Paging
Protected addressing mode and PagingProtected addressing mode and Paging
Protected addressing mode and Paging
marada0033
 
Dos & Ddos Attack. Man in The Middle Attack
Dos & Ddos Attack. Man in The Middle AttackDos & Ddos Attack. Man in The Middle Attack
Dos & Ddos Attack. Man in The Middle Attack
marada0033
 
Audio spotlight
Audio spotlightAudio spotlight
Audio spotlight
marada0033
 
Java J2ME
Java J2MEJava J2ME
Java J2ME
marada0033
 
Babəkin Başçılığı Altında Azadlıq Hərəkatı
Babəkin Başçılığı Altında Azadlıq HərəkatıBabəkin Başçılığı Altında Azadlıq Hərəkatı
Babəkin Başçılığı Altında Azadlıq Hərəkatı
marada0033
 
Wireless Power Transmission
Wireless Power TransmissionWireless Power Transmission
Wireless Power Transmission
marada0033
 

More from marada0033 (10)

Modern face recognition with deep learning Script
Modern face recognition with deep learning ScriptModern face recognition with deep learning Script
Modern face recognition with deep learning Script
 
Intelligent Agents
Intelligent AgentsIntelligent Agents
Intelligent Agents
 
Modern face recognition with deep learning
Modern face recognition with deep learningModern face recognition with deep learning
Modern face recognition with deep learning
 
Introduction to Computer Engineering. Motherboard.
Introduction to Computer Engineering. Motherboard.Introduction to Computer Engineering. Motherboard.
Introduction to Computer Engineering. Motherboard.
 
Protected addressing mode and Paging
Protected addressing mode and PagingProtected addressing mode and Paging
Protected addressing mode and Paging
 
Dos & Ddos Attack. Man in The Middle Attack
Dos & Ddos Attack. Man in The Middle AttackDos & Ddos Attack. Man in The Middle Attack
Dos & Ddos Attack. Man in The Middle Attack
 
Audio spotlight
Audio spotlightAudio spotlight
Audio spotlight
 
Java J2ME
Java J2MEJava J2ME
Java J2ME
 
Babəkin Başçılığı Altında Azadlıq Hərəkatı
Babəkin Başçılığı Altında Azadlıq HərəkatıBabəkin Başçılığı Altında Azadlıq Hərəkatı
Babəkin Başçılığı Altında Azadlıq Hərəkatı
 
Wireless Power Transmission
Wireless Power TransmissionWireless Power Transmission
Wireless Power Transmission
 

Recently uploaded

JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
RTTS
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Sri Ambati
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Inflectra
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
Product School
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
Frank van Harmelen
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Paul Groth
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using Smithy
g2nightmarescribd
 

Recently uploaded (20)

JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview

Neural network

  • 1. Neural Networks. Presented by Rauf Asadov.
  • 2. The human brain is made up of billions of simple processing units – neurons. NEURON: • Dendrites – Receive information • Cell Body – Processes information • Axon – Carries processed information to other neurons • Synapse – Junction between the axon end and the dendrites of other neurons. (Figure: schematic of a biological neuron; hippocampal neurons, source: heart.cbl.utoronto.ca/~berj/projects.html)
  • 4. Artificial Neuron • Receives inputs X1, X2, …, Xp from other neurons or the environment • Inputs are fed in through connections with ‘weights’ • Total input = weighted sum of inputs from all sources • A transfer function (activation function) converts the total input to the output • The output goes to other neurons or the environment
  • 5. Analogy between biological and artificial neural networks. Biological Neural Network → Artificial Neural Network: Soma → Neuron; Dendrite → Input; Axon → Output; Synapse → Weight.
  • 6. How do ANNs work? (Diagram: inputs x1, x2, …, xm enter through connections with weights w1, w2, …, wm; the processing unit ∑ computes the weighted sum vk; the transfer function (activation function) f(vk) produces the output y.)
  • 7. Activation functions of a neuron. Step function: Y = 1 if X ≥ 0, Y = 0 if X < 0. Sign function: Y = +1 if X ≥ 0, Y = −1 if X < 0. Sigmoid function: Y = 1 / (1 + e^(−X)). Linear function: Y = X.
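The four activation functions above can be sketched in plain Python (a minimal illustration; the function names are mine, not from the slides):

```python
import math

def step(x):
    # Step function: 1 if X >= 0, else 0
    return 1 if x >= 0 else 0

def sign(x):
    # Sign function: +1 if X >= 0, else -1
    return 1 if x >= 0 else -1

def sigmoid(x):
    # Sigmoid: squashes any input into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    # Linear (identity): output equals input
    return x
```

Note that sigmoid(0) = 0.5, the midpoint of its range, while step and sign switch at exactly X = 0.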
  • 8. The neuron computes the weighted sum of the input signals and compares the result with a threshold value, θ. If the net input is less than the threshold, the neuron output is −1. But if the net input is greater than or equal to the threshold, the neuron becomes activated and its output attains the value +1. The neuron uses the following transfer or activation function: X = Σ (i = 1 to n) xi·wi, with Y = +1 if X ≥ θ and Y = −1 if X < θ. This type of activation function is called a sign function.
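As a sketch, the sign-function neuron just described, with the threshold θ passed in as a parameter (`theta`):

```python
def neuron_output(inputs, weights, theta):
    # X = sum over i of x_i * w_i
    X = sum(x * w for x, w in zip(inputs, weights))
    # Sign activation: +1 if X >= theta, else -1
    return 1 if X >= theta else -1
```

For example, with weights (0.5, 0.5) and θ = 0.8, the input (1, 1) gives X = 1.0 ≥ 0.8, so the neuron fires (+1), while (1, 0) gives X = 0.5 < 0.8 and the output stays at −1.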
  • 9. Can a single neuron learn a task? In 1958, Frank Rosenblatt introduced a training algorithm that provided the first procedure for training a simple ANN: a perceptron. The perceptron is the simplest form of a neural network. It consists of a single neuron with adjustable synaptic weights and a hard limiter.
  • 11. Perceptron • A perceptron is a network with all inputs connected directly to the output; this is called a single-layer NN (neural network) or a perceptron network. • A perceptron is a single neuron that classifies a set of inputs into one of two categories (usually 1 or −1). • If the inputs are in the form of a grid, a perceptron can be used to recognize visual images of shapes. • The perceptron usually uses a step function, which returns 1 if the weighted sum of inputs exceeds a threshold, and −1 otherwise. The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter. The weighted sum of the inputs is applied to the hard limiter, which produces an output equal to +1 if its input is positive and −1 if it is negative.
  • 12. An ANN can: 1. compute any computable function, by appropriate selection of the network topology and weight values; 2. learn from experience, specifically by trial and error. Learning by trial and error is a continuous process of: Trial: processing an input to produce an output (in ANN terms: computing the output function of a given input). Evaluate: evaluating this output by comparing the actual output with the expected output. Adjust: adjusting the weights.
  • 13. Perceptron learns a linear separator. Instances x = (x1, x2, …, xn) such that Σ wi·xi ≥ 0 are classified as positive; otherwise they are classified as negative. In two dimensions the separator is the line x2 = m·x1 + q; in n-dimensional space it is a hyperplane. What is learnt are the coefficients wi.
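A small sketch of the separator idea: the learnt weights determine which side of the hyperplane Σ wi·xi = 0 an instance falls on (names are illustrative):

```python
def classify(instance, weights):
    # Positive class iff the weighted sum is >= 0, i.e. the instance
    # lies on the positive side of the hyperplane sum(w_i * x_i) = 0
    total = sum(w * x for w, x in zip(weights, instance))
    return "positive" if total >= 0 else "negative"
```

With weights (1, 1), for instance, the 2D separator is the line x2 = −x1: points above it are positive, points below it negative.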
  • 14. Perceptron Training: Preparation • First, inputs are given random weights (usually between −0.5 and 0.5). • In the case of an elementary perceptron, the n-dimensional space is divided by a hyperplane into two decision regions (i.e. if we have 2 classes of results we can separate them with a line, with each class on a different side of the line). The hyperplane is defined by the linearly separable function: Σ (i = 1 to n) xi·wi − θ = 0.
  • 15. If at iteration p the actual output is Y(p) and the desired output is Yd(p), then the error is given by: e(p) = Yd(p) − Y(p), where p = 1, 2, 3, … Iteration p here refers to the pth training example presented to the perceptron. If the error, e(p), is positive, we need to increase perceptron output Y(p), but if it is negative, we need to decrease Y(p).
  • 16. The perceptron learning formula: wi(p + 1) = wi(p) + α·xi(p)·e(p), where p = 1, 2, 3, … and α is the learning rate, a positive constant less than unity.
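One application of the learning formula, sketched in Python (α passed as `alpha`):

```python
def update_weights(weights, inputs, error, alpha):
    # w_i(p+1) = w_i(p) + alpha * x_i(p) * e(p)
    return [w + alpha * x * error for w, x in zip(weights, inputs)]
```

Note that only weights whose input xi(p) is non-zero change: if the output was too high (e = −1) on input (1, 0), then update_weights([0.3, −0.1], [1, 0], −1, 0.1) pushes w1 down to 0.2 and leaves w2 untouched.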
  • 17. Perceptron's training algorithm. Step 1: Initialisation. Set initial weights w1, w2, …, wn and threshold θ to random numbers in the range [−0.5, 0.5].
  • 18. Perceptron's training algorithm (continued). Step 2: Activation. Activate the perceptron by applying inputs x1(p), x2(p), …, xn(p) and desired output Yd(p). Calculate the actual output at iteration p = 1: Y(p) = step[ Σ (i = 1 to n) xi(p)·wi(p) − θ ], where n is the number of the perceptron inputs, and step is a step activation function.
  • 19. Perceptron's training algorithm (continued). Step 3: Weight training. Update the weights of the perceptron (adjusting them to reduce the error): wi(p + 1) = wi(p) + Δwi(p), where Δwi(p) is the weight correction at iteration p. The weight correction is computed by the delta rule: Δwi(p) = α·xi(p)·e(p). Step 4: Iteration. Increase iteration p by one, go back to Step 2 and repeat the process until convergence.
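The four steps above can be sketched as a single training loop (a minimal illustration, not from the slides; for simplicity the threshold θ is kept fixed rather than trained):

```python
import random

def step(x):
    # Step activation: 1 if x >= 0, else 0
    return 1 if x >= 0 else 0

def train_perceptron(samples, alpha=0.1, theta=0.2, weights=None, max_epochs=100):
    # Step 1: Initialisation -- random weights in [-0.5, 0.5] unless given
    n = len(samples[0][0])
    w = list(weights) if weights is not None else [random.uniform(-0.5, 0.5) for _ in range(n)]
    for _ in range(max_epochs):
        errors = 0
        for inputs, desired in samples:
            # Step 2: Activation -- compute the actual output Y(p)
            y = step(sum(x * wi for x, wi in zip(inputs, w)) - theta)
            # Step 3: Weight training -- delta rule on the error e(p)
            e = desired - y
            if e != 0:
                errors += 1
                w = [wi + alpha * x * e for wi, x in zip(w, inputs)]
        # Step 4: Iteration -- stop once a whole epoch is error-free
        if errors == 0:
            break
    return w
```

For a linearly separable task such as the OR gate, the loop terminates with weights that classify every training example correctly.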
  • 20. Perceptron's training for the AND logic gate. Training data (X1, X2 → Y): 0, 0 → 0; 0, 1 → 0; 1, 0 → 0; 1, 1 → 1. (Diagram: inputs X1, X2 with weights W1, W2 feed the summation ∑ and the activation function.)
  • 21. Example of perceptron learning: the logical operation AND. Threshold: θ = 0.2; learning rate: α = 0.1. Starting from initial weights w1 = 0.3, w2 = −0.1, each epoch presents the four input pairs (0, 0), (0, 1), (1, 0), (1, 1) with desired outputs 0, 0, 0, 1; for each example the actual output and error are computed and the weights are corrected by the delta rule. The weights converge over five epochs to w1 = 0.1, w2 = 0.1, at which point every example is classified correctly and the error is zero.
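The epoch-by-epoch run can be reproduced (up to floating-point rounding, which may shorten the run slightly compared with the hand-computed table) with a short script using the slide's parameters, θ = 0.2, α = 0.1, and initial weights 0.3 and −0.1:

```python
def step(x):
    return 1 if x >= 0 else 0

# AND gate training data: (x1, x2) -> desired output
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
theta, alpha = 0.2, 0.1        # threshold and learning rate from the slide
w = [0.3, -0.1]                # initial weights from the slide

for _ in range(20):            # this run needs only a handful of epochs
    errors = 0
    for (x1, x2), desired in data:
        y = step(w[0] * x1 + w[1] * x2 - theta)
        e = desired - y
        if e != 0:
            errors += 1
            w[0] += alpha * x1 * e
            w[1] += alpha * x2 * e
    if errors == 0:            # an error-free epoch means convergence
        break

# After convergence the perceptron computes AND exactly
for (x1, x2), desired in data:
    assert step(w[0] * x1 + w[1] * x2 - theta) == desired
```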
  • 22. Multilayer Perceptron • A multilayer perceptron is a feedforward neural network with one or more hidden layers. • It has a hierarchical structure. • The network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons.
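A minimal sketch of the forward pass through such a hierarchy, using sigmoid neurons (all layer sizes, weights, and biases below are made up purely for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One computational layer: each neuron takes a weighted sum of the
    # previous layer's outputs, adds its bias, and squashes the result
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def mlp(inputs, hidden_w, hidden_b, output_w, output_b):
    # Input layer -> hidden layer of computational neurons -> output layer
    hidden = layer(inputs, hidden_w, hidden_b)
    return layer(hidden, output_w, output_b)
```

Because each neuron's output passes through the sigmoid, every value flowing between layers stays in the open interval (0, 1).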
  • 24. What does the middle layer hide? A hidden layer “hides” its desired output. Neurons in the hidden layer cannot be observed through the input/output behaviour of the network. There is no obvious way to know what the desired output of the hidden layer should be. Commercial ANNs incorporate three and sometimes four layers, including one or two hidden layers. Each layer can contain from 10 to 1000 neurons. Experimental neural networks may have five or even six layers, including three or four hidden layers, and utilise millions of neurons.
  • 25. Learning Paradigms: supervised learning, unsupervised learning, reinforcement learning. In artificial neural networks, learning refers to the method of modifying the weights of connections between the nodes of a specified network.
  • 26. Supervised learning  This is what we have seen so far!  A network is fed with a set of training samples (inputs and corresponding output), and it uses these samples to learn the general relationship between the inputs and the outputs.  This relationship is represented by the values of the weights of the trained network.
  • 27. Unsupervised learning • No desired output is associated with the training data! • Faster than supervised learning • Used to find structure within data: • Clustering • Compression
  • 28. Reinforcement learning • Like supervised learning, but: • weight adjustment is not directly related to the error value; • the error value is used to randomly shuffle the weights! • Learning is relatively slow due to this ‘randomness’.