Artificial Neural Networks
- Introduction -
Peter Andras
peter.andras@ncl.ac.uk
Overview
1. Biological inspiration
2. Artificial neurons and neural networks
3. Learning processes
4. Learning with artificial neural networks
Biological inspiration
Animals are able to react adaptively to changes in their
external and internal environment, and they use their nervous
system to perform these behaviours.
An appropriate model/simulation of the nervous system
should be able to produce similar responses and behaviours in
artificial systems.
The nervous system is built from relatively simple units, the
neurons, so copying their behaviour and functionality is a
natural route to such a model.
Biological inspiration
[Figure: a biological neuron, with the dendrites, soma (cell body), and axon labelled]
Biological inspiration
[Figure: two connected neurons, with the synapses, axon, and dendrites labelled]
The information transmission happens at the synapses.
Biological inspiration
The spikes travelling along the axon of the pre-synaptic
neuron trigger the release of neurotransmitter substances
at the synapse.
The neurotransmitters cause excitation or inhibition in the
dendrite of the post-synaptic neuron.
The integration of the excitatory and inhibitory signals
may produce spikes in the post-synaptic neuron.
The contribution of the signals depends on the strength of
the synaptic connection.
Artificial neurons
Neurons work by processing information. They receive and
provide information in the form of spikes.
The McCulloch-Pitts model

[Figure: inputs x_1, x_2, ..., x_n with weights w_1, w_2, ..., w_n feeding a single output y]

z = Σ_{i=1}^{n} w_i x_i ;    y = H(z)

where H is the Heaviside step function.
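As a minimal sketch in Python (the function name and the AND-gate example are illustrative, not from the slides), the unit computes a weighted sum of its inputs and applies the Heaviside step:

```python
def mcculloch_pitts(x, w, threshold=0.0):
    """Weighted sum of inputs followed by a Heaviside step H(z)."""
    z = sum(wi * xi for wi, xi in zip(w, x))   # z = sum_i w_i * x_i
    return 1 if z >= threshold else 0          # y = H(z)

# An AND gate as two excitatory inputs with threshold 2:
and_out = mcculloch_pitts([1, 1], [1, 1], threshold=2)   # fires only if both inputs are 1
```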
Artificial neurons
The McCulloch-Pitts model:
• spikes are interpreted as spike rates;
• synaptic strengths are translated into synaptic weights;
• excitation corresponds to a positive product of the
incoming spike rate and the corresponding synaptic
weight;
• inhibition corresponds to a negative product of the
incoming spike rate and the corresponding synaptic
weight.
Artificial neurons
Nonlinear generalization of the McCulloch-Pitts
neuron:
y = f(x, w)

y is the neuron’s output, x is the vector of inputs, and w
is the vector of synaptic weights.
Examples:
sigmoidal neuron:

y = 1 / (1 + e^(-x^T w - a))

Gaussian neuron:

y = e^(-||x - w||² / (2a²))
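Both example neurons translate directly into code; this is an illustrative sketch (function names and the toy arguments are assumptions, not from the slides):

```python
import math

def sigmoidal_neuron(x, w, a):
    """y = 1 / (1 + exp(-x.w - a))"""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z - a))

def gaussian_neuron(x, w, a):
    """y = exp(-||x - w||^2 / (2 a^2)); responds most strongly when x is near w."""
    d2 = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-d2 / (2.0 * a * a))
```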
Artificial neural networks

[Figure: inputs feeding a network of interconnected neurons that produces the output]
An artificial neural network is composed of many artificial
neurons that are linked together according to a specific
network architecture. The objective of the neural network
is to transform the inputs into meaningful outputs.
Artificial neural networks
Tasks to be solved by artificial neural networks:
• controlling the movements of a robot based on self-
perception and other information (e.g., visual
information);
• deciding the category of potential food items (e.g.,
edible or non-edible) in an artificial world;
• recognizing a visual object (e.g., a familiar face);
• predicting where a moving object will go, so that a robot
can catch it.
Learning in biological systems
Learning = learning by adaptation
The young animal learns that the green fruits are sour,
while the yellowish/reddish ones are sweet. The learning
happens by adapting the fruit-picking behaviour.
At the neural level, learning happens by changing the
synaptic strengths, eliminating some synapses, and
building new ones.
Learning as optimisation
The objective of adapting the responses on the basis of the
information received from the environment is to achieve a
better state. E.g., the animal likes to eat many energy-rich,
juicy fruits that fill its stomach and make it feel happy.
In other words, the objective of learning in biological
organisms is to optimise the amount of available resources
or happiness, that is, to move closer to an optimal state.
Learning in biological neural
networks
The learning rules of Hebb:
• synchronous activation increases the synaptic strength;
• asynchronous activation decreases the synaptic
strength.
These rules fit with energy-minimisation principles:
maintaining synaptic strength costs energy, so strength
should be maintained where it is needed and not
maintained where it is not.
Learning principle for
artificial neural networks
ENERGY MINIMIZATION
We need an appropriate definition of energy for artificial
neural networks, and having that we can use
mathematical optimisation techniques to find how to
change the weights of the synaptic connections between
neurons.
ENERGY = measure of task performance error
Neural network mathematics

[Figure: a layered feed-forward network transforming the inputs into the output]

First layer:

y_i^1 = f(x, w_i^1),  i = 1, ..., 4;    y^1 = (y_1^1, y_2^1, y_3^1, y_4^1)^T

Second layer:

y_i^2 = f(y^1, w_i^2),  i = 1, 2, 3;    y^2 = (y_1^2, y_2^2, y_3^2)^T

Output neuron:

y_out = f(y^2, w^3)
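The layer-by-layer composition can be sketched as follows; the choice of a sigmoid for the generic neuron function f, and all numeric values, are illustrative assumptions, not from the slides:

```python
import math

def f(x, w):
    """Generic neuron of the sketch: here assumed to be a sigmoid of w.x."""
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def layer(x, weights):
    """y_i = f(x, w_i) for each weight vector w_i in the layer."""
    return [f(x, w) for w in weights]

x = [0.5, -0.2]
W1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 1.0]]   # 4 first-layer neurons
W2 = [[0.3] * 4, [0.1] * 4, [-0.2] * 4]                  # 3 second-layer neurons
w3 = [0.5, 0.5, 0.5]                                     # output neuron

y1 = layer(x, W1)      # y^1 has 4 components
y2 = layer(y1, W2)     # y^2 has 3 components
y_out = f(y2, w3)      # scalar network output
```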
Neural network mathematics
Neural network: input / output transformation
y_out = F(x, W)
W is the matrix of all weight vectors.
MLP neural networks
MLP = multi-layer perceptron
Perceptron:

y_out = w^T x

[Figure: a single unit mapping input x to output y_out]

MLP neural network:

[Figure: two sigmoidal layers followed by a linear output unit, mapping x to y_out]

y_k^1 = 1 / (1 + e^(-w_k^{1T} x - a_k^1)),  k = 1, 2, 3;    y^1 = (y_1^1, y_2^1, y_3^1)^T

y_k^2 = 1 / (1 + e^(-w_k^{2T} y^1 - a_k^2)),  k = 1, 2;    y^2 = (y_1^2, y_2^2)^T

y_out = Σ_{k=1}^{2} w_k^3 y_k^2 = w^{3T} y^2
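A minimal sketch of this two-sigmoid-layer MLP with a linear output neuron (function names, shapes, and the zero-weight check values are illustrative assumptions):

```python
import math

def sigmoid_layer(x, W, a):
    """y_k = 1 / (1 + exp(-w_k.x - a_k)) for each unit k of one layer."""
    return [1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(wk, x)) - ak))
            for wk, ak in zip(W, a)]

def mlp(x, W1, a1, W2, a2, w3):
    y1 = sigmoid_layer(x, W1, a1)                  # first sigmoidal layer
    y2 = sigmoid_layer(y1, W2, a2)                 # second sigmoidal layer
    return sum(wk * yk for wk, yk in zip(w3, y2))  # linear output neuron
```

With all weights and biases zero, every sigmoid outputs 0.5, so the network output is just half the sum of the output weights.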
RBF neural networks
RBF = radial basis function
A radial basis function depends on the input only through its
distance from a centre c:

r(x) = r(||x - c||)

Example: Gaussian RBF

f(x) = e^(-||x - w||² / (2a²))

[Figure: four Gaussian units feeding a linear output neuron, mapping x to y_out]

y_out = Σ_{k=1}^{4} w_k^2 · e^(-||x - w_k^1||² / (2(a_k)²))
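The Gaussian RBF network above can be sketched directly; the names and the test values are illustrative:

```python
import math

def rbf_network(x, centers, widths, w2):
    """y_out = sum_k w2_k * exp(-||x - c_k||^2 / (2 a_k^2))"""
    out = 0.0
    for c, a, wk in zip(centers, widths, w2):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += wk * math.exp(-d2 / (2.0 * a * a))
    return out
```

At a centre the corresponding Gaussian evaluates to 1, so the output there is dominated by that unit's output weight.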
Neural network tasks
• control
• classification
• prediction
• approximation
In general, all of these can be reformulated as
FUNCTION APPROXIMATION tasks.
Approximation: given a set of values of a function g(x),
build a neural network that approximates the g(x) values
for any input x.
Neural network approximation
Task specification:
Data: a set of value pairs (x^t, y^t), with y^t = g(x^t) + z^t,
where z^t is random measurement noise.

Objective: find a neural network that represents the input /
output transformation (a function) F(x, W) such that
F(x, W) approximates g(x) for every x.
Learning to approximate
Error measure:
E = (1/N) Σ_{t=1}^{N} (F(x^t; W) - y^t)²
Rule for changing the synaptic weights:
Δw_i^j = -c · (∂E/∂w_i^j)(W)

w_i^{j,new} = w_i^j + Δw_i^j
c is the learning parameter (usually a constant)
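The weight change rule is plain gradient descent; here is a toy sketch on a one-dimensional energy (the quadratic E and all constants are illustrative, not from the slides):

```python
def gradient_step(w, grad_E, c=0.1):
    """w_new = w + delta_w, with delta_w = -c * dE/dw."""
    return [wi - c * gi for wi, gi in zip(w, grad_E(w))]

# Toy energy E(w) = (w0 - 3)^2, with gradient [2 * (w0 - 3)]:
grad = lambda w: [2.0 * (w[0] - 3.0)]
w = [0.0]
for _ in range(100):
    w = gradient_step(w, grad, c=0.1)
# w[0] converges toward the minimiser 3
```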
Learning with a perceptron
Perceptron: y_out = w^T x

Data: (x^1, y^1), (x^2, y^2), ..., (x^N, y^N)

Error: E(t) = (y_out(t) - y^t)² = (w(t)^T x^t - y^t)²

Learning:

w_i(t+1) = w_i(t) - c · ∂E(t)/∂w_i

∂E(t)/∂w_i = ∂(w(t)^T x^t - y^t)²/∂w_i = 2 (w(t)^T x^t - y^t) · x_i^t,
with w(t)^T x^t = Σ_{j=1}^{m} w_j(t) x_j^t

so (absorbing the factor 2 into c):

w_i(t+1) = w_i(t) - c · (w(t)^T x^t - y^t) · x_i^t
A perceptron is able to learn a linear function.
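The derived rule gives a small training loop; the sample data, learning parameter, and epoch count below are illustrative assumptions:

```python
def train_perceptron(samples, dim, c=0.1, epochs=200):
    """Rule from the slides: w_i(t+1) = w_i(t) - c * (w(t).x^t - y^t) * x_i^t."""
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in samples:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y   # w(t).x^t - y^t
            w = [wi - c * err * xi for wi, xi in zip(w, x)]
    return w

# Learn the linear function y = 2*x0 - x1 from a few consistent samples:
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0), ([2.0, 1.0], 3.0)]
w = train_perceptron(data, dim=2)   # converges toward [2, -1]
```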
Learning with RBF neural
networks
RBF neural network:

y_out = F(x, W) = Σ_{k=1}^{M} w_k^2 · e^(-||x - w_k^1||² / (2(a_k)²))

Data: (x^1, y^1), (x^2, y^2), ..., (x^N, y^N)

Error:

E(t) = (y_out(t) - y^t)² = ( Σ_{k=1}^{M} w_k^2(t) · e^(-||x^t - w_k^1||² / (2(a_k)²)) - y^t )²

Learning:

w_i^2(t+1) = w_i^2(t) - c · ∂E(t)/∂w_i^2

∂E(t)/∂w_i^2 = 2 (F(x^t, W(t)) - y^t) · e^(-||x^t - w_i^1||² / (2(a_i)²))

Only the synaptic weights of the output neuron are modified.
An RBF neural network learns a nonlinear function.
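The output-weight rule can be sketched as a single training step; function names and the test values are illustrative:

```python
import math

def rbf_forward(x, centers, widths, w2):
    """Return the network output and the basis-function activations phi_k(x)."""
    phi = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2.0 * a * a))
           for c, a in zip(centers, widths)]
    return sum(wk * pk for wk, pk in zip(w2, phi)), phi

def rbf_train_step(x, y, centers, widths, w2, c=0.5):
    """w2_i(t+1) = w2_i(t) - c * 2 * (F(x, W) - y) * phi_i(x); only output weights move."""
    out, phi = rbf_forward(x, centers, widths, w2)
    return [wi - c * 2.0 * (out - y) * pi for wi, pi in zip(w2, phi)]
```

With a single centre placed at the input, one step with c = 0.5 fits the target exactly, since phi = 1 there.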
Learning with MLP neural
networks
MLP neural network:
with p layers
Data: (x^1, y^1), (x^2, y^2), ..., (x^N, y^N)

Error: E(t) = (y_out(t) - y^t)² = (F(x^t; W) - y^t)²

y_k^1 = 1 / (1 + e^(-w_k^{1T} x - a_k^1)),  k = 1, ..., M^1;    y^1 = (y_1^1, ..., y_{M^1}^1)^T

y_k^2 = 1 / (1 + e^(-w_k^{2T} y^1 - a_k^2)),  k = 1, ..., M^2;    y^2 = (y_1^2, ..., y_{M^2}^2)^T

...

y_out = F(x; W) = w^{pT} y^{p-1}

[Figure: x passing through layers 1, 2, ..., p-1, p to produce y_out]

It is very complicated to calculate the weight changes.
Learning with backpropagation
Solution of the complicated learning:
• first calculate the changes for the synaptic weights
of the output neuron;
• then calculate the changes backward, starting from layer
p-1, propagating the local error terms backward.
The method is still relatively complicated but it
is much simpler than the original optimisation
problem.
Learning with general optimisation
In general it is enough to have a single layer of nonlinear
neurons in a neural network in order to learn to
approximate a nonlinear function.
In such cases, general optimisation may be applied without
much difficulty.
Example: an MLP neural network with a single hidden layer:
y_out = F(x; W) = Σ_{k=1}^{M} w_k^2 · 1 / (1 + e^(-w^{1,kT} x - a_{1,k}))
Learning with general optimisation
Synaptic weight change rule for the output neuron:

w_i^2(t+1) = w_i^2(t) - c · ∂E(t)/∂w_i^2

∂E(t)/∂w_i^2 = 2 (F(x^t, W(t)) - y^t) · 1 / (1 + e^(-w^{1,iT} x^t - a_{1,i}))
Synaptic weight change rules for the neurons of the
hidden layer:
w_j^{1,i}(t+1) = w_j^{1,i}(t) - c · ∂E(t)/∂w_j^{1,i}

∂E(t)/∂w_j^{1,i} = 2 (F(x^t, W(t)) - y^t) · w_i^2 ·
    ∂/∂w_j^{1,i} [ 1 / (1 + e^(-w^{1,iT} x^t - a_{1,i})) ]

∂/∂w_j^{1,i} [ 1 / (1 + e^(-w^{1,iT} x^t - a_{1,i})) ]
    = [ e^(-w^{1,iT} x^t - a_{1,i}) / (1 + e^(-w^{1,iT} x^t - a_{1,i}))² ] · x_j^t

so:

w_j^{1,i}(t+1) = w_j^{1,i}(t) - c · 2 (F(x^t, W(t)) - y^t) · w_i^2 ·
    [ e^(-w^{1,iT} x^t - a_{1,i}) / (1 + e^(-w^{1,iT} x^t - a_{1,i}))² ] · x_j^t
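The output-neuron and hidden-layer rules can be combined into a training sketch for the single-hidden-layer network. Note that e^(-z) / (1 + e^(-z))² equals h·(1 - h) for a sigmoid output h; the bias update (not written out on the slides) follows the same chain rule, and all numeric values are illustrative:

```python
import math

def forward(x, W1, a1, w2):
    """Single-hidden-layer MLP: y_out = sum_k w2_k * sigmoid(w1_k.x + a_k)."""
    h = [1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(wk, x)) - ak))
         for wk, ak in zip(W1, a1)]
    return sum(w2k * hk for w2k, hk in zip(w2, h)), h

def train_step(x, y, W1, a1, w2, c=0.1):
    out, h = forward(x, W1, a1, w2)
    e = 2.0 * (out - y)                                  # 2 * (F(x^t, W(t)) - y^t)
    # hidden-layer rule: sigmoid derivative e^-z/(1+e^-z)^2 written as h_k*(1-h_k)
    newW1 = [[wkj - c * e * w2k * hk * (1.0 - hk) * xj for wkj, xj in zip(wk, x)]
             for wk, w2k, hk in zip(W1, w2, h)]
    newa1 = [ak - c * e * w2k * hk * (1.0 - hk)
             for ak, w2k, hk in zip(a1, w2, h)]
    # output-neuron rule: dE/dw2_k = e * h_k
    neww2 = [w2k - c * e * hk for w2k, hk in zip(w2, h)]
    return newW1, newa1, neww2

# Fit a single training pair (illustrative values):
W1, a1, w2 = [[0.5], [-0.3]], [0.0, 0.0], [0.1, 0.2]
for _ in range(500):
    W1, a1, w2 = train_step([1.0], 1.0, W1, a1, w2)
```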
New methods for learning with
neural networks
Bayesian learning:
the distribution of the neural network
parameters is learnt
Support vector learning:
the minimal representative subset of the
available data is used to calculate the synaptic
weights of the neurons
Summary
• Artificial neural networks are inspired by the learning
processes that take place in biological systems.
• Artificial neurons and neural networks try to imitate the
working mechanisms of their biological counterparts.
• Learning can be perceived as an optimisation process.
• Biological neural learning happens by the modification
of the synaptic strength. Artificial neural networks learn
in the same way.
• The synapse strength modification rules for artificial
neural networks can be derived by applying
mathematical optimisation methods.
Summary
• Learning tasks of artificial neural networks can be
reformulated as function approximation tasks.
• Neural networks can be considered as nonlinear function
approximating tools (i.e., linear combinations of nonlinear
basis functions), where the parameters of the networks
should be found by applying optimisation methods.
• The optimisation is done with respect to the approximation
error measure.
• In general it is enough to have a single hidden layer neural
network (MLP, RBF or other) to learn the approximation of
a nonlinear function. In such cases general optimisation can
be applied to find the change rules for the synaptic weights.

More Related Content

What's hot

JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network DynamicsJAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
hirokazutanaka
 
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Jason Tsai
 
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Jason Tsai
 
Backpropagation and the brain review
Backpropagation and the brain reviewBackpropagation and the brain review
Backpropagation and the brain review
Seonghyun Kim
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
IJERD Editor
 
2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…
2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…
2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…Dongseo University
 
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rulesJAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
hirokazutanaka
 
NeurSciACone
NeurSciAConeNeurSciACone
NeurSciAConeAdam Cone
 
ANN based STLF of Power System
ANN based STLF of Power SystemANN based STLF of Power System
ANN based STLF of Power System
Yousuf Khan
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
Institute of Technology Telkom
 
Neural network and artificial intelligent
Neural network and artificial intelligentNeural network and artificial intelligent
Neural network and artificial intelligent
HapPy SumOn
 
Artificial Neural networks
Artificial Neural networksArtificial Neural networks
Artificial Neural networks
Learnbay Datascience
 
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...konmpoz
 
Deep learning in a brain (skolkovo 2016)
Deep learning in a brain (skolkovo 2016)Deep learning in a brain (skolkovo 2016)
Deep learning in a brain (skolkovo 2016)
Skolkovo Robotics Center
 

What's hot (17)

JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network DynamicsJAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
 
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
 
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
Introduction to Spiking Neural Networks: From a Computational Neuroscience pe...
 
Backpropagation and the brain review
Backpropagation and the brain reviewBackpropagation and the brain review
Backpropagation and the brain review
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…
2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…
2013-1 Machine Learning Lecture 07 - Michael Negnevitsky - Evolutionary Co…
 
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rulesJAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
 
NeurSciACone
NeurSciAConeNeurSciACone
NeurSciACone
 
basal-ganglia
basal-gangliabasal-ganglia
basal-ganglia
 
ANN based STLF of Power System
ANN based STLF of Power SystemANN based STLF of Power System
ANN based STLF of Power System
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
Neural network and artificial intelligent
Neural network and artificial intelligentNeural network and artificial intelligent
Neural network and artificial intelligent
 
Artificial Neural networks
Artificial Neural networksArtificial Neural networks
Artificial Neural networks
 
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
 
Lec 5
Lec 5Lec 5
Lec 5
 
Deep learning in a brain (skolkovo 2016)
Deep learning in a brain (skolkovo 2016)Deep learning in a brain (skolkovo 2016)
Deep learning in a brain (skolkovo 2016)
 
iit
iitiit
iit
 

Viewers also liked

My hero
My heroMy hero
My hero
Prosv
 
Derecho Administrativo
Derecho AdministrativoDerecho Administrativo
Derecho Administrativo
Michael Rodriguez A
 
Notorietatea organizatiilor neguvernamentale in Romania
Notorietatea organizatiilor neguvernamentale in RomaniaNotorietatea organizatiilor neguvernamentale in Romania
Notorietatea organizatiilor neguvernamentale in Romania
Asociatia React
 
Managing Director Presents at Proactive Investors' Investor Luncheons
Managing Director Presents at Proactive Investors' Investor LuncheonsManaging Director Presents at Proactive Investors' Investor Luncheons
Managing Director Presents at Proactive Investors' Investor Luncheons
Minotaur Exploration
 
Maoríes
MaoríesMaoríes
Maoríes
Sara Gonzalez
 
Technical SEO Best Practices
Technical SEO Best PracticesTechnical SEO Best Practices
Technical SEO Best Practices
ClickThrough Marketing
 
Indonesia Infrastructure Dilemma
Indonesia Infrastructure DilemmaIndonesia Infrastructure Dilemma
Indonesia Infrastructure Dilemma
Anjar Priandoyo
 

Viewers also liked (12)

CV Cherian_IT
CV Cherian_ITCV Cherian_IT
CV Cherian_IT
 
#
##
#
 
Deep
DeepDeep
Deep
 
My hero
My heroMy hero
My hero
 
Derecho Administrativo
Derecho AdministrativoDerecho Administrativo
Derecho Administrativo
 
Notorietatea organizatiilor neguvernamentale in Romania
Notorietatea organizatiilor neguvernamentale in RomaniaNotorietatea organizatiilor neguvernamentale in Romania
Notorietatea organizatiilor neguvernamentale in Romania
 
windows PPT
windows PPTwindows PPT
windows PPT
 
Managing Director Presents at Proactive Investors' Investor Luncheons
Managing Director Presents at Proactive Investors' Investor LuncheonsManaging Director Presents at Proactive Investors' Investor Luncheons
Managing Director Presents at Proactive Investors' Investor Luncheons
 
Maoríes
MaoríesMaoríes
Maoríes
 
Akshay tambe cv
Akshay tambe cvAkshay tambe cv
Akshay tambe cv
 
Technical SEO Best Practices
Technical SEO Best PracticesTechnical SEO Best Practices
Technical SEO Best Practices
 
Indonesia Infrastructure Dilemma
Indonesia Infrastructure DilemmaIndonesia Infrastructure Dilemma
Indonesia Infrastructure Dilemma
 

Similar to Annintro

Artificial Neural Network seminar presentation using ppt.
Artificial Neural Network seminar presentation using ppt.Artificial Neural Network seminar presentation using ppt.
Artificial Neural Network seminar presentation using ppt.
Mohd Faiz
 
annintro.ppt
annintro.pptannintro.ppt
annintro.ppt
GoodlookingAbbas
 
Artificial neural networks (2)
Artificial neural networks (2)Artificial neural networks (2)
Artificial neural networks (2)
sai anjaneya
 
JAKBOS.COM | Nonton Film Online, Streaming Movie Indonesia
JAKBOS.COM | Nonton Film Online, Streaming Movie IndonesiaJAKBOS.COM | Nonton Film Online, Streaming Movie Indonesia
JAKBOS.COM | Nonton Film Online, Streaming Movie Indonesia
jakbos
 
lecture07.ppt
lecture07.pptlecture07.ppt
lecture07.pptbutest
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101AMIT KUMAR
 
Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.
Mohd Faiz
 
Unit+i
Unit+iUnit+i
Neural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptxNeural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptx
RatuRumana3
 
JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience
JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience
JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience
hirokazutanaka
 
10-Perceptron.pdf
10-Perceptron.pdf10-Perceptron.pdf
10-Perceptron.pdf
ESTIBALYZJIMENEZCAST
 
Lecture 4 neural networks
Lecture 4 neural networksLecture 4 neural networks
Lecture 4 neural networks
ParveenMalik18
 
ARITIFICIAL NEURAL NETWORKS BEGIINER TOPIC
ARITIFICIAL NEURAL NETWORKS BEGIINER TOPICARITIFICIAL NEURAL NETWORKS BEGIINER TOPIC
ARITIFICIAL NEURAL NETWORKS BEGIINER TOPIC
DrBHariChandana
 
Lect1_Threshold_Logic_Unit lecture 1 - ANN
Lect1_Threshold_Logic_Unit  lecture 1 - ANNLect1_Threshold_Logic_Unit  lecture 1 - ANN
Lect1_Threshold_Logic_Unit lecture 1 - ANN
MostafaHazemMostafaa
 
SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1sravanthi computers
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
Mohd Arafat Shaikh
 
Artificial Neural Network in Medical Diagnosis
Artificial Neural Network in Medical DiagnosisArtificial Neural Network in Medical Diagnosis
Artificial Neural Network in Medical Diagnosis
Adityendra Kumar Singh
 
Cognitive Science Unit 4
Cognitive Science Unit 4Cognitive Science Unit 4
Cognitive Science Unit 4CSITSansar
 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
ncct
 

Similar to Annintro (20)

Artificial Neural Network seminar presentation using ppt.
Artificial Neural Network seminar presentation using ppt.Artificial Neural Network seminar presentation using ppt.
Artificial Neural Network seminar presentation using ppt.
 
annintro.ppt
annintro.pptannintro.ppt
annintro.ppt
 
Artificial neural networks (2)
Artificial neural networks (2)Artificial neural networks (2)
Artificial neural networks (2)
 
JAKBOS.COM | Nonton Film Online, Streaming Movie Indonesia
JAKBOS.COM | Nonton Film Online, Streaming Movie IndonesiaJAKBOS.COM | Nonton Film Online, Streaming Movie Indonesia
JAKBOS.COM | Nonton Film Online, Streaming Movie Indonesia
 
lecture07.ppt
lecture07.pptlecture07.ppt
lecture07.ppt
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
 
Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.Artificial neural networks seminar presentation using MSWord.
Artificial neural networks seminar presentation using MSWord.
 
Unit+i
Unit+iUnit+i
Unit+i
 
Neural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptxNeural Networks-introduction_with_prodecure.pptx
Neural Networks-introduction_with_prodecure.pptx
 
JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience
JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience
JAISTサマースクール2016「脳を知るための理論」講義04 Neural Networks and Neuroscience
 
10-Perceptron.pdf
10-Perceptron.pdf10-Perceptron.pdf
10-Perceptron.pdf
 
Lecture 4 neural networks
Lecture 4 neural networksLecture 4 neural networks
Lecture 4 neural networks
 
ARITIFICIAL NEURAL NETWORKS BEGIINER TOPIC
ARITIFICIAL NEURAL NETWORKS BEGIINER TOPICARITIFICIAL NEURAL NETWORKS BEGIINER TOPIC
ARITIFICIAL NEURAL NETWORKS BEGIINER TOPIC
 
Lect1_Threshold_Logic_Unit lecture 1 - ANN
Lect1_Threshold_Logic_Unit  lecture 1 - ANNLect1_Threshold_Logic_Unit  lecture 1 - ANN
Lect1_Threshold_Logic_Unit lecture 1 - ANN
 
SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Project. Thalamocortical
Project. ThalamocorticalProject. Thalamocortical
Project. Thalamocortical
 
Artificial Neural Network in Medical Diagnosis
Artificial Neural Network in Medical DiagnosisArtificial Neural Network in Medical Diagnosis
Artificial Neural Network in Medical Diagnosis
 
Cognitive Science Unit 4
Cognitive Science Unit 4Cognitive Science Unit 4
Cognitive Science Unit 4
 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
 

Recently uploaded

Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Subhajit Sahu
 
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
ewymefz
 
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
axoqas
 
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
pchutichetpong
 
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
74nqk8xf
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
ukgaet
 
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
AbhimanyuSinha9
 
Empowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptxEmpowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptx
benishzehra469
 
My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.
rwarrenll
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
jerlynmaetalle
 
一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单
ocavb
 
一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理
dwreak4tg
 
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
v3tuleee
 
FP Growth Algorithm and its Applications
FP Growth Algorithm and its ApplicationsFP Growth Algorithm and its Applications
FP Growth Algorithm and its Applications
MaleehaSheikh2
 
一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理
一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理
一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理
slg6lamcq
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
axoqas
 
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
Timothy Spann
 
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
NABLAS株式会社
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
ewymefz
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
Opendatabay
 

Recently uploaded (20)

Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
 
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
一比一原版(UPenn毕业证)宾夕法尼亚大学毕业证成绩单
 
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
 
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
 
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
 
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
一比一原版(UVic毕业证)维多利亚大学毕业证成绩单
 
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...Best best suvichar in gujarati english meaning of this sentence as Silk road ...
Best best suvichar in gujarati english meaning of this sentence as Silk road ...
 
Empowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptxEmpowering Data Analytics Ecosystem.pptx
Empowering Data Analytics Ecosystem.pptx
 
My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
 
一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单
 
一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证书)伯明翰城市大学毕业证如何办理
 
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
 
FP Growth Algorithm and its Applications
FP Growth Algorithm and its ApplicationsFP Growth Algorithm and its Applications
FP Growth Algorithm and its Applications
 
一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理
一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理
一比一原版(Adelaide毕业证书)阿德莱德大学毕业证如何办理
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
 
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
 
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单
 
Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
 

Annintro

  • 1. Artificial Neural Networks - Introduction - Peter Andras peter.andras@ncl.ac.uk
  • 2. Overview 1. Biological inspiration 2. Artificial neurons and neural networks 3. Learning processes 4. Learning with artificial neural networks
  • 3. Biological inspiration Animals are able to react adaptively to changes in their external and internal environment, and they use their nervous system to perform these behaviours. An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems. The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
  • 6. Biological inspiration The spikes travelling along the axon of the pre-synaptic neuron trigger the release of neurotransmitter substances at the synapse. The neurotransmitters cause excitation or inhibition in the dendrite of the post-synaptic neuron. The integration of the excitatory and inhibitory signals may produce spikes in the post-synaptic neuron. The contribution of the signals depends on the strength of the synaptic connection.
  • 7. Artificial neurons Neurons work by processing information. They receive and provide information in the form of spikes. The McCulloch-Pitts model: inputs x_1, …, x_n arrive through synapses with weights w_1, …, w_n, and the output y is computed as $z = \sum_{i=1}^{n} w_i x_i, \quad y = H(z)$, where H is a threshold (step) function.
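The threshold rule above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the threshold at zero are assumptions, not part of the original slides.

```python
# McCulloch-Pitts neuron: weighted sum of inputs passed through a
# step function H (threshold at 0 assumed here).
def heaviside(z):
    return 1.0 if z >= 0 else 0.0

def mcculloch_pitts(x, w):
    z = sum(wi * xi for wi, xi in zip(w, x))  # z = sum_i w_i * x_i
    return heaviside(z)                       # y = H(z)

print(mcculloch_pitts([1.0, 1.0], [0.5, 0.5]))    # fires: z = 1.0 >= 0
print(mcculloch_pitts([1.0, 1.0], [-0.5, -0.6]))  # silent: z = -1.1 < 0
```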
  • 8. Artificial neurons The McCulloch-Pitts model: • spikes are interpreted as spike rates; • synaptic strengths are translated into synaptic weights; • excitation means a positive product between the incoming spike rate and the corresponding synaptic weight; • inhibition means a negative product between the incoming spike rate and the corresponding synaptic weight.
  • 9. Artificial neurons Nonlinear generalization of the McCulloch-Pitts neuron: $y = f(x, w)$, where y is the neuron’s output, x is the vector of inputs, and w is the vector of synaptic weights. Examples: the sigmoidal neuron $y = \frac{1}{1 + e^{-w^T x - a}}$ and the Gaussian neuron $y = e^{-\frac{\|x - w\|^2}{2a^2}}$.
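The two example neurons can be written directly from their formulas. A minimal sketch; the function names and the sample inputs are assumptions for illustration.

```python
import math

# Sigmoidal neuron: y = 1 / (1 + exp(-w.x - a))
def sigmoidal(x, w, a):
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z - a))

# Gaussian neuron: y = exp(-||x - w||^2 / (2 a^2))
def gaussian(x, w, a):
    d2 = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-d2 / (2.0 * a ** 2))

print(sigmoidal([1.0, 2.0], [0.5, -0.5], 0.0))  # ~0.3775
print(gaussian([1.0, 2.0], [1.0, 2.0], 1.0))    # 1.0 at the centre
```

Note the different roles of w: the sigmoidal neuron projects x onto w, while the Gaussian neuron measures the distance of x from w.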
  • 10. Artificial neural networks An artificial neural network is composed of many artificial neurons that are linked together according to a specific network architecture. The objective of the neural network is to transform the inputs into meaningful outputs.
  • 11. Artificial neural networks Tasks to be solved by artificial neural networks: • controlling the movements of a robot based on self-perception and other information (e.g., visual information); • deciding the category of potential food items (e.g., edible or non-edible) in an artificial world; • recognizing a visual object (e.g., a familiar face); • predicting where a moving object will go when a robot wants to catch it.
  • 12. Learning in biological systems Learning = learning by adaptation. The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet. The learning happens by adapting the fruit-picking behaviour. At the neural level, learning happens through changes in the synaptic strengths: eliminating some synapses and building new ones.
  • 13. Learning as optimisation The objective of adapting the responses on the basis of the information received from the environment is to achieve a better state. E.g., the animal likes to eat many energy-rich, juicy fruits that make its stomach full and make it feel happy. In other words, the objective of learning in biological organisms is to optimise the amount of available resources or happiness, and in general to achieve a state closer to the optimum.
  • 14. Learning in biological neural networks The learning rules of Hebb: • synchronous activation increases the synaptic strength; • asynchronous activation decreases the synaptic strength. These rules fit with energy minimization principles: maintaining synaptic strength costs energy, so strength should be maintained where it is needed and not where it is unnecessary.
  • 15. Learning principle for artificial neural networks ENERGY MINIMIZATION We need an appropriate definition of energy for artificial neural networks, and having that we can use mathematical optimisation techniques to find how to change the weights of the synaptic connections between neurons. ENERGY = measure of task performance error
  • 17. Neural network mathematics Neural network: input/output transformation $y_{out} = F(x, W)$, where W is the matrix of all weight vectors.
  • 18. MLP neural networks MLP = multi-layer perceptron. Perceptron: $y_{out} = w^T x$. MLP neural network (two hidden layers of sigmoidal neurons and a linear output): $y_k^1 = \frac{1}{1 + e^{-w^{1,k\,T} x - a_{1,k}}},\ k = 1, 2, 3; \quad y^1 = (y_1^1, y_2^1, y_3^1)^T$; $y_k^2 = \frac{1}{1 + e^{-w^{2,k\,T} y^1 - a_{2,k}}},\ k = 1, 2; \quad y^2 = (y_1^2, y_2^2)^T$; $y_{out} = w^{3\,T} y^2 = w_1^3 y_1^2 + w_2^3 y_2^2$.
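The forward pass of this 2-3-2-1 MLP can be sketched as follows. The weight values are arbitrary illustrative numbers, not from the slides.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(x, W, a):
    """One sigmoidal layer: y_k = sigmoid(w_k . x + a_k)."""
    return [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + ak)
            for w, ak in zip(W, a)]

def mlp(x, W1, a1, W2, a2, w3):
    y1 = layer(x, W1, a1)   # first hidden layer (3 neurons)
    y2 = layer(y1, W2, a2)  # second hidden layer (2 neurons)
    return sum(wi * yi for wi, yi in zip(w3, y2))  # linear output

# Illustrative weights (2 inputs -> 3 -> 2 -> 1):
W1 = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]; a1 = [0.0, 0.1, -0.1]
W2 = [[0.5, -0.5, 0.2], [0.1, 0.3, -0.4]];   a2 = [0.0, 0.2]
w3 = [1.0, -1.0]
print(mlp([1.0, 2.0], W1, a1, W2, a2, w3))
```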
  • 19. RBF neural networks RBF = radial basis function: $r(x) = r(\|x - c\|)$, a function that depends only on the distance of x from a centre c. Example: Gaussian RBF $f(x) = e^{-\frac{\|x - w\|^2}{2a^2}}$. Network output: $y_{out} = \sum_{k=1}^{4} w_{2,k}\, e^{-\frac{\|x - w_{1,k}\|^2}{2a_k^2}}$.
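The RBF network output is a weighted sum of Gaussian bumps. A sketch with four basis functions as in the slide; the centres, widths, and output weights are made-up illustrative values.

```python
import math

# RBF network: y_out = sum_k w2_k * exp(-||x - w1_k||^2 / (2 a_k^2))
def rbf(x, centres, widths, w2):
    out = 0.0
    for c, a, wk in zip(centres, widths, w2):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += wk * math.exp(-d2 / (2.0 * a ** 2))
    return out

centres = [[0.0], [1.0], [2.0], [3.0]]  # hidden-layer centres w1_k
widths  = [1.0, 1.0, 1.0, 1.0]          # widths a_k
w2      = [0.2, -0.5, 0.3, 0.1]         # output weights w2_k
print(rbf([1.0], centres, widths, w2))  # ~-0.1832
```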
  • 20. Neural network tasks • control • classification • prediction • approximation These can be reformulated in general as FUNCTION APPROXIMATION tasks. Approximation: given a set of values of a function g(x) build a neural network that approximates the g(x) values for any input x.
  • 21. Neural network approximation Task specification: Data: a set of value pairs $(x^t, y^t)$, with $y^t = g(x^t) + z^t$, where $z^t$ is random measurement noise. Objective: find a neural network that represents the input/output transformation (a function) $F(x, W)$ such that $F(x, W)$ approximates $g(x)$ for every x.
  • 22. Learning to approximate Error measure: $E = \frac{1}{N} \sum_{t=1}^{N} (F(x^t; W) - y^t)^2$. Rule for changing the synaptic weights: $\Delta w_i^j = -c \cdot \frac{\partial E}{\partial w_i^j}, \quad w_i^{j,\,new} = w_i^j + \Delta w_i^j$, where c is the learning parameter (usually a constant).
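The update rule can be demonstrated generically with a finite-difference gradient, which stands in for the analytic derivatives worked out on the later slides. A sketch; the one-weight linear model and the data are illustrative assumptions.

```python
def mse(F, W, data):
    """E = (1/N) * sum_t (F(x_t, W) - y_t)^2"""
    return sum((F(x, W) - y) ** 2 for x, y in data) / len(data)

def grad_step(F, W, data, c=0.1, eps=1e-6):
    """One update w_i <- w_i - c * dE/dw_i, gradient by finite differences."""
    grads = []
    for i in range(len(W)):
        Wp = list(W)
        Wp[i] += eps
        grads.append((mse(F, Wp, data) - mse(F, W, data)) / eps)
    return [wi - c * gi for wi, gi in zip(W, grads)]

# Fit y = 2x with a one-weight linear model F(x, W) = W[0] * x:
F = lambda x, W: W[0] * x
data = [(1.0, 2.0), (2.0, 4.0)]
W = [0.0]
for _ in range(100):
    W = grad_step(F, W, data)
print(W)  # approaches [2.0]
```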
  • 23. Learning with a perceptron Perceptron: $y_{out} = w^T x$. Data: $(x^1, y^1), (x^2, y^2), \ldots, (x^N, y^N)$. Error: $E(t) = (y_{out}(t) - y^t)^2 = (w(t)^T x^t - y^t)^2$. Learning: $w_i(t+1) = w_i(t) - c \cdot \frac{\partial E(t)}{\partial w_i} = w_i(t) - c \cdot 2\,(w(t)^T x^t - y^t)\, x_i^t$, with $w(t)^T x^t = \sum_{j=1}^{m} w_j(t)\, x_j^t$. A perceptron is able to learn a linear function.
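The perceptron rule translates to a short training loop. A sketch under assumptions: the constant 2 from the gradient is absorbed into the learning rate c, and the data samples a noiseless linear function.

```python
# Linear neuron trained by the delta rule:
# w_i(t+1) = w_i(t) - c * (w(t).x_t - y_t) * x_i_t
# (the factor 2 from the gradient is absorbed into c)
def train_perceptron(data, n, c=0.1, epochs=100):
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y  # w(t).x_t - y_t
            w = [wi - c * err * xi for wi, xi in zip(w, x)]
    return w

# Learn the linear function y = 1*x1 + 2*x2 from noiseless samples:
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 2.0), ([1.0, 1.0], 3.0)]
w = train_perceptron(data, n=2)
print(w)  # close to [1.0, 2.0]
```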
  • 24. Learning with RBF neural networks RBF neural network: $y_{out} = F(x, W) = \sum_{k=1}^{M} w_{2,k}\, e^{-\frac{\|x - w_{1,k}\|^2}{2a_k^2}}$. Data: $(x^1, y^1), (x^2, y^2), \ldots, (x^N, y^N)$. Error: $E(t) = (y_{out}(t) - y^t)^2 = \left( \sum_{k=1}^{M} w_{2,k}(t)\, e^{-\frac{\|x^t - w_{1,k}\|^2}{2a_k^2}} - y^t \right)^2$. Learning: $w_{2,i}(t+1) = w_{2,i}(t) - c \cdot \frac{\partial E(t)}{\partial w_{2,i}}$, with $\frac{\partial E(t)}{\partial w_{2,i}} = 2\,(F(x^t, W(t)) - y^t)\, e^{-\frac{\|x^t - w_{1,i}\|^2}{2a_i^2}}$. Only the synaptic weights of the output neuron are modified. An RBF neural network learns a nonlinear function.
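Because only the output weights change, RBF training reduces to a linear update on top of fixed Gaussian features. A sketch with scalar inputs and assumed fixed centres and width; the bump-shaped data is illustrative.

```python
import math

def phi(x, centre, a):
    """Gaussian basis: exp(-||x - w1_k||^2 / (2 a^2)), scalar input here."""
    return math.exp(-(x - centre) ** 2 / (2.0 * a ** 2))

# Only the output weights w2 are adapted:
# w2_i(t+1) = w2_i(t) - c * 2 * (F(x_t, W(t)) - y_t) * phi(x_t, w1_i, a_i)
def train_rbf(data, centres, a, c=0.1, epochs=200):
    w2 = [0.0] * len(centres)
    for _ in range(epochs):
        for x, y in data:
            out = sum(wk * phi(x, ck, a) for wk, ck in zip(w2, centres))
            err = out - y
            w2 = [wk - c * 2.0 * err * phi(x, ck, a)
                  for wk, ck in zip(w2, centres)]
    return w2

centres = [0.0, 1.0, 2.0]                    # fixed hidden centres w1_k
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # a bump to approximate
w2 = train_rbf(data, centres, a=0.5)
out = sum(wk * phi(1.0, ck, 0.5) for wk, ck in zip(w2, centres))
print(out)  # close to 1.0
```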
  • 25. Learning with MLP neural networks MLP neural network with p layers: $y_k^1 = \frac{1}{1 + e^{-w^{1,k\,T} x - a_{1,k}}},\ k = 1, \ldots, M_1; \quad y^1 = (y_1^1, \ldots, y_{M_1}^1)$; $y_k^2 = \frac{1}{1 + e^{-w^{2,k\,T} y^1 - a_{2,k}}},\ k = 1, \ldots, M_2; \quad y^2 = (y_1^2, \ldots, y_{M_2}^2)$; …; $y_{out} = F(x; W) = w^{p\,T} y^{p-1}$. Data: $(x^1, y^1), (x^2, y^2), \ldots, (x^N, y^N)$. Error: $E(t) = (y_{out}(t) - y^t)^2 = (F(x^t; W) - y^t)^2$. It is very complicated to calculate the weight changes.
  • 26. Learning with backpropagation Solution of the complicated learning: • calculate first the changes for the synaptic weights of the output neuron; • calculate the changes backward starting from layer p-1, and propagate backward the local error terms. The method is still relatively complicated but it is much simpler than the original optimisation problem.
  • 27. Learning with general optimisation In general it is enough to have a single layer of nonlinear neurons in a neural network in order to learn to approximate a nonlinear function. In such cases general optimisation may be applied without too much difficulty. Example: an MLP neural network with a single hidden layer: $y_{out} = F(x; W) = \sum_{k=1}^{M} w_k^2 \cdot \frac{1}{1 + e^{-w^{1,k\,T} x - a_{1,k}}}$.
  • 28. Learning with general optimisation Synaptic weight change rules for the output neuron: $w_i^2(t+1) = w_i^2(t) - c \cdot \frac{\partial E(t)}{\partial w_i^2}$, with $\frac{\partial E(t)}{\partial w_i^2} = 2\,(F(x^t, W(t)) - y^t) \cdot \frac{1}{1 + e^{-w^{1,i\,T} x^t - a_{1,i}}}$. Synaptic weight change rules for the neurons of the hidden layer: $w_j^{1,i}(t+1) = w_j^{1,i}(t) - c \cdot \frac{\partial E(t)}{\partial w_j^{1,i}}$, with $\frac{\partial E(t)}{\partial w_j^{1,i}} = 2\,(F(x^t, W(t)) - y^t) \cdot w_i^2 \cdot \frac{e^{-w^{1,i\,T} x^t - a_{1,i}}}{\left(1 + e^{-w^{1,i\,T} x^t - a_{1,i}}\right)^2} \cdot x_j^t$.
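The output-neuron and hidden-layer rules together give a complete training loop for the single-hidden-layer MLP. A sketch under assumptions: scalar input, random initial weights, the factor 2 written out explicitly, and g(x) = x² as an illustrative target function.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w1, a, w2):
    # y = sum_k w2_k * sigmoid(w1_k * x + a_k)  (scalar input for brevity)
    return sum(w2k * sigmoid(w1k * x + ak) for w1k, ak, w2k in zip(w1, a, w2))

def train_mlp(data, M=5, c=0.1, epochs=5000, seed=0):
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(M)]
    a  = [rng.uniform(-1, 1) for _ in range(M)]
    w2 = [rng.uniform(-1, 1) for _ in range(M)]
    for _ in range(epochs):
        for x, y in data:
            h = [sigmoid(w1k * x + ak) for w1k, ak in zip(w1, a)]
            err = 2.0 * (sum(w2k * hk for w2k, hk in zip(w2, h)) - y)
            for k in range(M):
                dh = h[k] * (1.0 - h[k])           # sigmoid derivative
                w1[k] -= c * err * w2[k] * dh * x  # hidden-weight rule
                a[k]  -= c * err * w2[k] * dh      # hidden-bias rule
                w2[k] -= c * err * h[k]            # output-weight rule
    return w1, a, w2

data = [(x / 2.0, (x / 2.0) ** 2) for x in range(-2, 3)]  # samples of g(x) = x^2
w1, a, w2 = train_mlp(data)
mse = sum((predict(x, w1, a, w2) - y) ** 2 for x, y in data) / len(data)
print(mse)  # small after training
```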
  • 29. New methods for learning with neural networks Bayesian learning: the distribution of the neural network parameters is learnt Support vector learning: the minimal representative subset of the available data is used to calculate the synaptic weights of the neurons
  • 30. Summary • Artificial neural networks are inspired by the learning processes that take place in biological systems. • Artificial neurons and neural networks try to imitate the working mechanisms of their biological counterparts. • Learning can be perceived as an optimisation process. • Biological neural learning happens by the modification of the synaptic strength. Artificial neural networks learn in the same way. • The synapse strength modification rules for artificial neural networks can be derived by applying mathematical optimisation methods.
  • 31. Summary • Learning tasks of artificial neural networks can be reformulated as function approximation tasks. • Neural networks can be considered as nonlinear function approximating tools (i.e., linear combinations of nonlinear basis functions), where the parameters of the networks should be found by applying optimisation methods. • The optimisation is done with respect to the approximation error measure. • In general it is enough to have a single hidden layer neural network (MLP, RBF or other) to learn the approximation of a nonlinear function. In such cases general optimisation can be applied to find the change rules for the synaptic weights.