Echo State Networks and Locomotion Patterns
Master Degree in Automation Engineering and Control of Complex Systems
DIPARTIMENTO DI INGEGNERIA ELETTRICA ELETTRONICA E DEI SISTEMI
Student: Vito Strano
Professor: Prof. Eng. Paolo Arena
 Create a dynamical model able to generate the speed profile of a simulated legged robot from the stepping diagrams drawn from a dynamical simulator.
 The capability of Echo State networks to model nonlinear dynamical systems in real time is exploited.
 The network is conceived to act as an internal model that receives the ground-contact sensor signals as input and provides the average velocity profile of the robot as output.
 Echo State networks with leaky integrate-and-fire model neurons have been implemented.
Aim of the work
Echo State networks
Echo State neural networks (ESN): a special case of recurrent neural networks (RNN), with the goal of achieving greater predictive ability. An advantage of RNNs is their correspondence to biological neural networks. In an ESN, only the weights to the output neurons are trained.
The main idea is to drive a random, large, fixed recurrent neural network with the input signal, thereby inducing in each neuron within this "reservoir" network a nonlinear response signal, and to obtain a desired output signal as a trainable linear combination of all of these response signals.
Basic Idea
Structure of ESN
 $X(n) = (x_1(n), x_2(n), \dots, x_N(n))$
hidden layer neurons (reservoir).
 $x_i(n)$
output of the $i$-th hidden neuron at time $n$.
 $U(n) = (u_1(n), u_2(n), \dots, u_K(n))$
input vector.
 $Y(n) = (y_1(n), y_2(n), \dots, y_L(n))$
output vector.
Each $x_i(n)$ is a function of the network's previous inputs $u(n), u(n-1), \dots$, processed by the network.
The hidden connectivity should be sparse, to encourage a rich variety of dynamics in the dynamical reservoir; the synaptic weights were initialized from a uniform distribution. The input connections should be sparse as well.
Structure of ESN
The states of the hidden neurons in the "dynamical reservoir" are calculated by the formula
$X(n+1) = f(W_{in}\,u(n+1) + W_{dr}\,x(n) + W_{back}\,d(n))$
where
 $f$ is the activation function of the hidden neurons
 $d(n)$ is the teacher in train mode, or the network output of the previous step in test mode
 $W_{in}$ is the input weight matrix
 $W_{dr}$ is the hidden (reservoir) weight matrix
 $W_{out}$ is the output weight matrix
 $W_{back}$ is the feedback weight matrix.
Structure of ESN
The states of the output neurons are calculated by the formula
$Y(n+1) = f_{out}(W_{out}\,[u(n+1);\,x(n+1);\,y(n)])$
where
 $f_{out}$ is the activation function of the output neurons.
In this application the states of the output neurons are calculated removing the input-output relationship:
$Y(n+1) = f_{out}(W_{out}\,X(n+1))$
Structure of ESN
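To make the two update formulas concrete, here is a minimal C sketch of one ESN step, in the spirit of the microcontroller deployment discussed later. The sizes (N_IN, N_RES, N_OUT), the dense weight matrices, f = tanh and a linear f_out are assumptions made for illustration; the toolbox actually uses sparse connectivity.

```c
#include <math.h>

#define N_IN   6    /* input units (assumed, as in the experiments below) */
#define N_RES  30   /* reservoir (hidden) units                           */
#define N_OUT  1    /* output units                                       */

/* Dense weight matrices for illustration; the toolbox uses sparse ones. */
static double W_in[N_RES][N_IN];     /* input weights           */
static double W_dr[N_RES][N_RES];    /* reservoir weights       */
static double W_back[N_RES][N_OUT];  /* output feedback weights */
static double W_out[N_OUT][N_RES];   /* trained output weights  */

/* One ESN step: x <- f(W_in u + W_dr x + W_back d), y <- f_out(W_out x),
 * where d is the teacher in train mode, the previous output in test mode. */
void esn_step(const double u[N_IN], const double d[N_OUT],
              double x[N_RES], double y[N_OUT])
{
    double x_new[N_RES];
    for (int i = 0; i < N_RES; i++) {
        double s = 0.0;
        for (int k = 0; k < N_IN;  k++) s += W_in[i][k]   * u[k];
        for (int j = 0; j < N_RES; j++) s += W_dr[i][j]   * x[j];
        for (int l = 0; l < N_OUT; l++) s += W_back[i][l] * d[l];
        x_new[i] = tanh(s);                       /* f = tanh */
    }
    for (int i = 0; i < N_RES; i++) x[i] = x_new[i];
    for (int l = 0; l < N_OUT; l++) {
        double s = 0.0;
        for (int i = 0; i < N_RES; i++) s += W_out[l][i] * x[i];
        y[l] = s;                                 /* f_out = identity */
    }
}
```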
The units in standard sigmoid networks have no memory.
For learning slowly and continuously changing systems, it is more adequate to use networks with continuous dynamics.
The evolution of a continuous-time leaky-integrator network is
$X(n+1) = (1 - \delta C a)\,x(n) + \delta C\, f(W_{in}\,u(n+1) + W_{dr}\,x(n) + W_{back}\,d(n))$
Leaky integrator
[Figure: esn output (green) vs. teacher (yellow)]
where
 $C$ is a time constant
 $a$ is the leaking decay rate
 $\delta$ is the step size.
In our toolbox the variable "a" is equal to 1 and net.time_cost equals $\delta C$.
The feedback and the spectral radius affect the time decay of the response.
[Figure: esn output (green) vs. teacher (yellow)]
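A minimal C sketch of the corresponding leaky-integrator step, reusing the definitions of the earlier sketch; here delta_C plays the role of net.time_cost and a is the leaking decay rate (a = 1 in our case).

```c
/* Leaky-integrator update:
 * x <- (1 - delta*C*a) * x + delta*C * f(W_in u + W_dr x + W_back d).
 * delta_C corresponds to net.time_cost in the toolbox; a = 1 in our case. */
void esn_step_leaky(const double u[N_IN], const double d[N_OUT],
                    double x[N_RES], double delta_C, double a)
{
    double x_new[N_RES];
    for (int i = 0; i < N_RES; i++) {
        double s = 0.0;
        for (int k = 0; k < N_IN;  k++) s += W_in[i][k]   * u[k];
        for (int j = 0; j < N_RES; j++) s += W_dr[i][j]   * x[j];
        for (int l = 0; l < N_OUT; l++) s += W_back[i][l] * d[l];
        x_new[i] = (1.0 - delta_C * a) * x[i] + delta_C * tanh(s);
    }
    for (int i = 0; i < N_RES; i++) x[i] = x_new[i];
}
```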
In the ESN approach, training is solved by the following steps:
 Create a random dynamical sparse reservoir RNN
 Attach input units to the reservoir
 Create output units attached all-to-all to the reservoir
 If the task requires output feedback, install randomly generated output-to-reservoir connections (all-to-all).
 Drive the dynamical reservoir with the training data; this means writing both the input into the input units and the teacher output into the output units.
 Compute the output weights as the linear regression weights (Wiener-Hopf or pseudoinverse) of the teacher outputs on the reservoir states, and use these weights to create the reservoir-to-output connections (a C sketch is given after the Output Weights slide below).
ESN Train
The desired output weights are the linear regression weights of the desired
outputs on the harvested extended states.
Let $R = X'X$ be the correlation matrix of the extended reservoir states, and let $P = X'D$ be the cross-correlation matrix of the states vs. the desired outputs. Then one way to compute $W_{out}$ is to invoke the Wiener-Hopf (WH) solution
$W_{out} = R^{-1}P$
Another way is to use the pseudoinverse (PINV)
$W_{out} = \mathrm{pinv}(X)\,D$
Both methods are in principle equivalent, but WH can be ill-conditioned; on the other hand, it is faster to compute than PINV (much faster if $n$ is large).
Output Weights
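Under the same assumptions as the earlier sketches (dense matrices, sizes N_RES and N_OUT, states stored row-major as a T x N_RES array), the Wiener-Hopf solution might be coded as follows. A plain Gauss-Jordan solve is used for brevity; a real implementation would prefer a pivoted or regularized solver, precisely because R can be ill-conditioned.

```c
/* Wiener-Hopf solution W_out = R^{-1} P, with R = X'X and P = X'D,
 * where X is T x N_RES (harvested states) and D is T x N_OUT (teacher). */
void wiener_hopf(const double *X, const double *D, int T,
                 double W[N_OUT][N_RES])
{
    static double R[N_RES][N_RES], P[N_RES][N_OUT];
    /* correlation R = X'X and cross-correlation P = X'D */
    for (int i = 0; i < N_RES; i++) {
        for (int j = 0; j < N_RES; j++) {
            double s = 0.0;
            for (int t = 0; t < T; t++) s += X[t*N_RES + i] * X[t*N_RES + j];
            R[i][j] = s;
        }
        for (int l = 0; l < N_OUT; l++) {
            double s = 0.0;
            for (int t = 0; t < T; t++) s += X[t*N_RES + i] * D[t*N_OUT + l];
            P[i][l] = s;
        }
    }
    /* solve R * w = P by Gauss-Jordan elimination (no pivoting: sketch only) */
    for (int k = 0; k < N_RES; k++) {
        double piv = R[k][k];
        for (int j = 0; j < N_RES; j++) R[k][j] /= piv;
        for (int l = 0; l < N_OUT; l++) P[k][l] /= piv;
        for (int i = 0; i < N_RES; i++) {
            if (i == k) continue;
            double m = R[i][k];
            for (int j = 0; j < N_RES; j++) R[i][j] -= m * R[k][j];
            for (int l = 0; l < N_OUT; l++) P[i][l] -= m * P[k][l];
        }
    }
    for (int l = 0; l < N_OUT; l++)
        for (int i = 0; i < N_RES; i++)
            W[l][i] = P[i][l];
}
```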
The Central Pattern Generator (CPG) contains the key mechanisms needed to generate the rhythmic motion patterns.
CPGs are viewed as networks of coupled nonlinear systems
(oscillators) with given connections and parameters to be
modulated in order to account for distinct gaits. The emerging
solution is a travelling wave pattern which visits all the motor
neurons and thus imposes a specific gait to the controlled
robotic structure.
A particular locomotor pattern consists of a series of signals
with a well-defined phase shift. This pattern is due to the
pattern of neural activities of the CPG.
Central Pattern generator
A ring network of N oscillators (neurons).
Only one neuron fires at a time, and each of them is connected to its neighbor with an excitatory (or inhibitory) synapse.
A suitable value of the synaptic weight yields a well-defined phase of the pattern (traveling wave).
If we now add n − N further neurons to that network (n being the number of legs), using synchronization via "coupling" or synchronization via "duplicating", and choose the correct synaptic weights, we can create a locomotor pattern (see the sketch below).
Central Pattern generator
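As an illustration of the travelling-wave mechanism, here is a small C sketch that reduces each CPG neuron to a phase oscillator coupled to its ring neighbour with a fixed phase lag. The frequency, coupling strength and the rule "stance while the oscillator output is positive" are assumptions made for illustration only, not the actual CPG dynamics.

```c
#include <math.h>
#include <stdio.h>

#define N_OSC 6   /* ring of N oscillators, one per leg (assumption) */

int main(void)
{
    const double two_pi = 6.283185307179586;
    double theta[N_OSC] = {0.3};            /* phases, slightly perturbed  */
    const double omega  = two_pi;           /* intrinsic frequency [rad/s] */
    const double k      = 5.0;              /* synaptic coupling strength  */
    const double lag    = two_pi / N_OSC;   /* desired phase shift per leg */
    const double dt     = 0.001;

    for (int step = 0; step < 20000; step++) {
        double dth[N_OSC];
        /* each oscillator is pulled toward its ring neighbour shifted by
         * `lag`, so the stable solution is a travelling wave */
        for (int i = 0; i < N_OSC; i++) {
            int prev = (i + N_OSC - 1) % N_OSC;
            dth[i] = omega + k * sin(theta[prev] - theta[i] - lag);
        }
        for (int i = 0; i < N_OSC; i++) theta[i] += dt * dth[i];
        if (step % 1000 == 0) {             /* print a stepping diagram    */
            for (int i = 0; i < N_OSC; i++)
                putchar(sin(theta[i]) > 0.0 ? '.' : '#'); /* '.' stance, '#' swing */
            putchar('\n');
        }
    }
    return 0;
}
```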
In the locomotion patterns a white row represents the stance phase, while a black row represents the swing phase; by studying the phase displacement we can calculate the speed.
 Stance phase: the leg is on the ground.
 Swing phase: the leg is pulled up.
Locomotor Pattern
 In supervised training, one starts with teacher data $d(n)$.
In this case we use the locomotion patterns as the input series and, as output, the mean value of the speed calculated over three periods of the AEP (anterior extreme position).
In the input sequence black squares correspond to zero and white squares to one.
All of this information is generated using a dynamic simulation environment based on the Open Dynamics Engine platform.
 In the test phase, one starts without teacher data $d(n)$; using only the input time series and the previous output sequence of the ESN, we obtain the desired model behavior.
ESN Train and Test
[Figure: Teacher sequence]
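Putting train and test together under the assumptions of the earlier sketches: states are harvested with teacher forcing, the output weights are obtained with the Wiener-Hopf sketch above, and the test phase then runs free. The initial washout period, which would normally be discarded before the regression, is omitted for brevity.

```c
/* Train: drive the reservoir with teacher forcing (the fed-back signal is
 * the teacher, not the network output), harvest the states, then solve for
 * the output weights. X_states must hold T x N_RES doubles.               */
void esn_train(const double u_seq[][N_IN], const double d_seq[][N_OUT],
               int T, double delta_C, double *X_states)
{
    double x[N_RES] = {0};
    for (int n = 0; n < T; n++) {
        const double *d_prev = (n > 0) ? d_seq[n - 1] : d_seq[0];
        esn_step_leaky(u_seq[n], d_prev, x, delta_C, 1.0);  /* a = 1 */
        for (int i = 0; i < N_RES; i++) X_states[n*N_RES + i] = x[i];
    }
    wiener_hopf(X_states, &d_seq[0][0], T, W_out);
}
```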
A dynamic simulator permits simulating the time-varying behavior of a bio-inspired robot in several contexts, through the definition of the real-world constraints and the physical laws that govern it.
Goals:
 build bio-inspired robots and reproduce their interaction with the real environment;
 implement learning algorithms that simulate the neural activity;
 analyze the decisions taken by the robot after the training phase and verify their effects on the simulated environment;
 test the bio-robotic behavior in scenarios hardly replicable in reality.
Dynamic Simulation of Bio-Robot behaviors
There is a separation between the appearance of the objects in the scene (visual model) and the simulated physical reality (physical model).
The computation of the collision detection is simpler for the Graphics Processing Unit.
The simulator is written in C++ and includes the software components ODE, OSG and ColladaDom.
Using this new toolbox it is possible to:
 choose different densities of connectivity for the input and the reservoir
 choose between two update algorithms for the output weights (pseudoinverse or Wiener-Hopf)
 compute the output weights in real-time learning
 compute the output weights in real-time learning with a time window
 compute the output weights in one step (batch learning)
 compute the NRMSE (Normalized Root Mean Square Error) to evaluate the results (see the sketch below)
 remove the input-output relationship
 use a leaky integrator.
Toolbox
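For reference, the NRMSE used to score the results can be computed as below; normalization by the variance of the teacher signal is assumed here, which is the usual convention.

```c
#include <math.h>

/* NRMSE between network output y and teacher d over T samples:
 * sqrt( mean((y - d)^2) / var(d) ). */
double nrmse(const double *y, const double *d, int T)
{
    double mean_d = 0.0;
    for (int t = 0; t < T; t++) mean_d += d[t];
    mean_d /= T;

    double mse = 0.0, var_d = 0.0;
    for (int t = 0; t < T; t++) {
        double e = y[t] - d[t];
        mse   += e * e;
        var_d += (d[t] - mean_d) * (d[t] - mean_d);
    }
    return sqrt(mse / var_d);   /* the 1/T factors cancel */
}
```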
Toolbox – flow diagram
Consider the deployment of the following formulas, to adapt them for potential use on a microcontroller.
$X(n+1) = (1 - \delta C a)\,x(n) + \delta C\, f(W_{in}\,u(n+1) + W_{dr}\,x(n) + W_{back}\,d(n))$
$Y(n+1) = f_{out}(W_{out}\,X(n+1))$
C code
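A sketch of the free-running (test-mode) loop that such a deployment would run, reusing esn_step_leaky and the trained W_out from the earlier sketches; the teacher d(n) is replaced by the network's previous output, as in the test phase described earlier.

```c
/* Free-running test loop: feed back the previous network output in place
 * of the teacher, and record the linear output at every step.            */
void esn_run(const double u_seq[][N_IN], int T, double y_seq[][N_OUT],
             double delta_C)
{
    double x[N_RES] = {0};                 /* reservoir state */
    double y[N_OUT] = {0};                 /* previous output */
    for (int n = 0; n < T; n++) {
        esn_step_leaky(u_seq[n], y, x, delta_C, 1.0);   /* a = 1 */
        for (int l = 0; l < N_OUT; l++) {
            double s = 0.0;
            for (int i = 0; i < N_RES; i++) s += W_out[l][i] * x[i];
            y[l] = s;                      /* f_out linear    */
            y_seq[n][l] = y[l];
        }
    }
}
```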
In the following, some results obtained with different network configurations using two different datasets.
Summary of results
 Increasing the number of neurons, the network quickly reaches the average speed value of the teacher;
 A smaller feedback yields a higher oscillation frequency;
 With a small time constant, the network exhibits a sinusoidal behavior;
 A small spectral radius yields small fluctuations;
 A large spectral radius yields large fluctuations;
 Combining a large time constant and a large spectral radius, a network with linear output behaves similarly to one with tanh output.
Results
 Case 2) TRAIN
 n. input: 6
 n. hidden: 30
 output activation: tanh
 feedback: 1
 spectral radius: 0.5
 input density: 0.1
 hidden density: 0.1
 leaky: no
 Learn NRMSE: 0.67289
 Test NRMSE: 0.87892
Batch
[Figure: Train: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Batch
 Case 2) TEST
 n. input: 6
 n. hidden: 30
 output activation: tanh
 feedback: 1
 spectral radius: 0.5
 input density: 0.1
 hidden density: 0.1
 leaky: no
 Learn NRMSE: 0.67289
 Test NRMSE: 0.87892
[Figure: Test: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
[Figure: Train: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Batch
 Case 17) TRAIN
 n. input: 6
 n. hidden: 30
 output activation: linear
 feedback: 1
 spectral radius: 0.01
 input density: 0.1
 hidden density: 0.1
 leaky: 0.1
 Learn NRMSE: 0.91589
 Test NRMSE: 0.95269
[Figure: Test: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Batch
 Case 17) TEST
 n. input: 6
 n. hidden: 30
 output activation: linear
 feedback: 1
 spectral radius: 0.01
 input density: 0.1
 hidden density: 0.1
 leaky: 0.1
 Learn NRMSE: 0.91589
 Test NRMSE: 0.95269
[Figure: Train: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Real-time after batch
 Case 24) TRAIN
 n. input: 6
 n. hidden: 30
 output activation: linear
 feedback: 1
 spectral radius: 0.99
 input density: 0.1
 hidden density: 0.1
 leaky: 0.1
 Learn NRMSE: 2.5977
 Test NRMSE: 2.5333
[Figure: Test: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Real-time after batch
 Case 24) TEST
 n. input: 6
 n. hidden: 30
 output activation: linear
 feedback: 1
 spectral radius: 0.99
 input density: 0.1
 hidden density: 0.1
 leaky: 0.1
 Learn NRMSE: 2.5977
 Test NRMSE: 2.5333
[Figure: Train: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Real-time time window
 Case 27) TRAIN
 n. input: 6
 n. hidden: 30
 output activation: linear
 feedback: 1
 spectral radius: 0.5
 input density: 0.1
 hidden density: 0.1
 leaky: 0.7
 Time window 2000 s.
 Learn NRMSE: 0.9102
 Test NRMSE: 1.4849
[Figure: Test: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Real-time time window
 Case 27) TEST
 n. input: 6
 n. hidden: 30
 output activation: linear
 feedback: 1
 spectral radius: 0.5
 input density: 0.1
 hidden density: 0.1
 leaky: 0.7
 Time window 2000 s.
 Learn NRMSE: 0.9102
 Test NRMSE: 1.4849
In the following we show a comparison between the Matlab test and the C test.
Network configuration :
 n. input: 6
 n. hidden: 30
 output activation: linear
 feedback: 1
 spectral radius: 0.99
 input density: 0.1
 hidden density: 0.1
 leaky: 0.1
 Test NRMSE 0.8915
ESN Test – C code
ESN Test – C code
[Figure: Test: esn output (green), Matlab]
[Figure: Test: C output (blue)]
Matlab output vs. C output: the error has zero average.
[Figure: error between Matlab and C outputs]
 The Echo State network is a recurrent neural network with a structure that is well suited for use in biologically inspired systems. In an ESN the dominant changes are in the output weights. In cognitive neuroscience, a related mechanism has been investigated by Peter F. Dominey in the context of modeling processing in mammalian brains, especially speech recognition in humans.
 Tests have shown that the network response levels out at the average speed obtained with the dynamic simulator. By varying the network parameters appropriately we are able to follow these values more faithfully; in addition, the introduction of a leaky integrator allows us to realize the behavior of a first-order artificial neuron.
 The network is robust to disturbances: it was not necessary to filter the input signals. In the case of a tanh output activation function it is necessary to scale the teacher appropriately, so as to avoid saturation.
 Unfortunately, the algorithm for computing the output weights is not suitable for a biologically inspired network; in conclusion, it would be appropriate to use a biologically inspired algorithm in place of the pseudoinverse or Wiener-Hopf.
 The C implementation will allow us to use this network on microcontrollers in the future.
Conclusion