Echo State Networks and Locomotion Patterns

Master Degree in Automation Engineering and Control of Complex Systems
DIPARTIMENTO DI INGEGNERIA ELETTRICA ELETTRONICA E DEI SISTEMI

Student: Vito Strano
Professor: Prof. Eng. Paolo Arena
Aim of the work

- Create a dynamical model able to generate the speed profile of a simulated legged robot from the stepping diagrams drawn from a dynamical simulator.
- The capability of Echo State Networks to model nonlinear dynamical systems in real time is exploited.
- The network is conceived to act as an internal model, receiving the ground-contact sensor signals as input and providing the average velocity profile of the robot as output.
- Echo State Networks with leaky integrate-and-fire model neurons have been implemented.
Echo State Networks

Echo State neural networks (ESN) are a special case of recurrent neural networks (RNN), designed to achieve greater predictive ability. An advantage of RNNs is their correspondence to biological neural networks. In an ESN, only the weights to the output neurons are trained.
Basic Idea

The main idea is to drive a random, large, fixed recurrent neural network with the input signal, thereby inducing in each neuron within this "reservoir" network a nonlinear response signal, and to combine a desired output signal by a trainable linear combination of all of these response signals.
Structure of ESN
- $X(n) = (x_1(n), x_2(n), \dots, x_N(n))$: the hidden-layer (reservoir) neurons; $x_i(n)$ is the output of the $i$-th hidden neuron at time $n$.
- $U(n) = (u_1(n), u_2(n), \dots, u_K(n))$: the input vector.
- $Y(n) = (y_1(n), y_2(n), \dots, y_L(n))$: the output vector.

Each $x_i(n)$ is a function of the network's previous inputs $u(n), u(n-1), \dots$, processed by the network.

The hidden-neuron connectivity should be sparse, to encourage a rich variety of dynamics in the dynamical reservoir; the synaptic weights were initialized with a uniform distribution. The input connections should be sparse as well, as in the initialization sketch below.
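As an illustration, here is a minimal C sketch of such an initialization: a matrix filled sparsely with uniform weights, whose spectral radius is then estimated by power iteration and rescaled to a target value. The density value, the uniform range and the power-iteration scheme are illustrative assumptions, not the toolbox's actual code.

```c
/* Minimal sketch (assumption, not the thesis toolbox code) of sparse
 * reservoir initialization with spectral-radius rescaling. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 30                          /* reservoir size used in the experiments */

/* Set each entry, with probability `density`, to a uniform value in [-0.5, 0.5]. */
static void init_sparse(double W[N][N], double density)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            W[i][j] = ((double)rand() / RAND_MAX < density)
                    ? ((double)rand() / RAND_MAX - 0.5)
                    : 0.0;
}

/* Estimate the spectral radius by power iteration; if the dominant
 * eigenvalue is complex, this is only a rough approximation. */
static double spectral_radius(double W[N][N], int iters)
{
    double v[N], w[N], norm = 0.0;
    for (int i = 0; i < N; i++) v[i] = 1.0 / sqrt((double)N);
    for (int k = 0; k < iters; k++) {
        norm = 0.0;
        for (int i = 0; i < N; i++) {
            w[i] = 0.0;
            for (int j = 0; j < N; j++) w[i] += W[i][j] * v[j];
            norm += w[i] * w[i];
        }
        norm = sqrt(norm);
        if (norm == 0.0) return 0.0;           /* all-zero matrix */
        for (int i = 0; i < N; i++) v[i] = w[i] / norm;
    }
    return norm;                               /* ||W v|| approximates |lambda_max| */
}

int main(void)
{
    static double W[N][N];
    init_sparse(W, 0.1);                       /* hidden density 0.1 */
    double rho = spectral_radius(W, 200);
    if (rho > 0.0) {
        double target = 0.5;                   /* e.g. spectral radius 0.5 */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                W[i][j] *= target / rho;       /* rescale to the target radius */
    }
    printf("estimated radius before scaling: %f\n", rho);
    return 0;
}
```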
The states of the hidden neurons in the dynamical reservoir are calculated by the formula

$X(n+1) = f\left(W_{in}\, u(n) + W_{dr}\, x(n) + W_{back}\, d(n)\right)$

where
- $f$ is the activation function of the hidden neurons;
- $d(n)$ is the teacher in train mode, or the network output at the previous step in test mode;
- $W_{in}$ are the input weights;
- $W_{dr}$ are the hidden (reservoir) weights;
- $W_{out}$ are the output weights;
- $W_{back}$ are the feedback weights.
The states of the output neurons are calculated by the formula

$Y(n+1) = f_{out}\left(W_{out}\,(u(n+1),\, x(n+1),\, y(n))\right)$

where $f_{out}$ is the activation function of the output neurons.

In this application the states of the output neurons are calculated removing the input-output relationship:

$Y(n+1) = f_{out}\left(W_{out}\, X(n+1)\right)$
Leaky integrator

The units in standard sigmoid networks have no memory. For learning slowly and continuously changing systems, it is more adequate to use networks with continuous dynamics. The evolution of a continuous-time leaky-integrator network is

$X(n+1) = (1 - \delta C a)\,x(n) + \delta C\, f\left(W_{in}\, u(n+1) + W_{dr}\, x(n) + W_{back}\, d(n)\right)$
[Plot: esn output (green) vs. teacher (yellow)]
where
- $C$ is a time constant;
- $a$ is the leaking decay rate;
- $\delta$ is the step size.

In the toolbox used here, the variable a is equal to 1 and net.time_cost equals $\delta C$, so the update reduces to $X(n+1) = (1 - \delta C)\,x(n) + \delta C\, f(\cdot)$. The feedback and the spectral radius both affect the time decay of the response.
[Plot: esn output (green) vs. teacher (yellow)]
ESN Train

In the ESN approach, training is solved by the following steps:
- Create a random, sparse dynamical reservoir RNN.
- Attach input units to the reservoir.
- Create output units attached all-to-all to the reservoir.
- If the task requires output feedback, install randomly generated output-to-reservoir connections (all-to-all).
- Drive the dynamical reservoir with the training data; this means writing both the input into the input units and the teacher output into the output units.
- Compute the output weights as the linear regression weights (Wiener-Hopf or pseudoinverse) of the teacher outputs on the reservoir states, and use these weights to create the reservoir-to-output connections.
Output Weights

The desired output weights are the linear regression weights of the desired outputs on the harvested extended states.

Let $R = X'X$ be the correlation matrix of the extended reservoir states, and let $P = X'D$ be the cross-correlation matrix of the states vs. the desired outputs $D$. Then one way to compute the weights is to invoke the Wiener-Hopf (WH) solution

$W_{out} = R^{-1} P$

Another way is to use the pseudoinverse (PINV)

$W_{out} = \operatorname{pinv}(X)\, D$

Both methods are, in principle, equivalent; WH can be ill-conditioned, but it is faster to compute than PINV (much faster if $n$ is large).
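For concreteness, here is a minimal C sketch of the batch Wiener-Hopf step: accumulate $R = X'X$ and $P = X'D$ over the harvested states, then solve $R\,w = P$ by Gaussian elimination. The sizes (30 states, a scalar output) and the plain elimination solver are illustrative assumptions, not the toolbox implementation.

```c
/* Minimal sketch (assumption, not the toolbox code) of the batch
 * Wiener-Hopf solution W_out = R^{-1} P for a single output unit. */
#include <math.h>
#include <string.h>

#define M 30                 /* state (reservoir) dimension */

/* Solve the M x M system R w = p in place, with partial pivoting.
 * Returns -1 if a pivot is too small (R ill-conditioned). */
static int solve(double R[M][M], double p[M], double w[M])
{
    for (int k = 0; k < M; k++) {
        int piv = k;
        for (int i = k + 1; i < M; i++)
            if (fabs(R[i][k]) > fabs(R[piv][k])) piv = i;
        if (fabs(R[piv][k]) < 1e-12) return -1;
        if (piv != k) {
            for (int j = 0; j < M; j++) {
                double t = R[k][j]; R[k][j] = R[piv][j]; R[piv][j] = t;
            }
            double t = p[k]; p[k] = p[piv]; p[piv] = t;
        }
        for (int i = k + 1; i < M; i++) {
            double f = R[i][k] / R[k][k];
            for (int j = k; j < M; j++) R[i][j] -= f * R[k][j];
            p[i] -= f * p[k];
        }
    }
    for (int i = M - 1; i >= 0; i--) {         /* back substitution */
        w[i] = p[i];
        for (int j = i + 1; j < M; j++) w[i] -= R[i][j] * w[j];
        w[i] /= R[i][i];
    }
    return 0;
}

/* Accumulate R = X'X and P = X'D over T harvested states X[n][M]
 * and teacher samples D[n], then solve for the output weights. */
int train_wout(int T, const double (*X)[M], const double *D, double w_out[M])
{
    static double R[M][M], P[M];
    memset(R, 0, sizeof R);
    memset(P, 0, sizeof P);
    for (int n = 0; n < T; n++)
        for (int i = 0; i < M; i++) {
            P[i] += X[n][i] * D[n];
            for (int j = 0; j < M; j++) R[i][j] += X[n][i] * X[n][j];
        }
    return solve(R, P, w_out);
}
```

The PINV route is numerically safer, but it requires the full T x M state matrix at once, which is the price paid for avoiding the possibly ill-conditioned normal equations.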
Central Pattern Generator

The Central Pattern Generator (CPG) contains the key mechanisms needed to generate the rhythmic motion patterns. CPGs are viewed as networks of coupled nonlinear systems (oscillators) with given connections, whose parameters are modulated in order to account for distinct gaits. The emerging solution is a travelling-wave pattern which visits all the motor neurons and thus imposes a specific gait on the controlled robotic structure.

A particular locomotor pattern consists of a series of signals with a well-defined phase shift. This pattern is due to the pattern of neural activities of the CPG.
Consider a ring network of N oscillators (neurons). The neurons fire only one at a time, and each of them is connected to its neighbour with an excitatory (or inhibitory) synapse. A suitable value of the synaptic weight yields a well-defined phase shift in the pattern (a travelling wave); a toy illustration is sketched below.

If we now add n − N further neurons to that network (n being the number of legs), using synchronization via "coupling" or synchronization via "duplicating", and choose the correct synaptic weights, we can create a locomotor pattern.
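The following toy C sketch conveys the travelling-wave idea with a ring of phase oscillators, each pulled toward a fixed lag behind its neighbour. This is only an intuition aid under assumed dynamics (Kuramoto-style phase coupling); the CPG used in the thesis is a different neural model.

```c
/* Toy illustration (not the thesis CPG model): a ring of N phase
 * oscillators locking into a travelling wave with lag 2*pi/N. */
#include <stdio.h>
#include <math.h>

#define N  6                        /* one oscillator per leg */
#define DT 0.01
#define TWO_PI 6.283185307179586

int main(void)
{
    double phi[N];
    double omega = TWO_PI;          /* common intrinsic frequency (1 Hz) */
    double k = 2.0;                 /* coupling strength */
    double lag = TWO_PI / N;        /* target phase shift between neighbours */
    for (int i = 0; i < N; i++) phi[i] = 0.1 * i;   /* arbitrary start */

    for (int step = 0; step < 5000; step++) {
        double dphi[N];
        for (int i = 0; i < N; i++) {
            int prev = (i + N - 1) % N;
            /* each oscillator is pulled toward a fixed lag behind its neighbour */
            dphi[i] = omega + k * sin(phi[prev] - phi[i] - lag);
        }
        for (int i = 0; i < N; i++) phi[i] += DT * dphi[i];
    }
    /* read the gait-diagram row for leg i as: stance while sin(phi[i]) > 0 */
    for (int i = 0; i < N; i++)
        printf("leg %d: phase %.2f rad\n", i, fmod(phi[i], TWO_PI));
    return 0;
}
```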
Locomotor Pattern

In the locomotion patterns a white row represents the stance phase, while a black row represents a swing phase; by studying the phase displacement we can calculate the speed (a toy estimate of the displacement follows below).
- Stance phase: the leg is on the ground.
- Swing phase: the leg is pulled up.
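As a hedged illustration of "studying the phase displacement", the sketch below estimates the displacement between two binary gait rows (1 = stance, 0 = swing) as the circular lag that maximises their cross-correlation. The mapping from displacement to speed comes from the simulator data and is not reproduced here.

```c
/* Illustrative sketch: circular cross-correlation lag between two
 * binary gait rows of period T (an assumed estimator, not thesis code). */
#include <stdio.h>

/* Return the lag in [0, T) maximising sum_t a[t] * b[(t + lag) % T]. */
int phase_lag(const int *a, const int *b, int T)
{
    int best_lag = 0, best = -1;
    for (int lag = 0; lag < T; lag++) {
        int s = 0;
        for (int t = 0; t < T; t++) s += a[t] * b[(t + lag) % T];
        if (s > best) { best = s; best_lag = lag; }
    }
    return best_lag;
}

int main(void)
{
    /* two legs, one period of 8 samples; the second row lags by 2 samples */
    int legA[8] = {1, 1, 1, 1, 0, 0, 0, 0};
    int legB[8] = {0, 0, 1, 1, 1, 1, 0, 0};
    printf("estimated lag: %d samples\n", phase_lag(legA, legB, 8));
    return 0;
}
```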
ESN Train and Test

- In supervised training, one starts with teacher data $d(n)$. In this case we use the locomotion patterns as the input series, and as output the mean value of the speed, calculated over three periods of the AEP (anterior extreme position). In the input sequence, black squares correspond to zero and white squares to one. All of this information is generated using a dynamic simulation environment based on the Open Dynamics Engine platform.
- In the test phase, one starts without teacher data $d(n)$; using only the input time series and the previous output sequence of the ESN, we obtain the desired model behavior.
[Plot: teacher sequence]
Dynamic Simulation of Bio-Robot Behaviors

A dynamic simulator permits simulating the time-varying behavior of a bio-inspired robot in several contexts, through the definition of the real-world constraints and of the physical laws that govern it.

Goals:
- build bio-inspired robots and reproduce their interaction with the real environment;
- implement learning algorithms that simulate the neural activity;
- analyze the decisions taken by the robot after the training phase and verify their effects on the simulated environment;
- test the bio-robotic behavior in scenarios hardly replicable in reality.
There is a separation between the appearance of the objects in the scene (visual model) and the simulated physical reality (physical model); this makes the computation of collision detection simpler for the Graphics Processing Unit. The simulator is written in C++ and includes the software components ODE, OSG and ColladaDom.
Toolbox

Using this new toolbox it is possible to:
- choose different densities of connectivity for the input and the reservoir;
- choose between two update algorithms for the output weights (pseudoinverse or Wiener-Hopf);
- compute the output weights with real-time learning;
- compute the output weights with real-time learning over a time window;
- compute the output weights in one step (batch learning);
- compute the NRMSE (Normalized Root Mean Square Error) to evaluate the results (a minimal sketch follows this list);
- remove the input-output relationship;
- use the leaky integrator.
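For reference, this is a minimal sketch of the NRMSE as commonly defined: the RMS error normalised by the variance of the teacher signal. The exact normalisation convention used by the toolbox is an assumption here.

```c
/* NRMSE sketch: RMS error between output y and teacher d, normalised by
 * the teacher variance (one common convention; the toolbox may differ).
 * Assumes n > 0 and a non-constant teacher (var_d > 0). */
#include <math.h>

double nrmse(const double *y, const double *d, int n)
{
    double mean_d = 0.0, var_d = 0.0, mse = 0.0;
    for (int i = 0; i < n; i++) mean_d += d[i];
    mean_d /= n;
    for (int i = 0; i < n; i++) {
        double e = y[i] - d[i];
        mse   += e * e;
        var_d += (d[i] - mean_d) * (d[i] - mean_d);
    }
    return sqrt(mse / var_d);   /* the 1/n factors cancel */
}
```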
Toolbox – flow diagram

[Figure: flow diagram of the toolbox]
C code

Consider the deployment of the following formulas, to adapt them for potential use on a microcontroller:

$X(n+1) = (1 - \delta C a)\,x(n) + \delta C\, f\left(W_{in}\, u(n+1) + W_{dr}\, x(n) + W_{back}\, d(n)\right)$

$Y(n+1) = f_{out}\left(W_{out}\, X(n+1)\right)$
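A minimal sketch of one deployed update step follows, assuming static weight arrays sized as in the experiments (6 inputs, 30 hidden neurons, 1 output), a tanh reservoir nonlinearity, a linear readout, and a = 1 as in the toolbox. This is an illustration, not the thesis's actual C code.

```c
/* Minimal sketch (assumption, not the thesis C code) of one ESN step
 * on static arrays, as would suit a microcontroller deployment. */
#include <math.h>

#define NIN  6
#define NHID 30
#define NOUT 1

static double W_in[NHID][NIN];     /* input weights            */
static double W_dr[NHID][NHID];    /* reservoir weights        */
static double W_back[NHID][NOUT];  /* feedback weights         */
static double W_out[NOUT][NHID];   /* trained output weights   */
static double x[NHID];             /* reservoir state          */
static double dC = 0.1;            /* net.time_cost = delta * C */

/* One time step: u is the new input u(n+1), d the fed-back signal d(n)
 * (teacher in training, previous output in test); returns y(n+1). */
double esn_step(const double u[NIN], const double d[NOUT])
{
    double xnew[NHID];
    for (int i = 0; i < NHID; i++) {
        double s = 0.0;
        for (int j = 0; j < NIN;  j++) s += W_in[i][j]   * u[j];
        for (int j = 0; j < NHID; j++) s += W_dr[i][j]   * x[j];
        for (int j = 0; j < NOUT; j++) s += W_back[i][j] * d[j];
        xnew[i] = (1.0 - dC) * x[i] + dC * tanh(s);  /* leaky update, a = 1 */
    }
    for (int i = 0; i < NHID; i++) x[i] = xnew[i];
    double y = 0.0;                                  /* linear readout */
    for (int j = 0; j < NHID; j++) y += W_out[0][j] * x[j];
    return y;
}
```

In test mode the caller feeds the previous output back as d, matching the feedback scheme described earlier.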
Results

In the following, some results obtained with different network configurations on two different datasets are reported.

Summary of the results:
- Increasing the number of neurons, the network quickly reaches the average speed value of the teacher;
- a smaller feedback gain produces a higher frequency of oscillation;
- with a small time constant, the network has a sinusoidal behavior;
- a small spectral radius produces small fluctuations;
- a large spectral radius produces large fluctuations;
- combining a large time constant and a large spectral radius, the network with linear output behaves similarly to the one with tanh output.
Batch

Case 2) TRAIN
- n. input: 6
- n. hidden: 30
- output activation: tanh
- feedback: 1
- spectral radius: 0.5
- input density: 0.1
- hidden density: 0.1
- leaky: no
- Learn NRMSE: 0.67289
- Test NRMSE: 0.87892
[Train plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Case 2) TEST
- n. input: 6
- n. hidden: 30
- output activation: tanh
- feedback: 1
- spectral radius: 0.5
- input density: 0.1
- hidden density: 0.1
- leaky: no
- Learn NRMSE: 0.67289
- Test NRMSE: 0.87892
[Test plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Case 17) TRAIN
- n. input: 6
- n. hidden: 30
- output activation: linear
- feedback: 1
- spectral radius: 0.01
- input density: 0.1
- hidden density: 0.1
- leaky: 0.1
- Learn NRMSE: 0.91589
- Test NRMSE: 0.95269

[Train plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Case 17) TEST
- n. input: 6
- n. hidden: 30
- output activation: linear
- feedback: 1
- spectral radius: 0.01
- input density: 0.1
- hidden density: 0.1
- leaky: 0.1
- Learn NRMSE: 0.91589
- Test NRMSE: 0.95269

[Test plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Real-time after batch
Case 24) TRAIN
- n. input: 6
- n. hidden: 30
- output activation: linear
- feedback: 1
- spectral radius: 0.99
- input density: 0.1
- hidden density: 0.1
- leaky: 0.1
- Learn NRMSE: 2.5977
- Test NRMSE: 2.5333

[Train plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Case 24) TEST
- n. input: 6
- n. hidden: 30
- output activation: linear
- feedback: 1
- spectral radius: 0.99
- input density: 0.1
- hidden density: 0.1
- leaky: 0.1
- Learn NRMSE: 2.5977
- Test NRMSE: 2.5333

[Test plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Real-time time window
Case 27) TRAIN
- n. input: 6
- n. hidden: 30
- output activation: linear
- feedback: 1
- spectral radius: 0.5
- input density: 0.1
- hidden density: 0.1
- leaky: 0.7
- time window: 2000 s.
- Learn NRMSE: 0.9102
- Test NRMSE: 1.4849

[Train plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
Case 27) TEST
- n. input: 6
- n. hidden: 30
- output activation: linear
- feedback: 1
- spectral radius: 0.5
- input density: 0.1
- hidden density: 0.1
- leaky: 0.7
- time window: 2000 s.
- Learn NRMSE: 0.9102
- Test NRMSE: 1.4849

[Test plot: esn output (green), mean esn (blue), mean gait (red), teacher (yellow)]
ESN Test – C code

In the following we show a comparison between the Matlab test and the C test.

Network configuration:
- n. input: 6
- n. hidden: 30
- output activation: linear
- feedback: 1
- spectral radius: 0.99
- input density: 0.1
- hidden density: 0.1
- leaky: 0.1
- Test NRMSE: 0.8915
[Test plot: esn output (green)]
[Test plot: C output (blue)]

Matlab output vs. C output: the error has zero average.

[Plot: error between the Matlab and C outputs]
Conclusion

- The Echo State Network is a recurrent neural network whose structure is well suited for use in biologically inspired systems. In an ESN the dominant changes are in the output weights. In cognitive neuroscience, a related mechanism has been investigated by Peter F. Dominey in the context of modeling processing in mammalian brains, especially speech recognition in humans.
- Tests have shown that the network response levels out at the average speed obtained with the dynamic simulator. By varying the network parameters in an appropriate manner we are able to follow these values more faithfully; in addition, the introduction of a leaky integrator allows us to reproduce the behavior of a first-order artificial neuron.
- The network is robust to disturbances, since it was not necessary to filter the input signals. In the case of a tanh output activation function it is necessary to scale the teacher appropriately, so as to avoid saturation.
- The algorithm used to compute the output weights is unfortunately not suitable for a biologically inspired network; in conclusion, it would be appropriate to adopt a biologically inspired algorithm in place of the pseudoinverse or Wiener-Hopf solutions.
- The C implementation will allow us to use this network on microcontrollers in the future.