4. RECURRENT NEURAL NETWORKS
• DYNAMICAL NEURAL NETWORK MODELS NATURALLY
SUITABLE FOR PROCESSING SEQUENTIAL FORMS OF DATA
(TIME-SERIES)
• INTERNAL DYNAMICS ENABLE PROCESSING OF ARBITRARILY LONG SEQUENCES
[Figure: RNN architecture. The input x(t) feeds a dynamical recurrent representation layer with hidden state h(t); a readout produces the output y(t).]
h(t) = tanh(U x(t) + W h(t−1))
y(t) = f_Y(V h(t))
where h(t) is the state, x(t) the input, h(t−1) the previous state, and y(t) the output; U, W, and V are the tuned parameters.
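A minimal numpy sketch of the state update and readout above; the sizes, the random initialization, and the choice of the identity for f_Y are illustrative assumptions, not prescribed by the slide.

```python
import numpy as np

# Illustrative sizes; U, W, V play the roles of the tuned parameters above.
n_in, n_hid, n_out = 3, 50, 2
rng = np.random.default_rng(0)
U = rng.normal(size=(n_hid, n_in))    # input-to-hidden weights
W = rng.normal(size=(n_hid, n_hid))   # recurrent (hidden-to-hidden) weights
V = rng.normal(size=(n_out, n_hid))   # hidden-to-readout weights

def step(x_t, h_prev):
    """h(t) = tanh(U x(t) + W h(t-1)); y(t) = V h(t), taking f_Y as the identity."""
    h_t = np.tanh(U @ x_t + W @ h_prev)
    return h_t, V @ h_t

h = np.zeros(n_hid)
for x_t in rng.normal(size=(10, n_in)):   # a toy length-10 input sequence
    h, y = step(x_t, h)
```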
5. TRAINING RECURRENT NEURAL NETS
• GRADIENTS MIGHT VANISH OR EXPLODE THROUGH MANY TRANSFORMATIONS (A TOY ILLUSTRATION FOLLOWS THE REFERENCES BELOW)
• DIFFICULT TO TRAIN ON LONG-TERM DEPENDENCIES
• TRAINING RNNS IS SLOW
Bengio, Yoshua, Patrice Simard, and Paolo Frasconi. "Learning long-term dependencies with gradient descent is difficult." IEEE Transactions on Neural Networks 5.2 (1994): 157-166.
Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." ICML 2013.
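A toy illustration of why gradients vanish or explode (my own example, not from the slides): backpropagation through T steps multiplies T Jacobians, so the gradient norm scales roughly like the T-th power of the recurrent matrix's spectral radius.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 100
for scale in (0.5, 1.5):
    # Gaussian W scaled so its spectral radius is roughly `scale`.
    W = scale * rng.normal(size=(n, n)) / np.sqrt(n)
    g = np.ones(n)                 # gradient arriving at the last time step
    for _ in range(T):
        g = W.T @ g                # linearized backprop step (the tanh derivative
                                   # factor, always <= 1, only shrinks g further)
    print(f"scale={scale}: gradient norm after {T} steps = {np.linalg.norm(g):.3e}")
# Typically prints a vanishingly small norm for 0.5 and a huge one for 1.5.
```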
6. RESERVOIR COMPUTING
FOCUS ON THE DYNAMICAL SYSTEM:
• THE RECURRENT HIDDEN LAYER IS A (DISCRETE-TIME) NON-LINEAR & NON-AUTONOMOUS DYNAMICAL SYSTEM
• TRAIN ONLY THE OUTPUT FUNCTION
• MUCH FASTER & MORE LIGHTWEIGHT TO TRAIN
• SPEED-UP ≈ ×100
• SCALABLE FOR DISTRIBUTED LEARNING AT THE EDGE
[Figure: reservoir computing architecture. The input x(t) drives an untrained dynamical system with state h(t); only the readout producing y(t) is trained.]
h(t) = tanh(U x(t) + W h(t−1))
where U and W are randomized, untrained parameters: the reservoir.
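A minimal echo state network sketch of this scheme, under illustrative assumptions (sizes, a spectral radius of 0.9, a toy delay task): the reservoir stays untrained, and only the linear readout V is fit in closed form by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 500

# Untrained, randomized reservoir parameters.
U = rng.uniform(-1, 1, size=(n_res, n_in))
W = rng.uniform(-1, 1, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

X = rng.uniform(-1, 1, size=(T, n_in))      # toy input sequence
Y = np.roll(X, 5, axis=0)                   # toy target: the input delayed by 5 steps

# Run the reservoir: h(t) = tanh(U x(t) + W h(t-1)).
H = np.zeros((T, n_res))
h = np.zeros(n_res)
for t in range(T):
    h = np.tanh(U @ X[t] + W @ h)
    H[t] = h

# Train only the readout, by ridge regression: V = Y^T H (H^T H + lam I)^-1.
lam = 1e-6
V = Y.T @ H @ np.linalg.inv(H.T @ H + lam * np.eye(n_res))
Y_pred = H @ V.T                            # readout predictions y(t) = V h(t)
```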
8. RESERVOIR COMPUTING – INITIALIZATION
h(t) = tanh(ω U x(t) + ρ W h(t−1))
• HOW TO SCALE THE WEIGHT MATRICES?
• FULFILL THE “ECHO STATE PROPERTY”
• GLOBAL ASYMPTOTIC LYAPUNOV STABILITY CONDITION
• SPECTRAL RADIUS < 1
RANDOMLY INITIALIZED + SPARSELY CONNECTED (SEE THE INITIALIZATION SKETCH BELOW)
Yildiz, Izzet B., Herbert Jaeger, and Stefan J. Kiebel. "Re-visiting the echo state property." Neural Networks 35 (2012): 1-9.
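A sketch of such an initialization; the helper name, density, ω, and ρ values are illustrative assumptions. Rescaling W so that its spectral radius equals a chosen ρ < 1 is the standard practical recipe for the condition above.

```python
import numpy as np

def init_reservoir(n_in, n_res, omega=1.0, rho=0.9, density=0.1, seed=0):
    """Random, sparse reservoir: omega scales the inputs, rho sets the spectral radius."""
    rng = np.random.default_rng(seed)
    U = omega * rng.uniform(-1, 1, size=(n_res, n_in))
    W = rng.uniform(-1, 1, size=(n_res, n_res))
    W[rng.random((n_res, n_res)) > density] = 0.0   # sparse connectivity
    W *= rho / max(abs(np.linalg.eigvals(W)))       # spectral radius = rho < 1
    return U, W

U, W = init_reservoir(n_in=1, n_res=100)
```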
9. WHY DOES IT WORK?
Gallicchio, Claudio, and Alessio Micheli. "Architectural and Markovian factors of echo state networks." Neural Networks 24.5 (2011): 440-456.
Exploit the architectural bias:
- Contractive dynamical systems separate input histories based on their suffix, even without training
- Markovian factor in RNN design
- The separation ability peaks near the boundary of stability (the edge of chaos)
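A toy demonstration of this suffix-based separation (my own example, not from the cited paper): an untrained reservoir in the contractive regime maps two sequences with different prefixes but the same suffix to nearby states.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100
U = rng.uniform(-1, 1, size=(n_res, 1))
W = rng.uniform(-1, 1, size=(n_res, n_res))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))   # contractive regime in practice

def final_state(seq):
    h = np.zeros(n_res)
    for x in seq:
        h = np.tanh(U @ np.atleast_1d(x) + W @ h)
    return h

suffix = rng.uniform(-1, 1, 30)                            # shared suffix
a = final_state(np.concatenate([rng.uniform(-1, 1, 50), suffix]))
b = final_state(np.concatenate([rng.uniform(-1, 1, 50), suffix]))
print(np.linalg.norm(a - b))   # small: states cluster by input suffix, with no training
```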
10. ADVANTAGES
1. FASTER LEARNING
2. CLEAN MATHEMATICAL ANALYSIS
• ARCHITECTURAL BIAS OF RECURRENT NEURAL NETWORKS
3. UNCONVENTIONAL HARDWARE IMPLEMENTATIONS
• E.G., IN PHOTONICS (MORE EFFICIENT, FASTER)
Brunner, Daniel, Miguel C. Soriano, and Guy Van der Sande,
eds. Photonic Reservoir Computing: Optical Recurrent Neural
Networks. Walter de Gruyter GmbH & Co KG, 2019.
Tino, Peter, Michal Cernansky, and Lubica Benuskova.
"Markovian architectural bias of recurrent neural networks." IEEE
Transactions on Neural Networks 15.1 (2004): 6-15.
11. APPLICATIONS
• AMBIENT INTELLIGENCE: DEPLOY EFFICIENTLY TRAINABLE RNNS IN RESOURCE-CONSTRAINED DEVICES
• HUMAN ACTIVITY RECOGNITION
• ROBOT LOCALIZATION (E.G., IN HOSPITAL ENVIRONMENTS)
• EARLY IDENTIFICATION OF EARTHQUAKES
• MEDICAL APPLICATIONS
• ESTIMATION OF CLINICAL EXAM OUTCOMES (E.G., POSTURE AND BALANCE SKILLS)
• EARLY IDENTIFICATION OF (RARE) HEART DISEASES
• HUMAN-CENTRIC INTERACTIONS IN CYBER-PHYSICAL SYSTEMS OF SYSTEMS
https://www.teaching-h2020.eu
http://fp7rubicon.eu/
14. DEPTH IN RECURRENT NEURAL SYSTEMS
• DEVELOP RICHER DYNAMICS EVEN WITHOUT TRAINING OF THE RECURRENT CONNECTIONS
• MULTIPLE TIME-SCALES
• MULTIPLE FREQUENCIES
• NATURALLY BOOST THE PERFORMANCE OF DYNAMICAL NEURAL SYSTEMS EFFICIENTLY (A STACKED-RESERVOIR SKETCH FOLLOWS THE CITATION BELOW)
Gallicchio, Claudio, and Alessio Micheli. "Deep Reservoir Computing." To appear in Reservoir Computing: Theory and Physical Implementations, K. Nakajima and I. Fischer, eds., Springer, 2020.
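A minimal sketch of a stacked (deep) untrained reservoir in the spirit of the citation above; the sizes and scalings are illustrative assumptions. Each layer is driven by the states of the layer below, so deeper layers develop progressively filtered dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_res, rho=0.9):
    U = rng.uniform(-1, 1, size=(n_res, n_in))
    W = rng.uniform(-1, 1, size=(n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))   # stable, untrained layer
    return U, W

n_res, n_layers, T = 50, 3, 200
layers = [make_layer(1 if i == 0 else n_res, n_res) for i in range(n_layers)]

x = rng.uniform(-1, 1, size=(T, 1))
h = [np.zeros(n_res) for _ in range(n_layers)]
states = np.zeros((T, n_layers, n_res))   # all layer states, kept for later analysis
for t in range(T):
    inp = x[t]
    for i, (U, W) in enumerate(layers):
        h[i] = np.tanh(U @ inp + W @ h[i])
        inp = h[i]                        # states feed the next layer up
    states[t] = np.stack(h)
```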
15. DESIGN OF DEEP ESNS
- Each reservoir layer filters out part of the frequency content;
- Idea: stop adding new layers whenever the filtering effect (the centroid shift of the state spectrum) becomes negligible, independently of the readout part (a sketch of this criterion follows the citation below)
Gallicchio, Claudio, Alessio Micheli, and Luca Pedrelli. "Design of
deep echo state networks." Neural Networks 108 (2018): 33-47.
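A hedged sketch of this stopping criterion, simplified from the cited paper: estimate each layer's spectral centroid from the FFT of its states and stop once the centroid shift between consecutive layers drops below a tolerance. The function names and `eps` are illustrative, and `states` is assumed shaped (T, n_layers, n_res) as in the stacked-reservoir sketch above.

```python
import numpy as np

def spectral_centroid(layer_states):
    """Centroid of the magnitude spectrum of one layer's state signals over time."""
    spectrum = np.abs(np.fft.rfft(layer_states, axis=0)).mean(axis=1)
    freqs = np.fft.rfftfreq(layer_states.shape[0])
    return (freqs * spectrum).sum() / spectrum.sum()

def choose_depth(states, eps=1e-3):
    """states: array of shape (T, n_layers, n_res); returns the depth to keep."""
    centroids = [spectral_centroid(states[:, i]) for i in range(states.shape[1])]
    for i in range(1, len(centroids)):
        if abs(centroids[i] - centroids[i - 1]) < eps:
            return i    # further layers would add a negligible filtering effect
    return len(centroids)

# e.g., depth = choose_depth(states), with `states` from the stacked-reservoir sketch
```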
17. RESERVOIR COMPUTING FOR GRAPHS
• BASIC IDEA: EACH INPUT GRAPH IS ENCODED BY THE FIXED POINT OF A DYNAMICAL SYSTEM
• THE DYNAMICAL SYSTEM IS IMPLEMENTED BY A HIDDEN LAYER OF RECURRENT RESERVOIR
NEURONS
• RESERVOIR COMPUTING (RC):
• THE RESERVOIR NEURONS DO NOT REQUIRE LEARNING
• FAST DEEP NEURAL NETWORKS FOR GRAPHS
18. GRAPH REPRESENTATIONS WITHOUT LEARNING
• EACH VERTEX IN AN INPUT GRAPH IS ENCODED BY THE HIDDEN LAYER
[Figure: a vertex v with neighbors v1, ..., vk; the hidden layer maps the input feature x(v) to the embedding h(v), using the neighbor embeddings h(v1), ..., h(vk).]

h(v) = tanh(U x(v) + Σ_{v′ ∈ N(v)} W h(v′))

where h(v) is the embedding (state) of vertex v, x(v) the input feature of vertex v, h(v′) the embeddings (states) of the neighbors of v, U the input weight matrix, and W the hidden weight matrix.
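A direct sketch of this vertex-wise update; the neighbor-list representation and the helper name are mine, not from the slide.

```python
import numpy as np

def update_vertex(v, x, h, neighbors, U, W):
    """h(v) = tanh(U x(v) + sum over v' in N(v) of W h(v'))."""
    agg = sum(W @ h[u] for u in neighbors[v])   # aggregate the neighbor states
    return np.tanh(U @ x[v] + agg)
```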
19. GRAPH REPRESENTATIONS WITHOUT LEARNING
• THE VERTEX-WISE EQUATIONS CAN BE GROUPED COLLECTIVELY IN MATRIX FORM
H = F(X, H) = tanh(U X + W H A)

where H is the state matrix, X the input feature matrix, and A the adjacency matrix.
Existence (and uniqueness) of solutions is not guaranteed in case of
mutual dependencies (e.g., cycles, undirected edges)
20. GRAPH EMBEDDING BY LEARNING-FREE NEURONS
• THE ENCODING EQUATION CAN BE SEEN AS A DISCRETE TIME DYNAMICAL SYSTEM
• EXISTENCE AND UNIQUENESS OF THE SOLUTION ARE GUARANTEED BY STUDYING LOCAL ASYMPTOTIC STABILITY OF THE ABOVE EQUATION
• GRAPH EMBEDDING STABILITY (GES): GLOBAL (LYAPUNOV) ASYMPTOTIC STABILITY OF THE
ENCODING PROCESS
INITIALIZE THE DYNAMICAL LAYER UNDER THE GES CONDITION AND THEN LEAVE IT UNTRAINED
RESERVOIR COMPUTING FOR GRAPHS:
H = F(X, H) = tanh(U X + W H A)
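A minimal sketch of the encoding as a fixed-point computation, assuming W has already been scaled so that the map is contractive (the GES condition above). Here X is the (features × vertices) input matrix and A the adjacency matrix; the function name and tolerances are illustrative.

```python
import numpy as np

def encode_graph(X, A, U, W, tol=1e-6, max_iter=1000):
    """Iterate H = tanh(U X + W H A) to its fixed point (convergence under GES)."""
    H = np.zeros((W.shape[0], A.shape[0]))   # states: (n_res, n_vertices)
    for _ in range(max_iter):
        H_new = np.tanh(U @ X + W @ H @ A)
        if np.linalg.norm(H_new - H) < tol:
            return H_new                     # converged to the graph embedding
        H = H_new
    return H
```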
21. DEEP RESERVOIRS FOR GRAPHS
• INITIALIZE EACH LAYER TO CONTROL ITS EFFECTIVE SPECTRAL RADIUS ρ^(i) = ρ(W^(i)) k, WITH k THE (MAXIMUM) VERTEX DEGREE
• DRIVE (ITERATE) THE NESTED SET OF DYNAMICAL RESERVOIR SYSTEMS TOWARDS THE FIXED POINT FOR EACH INPUT GRAPH (A SKETCH FOLLOWS THE CITATION BELOW)

[Figure: deep graph reservoir. The first hidden layer computes h_1(v) from the vertex feature x(v) and the neighbor embeddings h_1(v1), ..., h_1(vk); the i-th hidden layer computes h_i(v) from the embedding h_{i−1}(v) in the previous layer and the neighbor embeddings h_i(v1), ..., h_i(vk).]
Gallicchio, Claudio, and Alessio Micheli. "Fast
and Deep Graph Neural Networks." AAAI. 2020.
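A hedged sketch of the layered construction, following the structure described above rather than the paper's exact recipe: each recurrent matrix W_i is rescaled so that its effective spectral radius ρ(W_i) k stays at a target below 1, and each layer's fixed-point embeddings drive the next layer. All names, and the fixed iteration count, are illustrative.

```python
import numpy as np

def scale_recurrent(W, rho_target, k):
    """Set the effective spectral radius rho(W) * k to rho_target (< 1 for GES)."""
    return W * (rho_target / k) / max(abs(np.linalg.eigvals(W)))

def deep_encode(X, A, layers, n_iter=100):
    """layers: list of (U_i, W_i). Each layer's fixed point drives the next layer."""
    inp, embeddings = X, []
    for U, W in layers:
        H = np.zeros((W.shape[0], A.shape[0]))
        for _ in range(n_iter):              # iterate this layer to its fixed point
            H = np.tanh(U @ inp + W @ H @ A)
        embeddings.append(H)
        inp = H                              # previous layer's states drive the next
    return embeddings
```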
23. IT’S ACCURATE
• HIGHLY COMPETITIVE WITH STATE-OF-
THE-ART
• DEEP GNN ARCHITECTURES WITH
STABLE DYNAMICS CAN INHERENTLY
CONSTRUCT RICH NEURAL
EMBEDDINGS FOR GRAPHS EVEN
WITHOUT TRAINING OF
RECURRENT CONNECTIONS
• TRAINING DEEPER NETWORKS COMES
AT THE SAME COST
Gallicchio, Claudio, and Alessio Micheli. "Fast
and Deep Graph Neural Networks." AAAI. 2020.
24. IT’S FAST
• UNTRAINED EMBEDDINGS, LINEAR COMPLEXITY
IN THE # OF VERTICES
• SPARSE AND DEEP ARCHITECTURE
• A VERY SMALL NUMBER OF TRAINABLE WEIGHTS
(MAX. 1001 IN OUR EXPERIMENTS)
Gallicchio, Claudio, and Alessio Micheli. "Fast
and Deep Graph Neural Networks." AAAI. 2020.
25. CONCLUSIONS
• DEEP RESERVOIR COMPUTING ENABLES FAST YET EFFECTIVE LEARNING IN
STRUCTURED DOMAINS
• SEQUENCES, GRAPH DOMAINS
• THE APPROACH HIGHLIGHTS THE INHERENT POSITIVE ARCHITECTURAL BIAS OF
RECURSIVE NEURAL NETWORKS ON GRAPHS
• STABLE AND DEEP ARCHITECTURES ENABLE RICH UNTRAINED EMBEDDINGS
• IT’S ACCURATE AND FAST