Recurrent Graph Convolution Networks for
Forecasting Ethereum prices
ICCS 2018
In collaboration with
tl;dr: We extended Graph Convolution
Networks to be Recurrent over time.
What is Ethereum
- A 100% open-source platform to build and distribute decentralized applications
- No middlemen
- Social sites, financial systems, voting mechanisms, games, reputation systems
- 100% peer-to-peer, censorship-proof
- Also a tradable asset.
RECURRENT GRAPH NEURAL NETWORKS
EXPERIMENT SETTING
[Rolling-window diagram: time axis in 900 s steps (0, 900, 1800, …); Batches 1–9 slide forward along it, each split into an Optimization Window and an Unseen segment]
Vector – 60 min of lagged prices
Ground truth – ETH price 5 min in the future
Batch – Training: 240 vectors, Test: 90 vectors
In sample – Training set: 28.02.18 - 13.05.18
Out of sample – Test set: 13.05.18 - 29.05.18
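To make the windowing concrete, here is a minimal sketch (not the authors' code) of how 60-minute lagged-price vectors with 5-minute-ahead targets and rolling 240/90 train/test batches could be built; `prices` is assumed to be a minute-resolution ETH price series.

import numpy as np

def make_vectors(prices, lag=60, horizon=5):
    """Pair each 60-minute window of lagged prices with the price 5 minutes ahead."""
    X, y = [], []
    for t in range(lag, len(prices) - horizon):
        X.append(prices[t - lag:t])     # 60 lagged minute prices
        y.append(prices[t + horizon])   # ETH price 5 minutes in the future
    return np.array(X), np.array(y)

def rolling_batches(X, y, train_size=240, test_size=90):
    """Yield successive (train, test) splits that slide forward in time."""
    step = train_size + test_size
    for start in range(0, len(X) - step + 1, test_size):
        tr = slice(start, start + train_size)
        te = slice(start + train_size, start + step)
        yield (X[tr], y[tr]), (X[te], y[te])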
RECURRENT GRAPH NEURAL NETWORKS
DEEP LEARNING SUPERIORITY
96.92%
Deep Learning
94.9%
Human
ref: http://www.image-net.org/challenges/LSVRC/
RECURRENT GRAPH NEURAL NETWORKS
GRADIENT DESCENT
$E$ – error of the network
$W$ – weight matrix representing the filters
$$w_t = w_{t-1} - \gamma\,\frac{\partial E}{\partial w}$$
RECURRENT GRAPH NEURAL NETWORKS
BackPropagation
Forward propagation: $x_0 \to f_0(x_0, w_0) \to f_1(x_1, w_1) \to f_2(x_2, w_2) \to \cdots \to f_{n-1}(x_{n-1}, w_{n-1}) \to f_n(x_n, w_n) = \hat{y}$, and the loss is $E = l(\hat{y}, y)$.
Legend
$l(\hat{y}, y)$ – loss function
$x_0$ – feature vector
$x_i$ – output of layer $i$
$w_i$ – weights of layer $i$
$f$ – activation function
$y$ – ground truth
$\hat{y}$ – model output
$E$ – loss surface
Backpropagation (chain rule):
$$\frac{\partial E}{\partial x_n} = \frac{\partial l(\hat{y}, y)}{\partial x_n}$$
$$\frac{\partial E}{\partial w_n} = \frac{\partial E}{\partial x_n}\,\frac{\partial f_n(x_{n-1}, w_n)}{\partial w_n}
\qquad
\frac{\partial E}{\partial x_{n-1}} = \frac{\partial E}{\partial x_n}\,\frac{\partial f_n(x_{n-1}, w_n)}{\partial x_{n-1}}$$
$$\frac{\partial E}{\partial w_{n-1}} = \frac{\partial E}{\partial x_{n-1}}\,\frac{\partial f_{n-1}(x_{n-2}, w_{n-1})}{\partial w_{n-1}}
\qquad
\frac{\partial E}{\partial x_{n-2}} = \frac{\partial E}{\partial x_{n-1}}\,\frac{\partial f_{n-1}(x_{n-2}, w_{n-1})}{\partial x_{n-2}}$$
…
1: Forward Propagation 2: Loss Calculation 3: Optimization
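The three steps above, as a minimal numpy sketch for a tiny two-layer network (layer sizes and learning rate are illustrative, not the deck's model):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 60))            # batch of feature vectors x_0
y = rng.normal(size=(32, 1))             # ground truth
w1 = rng.normal(scale=0.1, size=(60, 16))
w2 = rng.normal(scale=0.1, size=(16, 1))
gamma = 0.01                             # learning rate

for _ in range(100):
    # 1: forward propagation
    h = np.tanh(x @ w1)                  # x_1 = f_1(x_0, w_1)
    y_hat = h @ w2                       # model output
    # 2: loss calculation, E = l(y_hat, y)
    E = np.mean((y_hat - y) ** 2)
    # 3: optimization -- backpropagate and apply gradient descent
    dE_dyhat = 2 * (y_hat - y) / len(y)
    dE_dw2 = h.T @ dE_dyhat
    dE_dh = dE_dyhat @ w2.T
    dE_dw1 = x.T @ (dE_dh * (1 - h ** 2))
    w1 -= gamma * dE_dw1
    w2 -= gamma * dE_dw2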
RECURRENT GRAPH NEURAL NETWORKS
CONVOLUTION
$$(f * g)(x, y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(\tau_1, \tau_2)\, g(x - \tau_1, y - \tau_2)\, d\tau_1\, d\tau_2$$
RECURRENT GRAPH NEURAL NETWORKS
ConvNet
Input $x$ → Convolution & Maxpooling → Convolution & Maxpooling → Convolution & Maxpooling → Fully Connected → Class $\hat{y}$
RECURRENT GRAPH NEURAL NETWORKS
Results: 1D-ConvNet
RMSE: 0.9 | F1: 0.58 | PnL(%): -17.317
Results for out-of-sample simulated trading.
RMSE – simple root mean square error.
F1 – F1-beta score (harmonic mean of precision and recall), with the classification decision taken where the predicted price is greater than the current price plus the 15% transaction fee.
PnL(%) – profits and losses (percentage) for out-of-sample trading, assuming a 15% transaction fee.
RECURRENT GRAPH NEURAL NETWORKS
Recurrent Neural Network
- Memory achieved through feedback.
- Due to repeated self-multiplication, the feedback weight matrix tends to explode or vanish.
- Solution: a logistic gating mechanism.
[Memory-cell diagram: keep, write and read gates around a cell storing a value (1.73), with input from and output to the rest of the RNN]
RECURRENT GRAPH NEURAL NETWORKS
Recurrent Neural Network
Backpropagation Through Time: the network is unrolled over states $s_t, s_{t+1}, s_{t+2}, \ldots$ and gradients are propagated back through every time step.
Long Short Term Memory
Forget gate: $f_i^{\,l+1}[t] = \sigma_g\!\big(\omega_f\, y_j^{\,l}\,\varsigma_{t-1} + \psi_f\, h_{t-1} + b_f\big)$
Input gate: $\iota_i^{\,l+1}[t] = \sigma_g\!\big(\omega_\iota\, y_j^{\,l}\,\varsigma_{t-1} + \psi_\iota\, h_{t-1} + b_\iota\big)$
Output gate: $o_i^{\,l+1}[t] = \sigma_g\!\big(\omega_o\, y_j^{\,l}\,\varsigma_{t-1} + \psi_o\, h_{t-1} + b_o\big)$
Cell: $\varsigma_i^{\,l+1}[t] = \tanh\!\big(\omega_\varsigma\, y_j^{\,l} + \psi_\varsigma\, h_{t-1} + b_\varsigma\big)$
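For orientation, a minimal Keras sketch (an assumption about shape and layer sizes, not the authors' exact model) of an LSTM regressor over the 60-minute lagged-price vectors from the experiment setting:

import numpy as np
from tensorflow.keras import layers, models

# X: (num_samples, 60, 1) windows of lagged minute prices; y: the price 5 minutes ahead
model = models.Sequential([
    layers.LSTM(64, input_shape=(60, 1)),   # gated recurrence over the 60 lags
    layers.Dense(1),                        # regression head for the future price
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10)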
RECURRENT GRAPH NEURAL NETWORKS
Results: LSTM
RMSE: 0.1 | F1: 0.42 | PnL(%): -7.115
Results for out-of-sample simulated trading.
RMSE – simple root mean square error.
F1 – F1-beta score (harmonic mean of precision and recall), with the classification decision taken where the predicted price is greater than the current price plus the 15% transaction fee.
PnL(%) – profits and losses (percentage) for out-of-sample trading, assuming a 15% transaction fee.
RECURRENT GRAPH NEURAL NETWORKS
INPUT
BIDIRECTIONAL GRU – the sequence $g_t, g_{t+1}, g_{t+2}$ is processed in both directions
RESIDUAL DILATED CONV1D
TRANSPOSE (AXIS=1)
HARD ATTENTION – tanh / softmax heads
Results: CNN-LSTM
RMSE: 0.05 | F1: 0.53 | PnL(%): -7.461
Results for out-of-sample simulated trading.
RMSE – simple root mean square error.
F1 – F1-beta score (harmonic mean of precision and recall), with the classification decision taken where the predicted price is greater than the current price plus the 15% transaction fee.
PnL(%) – profits and losses (percentage) for out-of-sample trading, assuming a 15% transaction fee.
RECURRENT GRAPH NEURAL NETWORKS
CNN?
RNN?
CNN-RNN?
…
DEEP LEARNING COMMON STRUCTURES
Organized along two axes: SUPERVISED vs UNSUPERVISED and FEED FORWARD vs RECURRENT.
Perceptron – a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time.
Feed Forward Network – sometimes referred to as an MLP; a fully connected dense model used as a simple classifier.
Convolutional Network – assumes that highly correlated features are located close to each other in the input matrix and can be pooled and treated as one in the next layer. Known for superior image classification capabilities.
Simple Recurrent Neural Network – a class of artificial neural network where connections between units form a directed cycle.
Hopfield Recurrent Neural Network – an RNN in which all connections are symmetric; it requires stationary inputs.
Long Short Term Memory Network – contains gates that determine whether the input is significant enough to remember, when it should continue to remember or forget the value, and when it should output it.
Auto Encoder – aims to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.
Restricted Boltzmann Machine – can learn a probability distribution over its set of inputs.
Deep Belief Net – a composition of simple, unsupervised networks such as restricted Boltzmann machines, where each sub-network's hidden layer serves as the visible layer for the next.
RECURRENT GRAPH NEURAL NETWORKS
Search Problems
RECURRENT GRAPH NEURAL NETWORKS
Markov Decision Process
[Diagram: states $s_t \to s_{t+1} \to s_{t+2} \to \cdots$, with an action $a_t$ and a reward $r_t$ at every step]
$S := \{s_1, s_2, s_3, \ldots, s_n\}$ – set of states
$A := \{a_1, a_2, a_3, \ldots, a_n\}$ – set of actions
$T(s, a, s_{t+1})$ – transition function
$R(s, a)$ – reward function
RECURRENT GRAPH NEURAL NETWORKS
Policy Search
Policy: $\pi : s \to a$
Expected reward: $Q(s, a)$
The goal is to maximize the reward.
RECURRENT GRAPH NEURAL NETWORKS
Reinforcement Learning
Observation → Action (the agent–environment loop)
Value – maps a (state, action) pair to the expected future reward:
$$Q(s, a) \approx \mathbb{E}\big[R_{t+1} + R_{t+2} + R_{t+3} + \ldots \mid S_t = s, A_t = a\big]$$
Optimal Value – Bellman Equation, 1957:
$$Q^*(s, a) \approx \mathbb{E}\big[R_{t+1} + \gamma \max_b Q^*(S_{t+1}, b) \mid S_t = s, A_t = a\big]$$
TD Algorithm – Watkins, 1989:
$$Q_{t+1}(S_t, A_t) = Q_t(S_t, A_t) + \alpha\big(R_{t+1} + \gamma \max_a Q_t(S_{t+1}, a) - Q_t(S_t, A_t)\big)$$
RECURRENT GRAPH NEURAL NETWORKS
Deep Meta Learning
Father Model – gets "rewards" and penalties based on its success at producing a better generation of models (deep reinforcement learning).
Child Model – is built, compiled, evaluated and stored for future reconstruction and retraining by a human.
RECURRENT GRAPH NEURAL NETWORKS
Results: Deep Meta Learning
RMSE: 0.027 | F1: 0.68 | PnL(%): -3.2
Results for out-of-sample simulated trading.
RMSE – simple root mean square error.
F1 – F1-beta score (harmonic mean of precision and recall), with the classification decision taken where the predicted price is greater than the current price plus the 15% transaction fee.
PnL(%) – profits and losses (percentage) for out-of-sample trading, assuming a 15% transaction fee.
RECURRENT GRAPH NEURAL NETWORKS
Reward Shaping
[Charts: Random Walk vs LSTM predictions]
RECURRENT GRAPH NEURAL NETWORKS
Blockchain Representation
[Transaction table: rows of <hash> → <hash> transfers, each with an <Amount>, …]
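To make the blockchain-as-graph idea concrete, a sketch (with assumed field names, not the authors' pipeline) that turns a list of (from-hash, to-hash, amount) transactions into a weighted adjacency matrix and simple per-node features:

import numpy as np

def transactions_to_graph(txs):
    """txs: iterable of (from_hash, to_hash, amount) tuples for one time window."""
    nodes = sorted({h for frm, to, _ in txs for h in (frm, to)})
    index = {h: i for i, h in enumerate(nodes)}
    n = len(nodes)
    A = np.zeros((n, n))               # weighted adjacency: who sent to whom
    X = np.zeros((n, 2))               # node features: [total sent, total received]
    for frm, to, amount in txs:
        A[index[frm], index[to]] += amount
        X[index[frm], 0] += amount
        X[index[to], 1] += amount
    return A, X, index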
Learning Graph Representations
Random Walks On Graphs – Perozzi et al., 2014
Spectral Networks – Bruna et al., 2013
Marginalized Kernels Between Labeled Graphs – Kashima et al., 2003
Graph Neural Networks – Gori et al., 2005
Convolutional Networks on Graphs for Learning Molecular Fingerprints – Duvenaud et al., 2015
RECURRENT GRAPH NEURAL NETWORKS
Spectral Networks
Convolutions are diagonalized in the Fourier domain:
$$x * h = \mathcal{F}^{-1}\,\mathrm{diag}(\mathcal{F}h)\,\mathcal{F}x$$
where
$$\mathcal{F}_{k,l} = e^{-\frac{2\pi i\,(k \cdot l)}{N^{d}}}$$
The Fourier basis can be defined as the eigenbasis of the Laplacian operator:
$$\Delta x(u) = \sum_{j \le d} \frac{\partial^2 x}{\partial u_j^2}(u)$$
RECURRENT GRAPH NEURAL NETWORKS
Laplacian
๐‘“
๐‘“โ€ฒ
๐‘“โ€ฒโ€ฒ
๐›ป๐‘“
๐›ป โˆ™
๐บ๐‘Ÿ๐‘Ž๐‘‘๐‘–๐‘’๐‘›๐‘ก
๐ท๐‘–๐‘ฃ๐‘’๐‘Ÿ๐‘”๐‘’๐‘›๐‘๐‘’
RECURRENT GRAPH NEURAL NETWORKS
Graph Laplacian
RECURRENT GRAPH NEURAL NETWORKS
Graph Convolution
Spectral graph convolution – multiplication of a signal with a filter in the Fourier space of a graph.
Graph Fourier transform – multiplication of a graph signal $X$ (i.e. feature vectors for every node) with the eigenvector matrix $U$ of the graph Laplacian $L$.
Graph Laplacian – can be easily computed from the symmetrically normalized graph adjacency matrix $\bar{A}$: $L = I - \bar{A}$
The Fourier basis of $X$ is given by the eigenvectors $V$ of $L$.
RECURRENT GRAPH NEURAL NETWORKS
Spectral Networks
Convolution on a graph:
$$x * h = V\,\mathrm{diag}(h)\,V^{T} x$$
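A numpy sketch of the two definitions above, the symmetrically normalized Laplacian $L = I - \bar{A}$ and the spectral convolution $V\,\mathrm{diag}(h)\,V^{T}x$ (illustrative only, not the training code):

import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))   # guard against isolated nodes
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.eye(len(A)) - A_norm

def spectral_conv(x, h, L):
    """Filter a graph signal x with spectral coefficients h: V diag(h) V^T x."""
    _, V = np.linalg.eigh(L)          # eigenbasis of the Laplacian = graph Fourier basis
    return V @ (h * (V.T @ x))        # transform, filter, inverse transform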
RECURRENT GRAPH NEURAL NETWORKS
Translation Invariance?
Graph Isomorphism
RECURRENT GRAPH NEURAL NETWORKS
ConvNet (LeNet5)
Input $x$ → Convolution & Maxpooling → Convolution & Maxpooling → Convolution & Maxpooling → Fully Connected → Class $\hat{y}$
RECURRENT GRAPH NEURAL NETWORKS
ConvNet (LeNet)
Input $x$ → Convolution & Maxpooling → Convolution & Maxpooling → Convolution & Maxpooling → Fully Connected → Class $\hat{y}$
Representation Learning: the convolution/pooling stack. Classifier: the fully connected head.
RECURRENT GRAPH NEURAL NETWORKS
Representation Bank
[Diagram: the classifier asks the representation bank "give me the best representation for 'cat'", receives that representation, and outputs Cat]
RECURRENT GRAPH NEURAL NETWORKS
Convolutional Neural Network
Single CNN layer with a 3×3 filter [2D and 1D illustrations]
RECURRENT GRAPH NEURAL NETWORKS
Euclidean Space Convolution
Single CNN layer with a 3×3 filter – update for a single pixel:
- Transform neighbors individually: $w_i^{(l)} h_i^{(l)}$
- Add everything up: $\sum_i w_i^{(l)} h_i^{(l)}$
- Apply the nonlinearity: $h_0^{(l+1)} = \sigma\big(\sum_i w_i^{(l)} h_i^{(l)}\big)$
[3×3 neighborhood: centre $h_0$ surrounded by $h_1,\ldots,h_8$ with weights $w_1,\ldots,w_8$]
RECURRENT GRAPH NEURAL NETWORKS
Euclidean Space Convolution
$$h_0^{(l+1)} = \sigma\Big(\sum_i w_i^{(l)}\, h_i^{(l)}\Big)$$
[3×3 neighborhood: centre $h_0$ surrounded by $h_1,\ldots,h_8$ with weights $w_1,\ldots,w_8$]
RECURRENT GRAPH NEURAL NETWORKS
Graph Convolution as Message Passing
Propagation rule:
$$h_0^{(l+1)} = \sigma\Big(h_0^{(l)}\, w_0^{(l)} + \sum_{j\in\mathcal{N}_0} \frac{1}{c_{0,j}}\, w_j^{(l)}\, h_j^{(l)}\Big)$$
[Diagram: a node $h_i$ aggregates its neighbors' messages through $w_1$ and its own state through $w_0$]
RECURRENT GRAPH NEURAL NETWORKS
from keras import backend as K  # backend ops used below (assumed imported in the original layer code)

def relational_graph_convolution(self, inputs):
    # inputs = [node features, A_1, ..., A_S]: one (normalized) adjacency per basis/support
    features = inputs[0]
    A = inputs[1:]  # list of basis functions
    # convolve: propagate the features along every support
    supports = list()
    for i in range(len(A)):
        supports.append(K.dot(A[i], features))
    supports = K.concatenate(supports, axis=1)
    # mix the stacked supports with the layer weights
    output = K.dot(supports, self.W)
    return output

$$h_0^{(l+1)} = \sigma\Big(h_0^{(l)}\, w_0^{(l)} + \sum_{j\in\mathcal{N}_0} \frac{1}{c_{0,j}}\, w_j^{(l)}\, h_j^{(l)}\Big)$$
RECURRENT GRAPH NEURAL NETWORKS
GRAPH CONVOLUTIONAL NETWORKS
[Architecture: input → graph convolution → ReLU → graph convolution → ReLU → embeddings]
Input
- Features for nodes: $X \in \mathbb{R}^{N \times E}$
- Adjacency matrix containing all links: $\hat{A}$
Embeddings
- Representations that combine the features of a node's neighborhood
- Neighborhood size depends on the number of layers
RECURRENT GRAPH NEURAL NETWORKS
Problem: the embeddings are not optimized for the classification task!
GRAPH CONVOLUTIONAL NETWORKS
[Architecture: input → graph convolution → ReLU → graph convolution → ReLU → outputs]
Input
- Features for nodes: $X \in \mathbb{R}^{N \times E}$
- Adjacency matrix containing all links: $\hat{A}$
Evaluate loss on labeled nodes only:
$$\mathcal{L} = -\sum_{l \in \mathcal{Y}_L}\; \sum_{f=1}^{F} Y_{lf}\, \ln\!\left(\frac{e^{x_f}}{\sum_i e^{x_i}}\right)$$
RECURRENT GRAPH NEURAL NETWORKS
EXAMPLE OF FORWARD PASS
๐‘“( ) =
RECURRENT GRAPH NEURAL NETWORKS
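A compact numpy sketch, in the spirit of Kipf & Welling's GCN rather than the deck's exact code, of a two-layer forward pass with the loss evaluated on labeled nodes only:

import numpy as np

def gcn_forward(X, A_hat, W1, W2):
    """Two graph-convolution layers; A_hat is the normalized adjacency (self-loops included)."""
    H = np.maximum(A_hat @ X @ W1, 0)          # layer 1 + ReLU
    logits = A_hat @ H @ W2                    # layer 2
    Z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)    # row-wise softmax over the F classes

def masked_cross_entropy(Z, Y, labeled_idx):
    """Evaluate the cross-entropy loss on labeled nodes only."""
    return -np.mean(np.sum(Y[labeled_idx] * np.log(Z[labeled_idx] + 1e-9), axis=1))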
SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS
[Demo frames from https://github.com/tkipf/gcn: "Inits" and "Move Nodes" – node embeddings evolving during training]
$$h_0^{(l+1)} = \sigma\Big(\sum_{j\in\mathcal{N}_0} \frac{1}{c_{0,j}}\, w_1^{(l)}\, h_j^{(l)}\Big)$$
RECURRENT GRAPH NEURAL NETWORKS
Results: Graph Convolution
RMSE: 0.037 | F1: 0.71 | PnL(%): 0.3
Results for out-of-sample simulated trading.
RMSE – simple root mean square error.
F1 – F1-beta score (harmonic mean of precision and recall), with the classification decision taken where the predicted price is greater than the current price plus the 15% transaction fee.
PnL(%) – profits and losses (percentage) for out-of-sample trading, assuming a 15% transaction fee.
RECURRENT GRAPH NEURAL NETWORKS
Temporal?
Recurrent Neural Networks vs Graph Convolution Networks
Graph Convolution Networks (message passing):
$$h_0^{(l+1)} = \sigma\Big(h_0^{(l)}\, w_0^{(l)} + \sum_{j\in\mathcal{N}_0} \frac{1}{c_{0,j}}\, w_j^{(l)}\, h_j^{(l)}\Big)$$
Recurrent Neural Networks (LSTM gates):
Forget gate: $f_i^{\,l+1}[t] = \sigma_g\!\big(\omega_f\, y_j^{\,l}\,\varsigma_{t-1} + \psi_f\, h_{t-1} + b_f\big)$
Input gate: $\iota_i^{\,l+1}[t] = \sigma_g\!\big(\omega_\iota\, y_j^{\,l}\,\varsigma_{t-1} + \psi_\iota\, h_{t-1} + b_\iota\big)$
Output gate: $o_i^{\,l+1}[t] = \sigma_g\!\big(\omega_o\, y_j^{\,l}\,\varsigma_{t-1} + \psi_o\, h_{t-1} + b_o\big)$
Cell: $\varsigma_i^{\,l+1}[t] = \tanh\!\big(\omega_\varsigma\, y_j^{\,l} + \psi_\varsigma\, h_{t-1} + b_\varsigma\big)$
Combining the two: the graph-convolution aggregation replaces the input term of every LSTM gate.
Forget gate:
$$f_i^{\,l+1}[t] = \sigma_g\!\Big(\omega_f\Big[\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_i^r}\tfrac{1}{c_{i,r}}\,\theta_r^{\,l}\, y_j^{\,l} + \theta_0^{\,l}\, y_i^{\,l}\Big]\varsigma_{t-1} + \psi_f\, h_{t-1} + b_f\Big)$$
Input gate:
$$\iota_i^{\,l+1}[t] = \sigma_g\!\Big(\omega_\iota\Big[\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_i^r}\tfrac{1}{c_{i,r}}\,\theta_r^{\,l}\, y_j^{\,l} + \theta_0^{\,l}\, y_i^{\,l}\Big]\varsigma_{t-1} + \psi_\iota\, h_{t-1} + b_\iota\Big)$$
Output gate:
$$o_i^{\,l+1}[t] = \sigma_g\!\Big(\omega_o\Big[\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_i^r}\tfrac{1}{c_{i,r}}\,\theta_r^{\,l}\, y_j^{\,l} + \theta_0^{\,l}\, y_i^{\,l}\Big]\varsigma_{t-1} + \psi_o\, h_{t-1} + b_o\Big)$$
Cell:
$$\varsigma_i^{\,l+1}[t] = \tanh\!\Big(\omega_\varsigma\Big[\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_i^r}\tfrac{1}{c_{i,r}}\,\theta_r^{\,l}\, y_j^{\,l} + \theta_0^{\,l}\, y_i^{\,l}\Big] + \psi_\varsigma\, h_{t-1} + b_\varsigma\Big)$$
RECURRENT GRAPH CONVOLUTIONAL NETWORKS
RECURRENT GRAPH NEURAL NETWORKS
เตฑ๐‘“๐‘–
๐‘™+1
๐‘ก
= ๐œŽ๐‘”( ๐œ” ๐‘“ เท
๐‘Ÿโˆˆโ„›
เตฑเท
๐‘—โˆˆ๐’ฉ๐‘–
๐‘Ÿ
1
๐‘๐‘–,๐‘Ÿ
๐œƒ๐‘Ÿ
๐‘™
๐‘ฆ๐‘—
๐‘™
+ ๐œƒ0
๐‘™
๐‘ฆ๐‘–
๐‘™
๐œ ๐‘กโˆ’1 + ๐œ“ ๐‘“โ„Ž ๐‘กโˆ’1 + ๐‘๐‘“
เตฑ๐œ„๐‘–
๐‘™+1
๐‘ก
= ๐œŽ๐‘”( ๐œ”๐œ„ เท
๐‘Ÿโˆˆโ„›
เตฑเท
๐‘—โˆˆ๐’ฉ๐‘–
๐‘Ÿ
1
๐‘๐‘–,๐‘Ÿ
๐œƒ๐‘Ÿ
๐‘™
๐‘ฆ๐‘—
๐‘™
+ ๐œƒ0
๐‘™
๐‘ฆ๐‘–
๐‘™
๐œ ๐‘กโˆ’1 + ๐œ“๐œ„โ„Ž ๐‘กโˆ’1 + ๐‘๐œ„
เตฑ๐‘œ๐‘–
๐‘™+1
๐‘ก
= ๐œŽ๐‘”( ๐œ” ๐‘œ เท
๐‘Ÿโˆˆโ„›
เตฑเท
๐‘—โˆˆ๐’ฉ๐‘–
๐‘Ÿ
1
๐‘๐‘–,๐‘Ÿ
๐œƒ๐‘Ÿ
๐‘™
๐‘ฆ๐‘—
๐‘™
+ ๐œƒ0
๐‘™
๐‘ฆ๐‘–
๐‘™
๐œ ๐‘กโˆ’1 + ๐œ“ ๐‘œโ„Ž ๐‘กโˆ’1 + ๐‘ ๐‘œ
เตฑ๐œ๐‘–
๐‘™+1
๐‘ก
= ๐‘ก๐‘Ž๐‘›โ„Ž( ๐œ”๐œ เท
๐‘Ÿโˆˆโ„›
เตฑเท
๐‘—โˆˆ๐’ฉ๐‘–
๐‘Ÿ
1
๐‘๐‘–,๐‘Ÿ
๐œƒ๐‘Ÿ
๐‘™
๐‘ฆ๐‘—
๐‘™
+ ๐œƒ0
๐‘™
๐‘ฆ๐‘–
๐‘™
+ ๐œ“๐œโ„Ž ๐‘กโˆ’1 + ๐‘๐œ
Forget Gate
Output Gate
Input Gate
Cell
โ„Ž ๐‘ก โ„Ž ๐‘ก+1 โ„Ž ๐‘ก+2 โ„Ž ๐‘ก+3
Results: Recurrent Graph Convolution
RMSE: 0.028 | F1: 0.77 | PnL(%): 2.4
Results for out-of-sample simulated trading.
RMSE – simple root mean square error.
F1 – F1-beta score (harmonic mean of precision and recall), with the classification decision taken where the predicted price is greater than the current price plus the 15% transaction fee.
PnL(%) – profits and losses (percentage) for out-of-sample trading, assuming a 15% transaction fee.
RECURRENT GRAPH NEURAL NETWORKS
STRATEGY GRADIENT?
๐œ•( ๐œƒ)
๐œ•๐œƒ
โˆ’
๐œ•( ๐œ‘)
๐œ•๐œ‘
Returns Risk
RECURRENT GRAPH NEURAL NETWORKS
Graph Auto Encoders
$A$ – input graph
$x$ – input node
$\hat{A}$ – output graph
Useful for predicting connectivity links.
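For reference, a minimal sketch of one common graph auto-encoder formulation (a GCN encoder with an inner-product decoder, as in Kipf & Welling's GAE); the deck does not specify its exact decoder:

import numpy as np

def gae_reconstruct(X, A_hat, W1, W2):
    """Encode nodes with two graph convolutions, then decode edge probabilities."""
    H = np.maximum(A_hat @ X @ W1, 0)       # GCN layer 1 + ReLU
    Z = A_hat @ H @ W2                      # node embeddings
    logits = Z @ Z.T                        # inner-product decoder
    return 1.0 / (1.0 + np.exp(-logits))    # predicted link probabilities (reconstructed graph)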
RECURRENT GRAPH NEURAL NETWORKS
Recommender Systems
[Diagram: a Users–Items bipartite graph → Graph Representation → Graph AutoEncoder → Graph Prediction]
RECURRENT GRAPH NEURAL NETWORKS
TRADING STRATEGY GRADIENTS
[Diagram: the model outputs $\mu$ and $\sigma$; a simulation draws the next action $a_{t+1}$ from them]
$$\sum_{i=1}^{n} \left(\sigma_i^2 + \mu_i^2 - \log \sigma_i - 1\right) \qquad\qquad \|\hat{y} - y\|_2^2$$
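Read as a Gaussian KL-style regularizer on the per-step $(\mu_i, \sigma_i)$ plus a squared prediction error (our reading of the slide, not a statement from the authors), a minimal sketch of the loss is:

import numpy as np

def strategy_loss(mu, sigma, y_hat, y):
    """sum_i (sigma_i^2 + mu_i^2 - log sigma_i - 1) + ||y_hat - y||_2^2"""
    reg = np.sum(sigma ** 2 + mu ** 2 - np.log(sigma) - 1.0)
    return reg + np.sum((y_hat - y) ** 2)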
RECURRENT GRAPH NEURAL NETWORKS
Results: Recurrent Graph Auto Encoder
RMSE: 0.024 | F1: 0.86 | PnL(%): 5.6
Results for out-of-sample simulated trading.
RMSE – simple root mean square error.
F1 – F1-beta score (harmonic mean of precision and recall), with the classification decision taken where the predicted price is greater than the current price plus the 15% transaction fee.
PnL(%) – profits and losses (percentage) for out-of-sample trading, assuming a 15% transaction fee.
RECURRENT GRAPH NEURAL NETWORKS
Conclusions
- Deep Learning works well on Euclidean data.
- Attempts to utilize DL for non-Euclidean data are starting to become viable.
- Reward shaping and drifted metrics are extremely misleading.
- After extensive experimentation we conclude that aggregated data (prices) of Ethereum is insufficient for forecasting its behavior.
- We introduce a novel layer, Recurrent Graph Convolution, and demonstrate how this approach yields "tradable" results.
RECURRENT GRAPH NEURAL NETWORKS
FIN