Recurrent Graph Neural Networks

Recently, a number of papers have revisited the problem of generalizing neural networks to arbitrarily structured graphs, some achieving promising results in domains previously dominated by shallower algorithms. Graph convolutions generalize spatial convolutions and are easiest to define in the spectral domain, but the general Fourier transform used to represent them scales poorly with the size of the data. A first-order approximation in the Fourier domain therefore yields efficient, linear-time graph CNNs; the price of this approximation, however, is a severely reduced expressive power in the resulting graph convolutional networks. Another approach to learning graph representations repeatedly applies contraction maps as propagation functions until node representations reach a stable fixed point. We combine these approaches and propose a recurrent version of Relational Graph Convolutional Networks. We then construct two models, a Recurrent Variational Graph AutoEncoder and a Recurrent Graph Convolution Regressor, and show that on the Ethereum blockchain transaction graph they outperform the traditional Graph Convolutional Network at predicting future movements of the corresponding tradable asset, Ether.
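As a rough sketch of the combination just described (illustrative only, not the exact layer used here; all names below are assumptions), a recurrent graph convolution can be viewed as a first-order graph convolution whose node embeddings are carried across time steps by a simple recurrent update:

import numpy as np

def recurrent_graph_conv_step(A_hat, X_t, h_prev, W, U):
    """One illustrative recurrent graph-convolution step.

    A_hat : normalized adjacency of the current graph snapshot (N x N)
    X_t   : node features at time t (N x F)
    h_prev: node states from the previous time step (N x H)
    W, U  : input and recurrent weight matrices (F x H, H x H)
    """
    conv = A_hat @ X_t @ W            # first-order graph convolution
    h_t = np.tanh(conv + h_prev @ U)  # recurrent update of node states
    return h_t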
3. What is Ethereum?
- A 100% open-source platform for building and distributing decentralized applications.
- No middlemen.
- Social sites, financial systems, voting mechanisms, games, reputation systems.
- 100% peer-to-peer and censorship-proof.
- Also a tradable asset: Ether.
11. Results: 1D-ConvNet

RMSE     F1      PnL (%)
0.9      0.58    -17.317

Results for out-of-sample simulated trading:
- RMSE: simple root mean square error.
- F1: F1-beta score (harmonic mean of precision and recall); a positive classification decision is taken when the predicted price is greater than the current price plus the 15% transaction fee.
- PnL (%): profit and loss, in percent, for out-of-sample trading, assuming a 15% transaction fee. (A sketch of this decision rule follows.)
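A minimal sketch of the decision rule and PnL computation described above (the deck does not include the backtest code; the names and the exact fee accounting are assumptions):

import numpy as np

FEE = 0.15  # 15% transaction fee, as stated in the results

def trade_signals(predicted, current):
    # Buy only when the predicted price clears the current price plus the fee.
    return predicted > current * (1.0 + FEE)

def pnl_percent(signals, current, future):
    # Net return per signaled trade after paying the fee, summed over trades.
    returns = (future[signals] - current[signals]) / current[signals] - FEE
    return 100.0 * returns.sum()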
12. Recurrent Neural Network
- Memory is achieved through feedback.
- Due to repeated multiplication by the feedback weight matrix, signals and gradients tend to explode or vanish.
- Solution: a logistic gating mechanism (see the sketch below).
[Figure: LSTM memory cell. A keep gate, a write gate, and a read gate control the cell's stored value (1.73 in the illustration), mediating input from and output to the rest of the RNN.]
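A minimal NumPy sketch of that gating mechanism (illustrative, not the deck's implementation; gate names follow the figure, weight names are assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, Wf, Wi, Wo, Wg, Uf, Ui, Uo, Ug):
    f = sigmoid(x @ Wf + h_prev @ Uf)  # keep gate: how much of the cell to retain
    i = sigmoid(x @ Wi + h_prev @ Ui)  # write gate: how much new input to store
    o = sigmoid(x @ Wo + h_prev @ Uo)  # read gate: how much of the cell to emit
    g = np.tanh(x @ Wg + h_prev @ Ug)  # candidate value to write
    c = f * c_prev + i * g             # gated update avoids raw self-multiplication
    h = o * np.tanh(c)                 # output to the rest of the RNN
    return h, c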
14. Results: LSTM

RMSE     F1      PnL (%)
0.1      0.42    -7.115

Out-of-sample simulated trading; metrics as defined for the 1D-ConvNet results above.
16. Results: CNN-LSTM

RMSE     F1      PnL (%)
0.05     0.53    -7.461

Out-of-sample simulated trading; metrics as defined above.
18. Deep Learning: Common Structures

Supervised, feed-forward:
- Perceptron: a type of linear classifier, i.e. a classification algorithm that makes its predictions with a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, processing training elements one at a time.
- Feed Forward Network (sometimes referred to as an MLP): a fully connected dense model used as a simple classifier.
- Convolutional Network: assumes that highly correlated features are located close to each other in the input matrix, so they can be pooled and treated as one in the next layer. Known for superior image-classification capabilities.

Supervised, recurrent:
- Simple Recurrent Neural Network: a class of artificial neural network where connections between units form a directed cycle.
- Hopfield Network: an RNN in which all connections are symmetric; it requires stationary inputs.
- Long Short-Term Memory Network: contains gates that determine whether the input is significant enough to remember, when it should continue to remember or forget the value, and when it should output it.

Unsupervised:
- Auto Encoder: aims to learn a representation (encoding) for a set of data, typically for dimensionality reduction. (A minimal example follows this list.)
- Restricted Boltzmann Machine: can learn a probability distribution over its set of inputs.
- Deep Belief Net: a composition of simple, unsupervised networks such as restricted Boltzmann machines, where each sub-network's hidden layer serves as the visible layer for the next.
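As a concrete instance of the Auto Encoder entry above, a minimal Keras autoencoder for dimensionality reduction (layer sizes are illustrative, not taken from the deck):

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(64,))                      # 64 raw features (illustrative)
code = Dense(8, activation="relu")(inputs)       # 8-dimensional encoding
outputs = Dense(64, activation="sigmoid")(code)  # reconstruction of the input

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# Trained to reproduce its input: autoencoder.fit(X, X, ...)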
23. Deep Meta Learning
- Father Model: trained with deep reinforcement learning; gets "rewards" and penalties based on its success at producing a better generation of models.
- Child Model: built, compiled, evaluated, and stored for future reconstruction and retraining by a human.
25. Results: Deep Meta Learning

RMSE     F1      PnL (%)
0.027    0.68    -3.2

Out-of-sample simulated trading; metrics as defined above.
33. Graph Convolution

Spectral graph convolution: multiplication of a signal with a filter in the Fourier space of a graph.

Graph Fourier transform: multiplication of a graph signal $x$ (i.e. a feature vector for every node) with the eigenvector matrix $U$ of the graph Laplacian $L$.

Graph Laplacian: easily computed from the symmetrically normalized graph adjacency matrix $\hat{A}$ as $L = I - \hat{A}$.

The Fourier basis of $L$ is its matrix of eigenvectors $U$.
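A minimal NumPy sketch of these definitions on a toy three-node path graph (illustrative; not the deck's code):

import numpy as np

# Toy 3-node path graph
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt

# Graph Laplacian L = I - A_hat; its eigenvectors form the Fourier basis U
L = np.eye(3) - A_hat
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 3.0])  # a graph signal: one value per node
x_hat = U.T @ x                # graph Fourier transform
x_back = U @ x_hat             # inverse transform recovers x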
44. Relational Graph Convolution

from keras import backend as K

def relational_graph_convolution(self, inputs):
    features = inputs[0]  # node feature matrix H^(l)
    A = inputs[1:]        # list of per-relation (basis) adjacency matrices
    # convolve: one support per relation; A_r @ H aggregates each node's neighbors
    supports = list()
    for i in range(len(A)):  # `support` was undefined; iterate over the relations
        supports.append(K.dot(A[i], features))
    supports = K.concatenate(supports, axis=1)
    output = K.dot(supports, self.W)  # self.W vertically stacks W_1 ... W_R
    return output
The layer implements the R-GCN propagation rule:

$$h_i^{(l+1)} = \sigma\left( W_0^{(l)} h_i^{(l)} + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_i^r} \frac{1}{c_{i,r}} \, W_r^{(l)} h_j^{(l)} \right)$$
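A small NumPy check (illustrative shapes, random data) that the concatenation trick in the code matches the per-relation sum in the equation: concatenating the supports $A_r H$ and multiplying by vertically stacked weights equals $\sum_r A_r H W_r$.

import numpy as np

N, F, R, H = 4, 3, 2, 5                       # nodes, features, relations, hidden units
A = [np.random.rand(N, N) for _ in range(R)]  # per-relation adjacency (illustrative)
X = np.random.rand(N, F)                      # node features
W = [np.random.rand(F, H) for _ in range(R)]  # per-relation weights

concat = np.concatenate([A[r] @ X for r in range(R)], axis=1) @ np.vstack(W)
summed = sum(A[r] @ X @ W[r] for r in range(R))
assert np.allclose(concat, summed)            # the two formulations agree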
45. Graph Convolutional Networks

[Figure: input graph passed through stacked graph-convolution layers with ReLU activations, producing node embeddings.]

Input: a feature matrix $X \in \mathbb{R}^{N \times E}$ for the nodes, and the adjacency matrix $\hat{A}$ containing all links.

Embeddings: representations that combine the features of each node's neighborhood; the neighborhood size depends on the number of layers.
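A minimal sketch of the two-layer forward pass this slide depicts (illustrative, not the deck's code; $\hat{A}$ is assumed to be the normalized adjacency):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gcn_forward(A_hat, X, W0, W1):
    # Layer 1: each embedding mixes features from 1-hop neighborhoods
    H = relu(A_hat @ X @ W0)
    # Layer 2: stacking a second layer widens the neighborhood to 2 hops
    Z = A_hat @ H @ W1
    return Z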
51. Results: Graph Convolution

RMSE     F1      PnL (%)
0.037    0.71    0.3

Out-of-sample simulated trading; metrics as defined above.
56. Results: Recurrent Graph Convolution

RMSE     F1      PnL (%)
0.028    0.77    2.4

Out-of-sample simulated trading; metrics as defined above.
61. Results: Recurrent Graph Auto Encoder

RMSE     F1      PnL (%)
0.024    0.86    5.6

Out-of-sample simulated trading; metrics as defined above.
62. Conclusions
- Deep Learning works well on Euclidean data.
- Attempts to utilize Deep Learning for non-Euclidean data are starting to become viable.
- Reward shaping and drifted metrics are extremely misleading.
- After extensive experimentation, we conclude that aggregated data (prices) of Ethereum is insufficient for forecasting its behavior.
- We introduce a novel layer, the Recurrent Graph Convolution, and demonstrate how this approach yields "tradable" results.