Learning Financial Market Data
with Recurrent Autoencoders and TensorFlow
Steven Hutt steven.c.hutt@gmail.com
July 28, 2016
TensorFlow London Meetup
The Limit Order Book
A financial exchange provides a central location for electronically trading contracts.
Trading is implemented on a Matching Engine, which matches buy and sell orders
according to certain priority rules and queues orders which do not immediately match.
[Figure: limit order book schematic, image © A. Booth]
A Marketable limit order is immediately matched and results in a trade.
A Passive limit order cannot be matched and joins the queue.
Data In: Client ID tagged order flow
Data Out: LOB state (quantity at price)
In either case we need to learn data sequences.
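Before a sequence model can be applied, the LOB state at each time step has to be encoded as a fixed-length vector. Below is a minimal sketch of one such encoding; the function name and the choice of top-of-book quantities are illustrative assumptions, not the representation used in the talk.

import numpy as np

def lob_features(bids, asks, depth=5):
    # bids/asks: lists of (price, quantity) tuples, best price first.
    # One possible fixed-length state vector: quantities at the top `depth`
    # price levels on each side, zero-padded if the book is shallower.
    bid_q = [q for _, q in bids[:depth]]
    ask_q = [q for _, q in asks[:depth]]
    bid_q += [0.0] * (depth - len(bid_q))
    ask_q += [0.0] * (depth - len(ask_q))
    return np.array(bid_q + ask_q, dtype=float)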
Learning Sequences with RNNs
In a Recurrent Neural Network there are nodes which feed back on themselves. Below, x_t ∈ R^{n_x}, h_t ∈ R^{n_h} and z_t ∈ R^{n_z}, for t = 1, 2, ..., T. Each node is a vector (a 1-dimensional array).
[Figure: an RNN unrolled in time. The input x_t feeds the state h_t through W_hx, the state evolves through W_hh, and the output z_t is read out through W_zh, for t = 1, ..., T, starting from h_0.]
The RNN is defined by the update equations

h_{t+1} = f(W_hh h_t + W_hx x_t + b_h),    z_t = g(W_zh h_t + b_z),

where W_uv ∈ R^{n_u × n_v} and b_u ∈ R^{n_u} are the RNN weights and biases respectively. The functions f and g are generally nonlinear.
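For concreteness, a minimal NumPy sketch of one step of these updates (illustrative only; the choices f = tanh and g = identity are assumptions, not taken from the talk):

import numpy as np

def rnn_step(x_t, h_t, W_hh, W_hx, W_zh, b_h, b_z):
    # h_{t+1} = f(W_hh h_t + W_hx x_t + b_h), with f = tanh
    h_next = np.tanh(W_hh @ h_t + W_hx @ x_t + b_h)
    # z_t = g(W_zh h_t + b_z), with g = identity
    z_t = W_zh @ h_t + b_z
    return h_next, z_t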
Long Short Term Memory Units
An LSTM [3] learns long term dependencies by adding a memory cell and controlling what
gets modified and what gets forgotten:
[Figure: an LSTM cell. The memory c_t passes through a forget gate (σ, elementwise ∗) and a modify step (+), with σ and tanh gates computed from the state h_t and input x_t, producing c_{t+1} and h_{t+1}.]
An LSTM can remember state dependencies further back than a standard RNN. It is
capable of learning real grammars.
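For reference, a minimal NumPy sketch of the standard LSTM gate equations (a generic textbook formulation, not code from the talk; stacking the four gates into one weight matrix is an assumption):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_t, h_t, c_t, W, b):
    # W maps the concatenated [h_t, x_t] to all four gates at once
    z = W @ np.concatenate([h_t, x_t]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # forget, input, output gates
    g = np.tanh(g)                                 # candidate memory update
    c_next = f * c_t + i * g                       # forget, then modify, the memory
    h_next = o * np.tanh(c_next)                   # gated view of the memory as state
    return h_next, c_next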
Unitary RNNs
The behaviour of the LSTM can be difficult to analyse.
An alternative approach [1] is to ensure there are no contractive directions, that is, to require W_hh ∈ U(n), the space of unitary matrices.
This preserves the simple form of an RNN but ensures the state dependencies go further
back in the sequence.
A key problem is that gradient descent does not preserve the unitary property: if x ∈ U(n), a gradient step x + λv is in general not in U(n).
The solution in [1] is to choose W_hh from a subset generated by parametrized unitary matrices:

W_hh = D_3 R_2 F^{-1} D_2 Π R_1 F D_1.

Here D is diagonal, R is a reflection, F is the Fourier transform and Π is a permutation.
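A NumPy sketch of this parametrization (illustrative; the random parameter draws stand in for learned parameters) confirms the composition is unitary by construction:

import numpy as np

rng = np.random.default_rng(0)
n = 8

def diag_unitary():
    # D: diagonal matrix of unit-modulus entries exp(i*theta)
    return np.diag(np.exp(1j * rng.uniform(-np.pi, np.pi, n)))

def reflection():
    # R: complex Householder reflection I - 2 v v* / (v* v)
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return np.eye(n) - 2.0 * np.outer(v, v.conj()) / np.vdot(v, v).real

F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
Pi = np.eye(n)[rng.permutation(n)]              # permutation matrix

W_hh = (diag_unitary() @ reflection() @ F.conj().T
        @ diag_unitary() @ Pi @ reflection() @ F @ diag_unitary())

print(np.allclose(W_hh.conj().T @ W_hh, np.eye(n)))   # True: W_hh is unitary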
RNNs as Programs addressing External Memory
RNNs are Turing complete but the theoretical capabilities are not matched in practice
due to training inefficiencies.
The Neural Turing Machine emulates a differentiable Turing Machine by adding external
addressable memory and using the RNN state as a memory controller [2].
[Figure: vanilla NTM architecture. An RNN controller receives the input x_t and produces the output y_t, reading from and writing to an external memory via read heads and write heads.]
NTM equivalent 'code':

    initialise: move head to start location
    while input delimiter not seen do
        receive input vector
        write input to head location
        increment head location by 1
    end while
    return head to start location
    while true do
        read output vector from head location
        emit output
        increment head location by 1
    end while
Deep RNNs
[Figure: a three-layer deep RNN unrolled in time. Layer L1 reads the input x_t, each layer Li reads the state of layer Li−1, and the output z_t is read from the top layer via W_zh3.]
The input to hidden layer L_i is the state h^{i−1} of hidden layer L_{i−1}. Deep RNNs can learn at different time scales.
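A minimal NumPy sketch of one deep-RNN time step under this wiring (illustrative; per-layer weights and f = tanh are assumptions):

import numpy as np

def deep_rnn_step(x_t, states, layers):
    # states: hidden states h^1_t, ..., h^L_t; layers: per-layer (W_hh, W_hx, b_h)
    new_states, layer_input = [], x_t
    for h, (W_hh, W_hx, b_h) in zip(states, layers):
        h = np.tanh(W_hh @ h + W_hx @ layer_input + b_h)
        new_states.append(h)
        layer_input = h        # layer i+1 reads the state of layer i
    return new_states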
Autoencoders
The more regularities there are in data the more it can be compressed. The more
we are able to compress the data, the more we have learned about the data.
(Grünwald, 1998)
[Figure: a feed-forward autoencoder. The input (x_1, ..., x_6) is encoded through successively narrower hidden layers to a small code and decoded back to a reconstruction (y_1, ..., y_6).]
Recurrent Autoencoders
[Figure: a recurrent autoencoder. The encoder RNN reads x_1, ..., x_T into states h^e_1, ..., h^e_T; the final state h^e_T is copied to the decoder state h^d_0, and the decoder RNN emits the reconstruction z_1, ..., z_T.]
A Recurrent Autoencoder is trained to output a reconstruction z of the input sequence x. All data must pass through the narrow h^e_T. Compression between input and output forces pattern learning.
Recurrent Autoencoder Updates
[Figure: the recurrent autoencoder of the previous slide, annotated with the update equations below.]
Encoder updates:

h^e_1 = f(W^e_hh h^e_0 + W^e_hx x_1 + b^e_h)
h^e_2 = f(W^e_hh h^e_1 + W^e_hx x_2 + b^e_h)
h^e_3 = f(W^e_hh h^e_2 + W^e_hx x_3 + b^e_h)
· · ·

Decoder updates:

h^d_0 = h^e_T   (copy)
h^d_1 = f(W^d_hh h^d_0 + W^d_hx z_0 + b^d_h)
h^d_2 = f(W^d_hh h^d_1 + W^d_hx z_1 + b^d_h)
h^d_3 = f(W^d_hh h^d_2 + W^d_hx z_2 + b^d_h)
· · ·
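These updates translate directly into a NumPy sketch (illustrative only; f = tanh, z_0 = 0 and the linear readout W^d_zh, b^d_z are assumptions the slide leaves implicit):

import numpy as np

def recurrent_autoencode(xs, enc, dec):
    # enc, dec: dicts of weights for the encoder and decoder RNNs
    h = np.zeros(enc['W_hh'].shape[0])                         # h^e_0
    for x in xs:                                               # encoder updates
        h = np.tanh(enc['W_hh'] @ h + enc['W_hx'] @ x + enc['b_h'])
    z, zs = np.zeros_like(xs[0]), []                           # h^d_0 = h^e_T, z_0 = 0
    for _ in xs:                                               # decoder updates
        h = np.tanh(dec['W_hh'] @ h + dec['W_hx'] @ z + dec['b_h'])
        z = dec['W_zh'] @ h + dec['b_z']                       # readout z_t
        zs.append(z)
    return zs                                                  # reconstruction z_1, ..., z_T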
Easy RNNs with TensorFlow VariableScope
import tensorflow as tf

def linear(in_array, out_size):
    in_size = in_array.get_shape().as_list()[-1]
    # creates or returns a variable named 'W' in the current variable scope
    W = tf.get_variable('W', shape=[in_size, out_size])
    # out_array = in_array * W
    return tf.matmul(in_array, W)

in_array_0 = tf.placeholder(tf.float32, shape=[None, 8])
out_size_1 = out_size_2 = 8   # reuse requires matching shapes

with tf.variable_scope('encoder_0') as scope:
    # creates variable encoder_0/W
    out_array_1 = linear(in_array_0, out_size_1)
    # error: encoder_0/W already exists
    # out_array_2 = linear(out_array_1, out_size_2)
    # reuse encoder_0/W
    scope.reuse_variables()
    out_array_2 = linear(out_array_1, out_size_2)
TensorFlow encoder
with tf.variable_scope('encoder_0') as scope:
    for t in range(1, seq_len):
        # placeholder for input data at time t
        xs_0_enc[t] = tf.placeholder(tf.float32, shape=[None, nx_enc])
        # encoder update at time t (the LSTM cell returns the new hidden state)
        hs_0_enc[t] = lstm_cell_l0_enc(xs_0_enc[t], hs_0_enc[t-1])
        scope.reuse_variables()
Deep Recurrent Autoencoder
[Figure: a deep recurrent autoencoder. Three stacked encoder layers (encoder_0, encoder_1, encoder_2) read x_1, ..., x_T; their final states are copied to the corresponding decoder layers (decoder_0, decoder_1, decoder_2), which emit the reconstruction z_1, ..., z_T.]
TensorFlow Deep Recurrent AutoEncoder
with tf.variable_scope('encoder_0') as scope:
    for t in range(1, seq_len):
        # placeholder for input data
        xs_0_enc[t] = tf.placeholder(tf.float32, shape=[None, nx_enc])
        hs_0_enc[t] = lstm_cell_l0_enc(xs_0_enc[t], hs_0_enc[t-1])
        scope.reuse_variables()

with tf.variable_scope('encoder_1') as scope:
    for t in range(1, seq_len):
        # encoder_1 input is encoder_0 hidden state
        xs_1_enc[t] = hs_0_enc[t]
        hs_1_enc[t] = lstm_cell_l1_enc(xs_1_enc[t], hs_1_enc[t-1])
        scope.reuse_variables()

with tf.variable_scope('encoder_2') as scope:
    for t in range(1, seq_len):
        # encoder_2 input is encoder_1 hidden state
        xs_2_enc[t] = hs_1_enc[t]
        hs_2_enc[t] = lstm_cell_l2_enc(xs_2_enc[t], hs_2_enc[t-1])
        scope.reuse_variables()
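The slides show only the encoder stack. A matching single-layer decoder in the same style might look as follows; lstm_cell_l0_dec, hs_0_dec, zs_dec, batch_size and the use of the linear helper as a readout are hypothetical, not code from the talk.

# decoder_0: initial state copied from the encoder's final state
hs_0_dec = {0: hs_0_enc[seq_len - 1]}            # copy: h^d_0 = h^e_T
zs_dec = {0: tf.zeros([batch_size, nx_enc])}     # assumed start input z_0 = 0

with tf.variable_scope('decoder_0') as scope:
    for t in range(1, seq_len):
        # decoder update: previous reconstruction z_{t-1} feeds the decoder
        hs_0_dec[t] = lstm_cell_l0_dec(zs_dec[t-1], hs_0_dec[t-1])
        # linear readout to the reconstruction z_t (reusing the helper above)
        zs_dec[t] = linear(hs_0_dec[t], nx_enc)
        scope.reuse_variables()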
Variational Autoencoder
We assume the market data {xk} is sampled from a probability distribution with a 'small'
number of latent variables h.
Assume p_θ(x, h) = p_θ(x | h) p_θ(h), where θ are the weights of a NN/RNN. Then

p_θ(x) = Σ_h p_θ(x | h) p_θ(h).

Train the network to minimize

L_θ({x_k}) = Σ_k −log p_θ(x_k)

over θ. Compute p_θ(h | x) and cluster.

But p_θ(h | x) is intractable!
[Figure: graphical model with latent variable h and observation x: prior p_θ(h), likelihood p_θ(x | h), marginal p_θ(x), and posterior p_θ(h | x).]
Variational Inference
Variational Inference learns an NN/RNN approximation qϕ(h | x) to pθ(h | x) during
training.
Think of q_ϕ as the encoder and p_θ as the decoder network: an autoencoder.
Then log p_θ(x_k) ≥ L_{θ,ϕ}(x_k), where

L_{θ,ϕ}(x_k) = −D_KL(q_ϕ(h | x_k) || p_θ(h)) + E_{q_ϕ}[log p_θ(x_k | h)]

is the variational lower bound; the first term is the regularization term, the second the reconstruction term.
Reconstruction term = log-likelihood wrt qϕ
Regularization term = target prior on encoder
[Figure: the same graphical model, with the intractable posterior p_θ(h | x) replaced by the approximation q_ϕ(h | x).]
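With a diagonal Gaussian encoder q_ϕ(h | x) = N(μ, diag(exp(log σ²))) and a standard normal prior, both terms have simple forms. A NumPy sketch of the bound (generic VAE bookkeeping, not code from the talk; the unit-variance Gaussian likelihood is an assumption):

import numpy as np

def variational_lower_bound(x, x_recon, mu, log_var):
    # Regularization term: -KL( N(mu, diag(exp(log_var))) || N(0, I) ), closed form
    neg_kl = -0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    # Reconstruction term: Gaussian log-likelihood of x given the decoder output
    # (unit variance assumed, additive constant dropped)
    recon = -0.5 * np.sum((x - x_recon)**2)
    return neg_kl + recon       # L_{theta,phi}(x)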
Search
Train a Deep Recurrent Autoencoder on public market data (book state over time).
Use the trained encoder as a hash function.
Search based on Euclidean or Hamming distance (after binary projection).
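A minimal sketch of the retrieval step on top of stored codes (illustrative; the sign-based binary projection is one simple choice, not necessarily the one used in the talk):

import numpy as np

def nearest_code(query, codes):
    # codes: array of encoder outputs h^e_T for the stored sequences
    d_euclid = np.linalg.norm(codes - query, axis=1)           # Euclidean distance
    q_bits, c_bits = query > 0, codes > 0                      # binary projection
    d_hamming = np.sum(q_bits != c_bits, axis=1)               # Hamming distance
    return np.argmin(d_euclid), np.argmin(d_hamming)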
Classify
EUR vs AUD
Train a Variational Recurrent Autoencoder with a Gaussian prior on public market data.
Include two contracts, EUR and AUD, in the training data.
Use the trained encoder to infer the Gaussian means from the training data.
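A minimal sketch of the resulting classification rule (a hypothetical helper; it assumes one mean code per contract has already been inferred from the training data):

import numpy as np

def classify_contract(mu, class_means):
    # class_means: {'EUR': mean code for EUR, 'AUD': mean code for AUD}
    # assign the sequence to the contract whose inferred mean is closest
    return min(class_means, key=lambda c: np.linalg.norm(mu - class_means[c]))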
Predict
Train an LSTM on sequences of states of the LOB to predict changes in the best bid.
Target data: change in best bid over time t → t + 1.
LSTM output: z_t = softmax distribution over best-bid moves (−1, 0, +1) from t to t + 1.
It's tough to make predictions, especially about the future.
[Figure: predicted vs. target best-bid moves over time.]
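A minimal sketch of the output and training signal for this setup (illustrative; the softmax and cross-entropy steps are standard, but the variable names are assumptions):

import numpy as np

def best_bid_move_loss(logits, observed_move):
    # logits: unnormalized LSTM scores for the moves (-1, 0, +1)
    p = np.exp(logits - logits.max())
    p /= p.sum()                            # softmax distribution z_t
    target = {-1: 0, 0: 1, +1: 2}[observed_move]
    return -np.log(p[target])               # cross-entropy with the observed move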
Questions?
References

[1] M. Arjovsky, A. Shah, and Y. Bengio. Unitary evolution recurrent neural networks. arXiv:1511.06464v4, 2016.

[2] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv:1410.5401v2, 2014.

[3] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9:1735–1780, 1997.
