Introduction to Algorithmic Trading Strategies
Lecture 2
Hidden Markov Trading Model
Haksun Li
haksun.li@numericalmethod.com
www.numericalmethod.com
References
• Patrik Idvall, Conny Jonsson. Algorithmic Trading: Hidden Markov Models on Foreign Exchange Data. University essay, Linköpings universitet, Matematiska institutionen, 2008.
• L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, vol. 77, issue 2, Feb 1989.
• Walter Zucchini, Iain L. MacDonald. Hidden Markov Models for Time Series: An Introduction Using R. 2009.
Bayes Theorem
• Bayes' theorem computes the posterior probability of a hypothesis H after evidence E is observed, in terms of
  • the prior probability, $P(H)$
  • the prior probability of E, $P(E)$
  • the conditional probability $P(E \mid H)$
• $P(H \mid E) = \frac{P(E \mid H)}{P(E)} P(H) = \frac{P(E \mid H)}{P(E \mid H) P(H) + P(E \mid \neg H) P(\neg H)} P(H)$
Bayes Theorem Examples
• A rare event may turn out to have occurred with high probability if the evidence for it is also rare: the prior is "scaled" by the probability of the evidence.
  • P(Jesus resurrection) = very small
  • P(apostle conversion) = very small, also
  • P(Jesus resurrection | apostle conversion) ≈ P(Jesus resurrection) / P(apostle conversion), taking P(apostle conversion | Jesus resurrection) ≈ 1
  • ≈ not too small, and in fact quite probable
• Conversely, the occurrence of a highly likely consequence does not mean that the event has likely occurred. The probability needs to be "discounted" by the background probability.
  • P(Pattern | Rare) = 98%
  • P(Pattern | ¬Rare) = 5%
  • P(Rare) = 0.1%
  • P(Rare | Pattern) = ?
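Working the last question through the theorem (all three inputs are given above):

$P(\text{Rare} \mid \text{Pattern}) = \frac{0.98 \times 0.001}{0.98 \times 0.001 + 0.05 \times 0.999} = \frac{0.00098}{0.05093} \approx 1.9\%$

So even though the pattern detects the rare event 98% of the time, a detected pattern still leaves the rare event improbable: the 0.1% prior dominates the 5% false-positive rate.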
Markov Chain
[Diagram: a two-state chain. State 1 emits x=+ with probability 0.8 and x=− with probability 0.2; state 2 emits x=+ with 0.3 and x=− with 0.7. Transitions: 1→1: 0.6, 1→2: 0.4, 2→1: 0.5, 2→2: 0.5.]
Markov Property
• The conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it.
• $P(x_t \mid q_t, \cdots, q_1, x_{t-1}, \cdots, x_1) = P(x_t \mid q_t)$
• Consistent with the weak form of the efficient market hypothesis.
Matrix Notations
• A two-state Markov chain. [Diagram: the chain from the previous slide.]
• Transition probability matrix: $A = \begin{pmatrix} 0.6 & 0.4 \\ 0.5 & 0.5 \end{pmatrix}$
• Conditional probability matrix: $P(x) = \mathrm{diag}(p_1(x), \dots, p_N(x))$
• $P(+) = \begin{pmatrix} 0.8 & 0 \\ 0 & 0.3 \end{pmatrix}$, $P(-) = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.7 \end{pmatrix}$
Examples
• What is the probability of observing the state sequence $S = s_1, s_1, s_1, s_2$?
• $P(S \mid \text{Model}) = P(s_1, s_1, s_1, s_2 \mid \text{Model})$
• $= P(s_1 \mid \text{Model}) \times P(s_1 \mid s_1, \text{Model}) \times P(s_1 \mid s_1, \text{Model}) \times P(s_2 \mid s_1, \text{Model})$
• $= 1 \times 0.6 \times 0.6 \times 0.4$
• $= 0.144$
• $P(X_1 = +, X_2 = +, X_3 = +) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} \pi_i\, p_i(+)\, a_{ij}\, p_j(+)\, a_{jk}\, p_k(+) = \Pi P(+)\, A P(+)\, A P(+)\, \mathbf{1}'$
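For a numeric check of both computations, here is a short sketch using the matrices above (numpy; the variable names are mine, and the uniform initial distribution for the second computation is an assumption carried over from the Viterbi example later):

```python
import numpy as np

A = np.array([[0.6, 0.4],           # transition probability matrix
              [0.5, 0.5]])
P_plus = np.diag([0.8, 0.3])        # conditional probability matrix for x = +
one = np.ones(2)

# P(S = s1, s1, s1, s2 | Model) = P(s1) * a_11 * a_11 * a_12
p_state_seq = 1.0 * A[0, 0] * A[0, 0] * A[0, 1]
print(p_state_seq)                  # 0.144

# P(X1 = +, X2 = +, X3 = +) = Pi P(+) A P(+) A P(+) 1'
pi = np.array([0.5, 0.5])           # assumed initial distribution
p_obs = pi @ P_plus @ A @ P_plus @ A @ P_plus @ one
print(p_obs)                        # ~0.166
```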
Hidden Markov Model
[Diagram: the same two-state chain, but with all emission probabilities (+: ?, −: ?) and all transition probabilities (?) unknown.]
Hidden Markov Model
• Only the observations are observable (duh).
• The world states may not be known (hidden).
• We model the hidden states as a Markov chain.
• An HMM in general does not itself satisfy the Markov property: the hidden states are Markovian, but the observation process is not.
Problems
• Likelihood
  • Given the parameters ϑ and an observation sequence X, compute $P(X \mid \vartheta)$.
• Decoding
  • Given the parameters ϑ and an observation sequence X, determine the best hidden state sequence Q.
• Learning
  • Given an observation sequence X and the HMM structure, learn ϑ.
Likelihood Solutions
Likelihood By Enumeration
• $P(X \mid \vartheta) = \sum_{Q} P(X, Q \mid \vartheta) = \sum_{Q} P(X \mid Q, \vartheta) \times P(Q \mid \vartheta)$
• $P(X \mid Q, \vartheta) = \prod_{t=1}^{T} P(x_t \mid q_t, \vartheta)$
• $P(Q \mid \vartheta) = \pi_{q_1} \times a_{q_1 q_2} \times a_{q_2 q_3} \times \cdots \times a_{q_{T-1} q_T}$
• But this is not computationally feasible, because it requires enumerating all $N^T$ possible state sequences.
Forward Procedure
• $\alpha_t(i) = P(x_1, x_2, \cdots, x_t, q_t = i \mid \vartheta)$
  • the probability of the partial observation sequence up to time t, with the system in state $s_i$ at time t
• Initialization
  • $\alpha_1(i) = \pi_i\, p_i(x_1)$
  • $p_i$: the conditional distribution of x in state $s_i$
• Induction
  • $\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i)\, a_{ij} \right] p_j(x_{t+1})$
• Termination
  • $P(X \mid \vartheta) = \sum_{i=1}^{N} \alpha_T(i)$, the likelihood
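A minimal sketch of the forward procedure (Python/numpy; the function name and conventions are mine, not the lecture's):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward procedure for a discrete-emission HMM.
    pi: (N,) initial probabilities; A: (N, N) transition matrix;
    B: (N, M) emission probabilities; obs: sequence of observation indices.
    Returns alpha with alpha[t, i] = P(x_1 .. x_t, q_t = i | theta)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                          # initialization
    for t in range(T - 1):
        alpha[t + 1] = (alpha[t] @ A) * B[:, obs[t + 1]]  # induction
    return alpha

# termination: likelihood = forward(pi, A, B, obs)[-1].sum()
```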
Backward Procedure
• $\beta_t(i) = P(x_{t+1}, x_{t+2}, \cdots, x_T \mid q_t = i, \vartheta)$
  • the probability of the partial observations from time t+1 through time T, given that the system is in state i at time t
• Initialization
  • $\beta_T(i) = 1$
• Induction
  • $\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, p_j(x_{t+1})\, \beta_{t+1}(j)$
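The mirror-image sketch for the backward procedure (same conventions as the forward code above):

```python
import numpy as np

def backward(A, B, obs):
    """Backward procedure.
    Returns beta with beta[t, i] = P(x_{t+1} .. x_T | q_t = i, theta)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))                              # initialization: beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])  # induction
    return beta
```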
Decoding Solutions
• Given the observations and the model, the probability that the system is in state i at time t is:
• $\gamma_t(i) = P(q_t = i \mid X, \vartheta) = \frac{P(q_t = i, X \mid \vartheta)}{P(X \mid \vartheta)} = \frac{\alpha_t(i)\, \beta_t(i)}{P(X \mid \vartheta)} = \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{i=1}^{N} \alpha_t(i)\, \beta_t(i)}$
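With alpha and beta from the two procedures above, the decoding probabilities follow in two lines (a sketch):

```python
def gamma(alpha, beta):
    """gamma[t, i] = P(q_t = i | X, theta); each row sums to 1."""
    g = alpha * beta
    return g / g.sum(axis=1, keepdims=True)
```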
Maximizing The Expected Number Of States
• $q_t^* = \operatorname{argmax}_{1 \le i \le N}\, \gamma_t(i)$
• This determines the most likely state at every instant t, without regard to the probability of occurrence of sequences of states.
Viterbi Algorithm
• The maximal probability of the system travelling through these states, ending in state i, and generating these observations:
• $\delta_t(i) = \max_{q_1, \dots, q_{t-1}} P(q_1, q_2, \cdots, q_t = i, x_1, \cdots, x_t \mid \vartheta)$
Viterbi Algorithm
• Initialization
  • $\delta_1(i) = \pi_i\, p_i(x_1)$
• Recursion
  • $\delta_t(j) = \max_i \left[ \delta_{t-1}(i)\, a_{ij} \right] p_j(x_t)$
    • the probability of the most probable state sequence for the first t observations, ending in state j
  • $\psi_t(j) = \operatorname{argmax}_i \left[ \delta_{t-1}(i)\, a_{ij} \right]$
    • the state chosen at time t (the backpointer)
• Termination
  • $P^* = \max_i \delta_T(i)$
  • $q_T^* = \operatorname{argmax}_i \delta_T(i)$
Viterbi Algorithm Example
[Diagram: states U and D with emissions $p_U(+)=0.8$, $p_U(-)=0.2$, $p_D(+)=0.3$, $p_D(-)=0.7$; transitions $a_{UU}=0.6$, $a_{UD}=0.4$, $a_{DU}=0.5$, $a_{DD}=0.5$; initial probabilities $\pi_U = \pi_D = 0.5$.]
• Observation sequence: +, +, −, +
• $\delta_1(U) = \pi_U\, p_U(+) = 0.5 \times 0.8 = 0.4$
• $\delta_1(D) = \pi_D\, p_D(+) = 0.5 \times 0.3 = 0.15$
• $\delta_2(U) = \max(\delta_1(U)\, a_{UU},\ \delta_1(D)\, a_{DU})\, p_U(+) = \max(0.4 \times 0.6,\ 0.15 \times 0.5) \times 0.8 = 0.24 \times 0.8 = 0.192$
• $\delta_2(D) = \max(\delta_1(U)\, a_{UD},\ \delta_1(D)\, a_{DD})\, p_D(+) = \max(0.4 \times 0.4,\ 0.15 \times 0.5) \times 0.3 = 0.16 \times 0.3 = 0.048$
Learning Solutions
Rabiner Model
• Discrete probability masses for observations.
[Diagram: the two-state chain with unknown emission and transition probabilities.]
As A Maximization Problem
• Our objective is to find the ϑ that maximizes $P(X \mid \vartheta)$.
• For any given ϑ, we can compute $P(X \mid \vartheta)$, e.g., by the forward procedure.
• Then solve a maximization problem.
• Algorithm: Nelder-Mead.
Baum-Welch
• The probability of being in state i at time t and in state j at time t+1, given the model and the observation sequence:
• $\xi_t(i, j) = P(q_t = i, q_{t+1} = j \mid X, \vartheta)$
Xi
• $\xi_t(i, j) = P(q_t = i, q_{t+1} = j \mid X, \vartheta) = \frac{P(q_t = i, q_{t+1} = j, X \mid \vartheta)}{P(X \mid \vartheta)} = \frac{\alpha_t(i)\, a_{ij}\, p_j(x_{t+1})\, \beta_{t+1}(j)}{P(X \mid \vartheta)}$
• $\gamma_t(i) = P(q_t = i \mid X, \vartheta) = \sum_{j=1}^{N} \xi_t(i, j)$
Estimation Equation
• Summing over time,
  • $\sum_t \gamma_t(i)$ ~ the expected number of times state i is visited
  • $\sum_t \xi_t(i, j)$ ~ the expected number of transitions from state i to state j
• Thus, the re-estimated parameters are:
  • $\bar\pi_i = \gamma_1(i)$, the initial state probabilities
  • $\bar a_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$, the transition probabilities
  • $\bar p_j(v_k) = \frac{\sum_{t=1,\ x_t = v_k}^{T-1} \gamma_t(j)}{\sum_{t=1}^{T-1} \gamma_t(j)}$, the conditional probabilities
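A sketch of one Baum-Welch re-estimation pass for the discrete case, built on the forward/backward code above (the vectorized computation is my own; it implements the ξ and γ formulas from the previous slides):

```python
import numpy as np

def baum_welch_step(pi, A, B, obs):
    """One EM iteration for a discrete-emission HMM; returns updated (pi, A, B)."""
    obs = np.asarray(obs)
    alpha, beta = forward(pi, A, B, obs), backward(A, B, obs)
    likelihood = alpha[-1].sum()

    # xi[t, i, j] = alpha_t(i) a_ij p_j(x_{t+1}) beta_{t+1}(j) / P(X | theta)
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
    gam = alpha * beta / likelihood                          # gamma[t, i]

    new_pi = gam[0]                                          # initial state probabilities
    new_A = xi.sum(axis=0) / gam[:-1].sum(axis=0)[:, None]   # transition probabilities
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):                              # conditional probabilities p_j(v_k)
        new_B[:, k] = gam[obs == k].sum(axis=0)
    new_B /= gam.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```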
Conditional Probabilities
• Our formulation so far assumes discrete conditional probabilities.
• The formulations for other probability density functions are similar.
• But the computations are more complicated, and the solutions may not even be analytical, e.g., for the t-distribution.
Heavy Tail Distributions
• t-distribution
• Gaussian Mixture Model
  • a weighted sum of Normal distributions
Trading Ideas
• Compute the next state.
• Compute the expected return.
• Long (short) when the expected return > 0 (< 0).
• Long (short) when the expected return > (<) c.
  • c = the transaction costs
• Any other ideas?
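One reading of the first two ideas as code (a sketch only; all names are hypothetical, `mu` holds each state's expected return, and the symmetric cost threshold is my assumption):

```python
import numpy as np

def trade_signal(alpha_T, A, mu, cost):
    """+1 long, -1 short, 0 flat, based on the expected next-period return.
    alpha_T: filtered state probabilities at time T (last row of the forward pass);
    A: transition matrix; mu: per-state expected returns; cost: transaction cost."""
    next_state = (alpha_T / alpha_T.sum()) @ A   # predicted state distribution
    exp_ret = next_state @ mu                    # expected next-period return
    if exp_ret > cost:
        return +1
    if exp_ret < -cost:
        return -1
    return 0                                     # expected edge does not cover costs
```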
Experiment Setup
• EURUSD daily prices from 2003 to 2006.
• 6 unknown factors.
• The model parameters are estimated on a rolling basis.
• Evaluations:
  • Hypothesis testing
  • Sharpe ratio
  • VaR
  • Max drawdown
  • Alpha
Best Discrete Case
[Figure omitted.]
Best Continuous Case
[Figure omitted.]
Results
• More data (the 6 factors) do not always help, especially in the discrete case.
• The estimated parameters are unstable.
TODOs
• How can we improve the HMM model(s)? Ideas?
Maximum Likelihood
• One way to estimate the parameters of a model.
• Which is the most likely model/die/number of faces to have generated the following observations?
  • 1,2,1,2,1,1,3,4,1,1,2,4,2,4,1,2
  • 1,2,3,4,5,6,4,5,6,3,5,2,4,6,2
  • 1,1,1,1,1,1,1,1,1,1
• Do you think you get the right model?
  • P(1,1,1,1,1,1,1,1,1,1 | 12-faced die) = $(1/12)^{10} \approx 1.6 \times 10^{-11}$ — possible, but far less likely than under a die that almost always shows 1.
Likelihood Function
• Probability: a function of outcomes, given a fixed parameter value.
  • What is the probability of getting 10 heads flipping a fair coin?
• Likelihood: a function of the parameter value, given an outcome.
  • What is the likelihood that the coin is fair when it lands heads 10 times in a row?
Maximum Likelihood Estimate
• Intuition: we want to find a model (parameter value) such that the probability of observing the outcome is maximized, i.e., most likely.
• We want to find the ϑ for which $p(X \mid \vartheta)$ is largest.
• $L(\vartheta; X) = p(X \mid \vartheta)$
• We find ϑ such that $L(\vartheta; X)$ is maximized, given the observation.
Example Using the Normal Distribution
• We want to estimate the mean of a sample of size N drawn from a Normal distribution.
• $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$
• $\vartheta = (\mu, \sigma)$
• $L_N(\vartheta; X) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right)$
Log-Likelihood
• $\log L_N(\vartheta; X) = \sum_{i=1}^{N} \left[ \log \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{(x_i - \mu)^2}{2\sigma^2} \right]$
• Maximizing the log-likelihood over μ is equivalent to maximizing
  • $-\sum_{i=1}^{N} (x_i - \mu)^2$
• First-order condition w.r.t. μ:
  • $\hat\mu = \frac{1}{N} \sum_{i=1}^{N} x_i$
• Likewise, for the variance, we have
  • $\hat\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \hat\mu)^2$
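A quick numeric check of the two estimators on simulated data (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=2.0, size=100_000)   # known mu = 1.5, sigma = 2.0

mu_hat = x.mean()                          # (1/N) sum x_i
sigma2_hat = ((x - mu_hat) ** 2).mean()    # (1/N) sum (x_i - mu_hat)^2, the (biased) MLE
print(mu_hat, np.sqrt(sigma2_hat))         # close to 1.5 and 2.0
```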
Marginal Likelihood
• For the set of hidden states, $Z_t$, we write
  • $L(\vartheta; X) = p(X \mid \vartheta) = \sum_{Z} p(X, Z \mid \vartheta)$
• If we knew the conditional distribution of Z, we could instead maximize
  • $\max_\vartheta\, \mathrm{E}_Z\left[ L(\vartheta \mid X, Z) \right]$, or
  • $\max_\vartheta\, \mathrm{E}_Z\left[ \log L(\vartheta \mid X, Z) \right]$
• The expectation is a weighted sum of the (log-)likelihoods, weighted by the probability of the hidden states.
The Q-Function
• Where do we get the conditional distribution of $Z_t$ from?
• Suppose we somehow have an (initial) estimate of the parameters, $\vartheta_0$. Then the model has no unknowns, and we can compute the distribution of $Z_t$.
• $Q(\vartheta \mid \vartheta_t) = \mathrm{E}_{Z \mid X, \vartheta_t}\left[ \log L(\vartheta \mid X, Z) \right]$
EM Intuition
• If we know ϑ, we know the model completely, and we can find Z.
• If we know Z, we can estimate ϑ by, e.g., maximum likelihood.
• What do we do if we know neither ϑ nor Z?
Expectation-Maximization Algorithm
• Expectation step (E-step): compute the expected value of the log-likelihood function w.r.t. the conditional distribution of Z given X and $\vartheta_t$.
  • $Q(\vartheta \mid \vartheta_t) = \mathrm{E}_{Z \mid X, \vartheta_t}\left[ \log L(\vartheta \mid X, Z) \right]$
• Maximization step (M-step): find the parameters ϑ that maximize the Q-value.
  • $\vartheta_{t+1} = \operatorname{argmax}_\vartheta\, Q(\vartheta \mid \vartheta_t)$
Mixture HMM
• Continuous probabilities for observations.
[Diagram: the two-state chain with unknown transition probabilities and continuous emission distributions.]
Matrix Notation
• Likelihood: $L_T(\vartheta; X) = \Pi P(x_1)\, A P(x_2)\, A P(x_3) \cdots A P(x_T)\, \mathbf{1}'$
• Forward probabilities: $\alpha_t = \Pi P(x_1)\, A P(x_2)\, A P(x_3) \cdots A P(x_t) = \Pi P(x_1) \prod_{i=2}^{t} A P(x_i)$
  • Each entry is the joint probability of seeing all the observations up to time t and ending up in state j: $\alpha_t(j) = \Pr(X_1^t = x_1^t, q_t = j)$
  • Induction: $\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i)\, a_{ij} \right] p_j(x_{t+1})$
• Backward probabilities: $\beta_t' = A P(x_{t+1})\, A P(x_{t+2}) \cdots A P(x_T)\, \mathbf{1}'$
  • Each entry is the conditional probability of seeing all the future observations given starting out from state j: $\beta_t(j) = \Pr(X_{t+1}^T = x_{t+1}^T \mid q_t = j)$
  • Induction: $\beta_t' = A P(x_{t+1})\, \beta_{t+1}'$
• $\lambda_t(i) = P(X, q_t = i \mid \vartheta) = \alpha_t(i)\, \beta_t(i)$
• Likelihood: $\sum_{i=1}^{N} \lambda_t(i) = \alpha_t \beta_t' = L_T$
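The matrix form translates directly into code; a sketch that can be checked against the element-wise forward procedure earlier:

```python
import numpy as np

def likelihood_matrix_form(pi, A, B, obs):
    """L_T = Pi P(x_1) A P(x_2) ... A P(x_T) 1'."""
    v = pi * B[:, obs[0]]            # row vector Pi P(x_1)
    for o in obs[1:]:
        v = (v @ A) * B[:, o]        # right-multiply by A P(x_t)
    return v.sum()                   # dot with the column vector of ones
```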
EM for HMM
• $u_j(t) = 1$ if $q_t = j$ (and 0 otherwise)
• $v_{jk}(t) = 1$ if $q_{t-1} = j$ and $q_t = k$ (and 0 otherwise)
• $\log \Pr(x_1^T, q_1^T) = \log \left[ \pi_{q_1} \prod_{t=2}^{T} a_{q_{t-1}, q_t} \prod_{t=1}^{T} p_{q_t}(x_t) \right]$
• $= \log \pi_{q_1} + \sum_{t=2}^{T} \log a_{q_{t-1}, q_t} + \sum_{t=1}^{T} \log p_{q_t}(x_t)$
• $= \sum_{j=1}^{N} u_j(1) \log \pi_j + \sum_{t=2}^{T} \sum_{j=1}^{N} \sum_{k=1}^{N} v_{jk}(t) \log a_{j,k} + \sum_{t=1}^{T} \sum_{j=1}^{N} u_j(t) \log p_j(x_t)$
• Term 1: $\sum_{j=1}^{N} u_j(1) \log \pi_j$
• Term 2: $\sum_{t=2}^{T} \sum_{j=1}^{N} \sum_{k=1}^{N} v_{jk}(t) \log a_{j,k}$
• Term 3: $\sum_{t=1}^{T} \sum_{j=1}^{N} u_j(t) \log p_j(x_t)$
E Step
• Given the current $\vartheta = (\pi, A, \lambda)$, we can estimate $u_j(t)$ and $v_{jk}(t)$ from the forward and backward probabilities.
• $\hat u_j(t) = \Pr(q_t = j \mid x_1^T) = \alpha_t(j)\, \beta_t(j) / L_T$
• $\hat v_{jk}(t) = \Pr(q_{t-1} = j, q_t = k \mid x_1^T) = \alpha_{t-1}(j)\, a_{jk}\, p_k(x_t)\, \beta_t(k) / L_T$
M Step
• Term 1: maximize $\sum_{j=1}^{N} \hat u_j(1) \log \pi_j$ w.r.t. each $\pi_j$
  • $\hat\pi_j = \hat u_j(1) / \sum_{j=1}^{N} \hat u_j(1) = \hat u_j(1)$
• Term 2: maximize $\sum_{t=2}^{T} \sum_{j=1}^{N} \sum_{k=1}^{N} \hat v_{jk}(t) \log a_{j,k}$ w.r.t. each $a_{jk}$
  • $\hat a_{jk} = f_{jk} / \sum_{k=1}^{N} f_{jk}$, where $f_{jk} = \sum_{t=2}^{T} \hat v_{jk}(t)$
• Term 3: maximize $\sum_{t=1}^{T} \sum_{j=1}^{N} \hat u_j(t) \log p_j(x_t)$ w.r.t. the parameters λ of the conditional probability distribution $p_j(x)$ in each state.
Poisson-HMM
• $p_j(x) = e^{-\lambda_j} \lambda_j^x / x!$
• Maximize $\sum_{t=1}^{T} \sum_{j=1}^{N} \hat u_j(t) \log p_j(x_t)$.
• Each of the j terms can be individually maximized:
  • $\sum_{t=1}^{T} \hat u_j(t) \log p_j(x_t) = \sum_{t=1}^{T} \hat u_j(t) \left[ -\lambda_j + x_t \log \lambda_j - \log x_t! \right]$
  • First-order condition: $0 = \sum_{t=1}^{T} \hat u_j(t) \left( -1 + x_t / \lambda_j \right)$
  • $\hat\lambda_j = \sum_{t=1}^{T} \hat u_j(t)\, x_t \big/ \sum_{t=1}^{T} \hat u_j(t)$
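The closed-form update in code (a sketch; `u_hat` is the T×N matrix of smoothed state probabilities from the E-step):

```python
import numpy as np

def poisson_m_step(u_hat, x):
    """lambda_j = weighted mean of the observations, with weights u_hat[:, j]."""
    return (u_hat * x[:, None]).sum(axis=0) / u_hat.sum(axis=0)
```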
Normal-HMM
• $\hat\mu_j = \sum_{t=1}^{T} \hat u_j(t)\, x_t \big/ \sum_{t=1}^{T} \hat u_j(t)$
• $\hat\sigma_j^2 = \sum_{t=1}^{T} \hat u_j(t) \left( x_t - \hat\mu_j \right)^2 \big/ \sum_{t=1}^{T} \hat u_j(t)$
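And the Normal-HMM updates in the same shape (a sketch, continuing the conventions above):

```python
def normal_m_step(u_hat, x):
    """Weighted mean and variance per state, with weights u_hat[:, j]."""
    w = u_hat.sum(axis=0)
    mu = (u_hat * x[:, None]).sum(axis=0) / w
    var = (u_hat * (x[:, None] - mu) ** 2).sum(axis=0) / w
    return mu, var
```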