Monte Carlo Techniques in Earth Sciences
Data assimilation
Amit Apte
International Centre for Theoretical Sciences (ICTS-TIFR)
Bangalore, India
SAMSI workshop, 26 Feb 2018
The movies shown earlier are from Philip Brohan:
https://vimeo.com/170761410
https://vimeo.com/170971015
Outline
1 An introduction to data assimilation
2 Mathematical basis of data assimilation
3 Sampling: numerical technique for approximating the posterior
Section 1: An introduction to data assimilation
A few random(!) questions
When is the first total solar eclipse in India after 2100?
What will be the closest approach of Halley’s comet in 2060?
How many times in the next hour will a double pendulum reach its
apogee? What will be the angle of a double pendulum after 5 min.,
10 min., ...?
Breaking waves – which wave will reach you?
What will be the min/max temperatures in the five largest cities in India,
tomorrow, the day after, over the next month?
What will be the major stock exchange indices tomorrow?
How many cars will enter the Golden Gate Bridge in the next 30 minutes?
Who will be the prime minister of India in 2020? In 2030?
How many nuclei from a given piece of U-235 will decay in the next 10
minutes? ...
Two essential ingredients for describing reality
Physical theories ←→ mathematical models
In order to understand this (e.g. the ocean), we first need to understand:
Fluid dynamics and thermodynamics
Ocean model ≡ appropriate approximation and numerical implementation
“physical parameters” – bathymetry (depth of the ocean) and coastline; specific heat of water; etc.
external forcing – wind, temperature, humidity of the atmosphere, inflow of river water
parametrization of “unresolved processes”
Even all of the above is NOT sufficient!
data assimilation – using the measurements from the ocean
Data, of course, provide a crucial link to reality
We have a large number of observations from satellites, ships, weather
stations etc., but they are
not uniformly distributed either in space or time
quite sparse (e.g. much sparser in the southern hemisphere)
could depend in a complicated way on the atmospheric conditions
(satellite data)
Thus, the observations are insufficient to specify the model variables
completely (and to describe the state in the physical theory).
→ under-determined, ill-posed inverse problem
Note: this is the problem of studying a specific instance (or realization) – this specific planet.
So the chain of interactions
physical theories ↔ models ↔ data
for complex systems such as the planet leads to:
What is data assimilation?
The art of optimally incorporating
partial and noisy observational data of a
chaotic, nonlinear, complex dynamical system with an
imperfect model (of the data and the system dynamics) to get an
estimate and the associated uncertainty for the system state
[Figure: schematic of sequential data assimilation. The true trajectory lives in state space; observations live in observation space, related to the state by the observation function h, with observation error. An ensemble forecast is turned into an updated ensemble; arrows indicate the data assimilation process over time.]
Data assimilation is an estimation problem:
estimation of the state, in time, repeatedly.
Breaking waves – which wave will reach you? (insurance)
What will be the min/max temperatures in the five largest cities in India,
tomorrow, the day after, over the next month? (planning)
What will be the average temperature in Bangalore, month by month,
in 2050, or up to 2050? (design)
A few characteristics of data assimilation problems:
Good physical theories, but not necessarily good models
Systems are nonlinear and chaotic (usually deterministic)
Multiscale – temporal and spatial – dynamics
Observations of the system are
noisy
partial (sparse)
discrete in time
Main ingredients
A dynamical model: given the state x(t) ∈ R^d at any time t, gives
the state x(s) at any later time s > t: Lorenz-63, Lorenz-96, etc. (for
synthetic data studies, d = 3 or d = 40 etc.) or general circulation
models (for ocean / atmosphere / coupled, d = 10^7 or d = 10^4)
Observations y_i ∈ R^p at times t_i, for i = 1, . . . , T (typically p ≪ d)
Observations are partial (with gaps), noisy, discrete in time
Observation operator h : R^d → R^p to relate the model variables at
time t with observations at the same time: if the state were x(t), the
observations without noise would be h(x(t))
Observational “errors”: need to account for the difference between
how the real system is represented in the model (representativeness
error) and the instrumental uncertainty (noise)
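To make these ingredients concrete, here is a minimal sketch in Python (all names and numbers are illustrative, not from the slides): the Lorenz-63 system with its standard parameters as the dynamical model, an observation operator h that sees only the first component (so p = 1 < d = 3), and additive Gaussian noise standing in for the observational errors.

```python
# A minimal sketch of the "main ingredients", assuming the Lorenz-63 model
# (sigma=10, rho=28, beta=8/3) integrated with a simple Euler step, and an
# observation operator h that sees only the first state component.
import numpy as np

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One (forward Euler) model step x(t) -> x(t+dt) for the d=3 state."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def h(x):
    """Observation operator h: R^3 -> R^1 (partial observation)."""
    return x[:1]

rng = np.random.default_rng(0)
x = np.array([1.0, 1.0, 1.0])          # true (but in practice unknown) state
obs_noise_std = 0.5                     # instrumental noise level (assumed)
ys = []
for t in range(100):                    # observations discrete in time
    x = lorenz63(x)
    ys.append(h(x) + obs_noise_std * rng.standard_normal(1))
print("first few synthetic observations:", np.round(np.array(ys)[:5].ravel(), 3))
```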
How do we represent uncertainty? Using probabilities!
p(x)dx is the probability of a state x
p(x, y)dxdy is the joint probability of the state x and observation y
Probability densities like these in 10^x dimensions are difficult to represent,
but densities can be represented by “samples” (dots in the figures).
[Images: CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1260349 and
By Bscan - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=25235145]
Main concept that you need to remember - conditional probability:
p(x|y) = p(x, y) / p(y)
But this can be written as
p(x, y) = p(x|y) p(y) = p(y|x) p(x)
This is a step away from Bayes’ theorem:
p(x|y) = p(y|x) p(x) / p(y)
If two random variables are correlated (more generally, dependent), information
about one gives some information about the other.
[Figure: joint density of (x, y) with the conditional slice at y = 3; the mean of
p(x|y=3) is ≈ 1.0.]
That’s it: that is data assimilation!
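The conditioning in the figure can be reproduced directly with samples. The sketch below assumes an illustrative correlated bivariate Gaussian, not the exact density of the figure: it approximates p(x|y=3) by keeping only the samples whose y falls near 3, which is exactly the "densities as samples" idea above.

```python
# A sketch of conditioning via samples, assuming an illustrative bivariate
# Gaussian (not the density from the slides' figure): keep the samples whose
# y lands near the observed value and look at their x distribution.
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.8],            # x and y are correlated, so observing
                [0.8, 2.0]])           # y tells us something about x
samples = rng.multivariate_normal(mean, cov, size=200_000)

y_obs = 3.0
near = np.abs(samples[:, 1] - y_obs) < 0.05   # crude conditioning on y ~= 3
print(f"sample mean of p(x|y={y_obs}) ~= {samples[near, 0].mean():.2f}")
print(f"exact conditional mean = {cov[0,1] / cov[1,1] * y_obs:.2f}")  # 1.2 here
```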
So what is the big deal!? Ah... time
Unfortunately, the x and y in the previous slide are all time dependent...
so we should really be watching a movie of the probability densities, rather
than images shown earlier!
Section 2: Mathematical basis of data assimilation
Nonlinear filtering ≡ data assimilation
Consider a stochastic dynamical model
x_{t+1} = m(x_t) + ζ_t, with x_0 unknown
Thus we assume a probability density p^a(x_0) for the initial condition.
We will consider the problem of “estimating” the state x at some
time t given observations at times 1, 2, . . . , N.
Smoothing: obtain a state estimate x_t for t < N using all the
observations up to time N; in particular, determine x_0
Filtering: obtain a state estimate x_N using observations up to time N
Prediction: obtain a state estimate x_t for t > N (the time horizon of
prediction is important)
In most applications in earth sciences, data is collected “all the time”,
so the most relevant problem is that of filtering.
Predictions are obtained by using the filtering solution as “initial
conditions” for the appropriate PDE of interest (hence the common
view that data assimilation is the problem of finding initial
conditions).
Or data assimilation ≡ determination of posterior i.e.
conditional distribution given the observations
Observations y_t at time t depend on the state at that time:
y_t = h(x_t) + η_t, t = 1, . . . , N
h is called the observation operator; η_t is observational noise. Eventually
we will assume independence between η_t and ζ_t.
Probabilistic statement of the data assimilation problem: find the posterior
distribution of the state conditioned on the observations
Smoothing: p(x_t|y_1, y_2, . . . , y_N) for t < N
Filtering: p(x_N|y_1, y_2, . . . , y_N)
Prediction: p(x_t|y_1, y_2, . . . , y_N) for t > N
[Figure: two-step process (prediction and update) for obtaining the filtering density; the steps are spelled out below.]
Filtering density: obtained in a two-step process
A notation: y_{1:t} = {y_1, y_2, . . . , y_t} and x_{1:t} = {x_1, x_2, . . . , x_t}
The first step is “prediction”:
Suppose we have the probability p^a(x_{1:t}|y_{1:t}) of states x_{1:t} up to time
t conditioned on observations y_{1:t} up to time t, and recall that
x_{t+1} = m(x_t) + ζ_t (which is a Markov chain, with transition kernel
p^m(x_{t+1}|x_t)).
→ Then the probability p^f(x_{1:t+1}|y_{1:t}) of the states x_{1:t+1} up to time
t + 1 conditioned on observations y_{1:t} up to time t is obtained by:
p^f(x_{1:t}, x_{t+1}|y_{1:t}) = p(x_{1:t}|y_{1:t}) · p(x_{t+1}|x_{1:t}, y_{1:t}) = p^a(x_{1:t}|y_{1:t}) · p^m(x_{t+1}|x_t)
The next step is “update”:
Given the above probability p^f(x_{1:t+1}|y_{1:t}) of the states x_{1:t+1} up to
time t + 1 conditioned on observations y_{1:t} up to time t, and recalling
y_{t+1} = h(x_{t+1}) + η_{t+1},
→ the probability p^a(x_{1:t+1}|y_{1:t+1}) of the states x_{1:t+1} up to
time t + 1 conditioned on observations y_{1:t+1} up to time t + 1 is given
by Bayes’ theorem:
p^a(x_{1:t+1}|y_{1:t}, y_{t+1}) = p(x_{1:t+1}|y_{1:t}) · p(y_{t+1}|x_{1:t+1}, y_{1:t}) / p(y_{t+1}|y_{1:t}) ∝ p^f(x_{1:t+1}|y_{1:t}) · p_η(y_{t+1}|x_{t+1})
Filtering density satisfies a recursion relation
Putting together the two relations from the previous slide:
“prediction”, given by
p^f(x_{1:t}, x_{t+1}|y_{1:t}) = p^a(x_{1:t}|y_{1:t}) · p^m(x_{t+1}|x_t)
“update”, given by
p^a(x_{1:t+1}|y_{1:t}, y_{t+1}) ∝ p^f(x_{1:t+1}|y_{1:t}) · p_η(y_{t+1}|x_{t+1})
we obtain the following recursive relation for the posterior distribution:
p^a(x_{1:t+1}|y_{1:t+1}) ∝ p^a(x_{1:t}|y_{1:t}) · p^m(x_{t+1}|x_t) · p_η(y_{t+1}|x_{t+1})
where p_η(y_{t+1}|x_{t+1}) is the observational noise density and p^m(x_{t+1}|x_t) is the
Markov transition kernel for the dynamical model.
Kalman filter: a “two moment” representation of the
Gaussian posterior in the case of a linear model
Suppose the model is linear, m(x) = Mx, the observation operator is
linear, h(x) = Hx, and the initial distribution for x_0 is Gaussian, as are the
stochasticity in the observations η_t and in the dynamical model ζ_t.
The Kalman filter gives a recursion relation for the mean and covariance:
(x^a_t, C^a_t) for p^a(x_t|y_{1:t}) and (x^f_{t+1}, C^f_{t+1}) for p^f(x_{t+1}|y_{1:t}):
“Update step” given by
x^a_t = x^f_t + K(y_t − H x^f_t) and C^a_t = (I − KH) C^f_t
Here K = C^f_t H^T (H C^f_t H^T + R)^{−1} is the Kalman gain matrix, with R the
observation noise covariance
“Prediction step” given by
x^f_{t+1} = M x^a_t and C^f_{t+1} = M C^a_t M^T
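A minimal sketch of this recursion, with illustrative 2-dimensional M, H, Q, R (not from the slides). One caveat: the prediction step below adds the model-noise covariance Q to C^f, which appears when the model is stochastic; the slide's formula omits it.

```python
# A sketch of the Kalman recursion above, assuming an illustrative 2-d
# linear model M, scalar observations H, and Gaussian noises with
# covariances Q (model noise zeta_t) and R (observation noise eta_t).
import numpy as np

rng = np.random.default_rng(2)
M = np.array([[1.0, 0.1],
              [0.0, 1.0]])             # linear model x -> Mx
H = np.array([[1.0, 0.0]])             # observe the first component only
Q = 0.01 * np.eye(2)                   # model noise covariance
R = np.array([[0.25]])                 # observation noise covariance

x_true = np.array([0.0, 1.0])
xa, Ca = np.zeros(2), np.eye(2)        # analysis mean and covariance

for t in range(50):
    # truth and observation
    x_true = M @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    # prediction step: xf = M xa, Cf = M Ca M^T (+ Q for the stochastic model)
    xf = M @ xa
    Cf = M @ Ca @ M.T + Q
    # update step: K = Cf H^T (H Cf H^T + R)^-1
    K = Cf @ H.T @ np.linalg.inv(H @ Cf @ H.T + R)
    xa = xf + K @ (y - H @ xf)
    Ca = (np.eye(2) - K @ H) @ Cf

print("final analysis mean:", np.round(xa, 3), " truth:", np.round(x_true, 3))
```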
Computational hurdles
Recall the recursive formulae for the exact or the Kalman filter.
Exact filtering density:
p^a(x_{1:t+1}|y_{1:t+1}) ∝ p^a(x_{1:t}|y_{1:t}) · p^m(x_{t+1}|x_t) · p_η(y_{t+1}|x_{t+1})
Kalman filter:
x^a_t = x^f_t + K(y_t − H x^f_t) and C^a_t = (I − KH) C^f_t
x^f_{t+1} = M x^a_t and C^f_{t+1} = M C^a_t M^T
Also recall: x ∈ R^d with d ∼ 10^6 − 10^7, and C is a d × d matrix.
Essentially impossible to even store or forecast the covariance matrix!!
Sampling methods provide (seemingly) efficient ways to approximate the
above
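A back-of-the-envelope calculation makes the storage hurdle concrete (the dimensions are the slide's orders of magnitude; the byte count assumes dense double precision):

```python
# Storage for a dense d x d covariance matrix, assuming 8-byte (double
# precision) entries and the slide's d ~ 10^6 - 10^7.
for d in (1e6, 1e7):
    terabytes = d * d * 8 / 1e12
    print(f"d = {d:.0e}: C needs {terabytes:,.0f} TB")   # 8 TB and 800 TB
```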
Section 3: Sampling: numerical technique for approximating the posterior
Basic idea of sampling a density f(x)
Suppose X_1, X_2, . . . , X_N are N independent, identically distributed (IID)
random variables (RVs) with density f(x). For any function g(x), define the
sample mean of g(x) to be
G_N = (1/N) Σ_{n=1}^{N} g(X_n)
Then
E[G_N] = (1/N) Σ_{n=1}^{N} E[g(X_n)] = E[g(X)]
and
var[G_N] = (1/N^2) Σ_{n=1}^{N} var[g(X_n)] = (1/N) var[g(X)]
So, as N → ∞, var[G_N] → 0
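A quick numerical check of these statements, assuming f is the standard normal density and g(x) = x^2, so that E[g(X)] = 1 and var[g(X)] = 2 (illustrative choices, not from the slides):

```python
# The sample mean G_N is unbiased, and N * var[G_N] stays near var[g(X)] = 2.
import numpy as np

rng = np.random.default_rng(3)
g = lambda x: x**2

for N in (100, 10_000, 1_000_000):
    # 100 independent replicas of G_N to estimate its mean and variance
    G = np.array([g(rng.standard_normal(N)).mean() for _ in range(100)])
    print(f"N={N:>9}: mean of G_N = {G.mean():.4f}, N*var[G_N] = {N*G.var():.2f}")
```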
Sample mean approximates the mean
Recall, E[G_N] = E[g(X)], and as N → ∞, var[G_N] → 0; thus
E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx ≈ (1/N) Σ_{n=1}^{N} g(X_n) = G_N
This is the basis for Monte Carlo integration and sampling methods.
For large enough N, we are guaranteed convergence! Justification: the
law of large numbers:
P{ lim_{N→∞} G_N = E[g(X)] } = 1.
What about the error for some given N, or how do we choose N if
we fix an error tolerance?
Errors are given by the Chebyshev inequality
P{ |G_N − E[G_N]| ≥ (var[G_N]/δ)^{1/2} } ≤ δ
But var[G_N] = var[g(X)]/N, which means:
the probability that the sample mean G_N and the exact mean of g(X) differ
by more than (var[g(X)]/(δN))^{1/2} is no more than δ.
Two ways to decrease the
error ≈ (var[g(X)]/(δN))^{1/2}:
increase the sample size N
decrease var[g(X)]
How can we decrease var[g(X)]? By a change of the probability distribution
with respect to which we are taking the expectations! This is the basic idea
of importance sampling
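As a worked use of the bound (with illustrative numbers, not from the slides): requiring (var[g(X)]/(δN))^{1/2} ≤ ε gives N ≥ var[g(X)]/(δ ε^2).

```python
# Choosing N from the Chebyshev bound: (var[g(X)]/(delta*N))^(1/2) <= eps,
# i.e. N >= var[g(X)] / (delta * eps^2). Illustrative numbers.
import math

var_g, eps, delta = 2.0, 0.01, 0.01    # variance, tolerance, failure probability
N = math.ceil(var_g / (delta * eps**2))
print(f"N >= {N:,}")                   # 2,000,000 samples
```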
Importance sampling: change of measure!
First a sleight of hand: for any probability density p(x),
E_f[g(X)] = ∫ g(x) f(x) dx = ∫ [g(x) f(x) / p(x)] p(x) dx = E_p[f(X) g(X) / p(X)]
So now, define ḡ(X) = f(X) g(X) / p(X). If we take all expectations with respect
to the new probability density p(x),
var_p[ḡ(X)] = ∫ [f^2(x) g^2(x) / p^2(x)] p(x) dx − (E_p[ḡ(X)])^2
Check: the choice p(x) ∝ g(x) f(x) minimizes the variance!!
Not usable, since we do not know the normalization constant.
But the intuition is useful: choose p(x) to be as close to g(x) f(x) as
possible.
Importance sampling: weighted samples
Recall, for any probability density p(x),
E_f[g(X)] = ∫ g(x) f(x) dx = ∫ [g(x) f(x) / p(x)] p(x) dx = E_p[f(X) g(X) / p(X)]
If X_1, X_2, . . . , X_N are samples from p(x), then, to get the “correct”
estimate of E_f[g(X)], we need to define a weighted mean:
G_N = (1/N) Σ_{n=1}^{N} w_n g(X_n), with w_n = ?
Check: E[G_N] = E_f[g(X)] (the proof is essentially above).
Heuristics: choose p(x) to be as close to g(x) f(x) as possible, but
easy to sample.
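A minimal sketch of the weighted mean, assuming the standard importance weights w_n = f(X_n)/p(X_n), which is the usual answer to the "w_n = ?" above. The target is a tail probability, chosen to show why moving mass toward g(x)f(x) helps:

```python
# Importance sampling for E_f[g(X)] = P(X > 4) with X ~ N(0,1): a rare event
# for which plain sampling from f almost never produces a useful sample.
import math
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
f = lambda x: np.exp(-x**2 / 2) / math.sqrt(2 * math.pi)  # target density N(0,1)
g = lambda x: (x > 4.0).astype(float)

# plain Monte Carlo: G_N is almost always exactly 0 here
x_f = rng.standard_normal(N)
print("plain MC estimate: ", g(x_f).mean())

# importance sampling: proposal p = N(4,1) is close to g*f near the tail
x_p = 4.0 + rng.standard_normal(N)
w = f(x_p) / f(x_p - 4.0)               # w_n = f(X_n)/p(X_n)
print("importance estimate:", (w * g(x_p)).mean())
print("exact P(X > 4):     ", 0.5 * math.erfc(4 / math.sqrt(2)))  # ~3.17e-5
```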
Computational opportunities
Recall the recursive formulae for the exact or the Kalman filter.
Particle filters: importance sampling implementation of the following recursion:
p^a(x_{1:t+1}|y_{1:t+1}) ∝ p^a(x_{1:t}|y_{1:t}) · p^m(x_{t+1}|x_t) · p_η(y_{t+1}|x_{t+1})
Ensemble Kalman filter: Monte Carlo sampling version of the KF (with a slight
(nonlinear) variation):
x^a_{n,t} = x^f_{n,t} + K(y_{n,t} − H x^f_{n,t}), n = 1, . . . , N, but not C^a_t = (I − KH) C^f_t
x^f_{n,t+1} = M x^a_{n,t}, n = 1, . . . , N, but not C^f_{t+1} = M C^a_t M^T
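A sketch of one EnKF analysis step (the stochastic, perturbed-observation variant; the setup is illustrative and not from the slides). Note how the gain is built from ensemble anomalies, so the d × d covariance is never formed:

```python
# One ensemble Kalman filter update, assuming a linear H; the Kalman gain is
# assembled from the ensemble sample covariance via the anomaly matrix.
import numpy as np

rng = np.random.default_rng(5)
d, p, N = 3, 1, 50                      # state dim, obs dim, ensemble size
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.25]])

xf = rng.standard_normal((N, d)) + 2.0  # forecast ensemble (illustrative)
y = np.array([1.0])                     # the observation

# anomalies give Cf H^T and H Cf H^T without forming the d x d matrix Cf
A = xf - xf.mean(axis=0)                # N x d anomaly matrix
AHt = A @ H.T                           # N x p
CfHt = A.T @ AHt / (N - 1)              # d x p
HCfHt = AHt.T @ AHt / (N - 1)           # p x p
K = CfHt @ np.linalg.inv(HCfHt + R)     # Kalman gain from the ensemble

# perturbed observations: each member assimilates y plus obs noise
y_pert = y + rng.multivariate_normal(np.zeros(p), R, size=N)
xa = xf + (y_pert - xf @ H.T) @ K.T     # analysis ensemble, member by member
print("forecast mean:", np.round(xf.mean(axis=0), 2),
      "-> analysis mean:", np.round(xa.mean(axis=0), 2))
```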
How do we get samples of functions of random variables?
If we have samples X_1, X_2, . . . , X_N from a distribution for X, how do
we get samples from Z, which is a function of X, e.g. Z = h(X)?
Let Z_n = h(X_n). We need to show that these are indeed samples
from the distribution of Z!
How do we approximate E[r(Z)] for some function r(Z)?
H_N = (1/N) Σ_{n=1}^{N} r(Z_n)
E[H_N] = (1/N) Σ_{n=1}^{N} E[r(Z_n)] = (1/N) Σ_{n=1}^{N} E[r(h(X_n))] = E[(r ∘ h)(X)]
The samples from the distribution of a function h of the random variable
X are obtained by applying h to the samples from the distribution of that
random variable.
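A quick check of this claim, assuming X ~ N(0,1), h(x) = x^2 and r(z) = e^{−z}, for which E[(r ∘ h)(X)] = 1/√3 is known in closed form (a standard Gaussian integral):

```python
# Samples of Z = h(X) are just h applied to samples of X, so E[r(Z)] can be
# approximated by averaging r(h(X_n)). Illustrative h and r, not from the slides.
import numpy as np

rng = np.random.default_rng(6)
h = lambda x: x**2
r = lambda z: np.exp(-z)

Xn = rng.standard_normal(1_000_000)
Zn = h(Xn)                              # samples from the distribution of Z
print("E[r(Z)] via samples:", r(Zn).mean())
print("exact E[exp(-X^2)] :", 1 / np.sqrt(3))   # ~0.5774
```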
Particle filter: a “weighted sample” representation of the
filtering recursion
p^a(x_{1:t+1}|y_{1:t+1}) ∝ p^a(x_{1:t}|y_{1:t}) · p^m(x_{t+1}|x_t) · p_η(y_{t+1}|x_{t+1})
Suppose we have a weighted sample {x^i_t, w^i_t}, i = 1, . . . , N from
p^a(x_t|y_{1:t}), i.e., we approximate p^a(x_t|y_{1:t}) ≈ Σ_{i=1}^{N} w^i_t δ(x_t − x^i_t).
If x^i_{t+1} is a sample from an “importance sampling density” q(x_{t+1}|x^i_t),
then the weighted sample {x^i_{t+1}, w^i_{t+1}}, i = 1, . . . , N approximates
the posterior at time t + 1 if we choose
w^i_{t+1} ∝ w^i_t · p^m(x^i_{t+1}|x^i_t) · p_η(y_{t+1}|x^i_{t+1}) / q(x^i_{t+1}|x^i_t)
This is the main idea behind particle filtering
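A minimal sketch of this recursion, assuming the "bootstrap" choice q = p^m (propose by running the model), so the weights reduce to w^i_{t+1} ∝ w^i_t · p_η(y_{t+1}|x^i_{t+1}). The scalar model and the identity observation operator are illustrative; the resampling step is standard practice, though not derived on the slide.

```python
# A bootstrap particle filter on a scalar toy model: propose with the model
# kernel p^m, reweight by the Gaussian observation likelihood p_eta, resample.
import numpy as np

rng = np.random.default_rng(7)
N = 1000
m = lambda x: 0.9 * x + np.sin(x)       # illustrative nonlinear model
obs_std, model_std = 0.5, 0.3

x_true = 1.0
particles = rng.standard_normal(N)      # samples from p^a(x_0)
weights = np.full(N, 1.0 / N)

for t in range(20):
    x_true = m(x_true) + model_std * rng.standard_normal()
    y = x_true + obs_std * rng.standard_normal()   # h(x) = x here
    # propose with the model kernel p^m (the bootstrap q)
    particles = m(particles) + model_std * rng.standard_normal(N)
    # reweight by the observation likelihood p_eta(y | x)
    weights *= np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # resample to avoid weight degeneracy
    idx = rng.choice(N, size=N, p=weights)
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print(f"truth: {x_true:.2f}, filter mean: {particles.mean():.2f}")
```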
Summary
Data assimilation: the art of optimally incorporating
partial and noisy observational data of a
chaotic, nonlinear, complex dynamical system with an
imperfect model (of the data and the system dynamics) to get an
estimate and the associated uncertainty for the system state
Sampling methods (including importance sampling) provide efficient ways to
approach high-dimensional data assimilation problems, with two
particularly useful methods:
particle filtering (PF)
ensemble Kalman filtering (EnKF)
Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 30 of 30

More Related Content

What's hot

Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Xin-She Yang
 
DATA VISUALIZATION WITH R PACKAGES
DATA VISUALIZATION WITH R PACKAGESDATA VISUALIZATION WITH R PACKAGES
DATA VISUALIZATION WITH R PACKAGESFatma ÇINAR
 
Data-Driven Hydrocarbon Production Forecasting Using Machine Learning Techniques
Data-Driven Hydrocarbon Production Forecasting Using Machine Learning TechniquesData-Driven Hydrocarbon Production Forecasting Using Machine Learning Techniques
Data-Driven Hydrocarbon Production Forecasting Using Machine Learning TechniquesIJCSIS Research Publications
 
Winner of EY NextWave Data Science Challenge 2019
Winner of EY NextWave Data Science Challenge 2019Winner of EY NextWave Data Science Challenge 2019
Winner of EY NextWave Data Science Challenge 2019ByungEunJeon
 
Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014ijcsbi
 
AIAA Future of Fluids 2018 Balaji
AIAA Future of Fluids 2018 BalajiAIAA Future of Fluids 2018 Balaji
AIAA Future of Fluids 2018 BalajiQiqi Wang
 
environmental scivis via dynamic and thematc mapping
environmental scivis via dynamic and thematc mappingenvironmental scivis via dynamic and thematc mapping
environmental scivis via dynamic and thematc mappingNeale Misquitta
 
SOFM based calssification for LU
SOFM based calssification for LUSOFM based calssification for LU
SOFM based calssification for LUDr. Sanjay Shitole
 
How machines can take decisions
How machines can take decisionsHow machines can take decisions
How machines can take decisionsDeepu S Nath
 
Coordination in Situated Systems: Engineering MAS Environment in TuCSoN
Coordination in Situated Systems: Engineering MAS Environment in TuCSoNCoordination in Situated Systems: Engineering MAS Environment in TuCSoN
Coordination in Situated Systems: Engineering MAS Environment in TuCSoNAndrea Omicini
 

What's hot (15)

Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
 
DATA VISUALIZATION WITH R PACKAGES
DATA VISUALIZATION WITH R PACKAGESDATA VISUALIZATION WITH R PACKAGES
DATA VISUALIZATION WITH R PACKAGES
 
Data-Driven Hydrocarbon Production Forecasting Using Machine Learning Techniques
Data-Driven Hydrocarbon Production Forecasting Using Machine Learning TechniquesData-Driven Hydrocarbon Production Forecasting Using Machine Learning Techniques
Data-Driven Hydrocarbon Production Forecasting Using Machine Learning Techniques
 
Winner of EY NextWave Data Science Challenge 2019
Winner of EY NextWave Data Science Challenge 2019Winner of EY NextWave Data Science Challenge 2019
Winner of EY NextWave Data Science Challenge 2019
 
Cc stat phys draft
Cc stat phys draftCc stat phys draft
Cc stat phys draft
 
Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014Vol 9 No 1 - January 2014
Vol 9 No 1 - January 2014
 
AIAA Future of Fluids 2018 Balaji
AIAA Future of Fluids 2018 BalajiAIAA Future of Fluids 2018 Balaji
AIAA Future of Fluids 2018 Balaji
 
An Evaluation of Models for Runtime Approximation in Link Discovery
An Evaluation of Models for Runtime Approximation in Link DiscoveryAn Evaluation of Models for Runtime Approximation in Link Discovery
An Evaluation of Models for Runtime Approximation in Link Discovery
 
Ml ppt at
Ml ppt atMl ppt at
Ml ppt at
 
environmental scivis via dynamic and thematc mapping
environmental scivis via dynamic and thematc mappingenvironmental scivis via dynamic and thematc mapping
environmental scivis via dynamic and thematc mapping
 
SOFM based calssification for LU
SOFM based calssification for LUSOFM based calssification for LU
SOFM based calssification for LU
 
How machines can take decisions
How machines can take decisionsHow machines can take decisions
How machines can take decisions
 
Coordination in Situated Systems: Engineering MAS Environment in TuCSoN
Coordination in Situated Systems: Engineering MAS Environment in TuCSoNCoordination in Situated Systems: Engineering MAS Environment in TuCSoN
Coordination in Situated Systems: Engineering MAS Environment in TuCSoN
 
Id3313941396
Id3313941396Id3313941396
Id3313941396
 
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
 

Similar to QMC: Undergraduate Workshop, Monte Carlo Techniques in Earth Science - Amit Apte, Feb 26, 2018

Stochastic optimization from mirror descent to recent algorithms
Stochastic optimization from mirror descent to recent algorithmsStochastic optimization from mirror descent to recent algorithms
Stochastic optimization from mirror descent to recent algorithmsSeonho Park
 
La résolution de problèmes à l'aide de graphes
La résolution de problèmes à l'aide de graphesLa résolution de problèmes à l'aide de graphes
La résolution de problèmes à l'aide de graphesData2B
 
Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...
Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...
Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...Artificial Intelligence Institute at UofSC
 
A new-quantile-based-fuzzy-time-series-forecasting-model
A new-quantile-based-fuzzy-time-series-forecasting-modelA new-quantile-based-fuzzy-time-series-forecasting-model
A new-quantile-based-fuzzy-time-series-forecasting-modelCemal Ardil
 
Machine Learning
Machine LearningMachine Learning
Machine Learningbutest
 
On Leveraging Crowdsourcing Techniques for Schema Matching Networks
On Leveraging Crowdsourcing Techniques for Schema Matching NetworksOn Leveraging Crowdsourcing Techniques for Schema Matching Networks
On Leveraging Crowdsourcing Techniques for Schema Matching NetworksPlanetData Network of Excellence
 
Integrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessmentIntegrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessmentIAEME Publication
 
Integrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessmentIntegrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessmentIAEME Publication
 
Traffic flow modeling on road networks using Hamilton-Jacobi equations
Traffic flow modeling on road networks using Hamilton-Jacobi equationsTraffic flow modeling on road networks using Hamilton-Jacobi equations
Traffic flow modeling on road networks using Hamilton-Jacobi equationsGuillaume Costeseque
 
Computational model for artificial learning using formal concept analysis
Computational model for artificial learning using formal concept analysisComputational model for artificial learning using formal concept analysis
Computational model for artificial learning using formal concept analysisAboul Ella Hassanien
 
Probabilistic Modelling with Information Filtering Networks
Probabilistic Modelling with Information Filtering NetworksProbabilistic Modelling with Information Filtering Networks
Probabilistic Modelling with Information Filtering NetworksTomaso Aste
 
IRJET - Application of Linear Algebra in Machine Learning
IRJET -  	  Application of Linear Algebra in Machine LearningIRJET -  	  Application of Linear Algebra in Machine Learning
IRJET - Application of Linear Algebra in Machine LearningIRJET Journal
 
Kandemir Inferring Object Relevance From Gaze In Dynamic Scenes
Kandemir Inferring Object Relevance From Gaze In Dynamic ScenesKandemir Inferring Object Relevance From Gaze In Dynamic Scenes
Kandemir Inferring Object Relevance From Gaze In Dynamic ScenesKalle
 
Time alignment techniques for experimental sensor data
Time alignment techniques for experimental sensor dataTime alignment techniques for experimental sensor data
Time alignment techniques for experimental sensor dataIJCSES Journal
 
Dimensionality reduction by matrix factorization using concept lattice in dat...
Dimensionality reduction by matrix factorization using concept lattice in dat...Dimensionality reduction by matrix factorization using concept lattice in dat...
Dimensionality reduction by matrix factorization using concept lattice in dat...eSAT Journals
 

Similar to QMC: Undergraduate Workshop, Monte Carlo Techniques in Earth Science - Amit Apte, Feb 26, 2018 (20)

Stochastic optimization from mirror descent to recent algorithms
Stochastic optimization from mirror descent to recent algorithmsStochastic optimization from mirror descent to recent algorithms
Stochastic optimization from mirror descent to recent algorithms
 
QMC: Operator Splitting Workshop, Estimation of Inverse Covariance Matrix in ...
QMC: Operator Splitting Workshop, Estimation of Inverse Covariance Matrix in ...QMC: Operator Splitting Workshop, Estimation of Inverse Covariance Matrix in ...
QMC: Operator Splitting Workshop, Estimation of Inverse Covariance Matrix in ...
 
CLIM: Transition Workshop - A Notional Framework for a Theory of Data Systems...
CLIM: Transition Workshop - A Notional Framework for a Theory of Data Systems...CLIM: Transition Workshop - A Notional Framework for a Theory of Data Systems...
CLIM: Transition Workshop - A Notional Framework for a Theory of Data Systems...
 
La résolution de problèmes à l'aide de graphes
La résolution de problèmes à l'aide de graphesLa résolution de problèmes à l'aide de graphes
La résolution de problèmes à l'aide de graphes
 
Classification
ClassificationClassification
Classification
 
Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...
Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...
Knowledge-empowered Probabilistic Graphical Models for Physical-Cyber-Social ...
 
A new-quantile-based-fuzzy-time-series-forecasting-model
A new-quantile-based-fuzzy-time-series-forecasting-modelA new-quantile-based-fuzzy-time-series-forecasting-model
A new-quantile-based-fuzzy-time-series-forecasting-model
 
Machine Learning
Machine LearningMachine Learning
Machine Learning
 
On Leveraging Crowdsourcing Techniques for Schema Matching Networks
On Leveraging Crowdsourcing Techniques for Schema Matching NetworksOn Leveraging Crowdsourcing Techniques for Schema Matching Networks
On Leveraging Crowdsourcing Techniques for Schema Matching Networks
 
Integrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessmentIntegrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessment
 
Integrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessmentIntegrate fault tree analysis and fuzzy sets in quantitative risk assessment
Integrate fault tree analysis and fuzzy sets in quantitative risk assessment
 
P1151133713
P1151133713P1151133713
P1151133713
 
Traffic flow modeling on road networks using Hamilton-Jacobi equations
Traffic flow modeling on road networks using Hamilton-Jacobi equationsTraffic flow modeling on road networks using Hamilton-Jacobi equations
Traffic flow modeling on road networks using Hamilton-Jacobi equations
 
Computational model for artificial learning using formal concept analysis
Computational model for artificial learning using formal concept analysisComputational model for artificial learning using formal concept analysis
Computational model for artificial learning using formal concept analysis
 
Probabilistic Modelling with Information Filtering Networks
Probabilistic Modelling with Information Filtering NetworksProbabilistic Modelling with Information Filtering Networks
Probabilistic Modelling with Information Filtering Networks
 
IRJET - Application of Linear Algebra in Machine Learning
IRJET -  	  Application of Linear Algebra in Machine LearningIRJET -  	  Application of Linear Algebra in Machine Learning
IRJET - Application of Linear Algebra in Machine Learning
 
Kandemir Inferring Object Relevance From Gaze In Dynamic Scenes
Kandemir Inferring Object Relevance From Gaze In Dynamic ScenesKandemir Inferring Object Relevance From Gaze In Dynamic Scenes
Kandemir Inferring Object Relevance From Gaze In Dynamic Scenes
 
Time alignment techniques for experimental sensor data
Time alignment techniques for experimental sensor dataTime alignment techniques for experimental sensor data
Time alignment techniques for experimental sensor data
 
Lausanne 2019 #2
Lausanne 2019 #2Lausanne 2019 #2
Lausanne 2019 #2
 
Dimensionality reduction by matrix factorization using concept lattice in dat...
Dimensionality reduction by matrix factorization using concept lattice in dat...Dimensionality reduction by matrix factorization using concept lattice in dat...
Dimensionality reduction by matrix factorization using concept lattice in dat...
 

More from The Statistical and Applied Mathematical Sciences Institute

More from The Statistical and Applied Mathematical Sciences Institute (20)

Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
 
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
 
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
 
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
 
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
 
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
 
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
 
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
 
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
 
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
 
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
 
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
 
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
 
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
 
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
 
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
 
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
 
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
 
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
 
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
 

Recently uploaded

How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17Celine George
 
Personalisation of Education by AI and Big Data - Lourdes Guàrdia
Personalisation of Education by AI and Big Data - Lourdes GuàrdiaPersonalisation of Education by AI and Big Data - Lourdes Guàrdia
Personalisation of Education by AI and Big Data - Lourdes GuàrdiaEADTU
 
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...Gary Wood
 
Improved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio AppImproved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio AppCeline George
 
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading RoomSternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading RoomSean M. Fox
 
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...Nguyen Thanh Tu Collection
 
Trauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical PrinciplesTrauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical PrinciplesPooky Knightsmith
 
How to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptxHow to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptxCeline George
 
Rich Dad Poor Dad ( PDFDrive.com )--.pdf
Rich Dad Poor Dad ( PDFDrive.com )--.pdfRich Dad Poor Dad ( PDFDrive.com )--.pdf
Rich Dad Poor Dad ( PDFDrive.com )--.pdfJerry Chew
 
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community PartnershipsSpring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community Partnershipsexpandedwebsite
 
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽中 央社
 
Graduate Outcomes Presentation Slides - English (v3).pptx
Graduate Outcomes Presentation Slides - English (v3).pptxGraduate Outcomes Presentation Slides - English (v3).pptx
Graduate Outcomes Presentation Slides - English (v3).pptxneillewis46
 
Major project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesMajor project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesAmanpreetKaur157993
 
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMDEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMELOISARIVERA8
 
PSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptxPSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptxMarlene Maheu
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptNishitharanjan Rout
 

Recently uploaded (20)

How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17
 
Personalisation of Education by AI and Big Data - Lourdes Guàrdia
Personalisation of Education by AI and Big Data - Lourdes GuàrdiaPersonalisation of Education by AI and Big Data - Lourdes Guàrdia
Personalisation of Education by AI and Big Data - Lourdes Guàrdia
 
OS-operating systems- ch05 (CPU Scheduling) ...
OS-operating systems- ch05 (CPU Scheduling) ...OS-operating systems- ch05 (CPU Scheduling) ...
OS-operating systems- ch05 (CPU Scheduling) ...
 
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...When Quality Assurance Meets Innovation in Higher Education - Report launch w...
When Quality Assurance Meets Innovation in Higher Education - Report launch w...
 
Improved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio AppImproved Approval Flow in Odoo 17 Studio App
Improved Approval Flow in Odoo 17 Studio App
 
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading RoomSternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
 
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
 
Trauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical PrinciplesTrauma-Informed Leadership - Five Practical Principles
Trauma-Informed Leadership - Five Practical Principles
 
How to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptxHow to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptx
 
Rich Dad Poor Dad ( PDFDrive.com )--.pdf
Rich Dad Poor Dad ( PDFDrive.com )--.pdfRich Dad Poor Dad ( PDFDrive.com )--.pdf
Rich Dad Poor Dad ( PDFDrive.com )--.pdf
 
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community PartnershipsSpring gala 2024 photo slideshow - Celebrating School-Community Partnerships
Spring gala 2024 photo slideshow - Celebrating School-Community Partnerships
 
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
 
VAMOS CUIDAR DO NOSSO PLANETA! .
VAMOS CUIDAR DO NOSSO PLANETA!                    .VAMOS CUIDAR DO NOSSO PLANETA!                    .
VAMOS CUIDAR DO NOSSO PLANETA! .
 
Graduate Outcomes Presentation Slides - English (v3).pptx
Graduate Outcomes Presentation Slides - English (v3).pptxGraduate Outcomes Presentation Slides - English (v3).pptx
Graduate Outcomes Presentation Slides - English (v3).pptx
 
Major project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesMajor project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategies
 
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMDEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
 
ESSENTIAL of (CS/IT/IS) class 07 (Networks)
ESSENTIAL of (CS/IT/IS) class 07 (Networks)ESSENTIAL of (CS/IT/IS) class 07 (Networks)
ESSENTIAL of (CS/IT/IS) class 07 (Networks)
 
PSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptxPSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptx
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.ppt
 
Including Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdfIncluding Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdf
 

QMC: Undergraduate Workshop, Monte Carlo Techniques in Earth Science - Amit Apte, Feb 26, 2018

  • 1. Data assimilation Section 0: Monte Carlo Techniques in Earth Sciences Data assimilation Amit Apte International Centre for Theoretical Sciences (ICTS-TIFR) Bangalore, India SAMSI workshop, 26 Feb 2018 movies shown earlier are from Philip Brohan https://vimeo.com/170761410 https://vimeo.com/170971015 Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 1 of 30
  • 2. Data assimilation Section 0: Outline * 1 An introduction to data assimilation 2 Mathematical basis of data assimilation 3 Sampling: numerical technique for approximating the posterior * random images from google! Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 2 of 30
  • 3. Data assimilation Section 1: An introduction to data assimilation Outline 1 An introduction to data assimilation 2 Mathematical basis of data assimilation 3 Sampling: numerical technique for approximating the posterior Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 3 of 30
  • 4. Data assimilation Section 1: An introduction to data assimilation A few random(!) questions When is the first total solar eclipse in India after 2100? What will be the closest approach of Halley’s comet 2060? How many times in the next hour will a double pendulum reach the apogee? What will be the angle of a double pendulum after 5 min., 10 min., ...? Breaking waves – which wave will reach you? What will be the min/max temperatures in five largest cities in India, tomorrow, day-after, over the next month?? What will be the major stock exchange indices tomorrow? What will be the number of cars that will enter the golden gate bridge in next 30 minutes? Who will be the prime minister of India in 2020? In 2030? How many nuclei from a given piece of U235 will decay in next 10 minutes? ... Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 4 of 30
  • 5. Data assimilation Section 1: An introduction to data assimilation Two essential ingredients for describing reality Physical theories ←→ mathematical models In order to understand this: we first need to understand: Fluid and thermo-dynamics Ocean model ≡ appropriate approximation and numerical implementation “physical parameters” – Bathymetry (depth of ocean) and coastline; Specific heat of water; etc. external forcing – Wind, temperature, humidity of the atmosphere, inflow of river water parametrization of “unresolved processes” Even all of the above is NOT sufficient! data assimilation – using the measurements from the ocean Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 5 of 30
  • 6. Data assimilation Section 1: An introduction to data assimilation Data, of course, provide a crucial link to reality We have a large number of observations from satellites, ships, weather stations etc., but they are not uniformly distributed either in space or time quite sparse (e.g. much less in southern hemisphere) could depend in a complicated way on the atmospheric conditions (satellite data) Thus, the observations are insufficient to specify the model variables completely (and to describe the state in the physical theory). → under-determined, ill-posed inverse problem A Note: This is the problem of studying a specific instance (or realization) – this specific planet. So the chain of interactions physical theories ↔ models ↔ data for complex systems such as the planet leads to: Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 6 of 30
  • 7. Data assimilation Section 1: An introduction to data assimilation What is data assimilation? The art of optimally incorporating partial and noisy observational data of a chaotic, nonlinear, complex dynamical system with an imperfect model (of the data and the system dynamics) to get an estimate and the associated uncertainty for the system state ——————————————————————————————– 8MQI XVYI XVENIGXSV] SFWIVZEXMSRW L SFW JYRGXMSR SFW IVVSV IRWIQFPI JSVIGEWX YTHEXIH IRWIQFPI EVVS[W MRHMGEXI HEXE EWWMQMPEXMSR TVSGIWW SFW WTEGI WXEXI WTEGI Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 7 of 30
  • 8. Data assimilation Section 1: An introduction to data assimilation Data assimilation is a estimation problem. Estimation of state, in time, repetitively. Breaking waves – which wave will reach you? (insurance) What will be the min/max temperatures in five largest cities in India, tomorrow, day-after, over the next month? (planning) What will be the average temperature in Bangalore, month by month, in 2050, or up to 2050? (design) A few characteristics of data assimilation problems: Good physical theories, but not necessarily good models Systems are nonlinear and chaotic (usually deterministic) Multiscale – temporal and spatial – dynamics Observations of the system are noisy partial (sparse) discrete in time Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 8 of 30
  • 9. Data assimilation Section 1: An introduction to data assimilation Main ingredients A dynamical model: given the state x(t) ∈ Rd at any time t, gives the state x(s) at any later time s > t: Lorenz-63, Lorenz-96, etc. (for synthetic data studies, d = 3 or d = 40 etc.) or general circulation models (for ocean / atmosphere / coupled d = 107 or d = 104) Observations y1 ∈ Rp at time ti, for i = 1, . . . , T (typically p d) Observations are partial (with gaps), noisy, discrete in time Observation operator h : Rd → Rp to relate the model variables at time t with observations at the same time: if the state were x(t), the observations without noise would be h(x(t)) Observational “errors”: need to account for the difference between how the real system is represented in the model (representativeness error) and the instrumental uncertainty (noise) Data assimilation Amit Apte (ICTS-TIFR, Bangalore) ( apte@icts.res.in ) page 9 of 30
• 10. How do we represent uncertainty? Using probabilities!
p(x)dx is the probability that the state lies in a small volume dx around x
p(x, y)dxdy is the joint probability that the state is near x and the observation is near y
• 11. How do we represent uncertainty? Using probabilities!
Probability densities like these in 10^x dimensions are difficult to represent.
(Image credits: CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1260349 and Bscan, own work, CC0, https://commons.wikimedia.org/w/index.php?curid=25235145)
• 12. How do we represent uncertainty? Using probabilities!
But densities can be represented by "samples" (the dots in the figure).
(Image credits: same as the previous slide.)
• 13. How do we represent uncertainty? Using probabilities!
p(x)dx is the probability that the state lies in a small volume dx around x
p(x, y)dxdy is the joint probability of the state x and the observation y
The main concept that you need to remember is conditional probability:
    p(x|y) = p(x, y) / p(y)
• 14. How do we represent uncertainty? Using probabilities!
If two random variables are dependent (for example, correlated), information about one gives some information about the other.
[Figure: a correlated joint density p(x, y); the annotation notes that the mean of p(x|y=3) is ≈ 1.0.]
• 15. How do we represent uncertainty? Using probabilities!
Recall conditional probability:
    p(x|y) = p(x, y) / p(y)
But the joint density can be factored both ways:
    p(x, y) = p(x|y) p(y) = p(y|x) p(x)
This is a step away from Bayes' theorem:
    p(x|y) = p(y|x) p(x) / p(y)
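Bayes' theorem is easy to see in action numerically. Below is a minimal sketch (my own illustration, not from the slides) on a one-dimensional grid: a Gaussian prior p(x), an assumed observation model y = x + noise, and the posterior obtained as likelihood times prior, normalized.

```python
# Bayes' theorem on a grid: posterior ∝ likelihood × prior, then normalize.
# The prior, observed value, and noise variance are assumptions for the demo.
import numpy as np

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
prior = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)      # p(x): standard normal
y_obs, obs_var = 1.5, 0.5**2                          # assumed y = x + noise
likelihood = np.exp(-0.5 * (y_obs - x)**2 / obs_var)  # p(y|x), up to a constant
posterior = prior * likelihood
posterior /= posterior.sum() * dx                     # normalization plays the role of p(y)
print("posterior mean:", (x * posterior).sum() * dx)  # pulled toward y_obs
```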
• 16. How do we represent uncertainty? Using probabilities!
If two random variables are dependent, information about one gives some information about the other (e.g. the mean of p(x|y=3) is ≈ 1.0 in the figure).
That's it: that is data assimilation!
• 17. So what is the big deal!? Ah... time
Unfortunately, the x and y in the previous slides are all time dependent... so we should really be watching a movie of the probability densities, rather than the static images shown earlier!
• 18. Section 2: Mathematical basis of data assimilation
Outline
1 An introduction to data assimilation
2 Mathematical basis of data assimilation
3 Sampling: numerical technique for approximating the posterior
• 19. Nonlinear filtering ≡ data assimilation
Consider a stochastic dynamical model
    x_{t+1} = m(x_t) + ζ_t,   with x_0 unknown.
Thus we assume a probability density p^a(x_0) for the initial condition.
We will consider the problem of "estimating" the state x at some time t, given observations at times 1, 2, ..., N.
• 20. Nonlinear filtering ≡ data assimilation (continued)
Three versions of the estimation problem:
Smoothing: obtain a state estimate x_t for t < N using all the observations up to time N; in particular, determine x_0.
Filtering: obtain a state estimate x_N using observations up to time N.
Prediction: obtain a state estimate x_t for t > N (the time horizon of the prediction is important).
• 21. Nonlinear filtering ≡ data assimilation (continued)
In most applications in the earth sciences, data are collected "all the time", so the most relevant problem is filtering. Predictions are obtained by using the filtering solution as the "initial condition" for the appropriate PDE of interest (hence the common view that data assimilation is the problem of finding initial conditions).
• 22. Or: data assimilation ≡ determination of the posterior, i.e. the conditional distribution given the observations
Observations y_t at time t depend on the state at that time:
    y_t = h(x_t) + η_t,   t = 1, ..., N,
where h is the observation operator and η_t is the observational noise. Eventually we will assume independence between η_t and ζ_t.
Probabilistic statement of the data assimilation problem: find the posterior distribution of the state conditioned on the observations.
Smoothing: p(x_t | y_1, y_2, ..., y_N) for t < N
Filtering: p(x_N | y_1, y_2, ..., y_N)
Prediction: p(x_t | y_1, y_2, ..., y_N) for t > N
• 23. Two-step process for obtaining the filtering density
[Figure-only slide: schematic of the prediction and update steps detailed next.]
• 24. Filtering density: obtained in a two-step process
Notation: y_{1:t} = {y_1, y_2, ..., y_t} and x_{1:t} = {x_1, x_2, ..., x_t}.
The first step is "prediction". Suppose we have the probability p^a(x_{1:t} | y_{1:t}) of the states x_{1:t} up to time t conditioned on the observations y_{1:t} up to time t. Recalling that x_{t+1} = m(x_t) + ζ_t (a Markov chain, with transition kernel p^m(x_{t+1} | x_t)), the probability p^f(x_{1:t+1} | y_{1:t}) of the states x_{1:t+1} up to time t+1 conditioned on the observations y_{1:t} up to time t is obtained as
    p^f(x_{1:t}, x_{t+1} | y_{1:t}) = p(x_{1:t} | y_{1:t}) · p(x_{t+1} | x_{1:t}, y_{1:t})
                                    = p^a(x_{1:t} | y_{1:t}) · p^m(x_{t+1} | x_t),
where the second factor simplifies by the Markov property.
• 25. Filtering density: obtained in a two-step process
The next step is "update". Given the above probability p^f(x_{1:t+1} | y_{1:t}) of the states x_{1:t+1} up to time t+1 conditioned on the observations y_{1:t} up to time t, and recalling y_{t+1} = h(x_{t+1}) + η_{t+1}, the probability p^a(x_{1:t+1} | y_{1:t+1}) of the states x_{1:t+1} up to time t+1 conditioned on the observations y_{1:t+1} up to time t+1 is given by Bayes' theorem:
    p^a(x_{1:t+1} | y_{1:t}, y_{t+1}) = p(x_{1:t+1} | y_{1:t}) · p(y_{t+1} | x_{1:t+1}, y_{1:t}) / p(y_{t+1} | y_{1:t})
                                      ∝ p^f(x_{1:t+1} | y_{1:t}) · p^η(y_{t+1} | x_{t+1}).
• 26. Filtering density satisfies a recursion relation
Putting together the two relations from the previous slides, namely
    "prediction":  p^f(x_{1:t}, x_{t+1} | y_{1:t}) = p^a(x_{1:t} | y_{1:t}) · p^m(x_{t+1} | x_t)
    "update":      p^a(x_{1:t+1} | y_{1:t}, y_{t+1}) ∝ p^f(x_{1:t+1} | y_{1:t}) · p^η(y_{t+1} | x_{t+1})
we obtain the following recursion for the posterior distribution:
    p^a(x_{1:t+1} | y_{1:t+1}) ∝ p^a(x_{1:t} | y_{1:t}) · p^m(x_{t+1} | x_t) · p^η(y_{t+1} | x_{t+1}),
where p^η(y_{t+1} | x_{t+1}) is the observational noise density and p^m(x_{t+1} | x_t) is the Markov transition kernel of the dynamical model.
• 27. Two-step process for obtaining the filtering density
[Figure-only slide: the prediction and update cycle, revisited.]
• 28. Kalman filter: a "two-moment" representation of the Gaussian posterior in the case of a linear model
Suppose the model is linear, m(x) = Mx, the observation operator is linear, h(x) = Hx, and the initial distribution for x_0 is Gaussian, as are the stochastic terms in the observations (η_t) and in the dynamical model (ζ_t).
The Kalman filter gives a recursion for the mean and covariance: (x^a_t, C^a_t) for p^a(x_t | y_{1:t}) and (x^f_{t+1}, C^f_{t+1}) for p^f(x_{t+1} | y_{1:t}).
Update step:
    x^a_t = x^f_t + K(y_t − H x^f_t)   and   C^a_t = (I − KH) C^f_t,
where K = C^f_t H^T (H C^f_t H^T + R)^{−1} is the Kalman gain matrix and R is the observation-noise covariance.
Prediction step:
    x^f_{t+1} = M x^a_t   and   C^f_{t+1} = M C^a_t M^T + Q,
where Q is the model-noise covariance (Q = 0 for a deterministic model).
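The recursion above fits in a few lines of code. Here is a minimal sketch on a linear-Gaussian toy problem; the model M, operator H, covariances Q and R, and the synthetic data are all illustrative assumptions.

```python
# One prediction + update cycle of the Kalman filter, run on a toy
# constant-velocity model with position-only observations (p = 1 < d = 2).
import numpy as np

def kalman_step(xa, Ca, y, M, H, Q, R):
    # Prediction: push mean and covariance through the linear model.
    xf = M @ xa
    Cf = M @ Ca @ M.T + Q
    # Update: Kalman gain, then correct the forecast toward the observation.
    K = Cf @ H.T @ np.linalg.inv(H @ Cf @ H.T + R)
    return xf + K @ (y - H @ xf), (np.eye(len(xa)) - K @ H) @ Cf

rng = np.random.default_rng(1)
M = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed constant-velocity model
H = np.array([[1.0, 0.0]])               # observe position only
Q, R = 0.01 * np.eye(2), np.array([[0.25]])
xa, Ca = np.zeros(2), np.eye(2)
x_true = np.array([0.0, 1.0])
for t in range(50):
    x_true = M @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    xa, Ca = kalman_step(xa, Ca, y, M, H, Q, R)
```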
• 29. Computational hurdles
Recall the recursive formulae for the exact filter and the Kalman filter.
Exact filtering density:
    p^a(x_{1:t+1} | y_{1:t+1}) ∝ p^a(x_{1:t} | y_{1:t}) · p^m(x_{t+1} | x_t) · p^η(y_{t+1} | x_{t+1})
Kalman filter:
    x^a_t = x^f_t + K(y_t − H x^f_t)   and   C^a_t = (I − KH) C^f_t
    x^f_{t+1} = M x^a_t   and   C^f_{t+1} = M C^a_t M^T + Q
Also recall: x ∈ R^d with d ~ 10^6 to 10^7, and C is a d × d matrix. It is essentially impossible even to store the covariance matrix, let alone forecast it!
Sampling methods provide (seemingly) efficient ways to approximate the above.
• 30. Section 3: Sampling: numerical technique for approximating the posterior
Outline
1 An introduction to data assimilation
2 Mathematical basis of data assimilation
3 Sampling: numerical technique for approximating the posterior
• 31. Basic idea of sampling a density f(x)
Suppose X_1, X_2, ..., X_N are N independent, identically distributed (IID) random variables (RVs) with density f. For any function g(x), define the sample mean of g(x) to be
    G_N = (1/N) Σ_{n=1}^N g(X_n).
Then
    E[G_N] = (1/N) Σ_{n=1}^N E[g(X_n)] = E[g(X)]
and, by independence,
    var[G_N] = (1/N^2) Σ_{n=1}^N var[g(X_n)] = (1/N) var[g(X)].
So, as N → ∞, var[G_N] → 0.
• 32. Sample mean approximates the mean
Recall that E[G_N] = E[g(X)] and that, as N → ∞, var[G_N] → 0; thus
    E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx ≈ (1/N) Σ_{n=1}^N g(X_n).
This is the basis for Monte Carlo integration and sampling methods. For large enough N, we are guaranteed convergence!
Justification: the (strong) law of large numbers:
    P { lim_{N→∞} G_N = E[g(X)] } = 1.
What about the error for some given N? Or: how do we choose N if we fix an error tolerance?
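A quick numerical check of this convergence (an illustration of my own, not from the slides): estimate E[g(X)] for g(x) = x^2 with X ~ N(0,1), where the exact answer is 1, and watch the error shrink roughly like 1/sqrt(N).

```python
# Monte Carlo estimate of E[X^2] for X ~ N(0,1); exact value is 1.
import numpy as np

rng = np.random.default_rng(2)
g = lambda x: x**2
for N in [100, 10_000, 1_000_000]:
    G_N = g(rng.standard_normal(N)).mean()        # the sample mean G_N
    print(f"N={N:>9}: G_N={G_N:.4f}, error={abs(G_N - 1.0):.4f}")
```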
• 33. Errors are given by the Chebyshev inequality
    P ( |G_N − E[G_N]| ≥ (var[G_N]/δ)^{1/2} ) ≤ δ.
But var[G_N] = var[g(X)]/N, which means: the probability that the sample mean G_N and the exact mean of g(X) differ by more than (var[g(X)]/(δN))^{1/2} is no more than δ.
Two ways to decrease the error ≈ (var[g(X)]/(δN))^{1/2}:
- increase the sample size N
- decrease var[g(X)]
How can we decrease var[g(X)]? By a change of the probability distribution with respect to which we take expectations! This is the basic idea of importance sampling.
• 34. Importance sampling: change of measure!
First, a sleight of hand: for any probability density p(x),
    E_f[g(X)] = ∫ g(x) f(x) dx = ∫ [g(x) f(x) / p(x)] p(x) dx = E_p[ f(X) g(X) / p(X) ].
So now define ḡ(X) = f(X) g(X) / p(X). If we take all expectations with respect to the new probability density p(x), then
    var_p[ḡ(X)] = ∫ [f^2(x) g^2(x) / p(x)] dx − (E_p[ḡ(X)])^2.
Check: the choice p(x) ∝ g(x) f(x) minimizes the variance!
Not usable as-is, since we do not know the normalization constant. But the intuition is useful: choose p(x) to be as close to g(x) f(x) as possible.
• 35. Importance sampling: weighted samples
Recall that for any probability density p(x),
    E_f[g(X)] = ∫ g(x) f(x) dx = ∫ [g(x) f(x) / p(x)] p(x) dx = E_p[ f(X) g(X) / p(X) ].
If X_1, X_2, ..., X_N are samples from p(x), then, to get the "correct" estimate of E_f[g(X)], we need to define a weighted mean:
    G_N = (1/N) Σ_{n=1}^N w_n g(X_n),   with weights w_n = f(X_n) / p(X_n).
Check: E[G_N] = E_f[g(X)] (the proof is essentially the identity above).
Heuristics: choose p(x) to be as close to g(x) f(x) as possible, while remaining easy to sample from.
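Here is a minimal importance-sampling sketch (illustrative assumptions throughout): estimating the tail probability P(X > 4) under f = N(0,1), an event so rare that plain Monte Carlo with this N would typically see no hits, by sampling from a shifted proposal p = N(4,1) and reweighting by w_n = f(X_n)/p(X_n).

```python
# Importance sampling for a rare event: E_f[g(X)] with g the tail indicator.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
g = lambda x: (x > 4.0).astype(float)       # g(x) = 1{x > 4}, so E_f[g] = P(X > 4)
N = 100_000
X = rng.normal(4.0, 1.0, N)                 # samples from the proposal p = N(4,1)
w = norm.pdf(X, 0, 1) / norm.pdf(X, 4, 1)   # importance weights f(X_n)/p(X_n)
print("IS estimate:", np.mean(w * g(X)))
print("exact      :", norm.sf(4.0))         # ≈ 3.17e-5
```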
• 36. Computational opportunities
Recall the recursive formulae for the exact filter and the Kalman filter.
Particle filters: an importance-sampling implementation of the recursion
    p^a(x_{1:t+1} | y_{1:t+1}) ∝ p^a(x_{1:t} | y_{1:t}) · p^m(x_{t+1} | x_t) · p^η(y_{t+1} | x_{t+1}).
Ensemble Kalman filter: a Monte Carlo sampling version of the KF (with a slight, nonlinear variation), applied member by member:
    x^a_{n,t} = x^f_{n,t} + K(y_{n,t} − H x^f_{n,t}),   n = 1, ..., N,   but not C^a_t = (I − KH) C^f_t
    x^f_{n,t+1} = M x^a_{n,t},   n = 1, ..., N,   but not C^f_{t+1} = M C^a_t M^T
That is, the covariances are never propagated explicitly; they are estimated from the ensemble.
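One common concrete variant is the perturbed-observation EnKF; the sketch below (an assumption-laden illustration, not the slides' definitive algorithm) performs one assimilation cycle on the linear toy model used earlier, forming the gain from the ensemble covariance instead of propagating C.

```python
# One cycle of a perturbed-observation ensemble Kalman filter.
import numpy as np

def enkf_update(Xf, y, H, R, rng):
    """Xf: d x N forecast ensemble; returns the d x N analysis ensemble."""
    d, N = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)         # ensemble anomalies
    Cf = A @ A.T / (N - 1)                          # sample covariance (fine for a toy;
                                                    # never formed explicitly at d ~ 10^6)
    K = Cf @ H.T @ np.linalg.inv(H @ Cf @ H.T + R)  # ensemble Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T  # perturbed obs
    return Xf + K @ (Y - H @ Xf)                    # member-by-member update

rng = np.random.default_rng(4)
M = np.array([[1.0, 0.1], [0.0, 1.0]])              # assumed linear model
H = np.array([[1.0, 0.0]]); R = np.array([[0.25]])
X = rng.standard_normal((2, 50))                    # initial ensemble, N = 50
y = np.array([1.0])                                 # one (assumed) observation
X = enkf_update(M @ X, y, H, R, rng)                # forecast step, then update
```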
• 37. How do we get samples of functions of random variables?
If we have samples X_1, X_2, ..., X_N from the distribution of X, how do we get samples from Z, a function of X, e.g. Z = h(X)? Let Z_n = h(X_n). (We need to show that these are indeed samples from the distribution of Z!)
How do we approximate E[r(Z)] for some function r? Set
    H_N = (1/N) Σ_{n=1}^N r(Z_n);
then
    E[H_N] = (1/N) Σ_{n=1}^N E[r(Z_n)] = (1/N) Σ_{n=1}^N E[r(h(X_n))] = E[(r ∘ h)(X)].
In short: samples from the distribution of a function h of the random variable X are simply that function applied to samples from the distribution of X.
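A one-line check of this fact (an assumed example, not from the slides): with X ~ N(0,1) and h(x) = x^2, the samples h(X_n) follow the chi-squared(1) distribution of Z = X^2, whose mean is 1.

```python
# Samples of Z = h(X) are just h applied to samples of X.
import numpy as np
Z = np.random.default_rng(5).standard_normal(100_000) ** 2
print(Z.mean())   # ≈ 1.0, the mean of chi-squared(1)
```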
• 38. Particle filter: a "weighted sample" representation of the filtering recursion
    p^a(x_{1:t+1} | y_{1:t+1}) ∝ p^a(x_{1:t} | y_{1:t}) · p^m(x_{t+1} | x_t) · p^η(y_{t+1} | x_{t+1})
Suppose we have a weighted sample {x^i_t, w^i_t}, i = 1, ..., N, from p^a(x_t | y_{1:t}), i.e., we approximate
    p^a(x_t | y_{1:t}) ≈ Σ_{i=1}^N w^i_t δ(x_t − x^i_t).
If x^i_{t+1} is a sample from an "importance sampling density" q(x_{t+1} | x^i_t), then the weighted sample {x^i_{t+1}, w^i_{t+1}}, i = 1, ..., N, approximates the posterior at time t+1 if we choose
    w^i_{t+1} ∝ w^i_t · p^m(x^i_{t+1} | x^i_t) · p^η(y_{t+1} | x^i_{t+1}) / q(x^i_{t+1} | x^i_t).
This is the main idea behind particle filtering.
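For the common "bootstrap" choice q = p^m, the weight update collapses to w ∝ w · p^η(y|x). The sketch below is a minimal bootstrap particle-filter step under that assumption; it can be driven by the lorenz63_step and h sketched earlier, and the model-noise level, likelihood form, and resampling threshold are all illustrative choices.

```python
# One prediction + update (+ optional resampling) cycle of a bootstrap
# particle filter with Gaussian observation noise of std obs_std.
import numpy as np

def pf_step(particles, w, y, step, h, obs_std, rng):
    # Prediction: propagate each particle through the (stochastic) model,
    # i.e. sample from the transition kernel p^m(x_{t+1} | x_t).
    particles = np.array([step(p) for p in particles])
    particles += 0.05 * rng.standard_normal(particles.shape)   # model noise ζ
    # Update: reweight by the observation likelihood p^η(y | x).
    innov = y - np.array([h(p) for p in particles]).ravel()
    w = w * np.exp(-0.5 * (innov / obs_std) ** 2)
    w /= w.sum()
    # Resample when the effective sample size collapses (standard remedy
    # for weight degeneracy; threshold N/2 is a conventional choice).
    if 1.0 / np.sum(w**2) < len(w) / 2:
        idx = rng.choice(len(w), size=len(w), p=w)
        particles, w = particles[idx], np.full(len(w), 1.0 / len(w))
    return particles, w
```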
• 39. Summary
Data assimilation: the art of optimally incorporating partial and noisy observational data of a chaotic, nonlinear, complex dynamical system with an imperfect model (of the data and the system dynamics) to get an estimate, and the associated uncertainty, for the system state.
Sampling methods (including importance sampling) provide efficient ways to approach high-dimensional data assimilation problems, with two particularly useful methods:
- particle filtering (PF)
- ensemble Kalman filtering (EnKF)