4. Multivariate Normal Method
The multi-variate normal density function of Np(µ,C), the p-dimensional normal random variate, is given by

f(\mathbf{Z}) = \frac{1}{(2\pi)^{p/2}\,|\mathbf{C}|^{1/2}} \exp\left[-\frac{1}{2}(\mathbf{Z}-\boldsymbol{\mu})^{T}\mathbf{C}^{-1}(\mathbf{Z}-\boldsymbol{\mu})\right]
where, Np(µ,C) is a multi-variate normal distribution with mean vector µ and
covariance matrix C,
p is the number of parameters (nodes of the model),
Z = {Z1, Z2, ..., Zp}^T is the p-dimensional random vector (p×1),
µ = {µ1, µ2, ..., µp}^T is the p-dimensional vector of mean values (p×1),
superscript T denotes the transpose of a matrix,
superscript -1 denotes the inverse of a matrix, and
C is the p×p covariance matrix.
6. Correlation Matrix
The correlation coefficient ρij is defined as

\rho_{ij} = \frac{\mathrm{Cov}(Z_i, Z_j)}{\sigma_{Z_i}\,\sigma_{Z_j}}
The correlation matrix R is

\mathbf{R} = \begin{bmatrix} 1 & \rho_{12} & \ldots & \rho_{1p} \\ \rho_{21} & 1 & \ldots & \rho_{2p} \\ \vdots & & \ddots & \vdots \\ \rho_{p1} & \rho_{p2} & \ldots & 1 \end{bmatrix}
7. General Technique for Generation of Multivariate Distribution (CD Approach)
Let Z = {Z1,Z2,Z3,...Zp}T be the p-dimensional random vector of interest.
p(z_1, z_2, \ldots, z_p) = p(z_1)\, p(z_2 \mid z_1)\, p(z_3 \mid z_1, z_2) \cdots p(z_p \mid z_1, z_2, \ldots, z_{p-1})
The conditional distribution approach involves the following steps:
(1) generate Z1 = z1 from the marginal distribution of Z1 (uni-variate
distribution of Z1);
(2) generate Z2 = z2 from the conditional distribution of Z2 given Z1 = z1;
(3) generate Z3 = z3 from the conditional distribution of Z3 given Z1 = z1
and Z2 = z2...
and so forth through the p steps.
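The steps above can be sketched for the simplest case, a bivariate normal, where both the marginal and the conditional distributions are themselves normal. This is a minimal Python sketch; the function name and interface are illustrative, not part of the method's original presentation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cd_bivariate_normal(mu, C, n):
    """Conditional-distribution (CD) sampling of (Z1, Z2) ~ N2(mu, C):
    step 1 draws Z1 from its marginal, step 2 draws Z2 given Z1 = z1."""
    s1, s2 = np.sqrt(C[0][0]), np.sqrt(C[1][1])
    rho = C[0][1] / (s1 * s2)
    # Step 1: Z1 from its uni-variate marginal N(mu1, s1^2)
    z1 = rng.normal(mu[0], s1, n)
    # Step 2: Z2 | Z1 = z1 is N(mu2 + rho*(s2/s1)*(z1 - mu1), s2^2*(1 - rho^2))
    z2 = rng.normal(mu[1] + rho * (s2 / s1) * (z1 - mu[0]),
                    s2 * np.sqrt(1.0 - rho**2))
    return z1, z2
```

For higher dimensions, each successive conditional is again normal, with mean and variance obtained from partitioned-covariance formulas.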
8. Comparison between Multi-variate Statistical Theory and Random Field Theory
• Statistical Theory:
Each variate is considered as a component of a random vector, and the covariance matrix gives the dependences between components of the random vector.
• Random Field Theory:
Each node value in the field is considered as a component of a random vector. The dependences between node values are described by the auto-covariance function.
9. Example of CD Approach (1)
[Figure: an Nx × Ny lattice with corner cells (1,1), (Nx,1), (1,Ny), (Nx,Ny); cell (i,j) with its neighbours (i-1,j) and (i,j-1).]
The joint probability of the lattice process can be expressed mathematically as

\Pr(Z_{1,1}=S_1,\, Z_{1,2}=S_2,\, \ldots,\, Z_{i,j}=S_k,\, Z_{i-1,j}=S_l,\, Z_{i,j-1}=S_q,\, \ldots,\, Z_{N_x,N_y}=S_p)

where,
Sk is a state of cell (i,j), which is one of the n states describing the geological system,
Nx is the maximum number of cells in the horizontal direction, and
Ny is the maximum number of cells in the vertical direction.
10. Example of CD Approach (2)
By the conditional-probability (chain) rule, the joint probability factorizes as

\Pr(Z_{1,1}=S_1, \ldots, Z_{i,j}=S_k, \ldots, Z_{N_x,N_y}=S_p) = \Pr(Z_{N_x,N_y}=S_p \mid Z_{N_x,N_y-1}=S_t, \ldots, Z_{1,1}=S_1) \cdots \Pr(Z_{i,j}=S_k \mid Z_{i-1,j}=S_l, Z_{i,j-1}=S_q, \ldots, Z_{1,1}=S_1) \cdots \Pr(Z_{1,2}=S_2 \mid Z_{1,1}=S_1)\, \Pr(Z_{1,1}=S_1)

where, Pr(Z_{1,1}=S_1) is the marginal probability of state S_1.
11. Example of CD Approach (3)
Introducing a nearest-neighbour property according to Markov chain theory, each term is conditioned only on the previously generated nearest neighbours:

\Pr(Z_{1,1}=S_1, \ldots, Z_{i,j}=S_k, \ldots, Z_{N_x,N_y}=S_p) = \Pr(Z_{1,1}=S_1)\, \Pr(Z_{1,2}=S_2 \mid Z_{1,1}=S_1) \cdots \Pr(Z_{i,j}=S_k \mid Z_{i-1,j}=S_l, Z_{i,j-1}=S_q) \cdots \Pr(Z_{N_x,N_y}=S_p \mid Z_{N_x-1,N_y}=S_r, Z_{N_x,N_y-1}=S_t)
12. LU Decomposition Method (1)
[Figure: points 1, 2, 3, ..., p in the X–Y plane; s_ij is the distance between points i and j, which carry field values Z_i and Z_j.]
The algorithm for generating random fields with a given covariance structure
based on the covariance matrix of the system is as follows:
1) Build the covariance matrix C of the system. The elements of C are denoted by

c_{ij} = \mathrm{Cov}(Z_i, Z_j)

and if i = j the covariances become the variances.
\mathbf{C} = \begin{bmatrix} \sigma^2_{Z_1} & \mathrm{Cov}(Z_1,Z_2) & \ldots & \mathrm{Cov}(Z_1,Z_p) \\ \mathrm{Cov}(Z_2,Z_1) & \sigma^2_{Z_2} & \ldots & \mathrm{Cov}(Z_2,Z_p) \\ \vdots & & \ddots & \vdots \\ \mathrm{Cov}(Z_p,Z_1) & \ldots & \ldots & \sigma^2_{Z_p} \end{bmatrix}
13. LU Decomposition Method (2)
In the case of a stationary random field, the elements of the covariance matrix are given as

c_{ij} = \sigma^2_Z\, \rho(s_{ij})

where,
σ²_Z is the variance of the process Z,
ρ(s_ij) is the auto-correlation function, and
s_ij is the distance between point i and point j.

For pairs of values Z_i and Z_j with i = 1,...,p and j = 1,...,p, determine (x_i - x_j) and (y_i - y_j), where (x_i, y_i) are the coordinates of point Z_i and (x_j, y_j) are the corresponding coordinates of point Z_j. The distance s_ij between the two points is

s_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}
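As an illustration, the covariance matrix of a stationary field can be assembled directly from the point coordinates. The exponential form of ρ(s) used below is an assumed example for the sketch, not prescribed by the slides.

```python
import numpy as np

def covariance_matrix(coords, sigma2_z, lam):
    """c_ij = sigma_Z^2 * rho(s_ij) for a stationary field, with an assumed
    exponential auto-correlation rho(s) = exp(-s / lam).
    coords: (p, 2) array of (x, y) coordinates of the p points."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise (x_i-x_j, y_i-y_j)
    s = np.sqrt((diff ** 2).sum(axis=-1))            # distances s_ij
    return sigma2_z * np.exp(-s / lam)               # diagonal: s_ii = 0 -> sigma_Z^2
```

The result is symmetric with the variances on the diagonal, as required for the Cholesky step that follows.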
15. Properties of Auto-correlation Matrix
1) All the diagonal elements are equal
to one, i.e. correlation between the
point and itself is perfect (complete
correlation).
2) If ρij = 0, this means no correlation
between i and j.
3) All the off-diagonal elements are
called autocorrelation coefficients
and they are less than one.
4) The autocorrelation matrix is
symmetric, i.e., ρij = ρ ji.
5) According to the stationarity
assumption:
ρ12 = ρ23 = ... = ρ(p-1)p,
ρ13 = ρ24 = ... = ρ(p-2)p,
and so on.
16. LU Decomposition Method (3)
2) Decompose the covariance matrix by the Cholesky factorization (Square-Root)
method,
C = L U
where,
L is a unique lower triangular matrix,
U is a unique upper triangular matrix, and
U is LT , i.e., U is the transpose of L.
l_{11} = c_{11}^{1/2}

l_{i1} = \frac{c_{i1}}{l_{11}}, \quad 1 \le i \le p

l_{ii} = \left[c_{ii} - \sum_{k=1}^{i-1} l_{ik}^2\right]^{1/2}, \quad 1 \le i \le p

l_{ij} = \frac{1}{l_{jj}}\left[c_{ij} - \sum_{k=1}^{j-1} l_{ik}\, l_{jk}\right], \quad 1 < j < i \le p

l_{ij} = 0, \quad i < j \le p
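The recursion above translates almost line by line into code. This is a didactic sketch; in practice numpy.linalg.cholesky performs the same factorization.

```python
import numpy as np

def cholesky_lower(C):
    """Lower-triangular L with C = L L^T, computed from the recursion above."""
    C = np.asarray(C, dtype=float)
    p = C.shape[0]
    L = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1):
            s = sum(L[i, k] * L[j, k] for k in range(j))
            if i == j:
                L[i, i] = np.sqrt(C[i, i] - s)     # l_ii = (c_ii - sum_k l_ik^2)^(1/2)
            else:
                L[i, j] = (C[i, j] - s) / L[j, j]  # l_ij = (c_ij - sum_k l_ik l_jk) / l_jj
    return L                                       # entries above the diagonal stay 0
```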
17. LU Decomposition Method (4)
3) Generation of a normally distributed p-dimensional sequence of independent
random numbers with zero mean and unit standard deviation, N(0,1), which can
be expressed as

\boldsymbol{\varepsilon} = \{\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_p\}^T

where, ε is the vector of normally distributed random numbers, and εi is the i-th
random number drawn from N(0,1).
4) Multiplication of the independent random vector ε by the lower triangular
matrix L to get a vector of auto-correlated random numbers. This vector can be
expressed by matrix multiplication convention as,

X = L ε

where, X is a multi-variate normal random vector Np(0, C), since Cov(X) = L Cov(ε) L^T = L L^T = C, and 0 is the zero mean vector (p×1). Finally the mean is added back:

Z = µ + X
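Steps 1-4 combine into a few lines. This sketch uses NumPy's Cholesky factor L, consistent with Cov(Lε) = L Lᵀ = C; the function name and example values are illustrative.

```python
import numpy as np

def lu_random_field(mu, C, n_real, rng):
    """n_real realizations of Z = mu + L @ eps, with C = L L^T and
    eps ~ N(0, I), so that Cov(Z) = L Cov(eps) L^T = C."""
    mu = np.asarray(mu, dtype=float)
    L = np.linalg.cholesky(C)                       # step 2: C = L L^T
    eps = rng.standard_normal((len(mu), n_real))    # step 3: independent N(0,1)
    return mu[:, None] + L @ eps                    # step 4: correlate, add mean

rng = np.random.default_rng(42)
C = np.array([[1.0, 0.6], [0.6, 1.0]])
Z = lu_random_field([5.0, -1.0], C, 200000, rng)
```

The factorization is done once; each additional realization costs only one matrix-vector product.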
21. Nearest Neighbour Method
Whittle’s Model [1954],

Z_i = \sum_{j \ne i} W_{ij} Z_j + \varepsilon_i

where,
Z_i is a random variable satisfying the nearest neighbour relation,
ε_i is an uncorrelated normal random number with E(ε_i) = 0 and Var(ε_i) = σ_i², i = 1, 2, ..., p, and
W_ij are weighting coefficients.
Anisotropic first-order auto-regressive model (Smith and Freeze [1979b]),

Z_{ij} = \alpha_x (Z_{i-1,j} + Z_{i+1,j}) + \alpha_y (Z_{i,j-1} + Z_{i,j+1}) + \varepsilon_{ij}

where,
α_x is an auto-regressive parameter expressing the degree of dependence of Z_ij on its two neighbouring values Z_{i-1,j} and Z_{i+1,j}, (|α_x| < 1), and
α_y is an auto-regressive parameter expressing the degree of dependence of Z_ij on its two neighbouring values Z_{i,j-1} and Z_{i,j+1}, (|α_y| < 1).
[Figure: five-point stencil around cell (i,j) with neighbours (i-1,j), (i+1,j), (i,j-1), (i,j+1); N = 2 at a corner, N = 3 on a boundary, N = 4 in the interior.]
22. Nearest Neighbour Method (1)
Z = W Z + ε

W is called the p×p connectivity matrix, or the p×p spatial lag operator of scaled
weights, w_kl.
The elements of the connectivity matrix w_kl are defined as,
w_{kl} = \frac{w^*_{kl}}{N}

where, k = 1, 2, ..., p, l = 1, 2, ..., p, and k ≠ l,
w*_kl = α_x if the blocks k and l are contiguous in the x-direction,
w*_kl = α_y if the blocks k and l are contiguous in the y-direction, and
w*_kl = 0 otherwise, i.e., if k = l, or if blocks k and l are not contiguous, and
N is the total number of blocks surrounding block k, i.e.,
N=4 if block k is located inside the domain,
N=3 if block k is located on the boundary of the domain, and
N=2 if block k is located at a corner of the domain.
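The rules above can be sketched as a small assembly routine. The cell numbering k = ix·ny + iy is an implementation choice, not part of the slides.

```python
import numpy as np

def connectivity_matrix(nx, ny, ax, ay):
    """p x p spatial lag operator W, p = nx*ny, with w_kl = w*_kl / N:
    w*_kl = ax for x-neighbours, ay for y-neighbours, 0 otherwise;
    N = number of blocks surrounding block k (4 interior, 3 edge, 2 corner)."""
    p = nx * ny
    W = np.zeros((p, p))
    for ix in range(nx):
        for iy in range(ny):
            k = ix * ny + iy
            nbrs = []
            if ix > 0:
                nbrs.append((k - ny, ax))   # block (ix-1, iy)
            if ix < nx - 1:
                nbrs.append((k + ny, ax))   # block (ix+1, iy)
            if iy > 0:
                nbrs.append((k - 1, ay))    # block (ix, iy-1)
            if iy < ny - 1:
                nbrs.append((k + 1, ay))    # block (ix, iy+1)
            for l, a in nbrs:
                W[k, l] = a / len(nbrs)     # divide by N = 2, 3, or 4
    return W
```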
23. Nearest Neighbour Method (2)
Z = W Z + ε

with Z ~ (0, σ_Z) and ε ~ (0, σ_ε).

To simulate the predetermined standard deviation σ_Z:
- start from a random vector ε with σ_ε = 1;
- pre-multiply ε by an appropriate factor η to yield σ_Z:

Z = W Z + η ε

Solution

Z = \eta\, (I - W)^{-1}\, \varepsilon

Determination of η: the covariance matrix of the generated field is R = E{Z Z^T}, whose diagonal elements must match the target variance σ_Z².
24. Nearest Neighbour Method (3)
R = E\{Z Z^T\} = \eta^2 \sigma_\varepsilon^2\, V, \qquad V = (I - W)^{-1}\left[(I - W)^{-1}\right]^T = \left[(I - W)^T (I - W)\right]^{-1}

Taking the trace of the matrix equation (with σ_ε = 1),

p\, \sigma_Z^2 = \eta^2\, \mathrm{tr}(V) \quad\Rightarrow\quad \eta = \frac{\sigma_Z}{\sqrt{V_m}}

where Vm = tr(V)/p, and the symbol "tr" is the trace of the matrix, tr(V) = Σ v_ii, i = 1, ..., p.
25. Nearest Neighbour Method (4)
Z = \frac{\sigma_Z}{\sqrt{V_m}}\, (I - W)^{-1}\, \varepsilon

Z' = \mu_Z + \frac{\sigma_Z}{\sqrt{V_m}}\, (I - W)^{-1}\, \varepsilon
Analysis of the covariance function of the generated random field shows that, with first-order dependence, it is approximately an exponential decay function.
The advantage is that at the beginning of any simulation the matrix (I - W) must be inverted only once; for each realization of the process Z, the inverted matrix (I - W)^{-1} is simply multiplied by the generated random vector ηε.
The drawback of this method is the cost of computing the inverse matrix.
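Putting the scaling together, one zero-mean simulation routine might look as follows. This is a sketch; W is assumed to be any connectivity matrix with spectral radius below 1, so that I - W is invertible.

```python
import numpy as np

def nn_field(W, sigma_z, n_real, rng):
    """Zero-mean nearest-neighbour realizations Z = eta * (I - W)^-1 eps,
    with eta = sigma_z / sqrt(Vm), Vm = tr(V)/p, V = (I-W)^-1 [(I-W)^-1]^T,
    so the variance averaged over the field equals sigma_z**2."""
    p = W.shape[0]
    A = np.linalg.inv(np.eye(p) - W)          # (I - W) inverted only once
    V = A @ A.T
    eta = sigma_z / np.sqrt(np.trace(V) / p)
    eps = rng.standard_normal((p, n_real))    # sigma_eps = 1
    return eta * (A @ eps)

rng = np.random.default_rng(7)
W = np.array([[0.0, 0.4], [0.4, 0.0]])        # toy 2-cell example
Z = nn_field(W, 2.0, 200000, rng)
```

As the slide notes, the inverse is computed once and reused for every realization.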
27. Turning Bands Method (TBM)
The TBM was first proposed by Matheron [1973] and applied by the École des Mines de Paris.
Its basic concept is to transform a multi-dimensional simulation into the sum of a series of equivalent uni-dimensional simulations.
28. TBM Procedure
TBM is a repetition of two steps:
1. A realization of a random process with a prescribed auto-covariance function and zero mean is generated on one line.
- The Cholesky decomposition method can be used (but with much smaller correlation matrix dimensions), or
- auto-regression methods, like nearest neighbour.
2. Orthogonal projection of the generated line process to each point in the simulated two- or three-dimensional random field.
The two steps are repeated for a given number of lines, and then a final value is assigned to each grid point in the field by taking a weighted average over the total number of lines.
29. Background of the Method
Let Z_i(u), i = 1, ..., L, be a set of L independent realizations of a one-dimensional, second-order stationary stochastic process on a line u, with auto-correlation function ρ_1(u_o), where u_o is the spatial lag on the line.
The values given by the relation

Z_s(x, y, z) = \frac{1}{\sqrt{L}} \sum_{i=1}^{L} Z_i(u_i)

form a realization of a two- or three-dimensional process with zero mean, where u_i is the coordinate of the projection of the point (x, y, z) onto line i.
The subscript s represents the term "simulated" or "synthetic".

The line auto-correlation ρ_1 is related to the field auto-correlation ρ by:

3-D case: \rho_1(u_o) = \frac{d}{du_o}\left[u_o\, \rho(u_o)\right]

2-D case: \rho(s) = \frac{2}{\pi} \int_0^s \frac{\rho_1(u_o)}{\sqrt{s^2 - u_o^2}}\, du_o
30. Spectral Turning Bands Method (STBM)
To circumvent the difficulty, an expression for the spectral density function of the one-dimensional processes as a function of the radial spectral density function of the two-dimensional process is used.
This expression is given in Fourier space by

S_1(\omega) = \frac{\sigma_Z^2}{2}\, S(\omega)

That is, the spectral density function S_1(ω) of the uni-dimensional process along the turning-bands lines is given by one half of the radial spectral density function S(ω) of the two-dimensional process multiplied by the variance of the two-dimensional process.
31. Implementation of the STBM in 2D (1)
(1) Generation of the one-dimensional uni-variate process on the turning-bands line (standard Fourier integration method):

X(u) = Z(u) + i\,Y(u)

X(u) = \int e^{i\omega u}\, dW(\omega) \approx \sum_{\text{all } \omega_j} e^{i\omega_j u}\, dW(\omega_j)

X is the sum of a complex series of sinusoidal functions of varying wavelength, each magnified by a complex random amplitude with zero mean, dW(ω_j), ω_j = j·∆ω. Taking the real part,

Z(u) = \mathrm{Re}\{X(u)\} = \int |dW(\omega)| \cos(\omega u + \phi) \approx \sum_{j=1}^{M} |dW(\omega_j)| \cos(\omega_j u + \phi_j)

where, φ_j represents independent random angles which are uniformly distributed between 0 and 2π,
M is the number of harmonics used in the calculations, ω_j = (j - 0.5)∆ω, j = 1, 2, ..., M,
∆ω is the frequency increment, given by ω_max/M, and ω_max is the maximum frequency used in the calculations.
32. Implementation of the STBM in 2D (2)
|dW(\omega_j)| = \left[4\, S_1(\omega_j)\, \Delta\omega\right]^{1/2}

where, S_1(ω_j) is the spectral density function of the real process Z(u) on the line.
S_1(ω) is assumed to be insignificant outside the region [-ω_max, +ω_max].
Z_i(u) = 2 \sum_{j=1}^{M} \left[S_1(\omega_j)\, \Delta\omega\right]^{1/2} \cos(\omega_j u + \phi_j)

Z_i(u) = 2 \sum_{j=1}^{M} \left[S_1(\omega_j)\, \Delta\omega\right]^{1/2} \cos(\omega'_j u + \phi_j)
where, ω'_j = ω_j + δω.
The frequency δω is a small random frequency added here in order to avoid periodicities. δω is uniformly distributed between -∆ω'/2 and ∆ω'/2, where ∆ω' is a small frequency, ∆ω' << ∆ω. ∆ω' is taken equal to ∆ω/20 according to Shinozuka and Jan [1972].
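The two formulas combine into a short line-simulation routine. This is a sketch; S1 is passed in as a callable since the slides leave the spectrum generic, and the flat test spectrum below is an assumed example.

```python
import numpy as np

def line_process(u, S1, omega_max, M, rng):
    """Z(u) = 2 * sum_j [S1(w_j) dw]^(1/2) * cos(w'_j u + phi_j) on one
    turning-bands line, with w_j = (j - 0.5) dw and jittered w'_j."""
    dw = omega_max / M
    wj = (np.arange(1, M + 1) - 0.5) * dw             # harmonics w_j
    dwp = dw / 20.0                                   # dw' = dw/20 (Shinozuka & Jan)
    wpj = wj + rng.uniform(-dwp / 2, dwp / 2, M)      # w'_j avoids periodicity
    phi = rng.uniform(0.0, 2.0 * np.pi, M)            # random phases in [0, 2pi)
    amp = 2.0 * np.sqrt(S1(wj) * dw)                  # amplitudes 2[S1 dw]^(1/2)
    return (amp * np.cos(np.outer(u, wpj) + phi)).sum(axis=1)

rng = np.random.default_rng(3)
S1 = lambda w: np.full_like(w, 0.5)   # assumed flat test spectrum on [0, omega_max]
```

With this discretization the process variance is 2·Σ S1(ω_j)∆ω, approximating 2∫S1 dω.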
33. Implementation of the STBM in 2D (3)
2-D anisotropic covariance:

\mathrm{Cov}(\mathbf{s}) = \sigma_Z^2 \exp\left[-\left(\left(\frac{s_x}{\lambda_x}\right)^2 + \left(\frac{s_y}{\lambda_y}\right)^2\right)^{1/2}\right]

Spectrum:

S(\boldsymbol{\omega}) = \frac{\sigma_Z^2\, \lambda_x \lambda_y}{2\pi \left[1 + (\lambda_x \omega_x)^2 + (\lambda_y \omega_y)^2\right]^{3/2}}
34. Implementation of the STBM in 2D (4)
(2) Distribution of the turning-bands lines and the number of lines:
Random orientation versus evenly spaced lines (evenly spaced converges faster).
8-16 lines are a satisfactory choice in the case of isotropic correlation.
(3) Spectral discretization:
∆ω must be kept small enough and M large enough (M∆ω > 50).
Mantoglou and Wilson [1982], WRR.
(4) Physical discretization:
∆u < min {∆x, ∆y, ∆z}.
(5) Length of the turning-bands lines:
The minimum length is determined by the orientation of the line and the domain size.
35. Spectral Turning Bands Method: Projection from many lines
36. Comparison Between Various Methods
Method | ⟨K⟩ (m/day) | σK (m/day) | ⟨Y⟩ | σY | λx (m) | λy (m)
NNG (αx = .98, αy = .50) | 1 | 2 | -0.8 | 1.3 | 1.2 | 0.73
MVG | 1 | 2 | -0.8 | 1.3 | 1.2 | 0.73
TBG | 1 | 2 | -0.8 | 1.3 | 1.2 | 0.73

Monte-Carlo runs = 100
Domain dimensions = 15 × 15 m
Domain discretization = 1 × 1 m
38. Mosaic Facies (Discrete) Models
In this approach one aims to reconstruct the formation's geological units and their geometric characteristics:
- lithologies,
- unit dimensions (length, thickness, and width),
- orientations and frequency of occurrence, etc.
Types of discrete models:
• Object-based Models.
• Sequential-based Models:
- Markov Chains in 1-D, 2-D, etc.
- Markov Random Fields.
- Truncated Gaussian Fields.
- Sequential Indicator Simulation Models.
- Random Lines Models.
- Random Sets.
39. Object-based Models
This model considers only two states (like sand and shale formations).
Two parameters are considered:
1. The density of the random objects per unit of volume.
2. Statistical distribution of the sizes of the objects.
41. Theory of One-dimensional Markov Chain
[Figure: a one-dimensional chain of cells 0, 1, ..., i-1, i, i+1, ..., N with states S_l, S_k, S_q.]

The Markov property states that

\Pr(Z_i = S_k \mid Z_{i-1} = S_l, Z_{i-2} = S_n, Z_{i-3} = S_r, \ldots, Z_0 = S_p) = \Pr(Z_i = S_k \mid Z_{i-1} = S_l) = p_{lk}

Transition probabilities: the n × n transition matrix is

\mathbf{P} = \begin{bmatrix} p_{11} & p_{12} & \ldots & p_{1n} \\ \vdots & & p_{lk} & \vdots \\ p_{n1} & \ldots & \ldots & p_{nn} \end{bmatrix}

with p_{lk} \ge 0, \quad \sum_{k=1}^{n} p_{lk} = 1

Marginal probabilities: \lim_{N \to \infty} \left(p_{lk}\right)^{(N)} = w_k, with

w_k = \sum_{l=1}^{n} w_l\, p_{lk}, \quad k = 1, \ldots, n, \qquad w_k \ge 0, \quad \sum_{k=1}^{n} w_k = 1
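A one-dimensional chain with these transition and marginal probabilities can be simulated directly. This is a minimal sketch; the two-state transition matrix below is an assumed example.

```python
import numpy as np

def markov_chain(P, w0, n, rng):
    """Simulate n cells of a 1-D Markov chain: the first state is drawn from
    the marginal probabilities w0, each later state from the row of the
    transition matrix P selected by the previous state."""
    P = np.asarray(P, dtype=float)
    z = np.empty(n, dtype=int)
    z[0] = rng.choice(len(w0), p=w0)
    for i in range(1, n):
        z[i] = rng.choice(P.shape[1], p=P[z[i - 1]])   # row p_lk, l = previous state
    return z

rng = np.random.default_rng(5)
P = [[0.9, 0.1], [0.2, 0.8]]                 # two-state example, stationary w = (2/3, 1/3)
z = markov_chain(P, [2.0 / 3.0, 1.0 / 3.0], 200000, rng)
```

Starting from the stationary marginals w, the empirical state frequencies converge to w, illustrating the limit relation above.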
61. Markov Random Field Model
- Originally developed for image processing.
- Similarity between image description and reservoir description.
- The method does not use variogram or auto-correlation to describe
the relation between the neighbouring locations but it is based on the
theory of conditional probabilities.
Procedure (Simulated annealing or Metropolis algorithm):
1. The states in the system are generated by arbitrary distribution over
the lattice.
2. Two grid points are selected at random from the lattice.
3. Simple exchange of the grid points states based on conditional
probabilities.
4. After each trial, a new permutation of the states of the grid points is
created.
5. The procedure continues iteratively until the marginal and the
transition probabilities of the states in the system are stabilized.
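Steps 2-4 can be sketched with a toy objective. Here the method's conditional-probability criterion is replaced by an assumed Metropolis energy, the count of unlike neighbour pairs, which is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_swaps(lattice, beta, n_iter):
    """Pick two cells at random and swap their states, accepting each swap
    with the Metropolis rule exp(-beta * dE); the energy E used here is an
    assumed stand-in: the number of unlike neighbour pairs on the lattice."""
    def energy(a):
        return np.sum(a[1:, :] != a[:-1, :]) + np.sum(a[:, 1:] != a[:, :-1])

    a = lattice.copy()
    e = energy(a)
    nx, ny = a.shape
    for _ in range(n_iter):
        i1, j1 = rng.integers(nx), rng.integers(ny)
        i2, j2 = rng.integers(nx), rng.integers(ny)
        a[i1, j1], a[i2, j2] = a[i2, j2], a[i1, j1]      # trial exchange (step 3)
        e_new = energy(a)
        if e_new > e and rng.random() >= np.exp(-beta * (e_new - e)):
            a[i1, j1], a[i2, j2] = a[i2, j2], a[i1, j1]  # reject: undo the swap
        else:
            e = e_new                                    # accept the new permutation
    return a
```

Swapping (rather than resampling) preserves the marginal proportions of the states, so only the spatial arrangement evolves.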
63. Random Lines Model
Poisson Random Lines Model (Switzer [1965]).
The method generates random lines from a Poisson distribution on a circle.
The lines are used to represent boundaries between different soils.