Probabilistic Error Bounds for Order Reduction
of Smooth Nonlinear Models

Mohammad G. Abdo and Hany S. Abdel-Khalik
Presented by: Congjian Wang

Department of Nuclear Engineering, North Carolina State University
mgabdo@ncsu.edu and abdelkhalik@ncsu.edu

June 16, 2014
Motivation
ROM plays a vital role in many disciplines, especially for
computationally intensive applications.
It is mandatory to equip reduced order models with error metrics
to credibly defend the predictions of the reduced model.
Probabilistic error bounds have so far been developed mostly for linear models.
Reduction errors need to be propagated across various
interfaces, such as the parameter interface (e.g., cross sections), the state
function (e.g., flux), and the response of interest (e.g., reaction rates,
detector responses, etc.).
We will adopt a formal mathematical definition that was
developed in the 1960s in the signal processing community.

Definition
A nonlinear function $f$ with $n$ inputs is said to be reducible and of
intrinsic dimension $r$ ($0 \le r \le n$) if there exists a nonlinear function $g$
with $r$ inputs and an $n \times r$ matrix $\mathbf{Q}$ such that $r$ is the smallest integer
satisfying
$$f(\mathbf{x}) = g(\tilde{\mathbf{x}}),$$
where $\mathbf{x} \in \mathbb{R}^n$ and $\tilde{\mathbf{x}} = \mathbf{Q}^T\mathbf{x} \in \mathbb{R}^r$.
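
To make the definition concrete, the following is a minimal sketch (not from the original slides) of a toy reducible function: a nonlinear map of $n = 10$ inputs whose output depends only on $r = 2$ linear combinations of the inputs, so its intrinsic dimension is 2. All names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 10, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))  # n x r matrix with orthonormal columns

def g(x_tilde):
    """Nonlinear function of the r reduced coordinates."""
    return np.array([np.sin(x_tilde[0]) + x_tilde[1] ** 2,
                     np.exp(-x_tilde[0] * x_tilde[1])])

def f(x):
    """Full model: depends on x only through Q^T x, so its intrinsic dimension is r."""
    return g(Q.T @ x)

# Perturbing x along a direction orthogonal to range(Q) leaves f unchanged.
x = rng.standard_normal(n)
null_dir = x - Q @ (Q.T @ x)   # component of x outside the active subspace
print(np.allclose(f(x), f(x + 0.5 * null_dir / np.linalg.norm(null_dir))))  # True
```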
Reduction Algorithms
In our context, reduction algorithms refer to two different
algorithms, each used at a different interface:
Snapshot reduction algorithm (gradient-free): reduces the response
interface.
Gradient-based reduction algorithm: reduces the parameter interface.
Snapshot Reduction
Consider the reducible model under inspection to be described by
$$\mathbf{y} = f(\mathbf{x}), \qquad (1)$$
The algorithm proceeds as follows:
1. Generate $k$ random parameter realizations $\{\mathbf{x}_i\}_{i=1}^{k}$.
2. Execute the forward model in Eq. (1) $k$ times and record the
   corresponding $k$ variations of the responses, $\{\mathbf{y}_i = f(\mathbf{x}_i)\}_{i=1}^{k}$,
   referred to as snapshots, and aggregate them in a matrix as follows:
   $$\mathbf{Y} = \begin{bmatrix} \mathbf{y}_1 & \mathbf{y}_2 & \cdots & \mathbf{y}_k \end{bmatrix} \in \mathbb{R}^{m \times k}.$$
3. Calculate the singular value decomposition (SVD)
   $$\mathbf{Y} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T, \quad \text{where } \mathbf{U} \in \mathbb{R}^{m \times k}.$$
Snapshot Reduction (cont.)
4. Select the dimensionality of the reduced space for the responses
   to be $r_y$, such that $r_y \le \min(m, k)$. Identify the active subspace
   as the range of the first $r_y$ columns of the matrix $\mathbf{U}$, denoted by
   $\mathbf{U}_{r_y}$. Note that in practice $r_y$ is increased until the error
   upper bound in step 5 meets a user-defined error tolerance.
5. For a general response $\mathbf{y}$, calculate the error resulting from the
   reduction as
   $$\mathbf{e}_y = \left(\mathbf{I} - \mathbf{U}_{r_y}\mathbf{U}_{r_y}^{T}\right)\mathbf{y}.$$
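
The sketch below illustrates the snapshot reduction steps above in Python/NumPy. It is not the authors' implementation; the model `f`, the snapshot count `k`, and the tolerance are illustrative assumptions.

```python
import numpy as np

def snapshot_reduction(f, n, m, k=50, tol=1e-5, seed=0):
    """Gradient-free snapshot reduction of the response interface (steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, k))                      # step 1: k random parameter realizations
    Y = np.column_stack([f(X[:, i]) for i in range(k)])  # step 2: snapshot matrix, m x k
    U, _, _ = np.linalg.svd(Y, full_matrices=False)      # step 3: SVD of the snapshot matrix
    for r_y in range(1, min(m, k) + 1):                  # step 4: grow r_y until the tolerance is met
        U_r = U[:, :r_y]
        if np.linalg.norm(Y - U_r @ (U_r.T @ Y)) / np.linalg.norm(Y) <= tol:
            break
    return U_r

# Toy usage: responses that live exactly in a 3-dimensional subspace (illustrative only)
n, m = 15, 10
rng = np.random.default_rng(1)
B = rng.standard_normal((m, 3))
f = lambda x: B @ np.array([np.sin(x[0]), x[1] ** 2, x[2] * x[3]])
U_r = snapshot_reduction(f, n, m)
y_test = f(rng.standard_normal(n))
e_y = (np.eye(m) - U_r @ U_r.T) @ y_test   # step 5: reduction error for a general response
print(U_r.shape[1], np.linalg.norm(e_y))   # expect r_y = 3 and a near-zero error
```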
Gradient-based Reduction
This algorithm may be described by the following steps:
1. Execute the adjoint model $k$ times, each time with a random
   realization of the input parameters, and aggregate the pseudo
   response derivatives in a matrix:
   $$\mathbf{G} = \begin{bmatrix} \left.\dfrac{dR_1^{\text{pseudo}}}{d\mathbf{x}}\right|_{\mathbf{x}_1} & \cdots & \left.\dfrac{dR_k^{\text{pseudo}}}{d\mathbf{x}}\right|_{\mathbf{x}_k} \end{bmatrix}.$$
2. Calculate the SVD $\mathbf{G} = \mathbf{W}\mathbf{S}\mathbf{P}^T$, and select the first $r_x$ columns
   of $\mathbf{W}$ (denoted by $\mathbf{W}_{r_x}$) to span the active subspace for the
   parameters such that
   $$\mathbf{e}_x = \left(\mathbf{I} - \mathbf{W}_{r_x}\mathbf{W}_{r_x}^{T}\right)\mathbf{x}.$$
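
Below is a minimal sketch of the gradient-based parameter reduction, assuming the pseudo-response gradients are available as a callable `grad_R` (in practice they would come from an adjoint solve). The gradient routine, sample count, and rank-selection rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradient_based_reduction(grad_R, n, k=50, tol=1e-5, seed=0):
    """Reduce the parameter interface from sampled pseudo-response gradients (steps 1-2)."""
    rng = np.random.default_rng(seed)
    # step 1: aggregate gradients dR/dx evaluated at k random parameter realizations
    G = np.column_stack([grad_R(rng.standard_normal(n)) for _ in range(k)])
    W, s, _ = np.linalg.svd(G, full_matrices=False)   # step 2: SVD of the gradient matrix
    r_x = max(1, int(np.sum(s / s[0] > tol)))         # keep directions above the tolerance
    return W[:, :r_x]

# Toy usage with pseudo response R(x) = sin(a1.x) + (a2.x)^2 (illustrative only):
n = 15
rng = np.random.default_rng(2)
a1, a2 = rng.standard_normal(n), rng.standard_normal(n)
grad_R = lambda x: np.cos(a1 @ x) * a1 + 2.0 * (a2 @ x) * a2   # analytic gradient
W_r = gradient_based_reduction(grad_R, n)
x = rng.standard_normal(n)
e_x = x - W_r @ (W_r.T @ x)     # discarded parameter component, e_x = (I - W_r W_r^T) x
print(W_r.shape[1])             # expect r_x = 2: the gradients live in span{a1, a2}
```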
Notice that discarding components in the parameter space will
give rise to errors in the response space even if no reduction is
performed in the response space.
To distinguish between the errors introduced at different levels, we
define the following:
Different Errors
1. $$\frac{\left\| f(\mathbf{x}) - \mathbf{Q}_y\mathbf{Q}_y^T f(\mathbf{x}) \right\|}{\left\| f(\mathbf{x}) \right\|} \le \epsilon_y^{y},$$
   where $\mathbf{Q}_y$ is a matrix whose orthonormal columns span the
   response subspace $\mathcal{S}_y$ and $\epsilon_y^{y}$ is the user-defined tolerance for
   the relative error in the response due to reduction in the response space
   only.
2. $$\frac{\left\| f(\mathbf{x}) - f\!\left(\mathbf{Q}_x\mathbf{Q}_x^T\mathbf{x}\right) \right\|}{\left\| f(\mathbf{x}) \right\|} \le \epsilon_y^{x},$$
   Similarly, $\mathbf{Q}_x$ is a matrix whose orthonormal columns span an
   active subspace $\mathcal{S}_x$ in the parameter space and $\epsilon_y^{x}$ is the
   user-defined tolerance for the relative error in the response due to
   reduction in the parameter space only.
Different Errors (cont.)
3. $$\frac{\left\| f(\mathbf{x}) - \mathbf{Q}_y\mathbf{Q}_y^T f\!\left(\mathbf{Q}_x\mathbf{Q}_x^T\mathbf{x}\right) \right\|}{\left\| f(\mathbf{x}) \right\|} \le \epsilon_y^{xy},$$
   where $\epsilon_y^{xy}$ is the user-defined tolerance for the relative error in
   the response due to simultaneous reductions in both spaces.
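
As a quick illustration of the three error metrics, the sketch below evaluates them for a toy model with given projection bases; the model and the bases `Q_x`, `Q_y` are assumed for illustration and are not taken from the slides.

```python
import numpy as np

def relative_reduction_errors(f, x, Q_x, Q_y):
    """Relative response errors: response-only, parameter-only, and joint reduction."""
    y = f(x)
    e_resp  = np.linalg.norm(y - Q_y @ (Q_y.T @ y)) / np.linalg.norm(y)           # item 1
    e_param = np.linalg.norm(y - f(Q_x @ (Q_x.T @ x))) / np.linalg.norm(y)        # item 2
    e_both  = np.linalg.norm(y - Q_y @ (Q_y.T @ f(Q_x @ (Q_x.T @ x)))) / np.linalg.norm(y)  # item 3
    return e_resp, e_param, e_both

# Toy usage (illustrative only): random orthonormal bases for both subspaces
rng = np.random.default_rng(3)
n, m, r_x, r_y = 15, 10, 9, 9
A = rng.standard_normal((m, n))
f = lambda x: np.tanh(A @ x)
Q_x, _ = np.linalg.qr(rng.standard_normal((n, r_x)))
Q_y, _ = np.linalg.qr(rng.standard_normal((m, r_y)))
print(relative_reduction_errors(f, rng.standard_normal(n), Q_x, Q_y))
```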
The previous relative errors can be estimated using Dixon's
theory [3].

Dixon's theory
It all started with Dixon (1983), who estimated the largest
and/or smallest eigenvalue, and hence the condition number, of a
real positive definite matrix A.
His work relies on a basic set of theorems and lemmas [3, 7] that
we introduce in the following few slides.
Theorem
If $\mathbf{A} \in \mathbb{R}^{n \times n}$ is a real positive definite matrix whose eigenvalues
are $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$;
let $S := \{\mathbf{x} \in \mathbb{R}^n : \mathbf{x}^T\mathbf{x} = 1\}$ be the unit hypersphere;
$\mathbf{x} = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}^T$, $n \ge 2$, with $x_i \sim U(-1, 1)$ over $S$;
and let $\theta \in \mathbb{R}$, $\theta > 1$. Then
$$P\left(\mathbf{x}^T\mathbf{A}\mathbf{x} \le \lambda_1 \le \theta\,\mathbf{x}^T\mathbf{A}\mathbf{x}\right) \ge 1 - \sqrt{\frac{2n}{\pi\theta}}. \qquad (2)$$
The next corollary has been explored by many authors [8, 5, 6, 4]
and has been employed in different applications; it gave
Dixon's bound its modern form.

Corollary
If $\mathbf{B} \in \mathbb{R}^{m \times n}$ is such that $\mathbf{A} = \mathbf{L}\mathbf{L}^T = \mathbf{B}^T\mathbf{B}$, where $\mathbf{L} = \mathbf{B}^T$ is the Cholesky
factor of $\mathbf{A}$,
and if $\sigma_1 \ge \cdots \ge \sigma_n > 0$ are the singular values of $\mathbf{B}$ (i.e., $\lambda_i = \sigma_i^2$),
then the previous theorem can be written as
$$P\left(\|\mathbf{B}\mathbf{x}\| \le \left(\sigma_1 = \|\mathbf{B}\|\right) \le \sqrt{\theta}\,\|\mathbf{B}\mathbf{x}\|\right) \ge 1 - \sqrt{\frac{2n}{\pi\theta}}. \qquad (3)$$
Selecting $\theta = \alpha^2 \frac{2}{\pi} n$, where $\alpha > 1$, yields
$$P\left(\|\mathbf{B}\| \le \alpha\sqrt{\frac{2}{\pi}}\sqrt{n}\ \max_{i=1,2,\cdots,k}\left\|\mathbf{B}\mathbf{x}^{(i)}\right\|\right) \ge 1 - \alpha^{-k}. \qquad (4)$$
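
The following is a small numerical sanity check of the bound in Eq. (4), not part of the original work: it draws random unit vectors, forms the estimate $\alpha\sqrt{2/\pi}\sqrt{n}\,\max_i\|\mathbf{B}\mathbf{x}^{(i)}\|$, and counts how often it upper-bounds $\|\mathbf{B}\|$ over repeated trials. The matrix size, $\alpha$, $k$, and trial count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k, alpha, trials = 40, 30, 3, 10.0, 2000
B = rng.standard_normal((m, n))
true_norm = np.linalg.norm(B, 2)          # sigma_1 = ||B||_2

successes = 0
for _ in range(trials):
    X = rng.standard_normal((n, k))
    X /= np.linalg.norm(X, axis=0)        # k random points on the unit hypersphere
    estimate = alpha * np.sqrt(2.0 / np.pi) * np.sqrt(n) * np.max(np.linalg.norm(B @ X, axis=0))
    successes += (true_norm <= estimate)

print(successes / trials, ">=", 1.0 - alpha ** (-k))   # observed success rate vs. theoretical minimum
```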
Propagating error bounds
Consider a physical model
$$\mathbf{y} = f(\mathbf{x}), \quad \text{where } f: \mathbb{R}^n \rightarrow \mathbb{R}^m.$$
The model is subjected to both types of reduction, at the
parameter and response interfaces. The corresponding responses are
aggregated in $\mathbf{Y}_x$ and $\mathbf{Y}_y$, respectively.
The bound for each case is
$$\epsilon_y^{x} = \alpha_1\sqrt{\frac{2}{\pi}}\sqrt{N}\ \max_{i=1,2,\cdots,k_1}\left\|\left(\mathbf{Y} - \mathbf{Y}_x\right)\mathbf{w}_i\right\|,$$
$$\epsilon_y^{y} = \alpha_2\sqrt{\frac{2}{\pi}}\sqrt{N}\ \max_{i=1,2,\cdots,k_2}\left\|\left(\mathbf{Y} - \mathbf{Y}_y\right)\mathbf{w}_i\right\|.$$
Propagating error bounds (cont.)
Then the response error due to both reductions satisfies
$$P\left(\left\|\mathbf{Y} - \mathbf{Y}_{xy}\right\| \le \epsilon_y^{x} + \epsilon_y^{y}\right) \ge \left(1 - \alpha_1^{-k_1}\right)\left(1 - \alpha_2^{-k_2}\right). \qquad (5)$$
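
The sketch below shows how the two single-interface bounds might be combined in code following Eq. (5). The probe vectors `w_i`, the constants, and the stand-in data for the full and reduced response snapshots are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def combined_error_bound(Y, Y_x, Y_y, alpha1=10.0, alpha2=10.0, k1=3, k2=3, seed=5):
    """Return (eps_x + eps_y, minimum probability) for the joint bound of Eq. (5)."""
    rng = np.random.default_rng(seed)
    N = Y.shape[1]                                   # number of aggregated response columns
    factor = np.sqrt(2.0 / np.pi) * np.sqrt(N)
    W1 = rng.standard_normal((N, k1)); W1 /= np.linalg.norm(W1, axis=0)   # unit probe vectors
    W2 = rng.standard_normal((N, k2)); W2 /= np.linalg.norm(W2, axis=0)
    eps_x = alpha1 * factor * np.max(np.linalg.norm((Y - Y_x) @ W1, axis=0))
    eps_y = alpha2 * factor * np.max(np.linalg.norm((Y - Y_y) @ W2, axis=0))
    p_min = (1.0 - alpha1 ** (-k1)) * (1.0 - alpha2 ** (-k2))
    return eps_x + eps_y, p_min

# Toy usage with random data standing in for the full/reduced response snapshots:
rng = np.random.default_rng(6)
Y = rng.standard_normal((10, 200))
bound, p_min = combined_error_bound(Y, Y_x=0.999 * Y, Y_y=0.998 * Y)
print(bound, p_min)
```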
Case Study 1
The first numerical test is an algebraic prototype nonlinear model
where $\mathbf{y} = f(\mathbf{x})$; $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$; $n = 15$; $m = 10$, such that
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \\ y_8 \\ y_9 \\ y_{10} \end{bmatrix} = \mathbf{B} \times \begin{bmatrix} \mathbf{a}_1^T\mathbf{x} \\ (\mathbf{a}_2^T\mathbf{x})^2 \\ (1.4\,\mathbf{a}_2^T\mathbf{x} + 1.5\,\mathbf{a}_3^T\mathbf{x})^2 \\ \frac{1}{1+\exp(-\mathbf{a}_2^T\mathbf{x})} \\ \cos(0.8\,\mathbf{a}_4^T\mathbf{x} + 1.6\,\mathbf{a}_5^T\mathbf{x}) \\ (\mathbf{a}_6^T\mathbf{x} + \mathbf{a}_7^T\mathbf{x})\left[(\mathbf{a}_7^T\mathbf{x})^2 + \sin(\mathbf{a}_8^T\mathbf{x})\right] \\ \left(1 + 0.1\exp(-\mathbf{a}_8^T\mathbf{x})\right)\left[(\mathbf{a}_9^T\mathbf{x})^2 + (\mathbf{a}_{10}^T\mathbf{x})^2\right] \\ \mathbf{a}_9^T\mathbf{x} + 0.2\,\mathbf{a}_{10}^T\mathbf{x} \\ \mathbf{a}_{10}^T\mathbf{x} \\ \mathbf{a}_9^T\mathbf{x} + 8\,\mathbf{a}_{10}^T\mathbf{x} \end{bmatrix},$$
where $\mathbf{a}_i \in \mathbb{R}^n$, $i = 1, 2, \cdots, m$, and $\mathbf{B}$ is a random $m \times m$ matrix.
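
A sketch of how the algebraic test model above could be coded is shown below, assuming the reconstructed formula is read correctly; the random seed and the vectors $\mathbf{a}_i$ and matrix $\mathbf{B}$ are illustrative and not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 15, 10
a = rng.standard_normal((m, n))     # rows a_1, ..., a_10 (illustrative, not the authors' values)
B = rng.standard_normal((m, m))     # random m x m mixing matrix

def f(x):
    """Algebraic prototype nonlinear model of Case Study 1 (as reconstructed above)."""
    t = a @ x                       # t[i-1] = a_i^T x
    phi = np.array([
        t[0],
        t[1] ** 2,
        (1.4 * t[1] + 1.5 * t[2]) ** 2,
        1.0 / (1.0 + np.exp(-t[1])),
        np.cos(0.8 * t[3] + 1.6 * t[4]),
        (t[5] + t[6]) * (t[6] ** 2 + np.sin(t[7])),
        (1.0 + 0.1 * np.exp(-t[7])) * (t[8] ** 2 + t[9] ** 2),
        t[8] + 0.2 * t[9],
        t[9],
        t[8] + 8.0 * t[9],
    ])
    return B @ phi

print(f(rng.standard_normal(n)).shape)   # (10,)
```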
Case Study 2
The second case study involves a realistic neutron transport model of a
PWR pin cell.
The objective is to test the proposed probabilistic error bound
due to reductions in both the parameter and response spaces.
The computer code employed is TSUNAMI-2D, a control module
in SCALE 6.1 [1], wherein the derivatives are provided by SAMS,
the sensitivity analysis module of SCALE 6.1.
Case Study 1
The dimension of the parameter space is $n = 15$, that of the response space is
$m = 10$, and a user-defined tolerance of $10^{-5}$ is selected. The parameter active
subspace is found to have a size of $r_x = 9$, whereas the response active
subspace has a size of $r_y = 9$. The number of tests is 10,000.
Fig. 1 shows the function behavior plotted along a randomly selected direction in
the parameter space.
Figure 1: Function behavior along a random input direction.
Table I shows the minimum theoretical probabilities $P_{theo}$ predicted by the
theorem and the actual probabilities
$P_{act} = \frac{\text{number of successes}}{\text{total number of tests}}$
obtained from the numerical test.

Table I: Algebraic Model Results

| Error Bound | $P_{act}$ | $P_{theo}$ |
|---|---|---|
| $\|\mathbf{Y}-\mathbf{Y}_x\| / \|\mathbf{Y}\| \le \epsilon_y^{x}$ | 1.0 | 0.9 |
| $\|\mathbf{Y}-\mathbf{Y}_y\| / \|\mathbf{Y}\| \le \epsilon_y^{y}$ | 0.998 | 0.9 |
| $\|\mathbf{Y}-\mathbf{Y}_{xy}\| / \|\mathbf{Y}\| \le \epsilon_y^{x} + \epsilon_y^{y}$ | 1.0 | 0.81 |
Relative Errors
Next we show the relative error $\frac{\|\mathbf{Y}-\mathbf{Y}_{xy}\|}{\|\mathbf{Y}\|}$ due to both reductions
versus the theoretical upper bound predicted by the theory, $\epsilon_y^{x} + \epsilon_y^{y}$.
Figure: Theoretical and actual error for case study 1.
Case Study 2
For the pin cell model, the full input space (cross sections) had a size
of $n = 1936$, whereas the output (material flux) was of size $m = 176$.
The cross sections of the fuel, clad, moderator and gap were perturbed
by 30% (relative perturbations). Based on a user-defined tolerance of
$10^{-5}$, the sizes of the input and output active subspaces are $r_x = 900$
and $r_y = 165$, respectively.
Table II shows the minimum theoretical probabilities predicted by the
theorem and the actual probabilities obtained from the numerical test.

Table II: Pin Cell Model Results

| Error Bound | $P_{act}$ | $P_{theo}$ |
|---|---|---|
| $\|\mathbf{Y}-\mathbf{Y}_x\| / \|\mathbf{Y}\| \le \epsilon_y^{x}$ | 1.0 | 0.9 |
| $\|\mathbf{Y}-\mathbf{Y}_y\| / \|\mathbf{Y}\| \le \epsilon_y^{y}$ | 1.0 | 0.9 |
| $\|\mathbf{Y}-\mathbf{Y}_{xy}\| / \|\mathbf{Y}\| \le \epsilon_y^{x} + \epsilon_y^{y}$ | 1.0 | 0.81 |
Relative Errors
Next we show the relative error $\frac{\|\mathbf{Y}-\mathbf{Y}_{xy}\|}{\|\mathbf{Y}\|}$ due to both reductions
versus the theoretical upper bound predicted by the theory, $\epsilon_y^{x} + \epsilon_y^{y}$.
Figure: Theoretical and actual error for case study 2.
Conclusions
This manuscript has equipped our previously developed ROM
techniques with probabilistic error metrics that bound the
maximum errors resulting from the reduction.
Given that reduction algorithms can be applied at any of the
various model interfaces, e.g., parameters, state, and responses,
the developed metric effectively aggregates the associated errors
to estimate an error bound on the response of interest.
The results show that we can start to break the linear mold
and explore smooth nonlinear functions.
This functionality will prove essential in our ongoing work
on extending ROM techniques to multi-physics models.
Bibliography
[1] SCALE: A Comprehensive Modeling and Simulation Suite for
Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version
6.1, Oak Ridge National Laboratory, Oak Ridge, Tennessee, June
2011. Available from the Radiation Safety Information Computational
Center at Oak Ridge National Laboratory as CCC-785.
[2] Y. Bang, J. Hite, and H. S. Abdel-Khalik, Hybrid reduced
order modeling applied to nonlinear models, IJNME, 91 (2012),
pp. 929-949.
[3] J. D. Dixon, Estimating extremal eigenvalues and condition
numbers of matrices, SIAM, 20 (1983), pp. 812-814.
[4] N. Halko, P. G. Martinsson, and J. A. Tropp, Finding
structure with randomness: probabilistic algorithms for
constructing approximate matrix decompositions, SIAM, 53
(2011), pp. 217-288.
[5] P. G. Martinsson, V. Rokhlin, and M. Tygert, A
randomized algorithm for the approximation of matrices, tech.
report, Yale University.
[6] J. A. Tropp, User-friendly tools for random matrices.
[7] S. S. Wilks, Mathematical Statistics, John Wiley, New York,
1st ed., 1962.
[8] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert, A fast
randomized algorithm for the approximation of matrices,
preliminary report, Yale University.
Questions/Suggestions?
27 / 27

More Related Content

What's hot

GATE Computer Science Solved Paper 2004
GATE Computer Science Solved Paper 2004GATE Computer Science Solved Paper 2004
GATE Computer Science Solved Paper 2004
Rohit Garg
 
Cs 2008(1)
Cs 2008(1)Cs 2008(1)
Cs 2008(1)
Ravi Rajput
 
About functional SIR
About functional SIRAbout functional SIR
About functional SIR
tuxette
 
Graph Neural Network in practice
Graph Neural Network in practiceGraph Neural Network in practice
Graph Neural Network in practice
tuxette
 
TMPA-2017: Generating Cost Aware Covering Arrays For Free
TMPA-2017: Generating Cost Aware Covering Arrays For Free TMPA-2017: Generating Cost Aware Covering Arrays For Free
TMPA-2017: Generating Cost Aware Covering Arrays For Free
Iosif Itkin
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of Algorithms
Arvind Krishnaa
 
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
SSA KPI
 
Approximation algorithms
Approximation algorithmsApproximation algorithms
Approximation algorithms
Ganesh Solanke
 
Spatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologySpatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologyLilac Liu Xu
 
Spectral factorization
Spectral factorizationSpectral factorization
Spectral factorization
Senthil kumarasamy
 
Volume computation and applications
Volume computation and applications Volume computation and applications
Volume computation and applications
Vissarion Fisikopoulos
 
Gate Computer Science Solved Paper 2007
Gate Computer Science Solved Paper 2007 Gate Computer Science Solved Paper 2007
Gate Computer Science Solved Paper 2007
Rohit Garg
 
Presentation of daa on approximation algorithm and vertex cover problem
Presentation of daa on approximation algorithm and vertex cover problem Presentation of daa on approximation algorithm and vertex cover problem
Presentation of daa on approximation algorithm and vertex cover problem
sumit gyawali
 
A Fast Near Optimal Vertex Cover Algorithm (NOVCA)
A Fast Near Optimal Vertex Cover Algorithm (NOVCA)A Fast Near Optimal Vertex Cover Algorithm (NOVCA)
A Fast Near Optimal Vertex Cover Algorithm (NOVCA)
Waqas Tariq
 
Cia iii 17 18 qp
Cia iii 17 18 qpCia iii 17 18 qp
Cia iii 17 18 qp
Shivaji Sinha
 
31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity
Andres Mendez-Vazquez
 
Spark summit talk, july 2014 powered by reveal
Spark summit talk, july 2014 powered by revealSpark summit talk, july 2014 powered by reveal
Spark summit talk, july 2014 powered by reveal
Debasish Das
 
Review and evaluations of shortest path algorithms
Review and evaluations of shortest path algorithmsReview and evaluations of shortest path algorithms
Review and evaluations of shortest path algorithms
Pawan Kumar Tiwari
 
Ec2203 digital electronics questions anna university by www.annaunivedu.org
Ec2203 digital electronics questions anna university by www.annaunivedu.orgEc2203 digital electronics questions anna university by www.annaunivedu.org
Ec2203 digital electronics questions anna university by www.annaunivedu.org
annaunivedu
 

What's hot (19)

GATE Computer Science Solved Paper 2004
GATE Computer Science Solved Paper 2004GATE Computer Science Solved Paper 2004
GATE Computer Science Solved Paper 2004
 
Cs 2008(1)
Cs 2008(1)Cs 2008(1)
Cs 2008(1)
 
About functional SIR
About functional SIRAbout functional SIR
About functional SIR
 
Graph Neural Network in practice
Graph Neural Network in practiceGraph Neural Network in practice
Graph Neural Network in practice
 
TMPA-2017: Generating Cost Aware Covering Arrays For Free
TMPA-2017: Generating Cost Aware Covering Arrays For Free TMPA-2017: Generating Cost Aware Covering Arrays For Free
TMPA-2017: Generating Cost Aware Covering Arrays For Free
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of Algorithms
 
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
 
Approximation algorithms
Approximation algorithmsApproximation algorithms
Approximation algorithms
 
Spatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologySpatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in Epidemiology
 
Spectral factorization
Spectral factorizationSpectral factorization
Spectral factorization
 
Volume computation and applications
Volume computation and applications Volume computation and applications
Volume computation and applications
 
Gate Computer Science Solved Paper 2007
Gate Computer Science Solved Paper 2007 Gate Computer Science Solved Paper 2007
Gate Computer Science Solved Paper 2007
 
Presentation of daa on approximation algorithm and vertex cover problem
Presentation of daa on approximation algorithm and vertex cover problem Presentation of daa on approximation algorithm and vertex cover problem
Presentation of daa on approximation algorithm and vertex cover problem
 
A Fast Near Optimal Vertex Cover Algorithm (NOVCA)
A Fast Near Optimal Vertex Cover Algorithm (NOVCA)A Fast Near Optimal Vertex Cover Algorithm (NOVCA)
A Fast Near Optimal Vertex Cover Algorithm (NOVCA)
 
Cia iii 17 18 qp
Cia iii 17 18 qpCia iii 17 18 qp
Cia iii 17 18 qp
 
31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity31 Machine Learning Unsupervised Cluster Validity
31 Machine Learning Unsupervised Cluster Validity
 
Spark summit talk, july 2014 powered by reveal
Spark summit talk, july 2014 powered by revealSpark summit talk, july 2014 powered by reveal
Spark summit talk, july 2014 powered by reveal
 
Review and evaluations of shortest path algorithms
Review and evaluations of shortest path algorithmsReview and evaluations of shortest path algorithms
Review and evaluations of shortest path algorithms
 
Ec2203 digital electronics questions anna university by www.annaunivedu.org
Ec2203 digital electronics questions anna university by www.annaunivedu.orgEc2203 digital electronics questions anna university by www.annaunivedu.org
Ec2203 digital electronics questions anna university by www.annaunivedu.org
 

Similar to AbdoSummerANS_mod3

Development of Multi-Level ROM
Development of Multi-Level ROMDevelopment of Multi-Level ROM
Development of Multi-Level ROM
Mohammad
 
An Algorithm For Vector Quantizer Design
An Algorithm For Vector Quantizer DesignAn Algorithm For Vector Quantizer Design
An Algorithm For Vector Quantizer Design
Angie Miller
 
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGICDESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
VLSICS Design
 
Propagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace ReductionPropagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace Reduction
Mohammad
 
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)
theijes
 
CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...
CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...
CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...
The Statistical and Applied Mathematical Sciences Institute
 
15.sp.dictionary_draft.pdf
15.sp.dictionary_draft.pdf15.sp.dictionary_draft.pdf
15.sp.dictionary_draft.pdf
AllanKelvinSales
 
Nonnegative Matrix Factorization with Side Information for Time Series Recove...
Nonnegative Matrix Factorization with Side Information for Time Series Recove...Nonnegative Matrix Factorization with Side Information for Time Series Recove...
Nonnegative Matrix Factorization with Side Information for Time Series Recove...
Paris Women in Machine Learning and Data Science
 
Projection methods for stochastic structural dynamics
Projection methods for stochastic structural dynamicsProjection methods for stochastic structural dynamics
Projection methods for stochastic structural dynamics
University of Glasgow
 
Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...
Willy Marroquin (WillyDevNET)
 
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
The Statistical and Applied Mathematical Sciences Institute
 
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...
IJERA Editor
 
Instance Based Learning in Machine Learning
Instance Based Learning in Machine LearningInstance Based Learning in Machine Learning
Instance Based Learning in Machine Learning
Pavithra Thippanaik
 
Metody logiczne w analizie danych
Metody logiczne w analizie danych Metody logiczne w analizie danych
Metody logiczne w analizie danych
Data Science Warsaw
 
DSP_Lab_MAnual_-_Final_Edition[1].docx
DSP_Lab_MAnual_-_Final_Edition[1].docxDSP_Lab_MAnual_-_Final_Edition[1].docx
DSP_Lab_MAnual_-_Final_Edition[1].docx
ParthDoshi66
 
DSP_Lab_MAnual_-_Final_Edition.pdf
DSP_Lab_MAnual_-_Final_Edition.pdfDSP_Lab_MAnual_-_Final_Edition.pdf
DSP_Lab_MAnual_-_Final_Edition.pdf
ParthDoshi66
 
Discrete wavelet transform-based RI adaptive algorithm for system identification
Discrete wavelet transform-based RI adaptive algorithm for system identificationDiscrete wavelet transform-based RI adaptive algorithm for system identification
Discrete wavelet transform-based RI adaptive algorithm for system identification
IJECEIAES
 
MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...
MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...
MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...
The Statistical and Applied Mathematical Sciences Institute
 

Similar to AbdoSummerANS_mod3 (20)

Development of Multi-Level ROM
Development of Multi-Level ROMDevelopment of Multi-Level ROM
Development of Multi-Level ROM
 
ANSSummer2015
ANSSummer2015ANSSummer2015
ANSSummer2015
 
An Algorithm For Vector Quantizer Design
An Algorithm For Vector Quantizer DesignAn Algorithm For Vector Quantizer Design
An Algorithm For Vector Quantizer Design
 
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGICDESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
 
Propagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace ReductionPropagation of Error Bounds due to Active Subspace Reduction
Propagation of Error Bounds due to Active Subspace Reduction
 
xldb-2015
xldb-2015xldb-2015
xldb-2015
 
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)
 
CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...
CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...
CLIM: Transition Workshop - Statistical Emulation with Dimension Reduction fo...
 
15.sp.dictionary_draft.pdf
15.sp.dictionary_draft.pdf15.sp.dictionary_draft.pdf
15.sp.dictionary_draft.pdf
 
Nonnegative Matrix Factorization with Side Information for Time Series Recove...
Nonnegative Matrix Factorization with Side Information for Time Series Recove...Nonnegative Matrix Factorization with Side Information for Time Series Recove...
Nonnegative Matrix Factorization with Side Information for Time Series Recove...
 
Projection methods for stochastic structural dynamics
Projection methods for stochastic structural dynamicsProjection methods for stochastic structural dynamics
Projection methods for stochastic structural dynamics
 
Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...
 
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
CLIM Program: Remote Sensing Workshop, Statistical Emulation with Dimension R...
 
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...
 
Instance Based Learning in Machine Learning
Instance Based Learning in Machine LearningInstance Based Learning in Machine Learning
Instance Based Learning in Machine Learning
 
Metody logiczne w analizie danych
Metody logiczne w analizie danych Metody logiczne w analizie danych
Metody logiczne w analizie danych
 
DSP_Lab_MAnual_-_Final_Edition[1].docx
DSP_Lab_MAnual_-_Final_Edition[1].docxDSP_Lab_MAnual_-_Final_Edition[1].docx
DSP_Lab_MAnual_-_Final_Edition[1].docx
 
DSP_Lab_MAnual_-_Final_Edition.pdf
DSP_Lab_MAnual_-_Final_Edition.pdfDSP_Lab_MAnual_-_Final_Edition.pdf
DSP_Lab_MAnual_-_Final_Edition.pdf
 
Discrete wavelet transform-based RI adaptive algorithm for system identification
Discrete wavelet transform-based RI adaptive algorithm for system identificationDiscrete wavelet transform-based RI adaptive algorithm for system identification
Discrete wavelet transform-based RI adaptive algorithm for system identification
 
MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...
MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...
MUMS: Transition & SPUQ Workshop - Gradient-Free Construction of Active Subsp...
 

More from Mohammad Abdo

ABDO_MLROM_PHYSOR2016
ABDO_MLROM_PHYSOR2016ABDO_MLROM_PHYSOR2016
ABDO_MLROM_PHYSOR2016Mohammad Abdo
 
MultiLevelROM2_Washinton
MultiLevelROM2_WashintonMultiLevelROM2_Washinton
MultiLevelROM2_WashintonMohammad Abdo
 
MC2015Posterlandscape
MC2015PosterlandscapeMC2015Posterlandscape
MC2015PosterlandscapeMohammad Abdo
 
MultiLevelROM_ANS_Summer2015_RevMarch23
MultiLevelROM_ANS_Summer2015_RevMarch23MultiLevelROM_ANS_Summer2015_RevMarch23
MultiLevelROM_ANS_Summer2015_RevMarch23Mohammad Abdo
 
ProbErrorBoundROM_MC2015
ProbErrorBoundROM_MC2015ProbErrorBoundROM_MC2015
ProbErrorBoundROM_MC2015Mohammad Abdo
 
FurtherInvestegationOnProbabilisticErrorBounds_final
FurtherInvestegationOnProbabilisticErrorBounds_finalFurtherInvestegationOnProbabilisticErrorBounds_final
FurtherInvestegationOnProbabilisticErrorBounds_finalMohammad Abdo
 

More from Mohammad Abdo (6)

ABDO_MLROM_PHYSOR2016
ABDO_MLROM_PHYSOR2016ABDO_MLROM_PHYSOR2016
ABDO_MLROM_PHYSOR2016
 
MultiLevelROM2_Washinton
MultiLevelROM2_WashintonMultiLevelROM2_Washinton
MultiLevelROM2_Washinton
 
MC2015Posterlandscape
MC2015PosterlandscapeMC2015Posterlandscape
MC2015Posterlandscape
 
MultiLevelROM_ANS_Summer2015_RevMarch23
MultiLevelROM_ANS_Summer2015_RevMarch23MultiLevelROM_ANS_Summer2015_RevMarch23
MultiLevelROM_ANS_Summer2015_RevMarch23
 
ProbErrorBoundROM_MC2015
ProbErrorBoundROM_MC2015ProbErrorBoundROM_MC2015
ProbErrorBoundROM_MC2015
 
FurtherInvestegationOnProbabilisticErrorBounds_final
FurtherInvestegationOnProbabilisticErrorBounds_finalFurtherInvestegationOnProbabilisticErrorBounds_final
FurtherInvestegationOnProbabilisticErrorBounds_final
 

AbdoSummerANS_mod3

  • 1. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks Probabilistic Error Bounds for Order Reduction of Smooth Nonlinear Models Mohammad G. Abdo and Hany S. Abdel-Khalik and Presented by: Congjian Wang North Carolina State University Nuclear Department mgabdo@ncsu.edu and abdelkhalik@ncsu.edu June 16, 2014 1 / 27
  • 2. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks Motivation ROM plays a vital role in many desiplines, specially for computationally intensive applications. 2 / 27
  • 3. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks Motivation ROM plays a vital role in many desiplines, specially for computationally intensive applications. It i s mandatory to equip reduced order models with error metrics to credibly defend the predictions of the reduced model. 2 / 27
  • 4. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks Motivation ROM plays a vital role in many desiplines, specially for computationally intensive applications. It i s mandatory to equip reduced order models with error metrics to credibly defend the predictions of the reduced model. Probabilistic error bounds are mostly used in the linear moulding. 2 / 27
  • 5. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks Motivation ROM plays a vital role in many desiplines, specially for computationally intensive applications. It i s mandatory to equip reduced order models with error metrics to credibly defend the predictions of the reduced model. Probabilistic error bounds are mostly used in the linear moulding. Reduction errors need to be propagated across various interfaces such as parameter interface(i.e. cross sections), state function(i.e. flux) and response of interest(i.e. reaction rates, detector response etc..). 2 / 27
  • 6. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces We will adopt one formal mathematical definition that has been developed back in the 1960s in the signal processing community. 3 / 27
  • 7. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces We will adopt one formal mathematical definition that has been developed back in the 1960s in the signal processing community. Definition A nonlinear function f with n inputs is said to be reducable and of intrinsic dimension r (0 ≤ r ≤ n) if there exists a non linear function g with r inputs and an n × r matrix Q such that r is the smallest integer satisfying: f (x) = g ˜x ; where x ∈ Rn and ˜x = QT x ∈ Rr 3 / 27
  • 8. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Reduction Algorithms In our context, reduction algorithms refer to two different algorithms, each is used at a different interface: 4 / 27
  • 9. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Reduction Algorithms In our context, reduction algorithms refer to two different algorithms, each is used at a different interface: Snapshot reduction algorithm (Gradient-free)(Reduces response interface). 4 / 27
  • 10. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Reduction Algorithms In our context, reduction algorithms refer to two different algorithms, each is used at a different interface: Snapshot reduction algorithm (Gradient-free)(Reduces response interface). Gradient-based reduction algorithm(Reduces parameter interface). 4 / 27
  • 11. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Snapshot Reduction Consider the reducible model under inspection to be described by: y = f (x) , (1) The algorithm proceeds as follows: 5 / 27
  • 12. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Snapshot Reduction Consider the reducible model under inspection to be described by: y = f (x) , (1) The algorithm proceeds as follows: 1 Generate k random parameters realizations: {xi }k i=1. 2 Execute the forward model in Eq.[1] k times and record the corresponding k variations of the responses: yi = f (xi ) k i=1 , referred to as snapshots, and aggregate them in a matrix as follows: Y = y1 y2 · · · yk ∈ Rm×k . 3 Calculate the singular value decomposition (SVD): Y = U VT ; where U ∈ Rm×k . 5 / 27
  • 13. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Snapshot Reduction (cont.) 4 Select the dimensionality of the reduced space for the responses to be ry , such that ry ≤ min (m, k). Identify the active subspace as the range of the first ry columns of the matrix U, denoted by Ury . Note that in practice ry is increased until the error upper-bound in step 5 meets a user-defined error tolerance. 5 For a general response y, calculate the error resulting from the reduction as: ey = I − Ury Ury T y . 6 / 27
  • 14. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Gradient-baised Reduction This algorithm may be described by the following steps: 7 / 27
  • 15. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Gradient-baised Reduction This algorithm may be described by the following steps: 1 Execute the adjoint model k times, each time with a random realization of the input parameters, and aggregate the pseudo response derivatives in a matrix: G = dR pseudo 1 dx x1 · · · dR pseudo k dx xk . 7 / 27
  • 16. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Gradient-baised Reduction This algorithm may be described by the following steps: 1 Execute the adjoint model k times, each time with a random realization of the input parameters, and aggregate the pseudo response derivatives in a matrix: G = dR pseudo 1 dx x1 · · · dR pseudo k dx xk . 2 Calculate the SVD: G = WSPT , and select the first rx columns of W (denoted by Wrx ) to span the active subspace for the parameters such that: ex = I − Wrx WT rx x . 7 / 27
  • 17. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Notice that discarding components in the parameter space will give rise to errors in the response space even if no reduction in the response space is rendered. 8 / 27
  • 18. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Notice that discarding components in the parameter space will give rise to errors in the response space even if no reduction in the response space is rendered. To distinguish between different errors at different levels we introduce: 8 / 27
  • 19. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Different Errors 1 f (x) − Qy QT y f (x) f (x) ≤ y y , where Qy is a matrix whose orthonormal columns span the response subspace Sy and y y is the user-defined tolerance for the relative error in response due to reduction in response space only . 9 / 27
  • 20. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Different Errors 1 f (x) − Qy QT y f (x) f (x) ≤ y y , where Qy is a matrix whose orthonormal columns span the response subspace Sy and y y is the user-defined tolerance for the relative error in response due to reduction in response space only . 2 f (x) − f Qx QT x x f (x) , ≤ x y Similarly, Qx is a matrix whose orthonormal columns span an active subspace Sx in the parameter space and x y is the user-defined tolerance for the relative error in response due to reduction in parameter space only. 9 / 27
  • 21. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Different Errors (cont.) 3 f (x) − Qy QT y f Qx QT x x f (x) ≤ xy y , where xy y is the user-defined tolerance for the relative error in response due to simultaneous reductions in both spaces. 10 / 27
  • 22. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces The previous relative errors can be estimated using Dixon’s Theory[3]. 11 / 27
  • 23. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces The previous relative errors can be estimated using Dixon’s Theory[3]. Dixon’s theory It all started by Dixon(1983) when he estimated the largest and/or smallest eigen value and hence the condition number of a real positive definite matrix A. 11 / 27
  • 24. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces The previous relative errors can be estimated using Dixon’s Theory[3]. Dixon’s theory It all started by Dixon(1983) when he estimated the largest and/or smallest eigen value and hence the condition number of a real positive definite matrix A. His work relies on a basic set of theorems and lemmas[3, 7] that we will introduce in the following few slides. 11 / 27
  • 25. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Theorem If A ∈ Rnxn is a real positive definite matrix whose eigen values are λ1 ≥ λ2 ≥ · · · ≥ λn > 0. 12 / 27
  • 26. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Theorem If A ∈ Rnxn is a real positive definite matrix whose eigen values are λ1 ≥ λ2 ≥ · · · ≥ λn > 0. Let S := x ∈ Rn; xT x = 1 be a unit hyper sphere. 12 / 27
  • 27. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Theorem If A ∈ Rnxn is a real positive definite matrix whose eigen values are λ1 ≥ λ2 ≥ · · · ≥ λn > 0. Let S := x ∈ Rn; xT x = 1 be a unit hyper sphere. x = x1 · · · xn T ; n ≥ 2 and xi ∼ U(−1, 1) over S. 12 / 27
  • 28. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Theorem If A ∈ Rnxn is a real positive definite matrix whose eigen values are λ1 ≥ λ2 ≥ · · · ≥ λn > 0. Let S := x ∈ Rn; xT x = 1 be a unit hyper sphere. x = x1 · · · xn T ; n ≥ 2 and xi ∼ U(−1, 1) over S. Let θ ∈ R > 1. 12 / 27
  • 29. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces Theorem If A ∈ Rnxn is a real positive definite matrix whose eigen values are λ1 ≥ λ2 ≥ · · · ≥ λn > 0. Let S := x ∈ Rn; xT x = 1 be a unit hyper sphere. x = x1 · · · xn T ; n ≥ 2 and xi ∼ U(−1, 1) over S. Let θ ∈ R > 1. ⇒ P xT Ax ≤ λ1 ≤ θxT Ax ≥ 1 − 2 π n θ . (2) 12 / 27
  • 30. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces The next corollary has been explored by many authors[8, 5, 6, 4] and has been employed in different applications, it gave the modern texture to Dixon’s bound. Corollary if B ∈ Rmxn such that A = LLT = BT B; where L = BT is the cholesky factor of A. 13 / 27
  • 31. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces The next corollary has been explored by many authors[8, 5, 6, 4] and has been employed in different applications, it gave the modern texture to Dixon’s bound. Corollary if B ∈ Rmxn such that A = LLT = BT B; where L = BT is the cholesky factor of A. And if σ1 ≥ · · · ≥ σn > 0 are the singular values of B (i.e. λi = σ2 i ). 13 / 27
  • 32. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces The next corollary has been explored by many authors[8, 5, 6, 4] and has been employed in different applications, it gave the modern texture to Dixon’s bound. Corollary if B ∈ Rmxn such that A = LLT = BT B; where L = BT is the cholesky factor of A. And if σ1 ≥ · · · ≥ σn > 0 are the singular values of B (i.e. λi = σ2 i ). Then the previous theorem can be written as: P Bx ≤ (σ1 = B ) ≤ √ θ Bx ≥ 1 − 2 π n θ . (3) 13 / 27
  • 33. Motivation Background of Supporting Algorithms and Theory Numerical tests and results Conclusions Bibliography Thanks ROM Dixon 1983 Propagating the error bound accross different interfaces The next corollary has been explored by many authors[8, 5, 6, 4] and has been employed in different applications, it gave the modern texture to Dixon’s bound. Corollary if B ∈ Rmxn such that A = LLT = BT B; where L = BT is the cholesky factor of A. And if σ1 ≥ · · · ≥ σn > 0 are the singular values of B (i.e. λi = σ2 i ). Then the previous theorem can be written as: P Bx ≤ (σ1 = B ) ≤ √ θ Bx ≥ 1 − 2 π n θ . (3) Selecting θ = α2 2 π n ; where α > 1 yields: P B ≤ α 2 π √ n max i=1,2,··· ,k Bx(i) ≥ 1 − α−k . (4) 13 / 27
  • 36. Propagating error bounds
Consider a physical model $y = f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}^m$. The model is subjected to both types of reduction, one at the parameter interface and one at the response interface; the corresponding reduced responses are aggregated in $Y_x$ and $Y_y$, respectively. The bound for each case is
\[
\epsilon_y^x = \alpha_1 \sqrt{\tfrac{2}{\pi}} \sqrt{N} \max_{i=1,2,\dots,k_1} \left\| (Y - Y_x)\, w_i \right\|, \qquad
\epsilon_y^y = \alpha_2 \sqrt{\tfrac{2}{\pi}} \sqrt{N} \max_{i=1,2,\dots,k_2} \left\| (Y - Y_y)\, w_i \right\|.
\]
14 / 27
  • 37. Propagating error bounds (cont.)
The response error due to both reductions can then be bounded by
\[
P\!\left( \left\| Y - Y_{xy} \right\| \le \epsilon_y^x + \epsilon_y^y \right) \ge \left( 1 - \alpha_1^{-k_1} \right)\left( 1 - \alpha_2^{-k_2} \right). \tag{5}
\]
15 / 27
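In practice, $\epsilon_y^x$ and $\epsilon_y^y$ can each be evaluated by probing the discrepancy between the full and reduced responses with a few random unit directions, exactly as in (4), and then summed per (5). The sketch below assumes hypothetical callables `full_model` and `reduced_model` and treats the aggregated responses $Y$ and $Y_x$ (or $Y_y$) as snapshot matrices; it illustrates the bookkeeping only and is not the authors' implementation.

```python
import numpy as np

def reduction_error_bound(full_model, reduced_model, x_samples, alpha, k, rng):
    """Probabilistic bound on ||Y - Y_reduced|| in the spirit of (4).

    The columns of D collect the response discrepancies at the sampled inputs;
    k random unit directions w_i probe D, and the scaled maximum of ||D w_i||
    bounds ||D|| with probability >= 1 - alpha**(-k).
    ('full_model' and 'reduced_model' are hypothetical callables.)
    """
    D = np.column_stack([full_model(x) - reduced_model(x) for x in x_samples])
    N = D.shape[1]                                   # number of probed columns
    W = rng.standard_normal((N, k))
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    return alpha * np.sqrt(2.0 / np.pi) * np.sqrt(N) * np.max(
        np.linalg.norm(D @ W, axis=0))

# Combining the two interfaces as in (5):
#   eps_x = reduction_error_bound(f, f_param_reduced,    samples, alpha1, k1, rng)
#   eps_y = reduction_error_bound(f, f_response_reduced, samples, alpha2, k2, rng)
# then  ||Y - Y_xy|| <= eps_x + eps_y
# with probability >= (1 - alpha1**(-k1)) * (1 - alpha2**(-k2)).
```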
  • 38. Case Study 1
The first numerical test is an algebraic prototype nonlinear model $y = f(x)$, $f : \mathbb{R}^n \to \mathbb{R}^m$, with $n = 15$ and $m = 10$, such that
\[
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \\ y_8 \\ y_9 \\ y_{10} \end{bmatrix}
= B \times
\begin{bmatrix}
a_1^T x \\
(a_2^T x)^2 \\
(1.4\, a_2^T x + 1.5\, a_3^T x)^2 \\
\dfrac{1}{1 + \exp(-a_2^T x)} \\
\cos(0.8\, a_4^T x + 1.6\, a_5^T x) \\
(a_6^T x + a_7^T x)\left[ (a_7^T x)^2 + \sin(a_8^T x) \right] \\
\left( 1 + 0.1 \exp(-a_8^T x) \right)\left[ (a_9^T x)^2 + (a_{10}^T x)^2 \right] \\
a_9^T x + 0.2\, a_{10}^T x \\
a_{10}^T x \\
a_9^T x + 8\, a_{10}^T x
\end{bmatrix}
\]
where $a_i \in \mathbb{R}^n$, $i = 1, 2, \dots, m$, and $B$ is a random $m \times m$ matrix.
16 / 27
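For concreteness, a sketch of this prototype model follows. The vectors $a_i$ and the matrix $B$ are drawn at random, as in the description above; the sixth component uses the reconstruction $(a_6^T x + a_7^T x)\,[(a_7^T x)^2 + \sin(a_8^T x)]$, which may differ slightly from the authors' exact definition.

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 15, 10

A = rng.standard_normal((m, n))        # row i-1 holds a_i^T
B = rng.standard_normal((m, m))        # random mixing matrix

def f(x):
    """Algebraic prototype model y = f(x), f: R^15 -> R^10, as reconstructed above."""
    a = A @ x                          # a[i-1] = a_i^T x
    g = np.array([
        a[0],
        a[1] ** 2,
        (1.4 * a[1] + 1.5 * a[2]) ** 2,
        1.0 / (1.0 + np.exp(-a[1])),
        np.cos(0.8 * a[3] + 1.6 * a[4]),
        (a[5] + a[6]) * (a[6] ** 2 + np.sin(a[7])),            # see caveat above
        (1.0 + 0.1 * np.exp(-a[7])) * (a[8] ** 2 + a[9] ** 2),
        a[8] + 0.2 * a[9],
        a[9],
        a[8] + 8.0 * a[9],
    ])
    return B @ g

y = f(rng.standard_normal(n))          # one random evaluation, y in R^10
```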
  • 41. Case Study 2
The second case study involves a realistic neutron transport model of a PWR pin cell. The objective is to test the proposed probabilistic error bound under reductions at both the parameter and the response spaces. The computer code employed is TSUNAMI-2D, a control module in SCALE 6.1 [1], wherein the derivatives are provided by SAMS, the sensitivity analysis module of SCALE 6.1.
17 / 27
  • 42. Case Study 1
The dimension of the parameter space is $n = 15$, the dimension of the response space is $m = 10$, and a user-defined tolerance of $10^{-5}$ is selected. The parameter active subspace is found to have size $r_x = 9$, and the response active subspace has size $r_y = 9$. The number of tests is 10,000. Fig. 1 shows the function behavior plotted along a randomly selected direction in the parameter space.
Figure: Function behavior along a random input direction.
18 / 27
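The slides do not spell out how $r_x$ and $r_y$ are obtained from the $10^{-5}$ tolerance. A common rank-revealing recipe, shown below purely as an illustration and not necessarily the authors' criterion, is to truncate the singular value spectrum of a snapshot or gradient matrix at the prescribed relative tolerance.

```python
import numpy as np

def truncation_rank(S, tol=1e-5):
    """Smallest rank keeping all singular values above tol * sigma_1.

    Generic SVD-truncation rule, used here only for illustration; it is not
    necessarily the criterion used in the paper.
    """
    sv = np.linalg.svd(S, compute_uv=False)
    return int(np.count_nonzero(sv > tol * sv[0]))

# Example: a snapshot matrix that is numerically rank 9 recovers r = 9.
rng = np.random.default_rng(3)
S = rng.standard_normal((15, 9)) @ rng.standard_normal((9, 200))
S += 1e-9 * rng.standard_normal(S.shape)     # perturbation well below the tolerance
print(truncation_rank(S, tol=1e-5))          # -> 9
```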
  • 43. Table I shows the minimum theoretical probabilities $P_{theo}$ predicted by the theorem and the actual probabilities $P_{act} = \frac{\text{number of successes}}{\text{total number of tests}}$ obtained from the numerical test.
Table: Algebraic Model Results
  Error Bound                          P_act    P_theo
  ‖Y − Y_x‖/‖Y‖ ≤ ε_y^x                1.0      0.9
  ‖Y − Y_y‖/‖Y‖ ≤ ε_y^y                0.998    0.9
  ‖Y − Y_xy‖/‖Y‖ ≤ ε_y^x + ε_y^y       1.0      0.81
19 / 27
  • 44. Relative Errors
Next we show the relative error $\|Y - Y_{xy}\| / \|Y\|$ due to both reductions versus the theoretical upper bound $\epsilon_y^x + \epsilon_y^y$ predicted by the theory.
Figure: Theoretical and actual error for case study 1.
20 / 27
  • 45. Case Study 2
For the pin cell model, the full input space (cross sections) has size $n = 1936$, whereas the output (material flux) has size $m = 176$. The cross sections of the fuel, clad, moderator, and gap were perturbed by 30% (relative perturbations). Based on a user-defined tolerance of $10^{-5}$, the sizes of the input and output active subspaces are $r_x = 900$ and $r_y = 165$, respectively. Table II shows the minimum theoretical probabilities predicted by the theorem and the actual probabilities obtained from the numerical test.
Table: Pin Cell Model Results
  Error Bound                          P_act    P_theo
  ‖Y − Y_x‖/‖Y‖ ≤ ε_y^x                1.0      0.9
  ‖Y − Y_y‖/‖Y‖ ≤ ε_y^y                1.0      0.9
  ‖Y − Y_xy‖/‖Y‖ ≤ ε_y^x + ε_y^y       1.0      0.81
21 / 27
  • 46. Relative Errors
Next we show the relative error $\|Y - Y_{xy}\| / \|Y\|$ due to both reductions versus the theoretical upper bound $\epsilon_y^x + \epsilon_y^y$ predicted by the theory.
Figure: Theoretical and actual error for case study 2.
22 / 27
  • 47. Conclusions
This manuscript has equipped our previously developed ROM techniques with probabilistic error metrics that bound the maximum errors resulting from the reduction. Since reduction algorithms can be applied at any of the various model interfaces, e.g., parameters, state, and responses, the developed metric aggregates the associated errors into an error bound on the response of interest. The results show that we can begin to move beyond the linear mold and explore smooth nonlinear functions. This functionality will prove essential in our ongoing work on extending ROM techniques to multi-physics models.
23 / 27
  • 48. Bibliography I
[1] SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version 6.1, Oak Ridge National Laboratory, Oak Ridge, Tennessee, June 2011. Available from the Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785.
[2] Y. Bang, J. Hite, and H. S. Abdel-Khalik, Hybrid reduced order modeling applied to nonlinear models, IJNME, 91 (2012), pp. 929–949.
[3] J. D. Dixon, Estimating extremal eigenvalues and condition numbers of matrices, SIAM J. Numer. Anal., 20 (1983), pp. 812–814.
24 / 27
  • 49. Bibliography II
[4] N. Halko, P. G. Martinsson, and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Review, 53 (2011), pp. 217–288.
[5] P. G. Martinsson, V. Rokhlin, and M. Tygert, A randomized algorithm for the approximation of matrices, tech. report, Yale University.
[6] J. A. Tropp, User-friendly tools for random matrices.
[7] S. S. Wilks, Mathematical Statistics, John Wiley, New York, 1st ed., 1962.
25 / 27
  • 50. Bibliography III
[8] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert, A fast randomized algorithm for the approximation of matrices, preliminary report, Yale University.
26 / 27
  • 51. Questions/Suggestions?
27 / 27