This document describes a multi-level reduced order modeling approach with robust error bounds. It discusses applying dimensionality reduction algorithms to extract active subspaces from reduced complexity models, then equipping the reduced model with an error bound. A case study applies this approach to a nuclear reactor assembly model by extracting active subspaces from individual pin cell models to build a reduced order model in a more computationally efficient way than using the full assembly model.
Motivation
Objectives
Background of Supporting Algorithms and Theory
Numerical tests and results
Conclusions
Acknowledgements
Bibliography
Thanks
Multi-level Reduced Order Modeling with Robust Error
Bounds
Mohammad G. Abdo
and
Hany S. Abdel-Khalik
School of Nuclear Engineering
Purdue University
mgabdo@ncsu.edu and abdelkhalik@purdue.edu
June 30, 2015
Motivation
ROM is indispensable for analyses that require repetitive model executions.
ROM is premised on the assumption that the intrinsic dimensionality is much smaller than the nominal dimensionality.
ROM discards components with negligible impact on the reactor attributes of interest, and hence must be equipped with error metrics.
Active subspaces can be extracted from a reduced complexity model that undergoes similar physics.
Objectives
Apply the reduction and identify the active subspaces via a much more efficient methodology.
Equip the reduced model with a robust error bound that can test the representativeness of the active subspaces and hence define a validation domain that includes different conditions and different scenarios.
ROM
Algorithms
Error Estimation
Definition
A nonlinear function f is said to be reducible if there exist matrices Urx ∈ R^(n×rx) and/or Ury ∈ R^(m×ry) such that
f(x) ≈ Ury Ury^T f(Urx Urx^T x),
i.e., the relative error
‖f(x) − Ury Ury^T f(Urx Urx^T x)‖ / ‖f(x)‖ ≤ ε
for some user-specified tolerance ε.
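The definition above can be sketched numerically. The following is a minimal toy example (not the reactor model): the function f and the ranks rx, ry are made-up assumptions, constructed so that f genuinely depends on x only through a low-dimensional subspace, so the relative error of the reduced evaluation is near machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 15          # nominal parameter / response dimensions (toy sizes)
rx, ry = 3, 2          # active subspace ranks (toy choice)

# Toy reducible model: f depends on x only through A @ x (rank rx),
# and its outputs lie in the rank-ry column space of B.
A = rng.standard_normal((rx, n))
B = rng.standard_normal((m, ry))
def f(x):
    z = A @ x
    return B @ np.array([np.sin(z[0]) + z[1] ** 2, np.tanh(z[2])])

# Orthonormal bases for the active parameter / response subspaces
Urx, _ = np.linalg.qr(A.T)          # n x rx
Ury, _ = np.linalg.qr(B)            # m x ry

x = rng.standard_normal(n)
full = f(x)
reduced = Ury @ (Ury.T @ f(Urx @ (Urx.T @ x)))
rel_err = np.linalg.norm(full - reduced) / np.linalg.norm(full)
print(rel_err)
```

Because the rows of A lie in span(Urx) and the columns of B in span(Ury), both projections are exact here; for a real model the relative error is nonzero and must be bounded, which is the subject of the error-estimation slides.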
Reduction Algorithms
In our context, reduction algorithms refer to two different algorithms [4], each used at a different interface:
Gradient-free Snapshot Reduction Algorithm (reduces the response interface).
Gradient-based Reduction Algorithm (reduces the parameter interface).
Gradient-free Snapshot Reduction Algorithm
Consider the reducible model under inspection to be described by:
ψ = f(x).  (1)
1. Generate k random parameter realizations {x_i}, i = 1, …, k.
2. Execute the forward model k times, record the corresponding k responses ψ_i = f(x_i), i = 1, …, k, and aggregate them into Ψ = [ψ1 ψ2 ··· ψk] ∈ R^(m×k).
Snapshot Reduction (cont.)
3. Calculate the singular value decomposition (SVD): Ψ = Uy Sy Vy^T, where Uy ∈ R^(m×k).
4. Collect the first ry columns of Uy in Ury to span the active response subspace, where ry ≤ min(m, k).
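Steps 1–4 above can be sketched as follows. The forward model f is a made-up stand-in whose responses lie in a known 4-dimensional subspace, so that the truncation rank recovered by the SVD can be checked; the rank-selection threshold is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 50, 30, 12    # responses, parameters, snapshots (toy sizes)

# Stand-in forward model psi = f(x) whose outputs span a 4-dim subspace
C = rng.standard_normal((m, 4))
def f(x):
    return C @ np.tanh(x[:4])

# Steps 1-2: sample parameters, collect response snapshots column-wise
X = rng.standard_normal((n, k))
Psi = np.column_stack([f(X[:, i]) for i in range(k)])   # m x k

# Steps 3-4: SVD of the snapshot matrix; truncate at the numerical rank
Uy, Sy, VyT = np.linalg.svd(Psi, full_matrices=False)
ry = int(np.sum(Sy > 1e-10 * Sy[0]))     # rank threshold: illustrative choice
Ury = Uy[:, :ry]                          # active response subspace basis
```

Any response generated by this model is then reproduced by the projection Ury Ury^T ψ, which is exactly the reduction the definition slide describes on the response side.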
Gradient-based Reduction
This algorithm may be described by the following steps:
1. Execute the adjoint model k times to get:
G = [ dR1^pseudo/dx |_(x1)  ···  dRk^pseudo/dx |_(xk) ].
2. From the SVD G = Ux Sx Vx^T, one can pick the first rx columns of Ux (denoted by Urx) to span the active parameter subspace.
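A minimal sketch of the gradient-based variant, under the same kind of toy assumptions: the adjoint model is replaced by a hypothetical gradient function whose output always lies in a 3-dimensional subspace, so the SVD of the aggregated gradient matrix G recovers that subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 10    # parameter dimension, adjoint executions (toy sizes)

# Stand-in for the adjoint output dR^pseudo/dx: gradients confined to
# the 3-dim column space of W (an assumption for illustration)
W = rng.standard_normal((n, 3))
def grad_pseudo(x):
    return W @ np.cos(W.T @ x)

# Step 1: aggregate the k sampled gradients column-wise into G
X = rng.standard_normal((n, k))
G = np.column_stack([grad_pseudo(X[:, i]) for i in range(k)])   # n x k

# Step 2: SVD of G; the leading left singular vectors span the
# active parameter subspace
Ux, Sx, VxT = np.linalg.svd(G, full_matrices=False)
rx = int(np.sum(Sx > 1e-10 * Sx[0]))
Urx = Ux[:, :rx]     # active parameter subspace basis
```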
Error Estimation
To estimate the error resulting from the reduction, the ij-th entry of the operator E can be written as:
[E]_ij = ( fi(xj) − Ury(i, :) Ury^T f(Urx Urx^T xj) ) / fi(xj),
where Urx ∈ R^(n×rx) and Ury ∈ R^(m×ry) are matrices whose orthonormal columns span the active parameter and response subspaces, respectively.
We need to estimate an upper bound for the error in each response.
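The entries of E can be assembled directly from the formula above. This sketch uses a made-up model f and arbitrary (hypothetical) reduction bases purely to show the bookkeeping; in practice Urx and Ury come from the two reduction algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 12, 8

# Toy model; tanh keeps responses nonzero so the division is safe here
M = rng.standard_normal((m, n))
def f(x):
    return np.tanh(M @ x)

# Hypothetical reduction bases (in practice: outputs of the algorithms)
Urx, _ = np.linalg.qr(rng.standard_normal((n, 4)))
Ury, _ = np.linalg.qr(rng.standard_normal((m, 3)))

# Relative error operator E over N sampled parameter points
N = 5
X = rng.standard_normal((n, N))
E = np.empty((m, N))
for j in range(N):
    full = f(X[:, j])
    red = Ury @ (Ury.T @ f(Urx @ (Urx.T @ X[:, j])))
    E[:, j] = (full - red) / full
```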
Error Estimation (cont.)
The 2-norm of E (or of each row of E) can be estimated using:
P( ‖E‖ ≤ η max_{i=1,2,…,s} ‖E w^(i)‖ ) ≥ 1 − ( ∫_0^(1/η²) pdf_{w1²}(t) dt )^s,  (2)
where E ∈ R^(m×N), with N being the number of sampled responses, and w is an N-dimensional random vector sampled from a known distribution D.
This probabilistic statement has its roots in Dixon's theory [1983], where w^(i) is sampled from a standard normal distribution and an analytic value for the probability is found in terms of the multiplier η.
Intuitively, if the user presets a probability of success, then the value of the multiplier η depends solely on the distribution D from which w is sampled.
Careful inspection showed, however, that the estimated error can be multiple orders of magnitude larger than the actual error (unnecessarily conservative).
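The bound (2) reduces to a simple matrix-free recipe: probe E with s random vectors and inflate the largest observed action by η. A minimal sketch, with Gaussian probes in the spirit of Dixon's estimator; the operator E, the multiplier η = 10, and s = 20 are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
m, N, s = 40, 60, 20
eta = 10.0   # multiplier (illustrative choice; set by the preset probability)

E = rng.standard_normal((m, N)) * 0.01   # stand-in error operator

# Randomized bound: eta * max_i ||E w^(i)|| over s Gaussian probes
probes = rng.standard_normal((N, s))
estimate = eta * max(np.linalg.norm(E @ probes[:, i]) for i in range(s))

actual = np.linalg.norm(E, 2)            # exact spectral norm, for comparison
print(actual <= estimate)                # holds with high probability
```

Note that only matrix-vector products E w are needed, which is why the estimator scales to reduction errors of expensive models; the looseness of `estimate` relative to `actual` is exactly the conservatism the distribution-selection slides address.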
Error Estimation (cont.)
This motivated the numerical inspection of many distributions and the selection of the most practical one (i.e., the one giving the smallest multiplier η).
The inspection showed that the distribution giving the smallest η is the binomial distribution [2, 3].
Distribution Selection
Figure : Uniform Distribution. Figure : Gaussian Distribution.
The estimated norm is orders of magnitude off.
Distribution Selection [cont.]
The binomial distribution shows a linear structure around the 45-degree solid line.
This means that even in a failure case (i.e., the actual norm is greater than the bound), the estimated norm will still be very close to the actual norm.
This appealing behaviour motivates the use of the binomial distribution to get rid of unnecessarily conservative bounds.
Figure : Binomial Distribution.
Case Study 1
Case Study 2
Case Study 3
Case Study 1
Benchmark lattice for Peach Bottom Atomic Power Station Unit 2 (PB-2), a 1112 MWe BWR manufactured by General Electric; the benchmark is specified by the OECD/NEA.
Figure : 7x7 BWR Benchmark.
Case Study 1 [cont.]
The idea is that, for such an assembly, running the forward and assembly models rx times to identify the active parameter subspace is still very expensive.
Can we extract the active subspace by running only a subdomain of the problem, or a reduced complexity model?
Figure : Calculation Levels.
http://www.nrc.gov/about-nrc/emerg-preparedness/images/fuel-pellet-assembly.jpg
Case Study 1 [cont.]
Why don't we extract the active parameter subspace for each of the nine pin cells and try to visualize them? (Note that x ∈ R^49784.)
We are not visualizing to see what the topology of this n-dimensional surface looks like; we are only trying to see how far each subspace is from the others, i.e., how different they are.
Figure : Scatter visualization of the active subspaces.
Case Study [cont.]
The previous figure suggests that the active parameter subspaces for the 9 pins are quite close.
This motivates the idea that the active parameter subspace for the whole assembly might be revealed by sampling one pin or more.
Tests description:
Identify the active parameter subspace for one (or more) pin cells.
Construct an error bound for each response.
Test the identified subspace on different pins, then on the whole assembly.
If successful, test it under different conditions.
Case Study [cont.]
The nominal dimension of the parameter space is n = 127 nuclides × 7 reactions × 56 energy groups = 49784.
3 pins are depleted to 30 GWd/MTU and then used to extract the subspace (2.93% UO2 with 3% Gd, 1.94% UO2, 2.93% UO2); rx is taken to be 1500.
Test 1: The subspace is tested on the highest-enrichment pin and a completely different one. The first two figures show the error in flux within two selected energy ranges, 1.85-3.0 MeV and 0.625-1.01 eV, for the most dominant pin cell. A further figure shows the maximum and mean errors over all energies, which are taken as the bounds.
Test 2: This is repeated for another pin cell (Mixture 4).
Test 3: The extracted subspace is then employed on the whole assembly depleted to the same point (30 GWd/MTU). The results for this test are shown in 9 figures giving the maximum and mean errors for the 9 unique mixtures.
Test 1
Figure : Fast Flux Error (Mix 500, LF). Figure : Thermal Flux Error (Mix 500, LF).
Test 2
Figure : Error Bounds (Mix 500, LF). Figure : Actual Errors (Mix 4, LF).
The left figure shows that the typical bounds do not exceed 3% for the maximum error and are less than 0.7% for the mean error.
Test 3
Figure : Actual Errors (Mix 1, HF). Figure : Actual Errors (Mix 2, HF).
Test 3 [cont.]
Figure : Actual Errors (Mix 4, HF). Figure : Actual Errors (Mix 500, HF).
Test 3 [cont.]
Figure : Actual Errors (Mix 201, HF). Figure : Actual Errors (Mix 202, HF).
Test 3 [cont.]
Figure : Actual Errors (Mix 203, HF). Figure : Actual Errors (Mix 212, HF).
Test 3 [cont.]
Figure : Actual Errors (Mix 213, HF).
The figures show that the subspace extracted from the low-fidelity model fully represents the full assembly at 30 GWd/MTU and hot conditions.
The maximum error did not exceed 3%, and the mean error is always less than 0.7%, which is exactly consistent with what we had from the low-fidelity model.
Case Study 2
The assembly is depleted to 60 GWd/MTU.
The next 9 figures show the maximum and mean errors for the 9 different mixtures.
This test aims to inspect the behaviour of the active subspace extracted from the low-fidelity model at a different composition due to different burnup.
Case Study 2 [cont.]
Figure : Actual Errors (Mix 1, HF). Figure : Actual Errors (Mix 2, HF).
Case Study 2 [cont.]
Figure : Actual Errors (Mix 4, HF). Figure : Actual Errors (Mix 500, HF).
Case Study 2 [cont.]
Figure : Actual Errors (Mix 201, HF). Figure : Actual Errors (Mix 202, HF).
Case Study 2 [cont.]
Figure : Actual Errors (Mix 203, HF). Figure : Actual Errors (Mix 212, HF).
Case Study 2 [cont.]
Figure : Actual Errors (Mix 213, HF).
The figures show that the subspace extracted from the low-fidelity model fully represents the full assembly at 60 GWd/MTU and hot conditions.
The maximum error did not exceed 3%, and the mean error is always less than 0.7%, which is consistent with what we had from the low-fidelity model.
Case Study 3
The assembly is depleted to the end of the first cycle (20 GWd/MTU) and at cold conditions.
The next 9 figures show the maximum and mean errors for the 9 different mixtures.
This test aims to inspect the behaviour of the active subspace extracted from the low-fidelity model at a different composition due to different burnup and at a different temperature.
Case Study 3 [cont.]
Figure : Actual Errors (Mix 1, HF). Figure : Actual Errors (Mix 2, HF).
Case Study 3 [cont.]
Figure : Actual Errors (Mix 4, HF). Figure : Actual Errors (Mix 500, HF).
Case Study 3 [cont.]
Figure : Actual Errors (Mix 201, HF). Figure : Actual Errors (Mix 202, HF).
Case Study 3 [cont.]
Figure : Actual Errors (Mix 203, HF). Figure : Actual Errors (Mix 212, HF).
Case Study 3 [cont.]
Figure : Actual Errors (Mix 213, HF).
The figures show that the subspace extracted from the low-fidelity model fully represents the full assembly at 20 GWd/MTU and cold conditions.
The maximum error did not exceed 3%, and the mean error is always less than 0.7%, which is consistent with what we had from the low-fidelity model.
Conclusions
ROM errors are reliably quantified using realistic bounds (i.e., the actual error is close to the error bound).
Provides a first-of-a-kind approach to quantifying errors resulting from dimensionality reduction in nuclear reactor calculations.
Can be used to experiment with different ROM techniques to determine optimum performance for the application of interest.
Can quantify errors resulting from homogenization theory (a form of dimensionality reduction).
Supports multi-physics ROM, where one physics determines the active subspace for the next physics.
Renders active subspaces efficiently for expensive models (MLROM).
Using MLROM enables the application of ROM to models that were previously too expensive to execute for subspace extraction.
This enables the determination of the validation space: one can make a statement about how good the subspace is when used under conditions different from those used in the sampling process.
Acknowledgements
Iād like to acknowledge the support of the Department of Nuclear
Engineering at North Carolina State University to complete this work in
support of my PhD.
Bibliography I
SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version 6.1, Oak Ridge National Laboratory, Oak Ridge, Tennessee, June 2011. Available from the Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785.
M. G. Abdo and H. S. Abdel-Khalik, Propagation of error bounds due to active subspace reduction, ANS, 110 (2014), pp. 196-199.
M. G. Abdo and H. S. Abdel-Khalik, Further investigation of error bounds for reduced order modeling, ANS MC2015 (2015).
Y. Bang, J. Hite, and H. S. Abdel-Khalik, Hybrid reduced order modeling applied to nonlinear models, IJNME, 91 (2012), pp. 929-949.
J. D. Dixon, Estimating extremal eigenvalues and condition numbers of matrices, SIAM, 20 (1983), pp. 812-814.
Bibliography II
N. Halko, P. G. Martinsson, and J. A. Tropp, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, SIAM, 53 (2011), pp. 217-288.
P. G. Martinsson, V. Rokhlin, and M. Tygert, A randomized algorithm for the approximation of matrices, tech. report, Yale University.
J. A. Tropp, User-friendly tools for random matrices.
S. S. Wilks, Mathematical Statistics, John Wiley, New York, 1st ed., 1962.
F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert, A fast randomized algorithm for the approximation of matrices, preliminary report, Yale University.