This paper introduces a mathematical approach to quantify errors resulting from reduced order modeling (ROM) techniques. ROM works by discarding model components deemed to have negligible impact, but this introduces reduction errors. The paper derives an expression to calculate probabilistic error bounds for the discarded components. Numerical experiments on a pin cell model demonstrate the approach, showing the error bounds capture the actual errors with high probability, even when the ROM is applied under different physics conditions. The error bounding technique allows ROM algorithms to self-adapt and ensure reduction errors remain below user-defined tolerances.
ANS MC2015 - Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method • Nashville, TN • April 19-23, 2015, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2015)
Probabilistic Error Bounds for Reduced Order Modeling
Mohammad G. Abdo and Hany S. Abdel-Khalik
School of Nuclear Engineering, Purdue University
400 Central Drive, Purdue Campus, NUCL Bldg., West Lafayette, IN 47906
abdo@purdue.edu; abdelkhalik@purdue.edu
ABSTRACT
Reduced order modeling has proven to be an effective tool when repeated execution of reactor
analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the
associated reactor physics models is sufficiently small when compared to the nominal
dimensionality of the input and output data streams. By employing a truncation technique with roots
in linear algebra matrix decomposition theory, ROM effectively discards all components of the input
and output data that have negligible impact on reactor attributes of interest. This manuscript
introduces a mathematical approach to quantify the errors resulting from the discarded ROM
components. As supported by numerical experiments, the introduced analysis proves that the
contribution of the discarded components could be upper-bounded with an overwhelmingly high
probability. The reverse of this statement implies that the ROM algorithm can self-adapt to
determine the level of the reduction needed such that the maximum resulting reduction error is below
a given tolerance limit that is set by the user.
Key Words: Reduced order modeling, Error bounds
1 INTRODUCTION
Recently, there has been an increased interest in reduced order modeling algorithms for reactor
physics simulation. This is primarily driven by the best-estimates plus uncertainty (BEPU)
approach, first championed by the industry until its adoption into law by the US-NRC in 1988. To
fully realize the benefits of the BEPU approach, the uncertainties of the simulation predictions
must be properly characterized. Uncertainty characterization (UC) implies the capabilities to
identify, quantify, and prioritize the various sources of uncertainties. These three capabilities
require repeated model executions which proves to be an increasingly taxing endeavor, especially
with the continuous increase in the modeling details sought to improve fidelity.
Reduced order modeling is premised on the observation that the true dimensionality of reactor
physics simulation codes is rather small, implying that the number of associated
uncertainty sources that affect model behavior must also be rather small, even though
their nominal number is very large. With a small number of uncertainty sources, uncertainty characterization becomes a
computationally tractable practice. This follows because the computational cost of performing UC
depends on the number of uncertainty sources, which absent reduction could number in the
millions for typical reactor physics simulation.
2 ROM ERROR BOUND CONSTRUCTION
To describe the contribution of this manuscript, a basic definition of ROM is first introduced.
Consider a model of reactor physics simulation of the form:
$$ y = f(x) \qquad (1) $$

where $x \in \mathbb{R}^{n}$ are reactor physics parameters, e.g., cross-sections, $y \in \mathbb{R}^{m}$ are reactor responses of interest, e.g., eigenvalue, peak clad temperature, etc., and $n$ and $m$ are the numbers of parameters
and responses, respectively. The simulation, represented by the function $f$, is assumed to be a black
box. The goal of any ROM approach is replace the original simulation with an approximate
representation f that can be used in lieu of the original simulation for computationally intensive
analyses such as UC. To ensure reliability of the ROM approximation f , the following criterion
must be satisfied:
f x f x for all x S (2)
where ε is a to-be-determined upper bound and S defines the region of applicability. If such an
upper bound exists, one can adjust the level of reduction to ensure that the bound matches the
confidence one has in the original simulation predictions. In that case, both f and f̃ would provide
the same level of confidence for any subsequent analysis.
In our analysis, the ROM approximation f̃ has the general form:

f̃(x) = N f(K x)

where both N and K are rank-deficient matrix operators such that:

N ∈ R^{m×m}, dim(range(N)) = r_m, and r_m << min(m, n),
K ∈ R^{n×n}, dim(range(K)) = r_n, and r_n << min(m, n).
These matrices identify active subspaces in the spaces of input parameters and output responses.
The implication is that only a few degrees of freedom in the input space are needed to capture all
possible model variations, and that the output responses likewise have only a small number of
degrees of freedom, implying that large degrees of correlation exist among them. This knowledge
allows one to craft uncertainty quantification and sensitivity analysis techniques in a manner that
reduces the number of forward and/or adjoint model executions necessary to complete the
respective analyses. See earlier work for more details on these approaches [1, 2].
In practice, the error operator is inaccessible but can be sampled and aggregated in a matrix E
whose ij-th element represents the error in the i-th response of the j-th sample, written as:

E_ij = f_i(x_j) − [Q_y Q_y^T]_{i,:} f(Q_x Q_x^T x_j)     (3)

where Q_y and Q_x are orthonormal bases of the response and parameter active subspaces, so that
N = Q_y Q_y^T and K = Q_x Q_x^T.
The matrix E collects the discarded component of the function f. Each row of E represents a
response, implying that if one treats each row as a matrix, it is possible to calculate a different error
bound for each response. This allows one to compute the individual responses' errors, since each
response is expected to have its own reduction error. To achieve that, consider a matrix E ∈ R^{m×N}
and a random vector w ∈ R^N, where N is the number of sampled responses, such that w_i ~ B,
where B is a binomial distribution with a probability of success of 0.9. Then w can be used to
estimate the largest and smallest eigenvalues, and hence the 2-norm, of E via:
||E||_2 ≤ (1 + ε_w) max_{i=1,2,…,s} ||E w_i||_2,  with failure probability ( ∫_0^{1/(1+ε_w)} pdf(t) dt )^s     (4)
where the multiplier (1 + ε_w) is numerically evaluated to be 1.0164 for the binomial distribution
with a success probability of 0.9. For more details on this approach and the proof of Eq. (4), the
reader may consult references [3, 4, 5, 6].
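To illustrate the structure of Eq. (4), the sketch below uses standard normal probe vectors as a simplified stand-in for the binomial construction of [3–6] (the 1.0164 multiplier is specific to that construction and is not reproduced here). It checks the deterministic fact that a single un-scaled probe can never overshoot the true 2-norm, which is precisely why a multiplier and repeated probes are needed to turn the sampled maximum into a probabilistic upper bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# A low-rank "error matrix" E, mimicking the sampled reduction errors.
m, N = 40, 200
E = rng.standard_normal((m, 5)) @ rng.standard_normal((5, N))
true_norm = np.linalg.norm(E, 2)   # largest singular value of E

def sampled_estimate(s, mult=1.0):
    """mult * max over s random probes of ||E w_i|| / ||w_i||."""
    best = 0.0
    for _ in range(s):
        w = rng.standard_normal(N)
        best = max(best, np.linalg.norm(E @ w) / np.linalg.norm(w))
    return mult * best

# An un-scaled probe is always a LOWER estimate of ||E||_2, by the
# operator-norm inequality ||E w|| <= ||E||_2 ||w||.
single = sampled_estimate(1)

# Taking the max over more probes tightens the estimate from below; the
# multiplier then converts it into an upper bound that holds with a
# failure probability shrinking geometrically in s.
many = sampled_estimate(10)
```

The choice of probe distribution controls how close a single probe is likely to land to the true norm, and hence how small the multiplier can be.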
The main goal of this paper is to show that Eq. (4) can be satisfied for any given N and K matrices
with an overwhelmingly high probability. Computing an error bound for general reduction
operators is important because, in general multi-physics models, one may obtain a reduction
operator using a lower-fidelity model when the high-fidelity model is too expensive to execute in
search of the ROM active subspace. Another situation occurs when the input for a given physics
model is produced by another physics model. In such a case, one could use the forward model
executions of the upstream physics model to calculate an active subspace for the downstream
physics. Therefore, it is important to capture the reduction errors for general matrix reduction
operators.
If the distribution of w and the multiplier are selected such that the integral on the right-hand
side is 0.1, the probability that the estimated bound is larger than the 2-norm of the error is given
by p = 1 − 10^{−s}, where s is a small integer that corresponds to an additional number of matrix-
vector multiplications. Typically, we employ a value of s equal to 10 to ensure an extremely high
probability. In support of verifying the proposed algorithm, however, this manuscript employs
s = 1 to give rise to situations in which the estimated error bound fails to bound the actual errors
with a probability of 10%. Multiple numerical experiments are devised to test the upper bound
and the probability of failure predicted by the theory.
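The dependence of the failure probability on s can be checked directly: if a single probe fails to bound the error with probability q = 0.1, the maximum over s independent probes fails only when all s of them fail simultaneously. A small Monte Carlo sketch (hypothetical probe outcomes, not the pin cell model):

```python
import numpy as np

rng = np.random.default_rng(2)

q = 0.1              # per-probe failure probability, as in the text
trials = 200_000

empirical = {}
for s in (1, 2, 5):
    # A run fails only if all s independent probes fail at once.
    fails = (rng.random((trials, s)) < q).all(axis=1).mean()
    empirical[s] = fails
    print(f"s={s}: empirical {fails:.5f} vs theoretical {q**s:.5f}")

# Success probability p = 1 - q**s = 1 - 10**(-s): s = 1 leaves a 10%
# chance of failure (used deliberately in this paper's experiments),
# while s = 10 makes failure essentially impossible.
```

The geometric decay in s is what makes the extra matrix-vector products cheap insurance.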
3 NUMERICAL EXPERIMENTS AND RESULTS
This section employs a number of experiments to demonstrate the ability to calculate an upper
bound on the reduction error. The first experiment focuses on a direct ROM application to
identify the active subspace and calculate the associated reduction error and probability of failure.
The second experiment employs the reduction operators determined using a given set of
conditions (low burnup, hot full power) to test their adequacy for other conditions (higher burnup
and cold conditions). This capability will prove useful in model validation activities relying on the
proposed reduction techniques, where one must determine whether the developed reduced model
will be adequate for a wide range of operating conditions.
3.1 Case Study 1:
This case study employs a pin cell depleted to 3.0 GWd/MTU as the reference model used to
identify the active subspaces for the parameter and response spaces. SCALE 6.1 is used for the
computations; its sequences t-depl, t-newt, and tsunami-2d, together with SAMS 5.0, are used
for the depletion, neutronics calculations, and sensitivity analysis, respectively [7]. The original
parameter space contains 7 nuclides × 2 reactions × 238 energy groups = 3332 parameters, whereas
the nominal dimension of the response space is 238, representing the material flux in 238 energy
groups. For illustration, a very small rank is assumed to ensure that the actual errors are large
enough to possibly fail the theoretical error bound proposed here. In all tests, a value of s = 1
is employed to maximize the number of failures for the sake of demonstration. In the series of
figures below, the actual probability of failure is indicated at the top of the left graphs.
Figs. 1 through 4 display the results of the first case study. The same responses are employed for
both case studies. In the odd-numbered figures, the response is the total collision rate in the energy
range 1.85 to 2.35 MeV. The even-numbered figures show the same response but in the thermal
range between 0.975 and 1.0 eV. We use these narrow ranges to depict the power of the reduction
in capturing localized responses. In each figure, the left graph compares the actual error resulting
from the reduction to the error bound calculated from Eq. (4). The 45-degree solid line indicates
the limit of the failure region, i.e., where the actual error exceeds the bound predicted by the
theory. The right graphs show the actual variation of the response due to a random perturbation
of 30% in the parameters.
Figure 1. Fast Collision Rate Errors – Parameter Reduction Only
Figure 2. Thermal Collision Rate Errors – Parameter Reduction Only
Figs. 1 and 2 show a parameter-only reduction, meaning that the reduction is rendered in the
parameter space. Both the level of reduction, in terms of the rank of the active parameter
subspace r_x, and the actual probability of failure are shown at the top of the right graph. The
reader should remember that we picked s and the multiplier in Eq. (4) such that the probability of
success is 0.9. In practice, s is picked to be 5, which results in a probability of success of 99.999%.
Figs. 3 and 4 employ response-based reduction only, implying no reduction in the parameter space.
The rank of the response active subspace r_y is indicated in a similar manner.
Figure 3. Fast Collision Rate Errors – Response Reduction Only
Figure 4. Thermal Collision Rate Errors – Response Reduction Only
Notice that the reduction errors calculated will depend on whether the parameter-based reduction
captures the important parameter directions that control the model response variations. Moreover,
the response reduction, if not captured correctly, will miss directions along which the response is
expected to vary. This situation will be clearer when we consider different physics conditions as
done in the next case study.
3.2 Case Study 2:
This case study employs the active subspaces extracted from the previous reference model to
predict the response variations at different physics conditions. We employ a 24 GWd/MTU
depleted fuel simulated at cold conditions. This emulates the effect of starting up a reactor with a
once-burned fuel.
Figs. 5 through 8 correspond respectively to Figs. 1 through 4, where now the model is being
evaluated at different physics conditions, using the reduction results from the previous case study,
i.e., same ranks for parameter and response spaces, same responses, and same size of parameter
perturbations. The idea here is to check whether the model reduced at hot conditions could be
employed at sufficiently different physics conditions.
Figure 5. Fast Collision Rate Errors - Parameter Reduction Only
Figure 6. Thermal Collision Rate Errors - Parameter Reduction Only
Figs. 5 and 6 show that the actual errors and the predicted bounds due to the parameter reduction
are slightly higher than the errors in Figs. 1 and 2. This indicates that the active subspace extracted
using the reference model has approximately the same level of accuracy at the new physics
conditions.
Figs. 7 and 8 behave in a different fashion: the results indicate that the actual errors and
their bounds have noticeably increased beyond those in Figs. 3 and 4. This indicates that the
responses at the new physics conditions are changing along new directions in the response space
that are not captured by the reference physics model. Also, notice that in all cases the actual
probability of failure is always less than the theoretical value of 10^{−s}. A smaller number of
failures was observed in Figs. 7 and 8; the reason for this remains to be investigated. We recall
here that the failure probability is chosen to be 10%, which is extremely high. In practice, the
failure probability is set to be extremely small to ensure that its dependence on core conditions is
negligible.
Figure 7. Fast Collision Rate Errors - Response Reduction Only
Figure 8. Thermal Collision Rate Errors - Response Reduction Only
4 CONCLUSIONS:
This manuscript has investigated the ability of ROM techniques to upper-bound the error
resulting from the reduction. This is an important characteristic for any ROM, needed to ensure
the reliability of the reduced model for subsequent engineering analyses, such as uncertainty and
sensitivity analysis. More importantly, this summary has shown a practical way by which the
ROM errors can be evaluated for general reduction operators. This is invaluable when dealing
with high-fidelity codes that can only be executed a few times, making it difficult to extract their
active subspaces. Another important application of this work is the reduction of multi-physics
models, where the active subspace generated by one physics model is used as the basis for
reducing the input space of another physics model.
5 ACKNOWLEDGEMENTS:
The first author would like to acknowledge the support received from the Department of Nuclear
Engineering at North Carolina State University to complete this work in support of his PhD.
6 REFERENCES:
1. Youngsuk Bang, Jason Hite, and Hany S. Abdel-Khalik, "Hybrid Reduced Order Modeling
Applied to Nonlinear Models," International Journal for Numerical Methods in Engineering,
91, pp. 929–949 (2012).
2. Hany S. Abdel-Khalik, et al., "Overview of Hybrid Subspace Methods for Uncertainty
Quantification and Sensitivity Analysis," Annals of Nuclear Energy, Vol. 52, pp. 28–46 (2013).
3. S. S. Wilks, Mathematical Statistics, John Wiley, New York, 1st ed. (1962).
4. John D. Dixon, "Estimating Extremal Eigenvalues and Condition Numbers of Matrices,"
SIAM Journal on Numerical Analysis, 20(2), pp. 812–814 (1983).
5. Mohammad G. Abdo, and Hany S. Abdel-Khalik, “Propagation of error bounds due to Active
Subspace reduction,” Transactions of the American Nuclear Society, Reno, NV, Vol. 110,
pp.196-199 (2014).
6. Mohammad G. Abdo, and Hany S. Abdel-Khalik, “Further investigation of error bounds for
reduced order modeling,” submitted to ANS MC2015: Joint International Conference on
Mathematics and Computation (M&C), Super Computing in Nuclear Applications (SNA), and
the Monte Carlo (MC) method, Nashville, TN, April, 19-23, 2015.
7. SCALE: A Comprehensive Modeling and Simulation Suite for Safety Analysis and Design,
ORNL/TM-2005/39, Version 6.1, Oak Ridge National Laboratory, Oak Ridge, Tennessee
(2011).