This document discusses probabilistic error bounds for order reduction of smooth nonlinear models. It begins with motivation for using reduced order models (ROM) in computationally intensive applications and the need for error metrics. It then provides background on Dixon's theory for probabilistic error bounds, which has mostly been used for linear models. The document outlines snapshot and gradient-based reduction algorithms to reduce the response and parameter interfaces of a model. It defines different types of errors that can occur from reducing these interfaces and discusses propagating the errors across interfaces using Dixon's theory. Numerical tests and results are briefly mentioned along with conclusions.
Objective: The main target of this project is to study the Baby-Step Giant-Step algorithm and propose an approach for the betterment of the algorithm for solving Elliptic Curve Discrete Logarithmic Problem.
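The baby-step giant-step idea is easiest to see in the multiplicative group modulo a prime; the ECDLP version replaces modular multiplication with elliptic-curve point addition. A minimal sketch of the classical algorithm (illustrative only, not the project's proposed improvement):

```python
import math

def bsgs(g, h, p):
    """Solve g^x = h (mod p) with Baby-Step Giant-Step.

    Uses O(sqrt(p)) time and memory via a meet-in-the-middle table.
    Returns x, or None if no solution exists.
    """
    m = math.isqrt(p) + 1
    # Baby steps: store g^j mod p for j = 0 .. m-1.
    table = {}
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = (e * g) % p
    # Giant steps: search for h * (g^-m)^i in the table.
    factor = pow(g, -m, p)          # modular inverse power (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * factor) % p
    return None
```

For example, `bsgs(2, pow(2, 57, 101), 101)` recovers the exponent 57, since 2 is a primitive root modulo 101.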
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affect the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems that normally are seen as unrelated, have challenging connections since the priors proposed in the literature are specifically designed to have posterior modes in the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
TMPA-2017: Tools and Methods of Program Analysis
3-4 March 2017, Hotel Holiday Inn Moscow Vinogradovo, Moscow
Generating Cost Aware Covering Arrays For Free
Mustafa Kemal Tas, Hanefi Mercan, Gülşen Demiröz, Kamer Kaya, Cemal Yilmaz, Sabanci University
For video follow the link: https://youtu.be/Wkdd4A0rRjE
It presents various approximation schemes, including absolute approximation and epsilon approximation, as well as some polynomial-time approximation schemes and some probabilistically good algorithms.
Computing the volume of a convex body is a fundamental problem in computational geometry and optimization. In this talk we discuss the computational complexity of this problem from a theoretical as well as a practical point of view. We show examples of how volume computation appears in applications ranging from combinatorics to algebraic geometry. Next, we design the first practical algorithm for polytope volume approximation in high dimensions (a few hundred). The algorithm utilizes uniform sampling from a convex region and efficient boundary polytope oracles. Interestingly, our software provides a framework for exploring theoretical advances, since it is believed, and our experiments provide evidence for this belief, that the current asymptotic bounds are unrealistically high.
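As a toy illustration of why volume computation is hard, here is the naive rejection-sampling estimator (assuming only a membership test for the body; practical high-dimensional algorithms like the one above replace it with random walks such as hit-and-run, because the acceptance rate of rejection sampling collapses exponentially with dimension):

```python
import numpy as np

def volume_rejection(inside, dim, box_halfwidth, n_samples, seed=0):
    """Naive Monte Carlo volume estimate of a convex body.

    Draws uniform points in the bounding box [-w, w]^dim and multiplies
    the acceptance fraction by the box volume.
    """
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-box_halfwidth, box_halfwidth, size=(n_samples, dim))
    frac = np.mean([inside(p) for p in pts])
    return frac * (2.0 * box_halfwidth) ** dim

# Cross-polytope {x : sum |x_i| <= 1} in R^3; exact volume 2^3 / 3! = 4/3.
est = volume_rejection(lambda p: np.abs(p).sum() <= 1.0, 3, 1.0, 200_000)
```

In three dimensions the estimate lands close to 4/3; in a few hundred dimensions essentially no sample would ever be accepted.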
A Fast Near Optimal Vertex Cover Algorithm (NOVCA), Waqas Tariq
This paper describes an extremely fast polynomial-time algorithm, the Near Optimal Vertex Cover Algorithm (NOVCA), that produces an optimal or near-optimal vertex cover for any known undirected graph G(V, E). NOVCA is based on the ideas of (i) including the vertex having maximum degree in the vertex cover and (ii) rendering the degree of a vertex zero by including all of its adjacent vertices. Two versions of the algorithm, NOVCA-I and NOVCA-II, have been developed. Results identifying bounds on the size of the minimum vertex cover, as well as the polynomial complexity of the algorithm, are given with experimental verification. Future research efforts will be directed at tuning the algorithm and providing a proof of a better approximation ratio for NOVCA compared to other available vertex cover algorithms.
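A minimal sketch of the max-degree idea, point (i) above (a simplified greedy heuristic in the spirit of NOVCA, not the published algorithm, which also applies the adjacent-vertex rule and tie handling):

```python
def greedy_vertex_cover(edges):
    """Greedy max-degree vertex cover: repeatedly add the vertex that
    covers the most still-uncovered edges until every edge is covered."""
    uncovered = {frozenset(e) for e in edges}
    cover = set()
    while uncovered:
        # Count how many uncovered edges each vertex touches.
        deg = {}
        for e in uncovered:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        best = max(deg, key=deg.get)        # vertex of maximum degree
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover
```

On a star graph the heuristic returns just the center; on a 4-cycle it returns two opposite vertices, which is optimal there.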
EC2203 Digital Electronics questions, Anna University (annaunivedu.org)
EC2203 Digital Electronics Anna University Important Questions for 3rd Semester ECE; EC2203 Digital Electronics Important Questions; 3rd Sem Question Papers
http://www.annaunivedu.org/digital-electronics-ec-2203-previous-year-question-paper-for-3rd-sem-ece-anna-univ-question/
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC (VLSICS Design)
In VLSI technology, designers have traditionally concentrated on the area required and on the performance of the device. Power consumption is now one of the major concerns in VLSI design, due to the continuous increase in chip density, the shrinking size of CMOS circuits, and the frequencies at which circuits operate. Considering these parameters, logical circuits are designed using quaternary voltage-mode logic and quaternary current-mode logic. The power consumption of quaternary voltage-mode logic is 51.78% less than that of binary logic, while its area, in terms of the number of transistors required, is three times that of binary. Although the quaternary voltage-mode circuit requires a larger area than the quaternary current-mode circuit, its power consumption is lower.
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The retrieval algorithms in remote sensing generally involve complex physical forward models that are nonlinear and computationally expensive to evaluate. Statistical emulation provides an alternative with cheap computation and can be used to calibrate model parameters and to improve the computational efficiency of the retrieval algorithms. We introduce a framework combining dimension reduction of the input and output spaces with Gaussian process emulation. Functional principal component analysis (FPCA) is chosen to reduce the output space of thousands of dimensions by orders of magnitude. In addition, instead of making restrictive assumptions regarding the correlation structure of the high-dimensional input space, we identify and exploit the most important directions of this space and thus construct a Gaussian process emulator with feasible computation. We will present preliminary results obtained from applying our method to OCO-2 data, and discuss how our framework can be generalized in distributed systems. This is joint work with Jon Hobbs, Alex Konomi, Pulong Ma, Anirban Mondal, and Joon Jin Song.
Abstract : Motivated by the recovery and prediction of electricity consumption time series, we extend Nonnegative Matrix Factorization to take into account external features as side information. We consider general linear measurement settings, and propose a framework which models non-linear relationships between external features and the response variable. We extend previous theoretical results to obtain a sufficient condition on the identifiability of NMF with side information. Based on the classical Hierarchical Alternating Least Squares (HALS) algorithm, we propose a new algorithm (HALSX, or Hierarchical Alternating Least Squares with eXogeneous variables) which estimates NMF in this setting. The algorithm is validated on both simulated and real electricity consumption datasets as well as a recommendation system dataset, to show its performance in matrix recovery and prediction for new rows and columns.
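The classical HALS updates that HALSX builds on can be sketched as follows (plain NMF without the exogenous-feature regression or the general linear measurements; illustrative only):

```python
import numpy as np

def hals_nmf(V, r, n_iter=500, seed=0):
    """Hierarchical Alternating Least Squares for V ~ W @ H, W, H >= 0.

    Each factor column (row) is updated in closed form while the others
    are held fixed; HALSX adds exogenous-variable regression on top of
    these updates.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    eps = 1e-10
    for _ in range(n_iter):
        # Update the columns of W one rank-one factor at a time.
        HHt, VHt = H @ H.T, V @ H.T
        for j in range(r):
            W[:, j] = np.maximum(
                eps, W[:, j] + (VHt[:, j] - W @ HHt[:, j]) / (HHt[j, j] + eps))
        # Update the rows of H symmetrically.
        WtW, WtV = W.T @ W, W.T @ V
        for j in range(r):
            H[j, :] = np.maximum(
                eps, H[j, :] + (WtV[j, :] - WtW[j, :] @ H) / (WtW[j, j] + eps))
    return W, H

# Recover an exactly rank-2 nonnegative matrix.
rng = np.random.default_rng(1)
V_true = rng.random((6, 2)) @ rng.random((2, 8))
W, H = hals_nmf(V_true, 2)
rel_err = np.linalg.norm(V_true - W @ H) / np.linalg.norm(V_true)
```

On exact low-rank nonnegative data the relative reconstruction error is driven close to zero.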
Special Plenary Lecture at the International Conference on VIBRATION ENGINEERING AND TECHNOLOGY OF MACHINERY (VETOMAC), Lisbon, Portugal, September 10 - 13, 2018
http://www.conf.pt/index.php/v-speakers
Propagation of uncertainties in complex engineering dynamical systems is receiving increasing attention. When uncertainties are taken into account, the equations of motion of discretised dynamical systems can be expressed by coupled ordinary differential equations with stochastic coefficients. The computational cost for the solution of such a system mainly depends on the number of degrees of freedom and number of random variables. Among various numerical methods developed for such systems, the polynomial chaos based Galerkin projection approach shows significant promise because it is more accurate compared to the classical perturbation based methods and computationally more efficient compared to the Monte Carlo simulation based methods. However, the computational cost increases significantly with the number of random variables and the results tend to become less accurate for a longer length of time. In this talk novel approaches will be discussed to address these issues. Reduced-order Galerkin projection schemes in the frequency domain will be discussed to address the problem of a large number of random variables. Practical examples will be given to illustrate the application of the proposed Galerkin projection techniques.
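For intuition on the chaos-expansion machinery, here is a minimal non-intrusive (pseudo-spectral) projection onto probabilists' Hermite polynomials; the talk's Galerkin approach instead projects the governing equations themselves, so this is only an illustration of the expansion:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(f, order, n_quad=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials.

    c_k = E[f(xi) He_k(xi)] / k!, since E[He_k(xi)^2] = k!.  Expectations
    are computed with Gauss-Hermite quadrature for weight exp(-x^2/2).
    """
    x, w = hermegauss(n_quad)
    w = w / np.sqrt(2 * np.pi)        # normalize to the N(0,1) density
    coeffs = []
    fact = 1.0
    for k in range(order + 1):
        if k > 0:
            fact *= k                  # running k!
        ek = np.zeros(k + 1)
        ek[k] = 1.0                    # coefficient vector selecting He_k
        coeffs.append(np.sum(w * f(x) * hermeval(x, ek)) / fact)
    return np.array(coeffs)

c = pce_coeffs(np.exp, 6)
```

For f(xi) = exp(xi) the exact expansion is exp(xi) = e^(1/2) * sum_k He_k(xi)/k!, so c[0] equals the mean e^(1/2) and c[k] = e^(1/2)/k!.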
In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines.
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25..., IJERA Editor
The Reed-Solomon (RS) codes are widely used in communication systems, in particular forming part of the specification for the ETSI digital terrestrial television standard. In this paper, a simple algorithm for error detection in the Chien search block is proposed. This algorithm is based on a simple factorization of the error locator polynomial, which allows reducing the number of components required to implement the proposed algorithm on an FPGA board. Consequently, it reduces the power consumption by a percentage which can reach 50% compared to the basic RS decoder. First, we developed the design of the Chien search block. Second, we generated and simulated the hardware description language source code using Quartus software tools. Finally, we implemented the proposed algorithm of the Chien search block for Reed-Solomon codes RS(255, 239) on an FPGA board to show both the reduced hardware resources and low complexity compared to the basic algorithm.
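A small software model of the baseline brute-force Chien search over GF(2^8) may clarify the block being optimized (this shows the standard exhaustive root search, not the paper's factorization; the generator polynomial 0x11d is the one commonly used with RS(255, 239)):

```python
# GF(2^8) exp/log tables for the primitive polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
EXP = [0] * 512
LOG = [0] * 256
_x = 1
for i in range(255):
    EXP[i] = _x
    LOG[_x] = i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiplication in GF(2^8) via the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def chien_search(locator, n=255):
    """Evaluate the error-locator polynomial Lambda(x) at alpha^(-i)
    for every codeword position i; positions where it vanishes are
    the error locations.  locator[0] is the highest-degree coefficient."""
    positions = []
    for i in range(n):
        x = EXP[(255 - i) % 255]       # alpha^(-i)
        acc = 0
        for c in locator:              # Horner evaluation
            acc = gf_mul(acc, x) ^ c
        if acc == 0:
            positions.append(i)
    return positions

# Locator with planted errors at positions 3 and 10:
# Lambda(x) = (1 + a^3 x)(1 + a^10 x) = 1 + (a^3 + a^10) x + a^13 x^2.
a3, a10 = EXP[3], EXP[10]
locator = [gf_mul(a3, a10), a3 ^ a10, 1]
positions = chien_search(locator)
```

The search recovers exactly the two planted error positions.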
Slides were prepared by referring to the text Machine Learning by Tom M. Mitchell (McGraw Hill, Indian Edition) and to video tutorials on NPTEL.
Welcome to the Digital Signal Processing (DSP) Lab Manual. This manual is designed to be your comprehensive guide throughout your DSP laboratory sessions. Digital Signal Processing is a fundamental field in electrical engineering and computer science that deals with the manipulation of digital signals to achieve various objectives, such as filtering, transformation, and analysis. In this lab, you will have the opportunity to apply theoretical knowledge to practical, hands-on exercises that will deepen your understanding of DSP concepts.
This manual is structured to provide you with step-by-step instructions, explanations, and insights into the experiments you'll be performing. Each experiment is carefully designed to reinforce your understanding of fundamental DSP principles and help you develop the skills necessary for signal processing applications. Whether you are a student or an instructor, this manual is intended to facilitate a productive and enriching DSP lab experience.
Discrete wavelet transform-based RI adaptive algorithm for system identification (IJECE)
In this paper, we propose a new adaptive filtering algorithm for system identification. The algorithm is based on the recursive inverse (RI) adaptive algorithm, which suffers from low convergence rates in some applications, i.e., when the eigenvalue spread of the autocorrelation matrix is relatively high. The proposed algorithm applies the discrete wavelet transform (DWT) to the input signal which, in turn, helps to overcome the low convergence rate of the RI algorithm with relatively small step-size(s). Different scenarios have been investigated in different noise environments in a system identification setting. Experiments demonstrate the advantages of the proposed DWT recursive inverse (DWT-RI) filter in terms of convergence rate and mean-square error (MSE) compared to the RI, discrete cosine transform LMS (DCT-LMS), discrete wavelet transform LMS (DWT-LMS) and recursive least squares (RLS) algorithms under the same conditions.
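A rough sketch of transform-domain adaptive filtering for system identification, using a Haar DWT in front of a normalized LMS update (a generic illustration of the DWT front end only; the RI recursion itself is not reproduced here):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix; n must be a power of two."""
    if n == 1:
        return np.array([[1.0]])
    Hm = haar_matrix(n // 2)
    top = np.kron(Hm, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
L = 8
h_true = rng.standard_normal(L)            # unknown FIR system to identify
H = haar_matrix(L)
x = rng.standard_normal(4000)              # white excitation
w = np.zeros(L)                            # adaptive weights (DWT domain)
mu, eps = 0.5, 1e-8
for t in range(L - 1, len(x)):
    xvec = x[t - L + 1:t + 1][::-1]        # most recent sample first
    d = h_true @ xvec                      # noise-free desired output
    u = H @ xvec                           # transform-domain regressor
    e = d - w @ u
    w += mu * e * u / (u @ u + eps)        # normalized LMS update
h_hat = H.T @ w                            # estimate back in time domain
```

Because the Haar matrix is orthonormal, the converged transform-domain weights map back exactly to the unknown impulse response; with correlated inputs the transform additionally helps decorrelate the regressor, which is the motivation for DWT-based variants.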
Recent developments in the field of reduced order modeling - and in particular, active subspace construction - have made it possible to efficiently approximate complex models by constructing low-order response surfaces based upon a small subspace of the original high dimensional parameter space. These methods rely upon the fact that the response tends to vary more prominently in a few dominant directions defined by linear combinations of the original inputs, allowing for a rotation of the coordinate axis and a consequent transformation of the parameters. In this talk, we discuss a gradient free active subspace algorithm that is feasible for high dimensional parameter spaces where finite-difference techniques are impractical. We illustrate an initialized gradient-free active subspace algorithm for a neutronics example implemented with SCALE6.1.
Probabilistic Error Bounds for Order Reduction of Smooth Nonlinear Models
Mohammad G. Abdo and Hany S. Abdel-Khalik
Presented by: Congjian Wang
North Carolina State University, Nuclear Department
mgabdo@ncsu.edu, abdelkhalik@ncsu.edu
June 16, 2014

Outline:
Motivation
Background of Supporting Algorithms and Theory
Numerical tests and results
Conclusions
Bibliography
Thanks
Motivation
ROM plays a vital role in many disciplines, especially for computationally intensive applications.
It is mandatory to equip reduced order models with error metrics to credibly defend the predictions of the reduced model.
Probabilistic error bounds have mostly been used for linear models.
Reduction errors need to be propagated across various interfaces, such as the parameter interface (i.e., cross sections), the state function (i.e., flux), and the response of interest (i.e., reaction rates, detector response, etc.).
Background: ROM (Dixon 1983); propagating the error bound across different interfaces
We will adopt one formal mathematical definition that was developed back in the 1960s in the signal processing community.

Definition
A nonlinear function $f$ with $n$ inputs is said to be reducible and of intrinsic dimension $r$ ($0 \le r \le n$) if there exists a nonlinear function $g$ with $r$ inputs and an $n \times r$ matrix $Q$ such that $r$ is the smallest integer satisfying $f(x) = g(\tilde{x})$, where $x \in \mathbb{R}^n$ and $\tilde{x} = Q^T x \in \mathbb{R}^r$.
Reduction Algorithms
In our context, reduction algorithms refer to two different algorithms, each used at a different interface:
Snapshot reduction algorithm (gradient-free; reduces the response interface).
Gradient-based reduction algorithm (reduces the parameter interface).
Snapshot Reduction
Consider the reducible model under inspection to be described by:
$y = f(x)$,   (1)
The algorithm proceeds as follows:
1. Generate $k$ random parameter realizations $\{x_i\}_{i=1}^{k}$.
2. Execute the forward model in Eq. (1) $k$ times and record the corresponding $k$ variations of the responses, $\{y_i = f(x_i)\}_{i=1}^{k}$, referred to as snapshots, and aggregate them in a matrix as follows: $Y = [y_1 \; y_2 \; \cdots \; y_k] \in \mathbb{R}^{m \times k}$.
3. Calculate the singular value decomposition (SVD): $Y = U \Sigma V^T$, where $U \in \mathbb{R}^{m \times k}$.
4. Select the dimensionality of the reduced space for the responses to be $r_y$, such that $r_y \le \min(m, k)$. Identify the active subspace as the range of the first $r_y$ columns of the matrix $U$, denoted by $U_{r_y}$. Note that in practice $r_y$ is increased until the error upper bound in step 5 meets a user-defined error tolerance.
5. For a general response $y$, calculate the error resulting from the reduction as $e_y = (I - U_{r_y} U_{r_y}^T)\, y$.
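The snapshot steps above can be sketched numerically (a toy model with known intrinsic dimension, not the neutronics application):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "model": 50-dimensional responses that truly live in a
# 3-dimensional subspace, so the intrinsic response dimension is 3.
B = rng.standard_normal((50, 3))
f = lambda x: B @ np.array([np.sin(x[0]), x[1] ** 2, x[0] * x[1]])

# Steps 1-2: snapshots from k random parameter realizations.
k = 30
X = rng.uniform(-1, 1, size=(k, 2))
Y = np.column_stack([f(x) for x in X])        # Y in R^{m x k}

# Step 3: SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# Steps 4-5: truncate to r_y columns and evaluate the reduction error
# for a fresh response not used to build the subspace.
r_y = 3
U_r = U[:, :r_y]
y_new = f(rng.uniform(-1, 1, size=2))
e_y = np.linalg.norm(y_new - U_r @ (U_r.T @ y_new)) / np.linalg.norm(y_new)
```

Because the responses genuinely span only three directions, the relative reduction error at $r_y = 3$ is at machine precision; for a lossy truncation it would be the quantity bounded by the user tolerance.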
Gradient-Based Reduction
This algorithm may be described by the following steps:
1. Execute the adjoint model $k$ times, each time with a random realization of the input parameters, and aggregate the pseudo-response derivatives in a matrix:
$G = \left[ \left.\frac{dR_1^{\text{pseudo}}}{dx}\right|_{x_1} \; \cdots \; \left.\frac{dR_k^{\text{pseudo}}}{dx}\right|_{x_k} \right]$.
2. Calculate the SVD $G = W S P^T$, and select the first $r_x$ columns of $W$ (denoted by $W_{r_x}$) to span the active subspace for the parameters, such that $e_x = (I - W_{r_x} W_{r_x}^T)\, x$.
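A sketch of the gradient-based construction on a toy model whose response depends on only two directions (the adjoint model is replaced by an analytic gradient for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 40

# Toy model sensitive to only two directions a and b:
# R(x) = sin(a.x) + (b.x)^2, so grad R = cos(a.x) a + 2 (b.x) b.
a, b = rng.standard_normal(n), rng.standard_normal(n)
grad = lambda x: np.cos(a @ x) * a + 2.0 * (b @ x) * b

# Step 1: aggregate gradients at k random parameter realizations.
G = np.column_stack([grad(x) for x in rng.standard_normal((k, n))])

# Step 2: SVD of G; the first r_x left singular vectors span the
# active parameter subspace.
W, S, Pt = np.linalg.svd(G, full_matrices=False)
W_r = W[:, :2]

# The discarded component of any parameter variation is
# e_x = (I - W_r W_r^T) x; here the model is insensitive to it.
x = rng.standard_normal(n)
e_x = x - W_r @ (W_r.T @ x)
```

Since every gradient lies in span{a, b}, the recovered subspace contains both directions and the model's sensitivity along the discarded component $e_x$ vanishes.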
Notice that discarding components in the parameter space will give rise to errors in the response space even if no reduction in the response space is rendered. To distinguish between the different errors at different levels we introduce:
Different Errors
1. $\dfrac{\| f(x) - Q_y Q_y^T f(x) \|}{\| f(x) \|} \le \epsilon_y^{y}$, where $Q_y$ is a matrix whose orthonormal columns span the response subspace $S_y$ and $\epsilon_y^{y}$ is the user-defined tolerance for the relative error in the response due to reduction in the response space only.
2. $\dfrac{\| f(x) - f(Q_x Q_x^T x) \|}{\| f(x) \|} \le \epsilon_y^{x}$, where, similarly, $Q_x$ is a matrix whose orthonormal columns span an active subspace $S_x$ in the parameter space and $\epsilon_y^{x}$ is the user-defined tolerance for the relative error in the response due to reduction in the parameter space only.
3. $\dfrac{\| f(x) - Q_y Q_y^T f(Q_x Q_x^T x) \|}{\| f(x) \|} \le \epsilon_y^{xy}$, where $\epsilon_y^{xy}$ is the user-defined tolerance for the relative error in the response due to simultaneous reductions in both spaces.
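The three error levels can be checked numerically on a toy model; since $\|Q_y Q_y^T\|_2 = 1$, the triangle inequality gives the combined error as at most the sum of the two individual ones (a sanity check, not part of the source derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 10
A = rng.standard_normal((m, n))
f = lambda x: A @ np.tanh(x)          # toy smooth nonlinear model

# Orthonormal bases for deliberately lossy response / parameter subspaces.
snaps = np.column_stack([f(x) for x in rng.standard_normal((20, n))])
Qy, _ = np.linalg.qr(snaps)
Qy = Qy[:, :5]                        # truncated response basis
Qx, _ = np.linalg.qr(rng.standard_normal((n, 6)))   # parameter basis

x = rng.standard_normal(n)
fx = f(x)
norm = np.linalg.norm

err_y  = norm(fx - Qy @ (Qy.T @ fx)) / norm(fx)                   # response-only
err_x  = norm(fx - f(Qx @ (Qx.T @ x))) / norm(fx)                 # parameter-only
err_xy = norm(fx - Qy @ (Qy.T @ f(Qx @ (Qx.T @ x)))) / norm(fx)   # both
```

Splitting the combined residual into a response-reduction term and a projected parameter-reduction term shows err_xy <= err_y + err_x, which is how errors propagate across the two interfaces.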
22. Motivation
Background of Supporting Algorithms and Theory
Numerical tests and results
Conclusions
Bibliography
Thanks
ROM
Dixon 1983
Propagating the error bound accross different interfaces
The previous relative errors can be estimated using Dixon’s
Theory[3].
11 / 27
23. Motivation
Background of Supporting Algorithms and Theory
Numerical tests and results
Conclusions
Bibliography
Thanks
ROM
Dixon 1983
Propagating the error bound accross different interfaces
The previous relative errors can be estimated using Dixon’s
Theory[3].
Dixon’s theory
It all started by Dixon(1983) when he estimated the largest
and/or smallest eigen value and hence the condition number of a
real positive definite matrix A.
11 / 27
24. Motivation
Background of Supporting Algorithms and Theory
Numerical tests and results
Conclusions
Bibliography
Thanks
ROM
Dixon 1983
Propagating the error bound accross different interfaces
The previous relative errors can be estimated using Dixon's theory [3].

Dixon's theory
It all started with Dixon (1983), who estimated the largest and/or smallest eigenvalue, and hence the condition number, of a real positive definite matrix A. His work relies on a basic set of theorems and lemmas [3, 7] that we will introduce in the following few slides.
Theorem
If $A \in \mathbb{R}^{n \times n}$ is a real positive definite matrix whose eigenvalues are $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$, let $S := \{x \in \mathbb{R}^n : x^T x = 1\}$ be the unit hypersphere, let $x = [x_1 \cdots x_n]^T$ with $n \ge 2$ and $x_i \sim U(-1,1)$ over $S$, and let $\theta \in \mathbb{R}$, $\theta > 1$. Then

$$P\left(x^T A x \le \lambda_1 \le \theta\, x^T A x\right) \ge 1 - \sqrt{\frac{2n}{\pi\theta}}. \tag{2}$$
The next corollary has been explored by many authors [8, 5, 6, 4] and has been employed in different applications; it gave the modern texture to Dixon's bound.

Corollary
If $B \in \mathbb{R}^{m \times n}$ is such that $A = LL^T = B^T B$, where $L = B^T$ is the Cholesky factor of $A$, and if $\sigma_1 \ge \cdots \ge \sigma_n > 0$ are the singular values of $B$ (i.e., $\lambda_i = \sigma_i^2$), then the previous theorem can be written as:

$$P\left(\|Bx\| \le \left(\sigma_1 = \|B\|\right) \le \sqrt{\theta}\, \|Bx\|\right) \ge 1 - \sqrt{\frac{2n}{\pi\theta}}. \tag{3}$$

Selecting $\theta = \alpha^2 \frac{2}{\pi} n$, where $\alpha > 1$, yields:

$$P\left(\|B\| \le \alpha \sqrt{\frac{2}{\pi}} \sqrt{n} \max_{i=1,2,\cdots,k} \left\|Bx^{(i)}\right\|\right) \ge 1 - \alpha^{-k}. \tag{4}$$
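Bound (4) translates directly into a randomized estimator for the spectral norm $\|B\|$. A hedged sketch (the function name, test matrix, and parameter choices are ours, not from the slides):

```python
import numpy as np

def randomized_norm_bound(B, alpha=2.0, k=6, seed=None):
    """Dixon-style probabilistic upper bound on the spectral norm ||B||_2.

    Samples k random unit vectors x^(i); per bound (4), with probability
    >= 1 - alpha**(-k) the value alpha*sqrt(2n/pi)*max_i ||B x^(i)||
    dominates ||B||_2.
    """
    rng = np.random.default_rng(seed)
    n = B.shape[1]
    best = 0.0
    for _ in range(k):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)           # random direction on the unit hypersphere
        best = max(best, np.linalg.norm(B @ x))
    return alpha * np.sqrt(2.0 * n / np.pi) * best

B = np.random.default_rng(0).standard_normal((40, 30))
bound = randomized_norm_bound(B, seed=1)
print(bound >= np.linalg.norm(B, 2))     # holds with probability >= 1 - 2**-6
```

The estimator needs only matrix-vector products, which is what makes it attractive when $B$ is available only implicitly, e.g., through model evaluations.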
Propagating error bounds
Consider a physical model $y = f(x)$, where $f: \mathbb{R}^n \to \mathbb{R}^m$. The model is subjected to both types of reduction, at the parameter and response interfaces; the resulting responses are aggregated in $Y_x$ and $Y_y$, respectively. The bound for each case is:

$$\epsilon_y^x = \alpha_1 \sqrt{\frac{2}{\pi}} \sqrt{N} \max_{i=1,2,\cdots,k_1} \left\| (Y - Y_x)\, w_i \right\|,$$

$$\epsilon_y^y = \alpha_2 \sqrt{\frac{2}{\pi}} \sqrt{N} \max_{i=1,2,\cdots,k_2} \left\| (Y - Y_y)\, w_i \right\|,$$
Propagating error bounds (cont.)
Then the response error due to both reductions can be calculated from:

$$P\left(\left\| Y - Y_{xy} \right\| \le \epsilon_y^x + \epsilon_y^y\right) \ge \left(1 - \alpha_1^{-k_1}\right)\left(1 - \alpha_2^{-k_2}\right) \tag{5}$$
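Because the two tolerance estimates come from independent sampling experiments, the per-space confidences simply multiply. A quick numeric check (the $\alpha_i$ and $k_i$ values are examples only, not the slides' settings):

```python
# Combined confidence from Eq. (5): each per-space bound holds with
# probability >= 1 - alpha_i**(-k_i); independence gives the product.
alpha1, k1 = 2.0, 10
alpha2, k2 = 2.0, 10

p1 = 1.0 - alpha1 ** (-k1)      # confidence for the parameter-space bound
p2 = 1.0 - alpha2 ** (-k2)      # confidence for the response-space bound
p_both = p1 * p2                # confidence that both hold simultaneously
print(p_both)                   # slightly below either individual confidence
```

A few extra samples $k_i$ buy confidence cheaply, since the failure probability decays geometrically in $k_i$.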
Case Study 1
The first numerical test is an algebraic prototype nonlinear model where $y = f(x)$; $f: \mathbb{R}^n \to \mathbb{R}^m$; $n = 15$; $m = 10$, such that:

$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \\ y_8 \\ y_9 \\ y_{10} \end{bmatrix} = B \times \begin{bmatrix} a_1^T x \\ (a_2^T x)^2 \\ \left(1.4\, a_2^T x + 1.5\, a_3^T x\right)^2 \\ \frac{1}{1 + \exp(-a_2^T x)} \\ \cos\left(0.8\, a_4^T x + 1.6\, a_5^T x\right) \\ \left(a_6^T x + a_7^T x\right)\left[(a_7^T x)^2 + \sin(a_8^T x)\right] \\ \left(1 + 0.1 \exp(-a_8^T x)\right)\left[(a_9^T x)^2 + (a_{10}^T x)^2\right] \\ a_9^T x + 0.2\, a_{10}^T x \\ a_{10}^T x \\ a_9^T x + 8\, a_{10}^T x \end{bmatrix}$$

where $a_i \in \mathbb{R}^n$; $i = 1, 2, \cdots, m$ and $B$ is a random $m \times m$ matrix.
Case Study 2
The second case study involves a realistic neutron transport model of a PWR pin cell. The objective is to test the proposed probabilistic error bound due to reductions at both the parameter and response spaces. The computer code employed is TSUNAMI-2D, a control module in SCALE 6.1 [1], wherein the derivatives are provided by SAMS, the sensitivity analysis module for SCALE 6.1.
Case Study 1
The dimension of the parameter space is $n = 15$, the response space is $m = 10$, and a user-defined tolerance of $10^{-5}$ is selected. The parameter active subspace is found to have a size of $r_x = 9$, whereas the response active subspace is $r_y = 9$. The number of tests is 10000.
Fig. 1 shows the function behavior plotted along a randomly selected direction in the parameter space.
Figure: Function behavior along a random input direction.
Table I shows the minimum theoretical probabilities predicted by the theorem and the actual probability $P_{act} = \frac{\text{number of successes}}{\text{total number of tests}}$ resulting from the numerical test.

Table: Algebraic Model Results

| Error Bound | $P_{act}$ | $P_{theo}$ |
|---|---|---|
| $\frac{\|Y - Y_x\|}{\|Y\|} \le \epsilon_y^x$ | 1.0 | 0.9 |
| $\frac{\|Y - Y_y\|}{\|Y\|} \le \epsilon_y^y$ | 0.998 | 0.9 |
| $\frac{\|Y - Y_{xy}\|}{\|Y\|} \le \epsilon_y^x + \epsilon_y^y$ | 1.0 | 0.81 |
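The $P_{act}$ column can be reproduced in spirit with a small Monte Carlo experiment on the norm bound itself: count how often the randomized bound actually dominates the true norm. A sketch (the matrix, $\alpha$, $k$, and trial count are our own choices, not the slides' setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, alpha, trials = 30, 3, 1.2, 2000

B = rng.standard_normal((n, n))
true_norm = np.linalg.norm(B, 2)

successes = 0
for _ in range(trials):
    best = 0.0
    for _ in range(k):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)
        best = max(best, np.linalg.norm(B @ x))
    # Success: the Dixon-style bound dominates the true spectral norm.
    successes += alpha * np.sqrt(2 * n / np.pi) * best >= true_norm

p_act = successes / trials          # empirical success frequency
p_theo = 1 - alpha ** (-k)          # minimum guaranteed by the theorem
print(p_act, p_theo)
```

As in the tables, the observed $P_{act}$ typically sits well above the theoretical minimum, since the bound is conservative.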
Relative Errors
Next we show the relative error $\frac{\|Y - Y_{xy}\|}{\|Y\|}$ due to both reductions vs. the theoretical upper bound predicted by the theory, $\epsilon_y^x + \epsilon_y^y$.
Figure: Theoretical and actual error for case study 1.
Case Study 2
For the pin cell model the full input space (cross sections) had a size of $n = 1936$, whereas the output (material flux) was of size $m = 176$. The cross sections of the fuel, clad, moderator, and gap were perturbed by 30% (relative perturbations). Based on a user-defined tolerance of $10^{-5}$, the sizes of the input and output active subspaces are $r_x = 900$ and $r_y = 165$, respectively.
Table II shows the minimum theoretical probabilities predicted by the theorem and the probability resulting from the numerical test.

Table: Pin Cell Model Results

| Error Bound | $P_{act}$ | $P_{theo}$ |
|---|---|---|
| $\frac{\|Y - Y_x\|}{\|Y\|} \le \epsilon_y^x$ | 1.0 | 0.9 |
| $\frac{\|Y - Y_y\|}{\|Y\|} \le \epsilon_y^y$ | 1.0 | 0.9 |
| $\frac{\|Y - Y_{xy}\|}{\|Y\|} \le \epsilon_y^x + \epsilon_y^y$ | 1.0 | 0.81 |
Relative Errors
Next we show the relative error $\frac{\|Y - Y_{xy}\|}{\|Y\|}$ due to both reductions vs. the theoretical upper bound predicted by the theory, $\epsilon_y^x + \epsilon_y^y$.
Figure: Theoretical and actual error for case study 2.
Conclusions
This manuscript has equipped our previously developed ROM techniques with probabilistic error metrics that bound the maximum errors resulting from the reduction.
Given that reduction algorithms can be applied at any of the various model interfaces, e.g., parameters, state, and responses, the developed metric effectively aggregates the associated errors to estimate an error bound on the response of interest.
The results show that we can begin to break the linear mould and explore smooth nonlinear functions.
This functionality will prove essential in our ongoing work on extending ROM techniques to multi-physics models.
Bibliography I
SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version 6.1, Oak Ridge National Laboratory, Oak Ridge, Tennessee, June 2011. Available from Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785.
Y. BANG, J. HITE, AND H. S. ABDEL-KHALIK, Hybrid reduced order modeling applied to nonlinear models, IJNME, 91 (2012), pp. 929–949.
J. D. DIXON, Estimating extremal eigenvalues and condition numbers of matrices, SIAM J. Numer. Anal., 20 (1983), pp. 812–814.
Bibliography II
N. HALKO, P. G. MARTINSSON, AND J. A. TROPP, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53 (2011), pp. 217–288.
P. G. MARTINSSON, V. ROKHLIN, AND M. TYGERT, A randomized algorithm for the approximation of matrices, tech. report, Yale University.
J. A. TROPP, User-friendly tools for random matrices.
S. S. WILKS, Mathematical Statistics, John Wiley, New York, 1st ed., 1962.
Bibliography III
F. WOOLFE, E. LIBERTY, V. ROKHLIN, AND M. TYGERT, A fast
randomized algorithm for the approximation of matrices,
preliminary report, Yale University.
Questions/Suggestions?