This document describes a new statistical emulation approach for probabilistic seismic demand assessment. The approach uses Gaussian process emulation to model the relationship between engineering demand parameter (EDP) and intensity measure (IM), as an alternative to the standard cloud method. The emulator is trained using data from nonlinear time-history analyses of a case study building. Two "assumed realities" are generated to test the emulator's performance compared to the cloud method. Results show the emulator approach provides improved coverage probability and average length over the cloud method, while maintaining similar accuracy. The emulator is more flexible than the cloud method and can better estimate EDP-IM relationships that do not follow a power law.
New emulation based approach for probabilistic seismic demand
16th World Conference on Earthquake Engineering, 16WCEE 2017
Santiago, Chile, January 9th to 13th 2017
Paper N° 3611 (Abstract ID)
Registration Code: S-J1463141579
NEW EMULATION-BASED APPROACH FOR PROBABILISTIC SEISMIC DEMAND

S. Minas(1), R.E. Chandler(2) and T. Rossetto(3)

(1) EngD student, University College London, s.minas@ucl.ac.uk
(2) Professor, University College London, r.chandler@ucl.ac.uk
(3) Professor, University College London, t.rossetto@ucl.ac.uk
Abstract
An advanced statistical emulation-based approach to computing the probabilistic seismic demand is introduced here. The emulation approach, which is a version of kriging, uses a mean function as a first approximation of the mean conditional distribution of Engineering Demand Parameter given Intensity Measure (EDP|IM) and then models the approximation errors as a Gaussian process (GP). The main advantage of the kriging emulator is its flexibility, as it does not impose a fixed mathematical form on the EDP|IM relationship, unlike other approaches such as the standard cloud method. In this study, a case-study building representing the Special-Code vulnerability class is analyzed at a high level of fidelity, namely by nonlinear dynamic analysis. For the evaluation of the emulator, two different scenarios are considered, each corresponding to a different "assumed reality" represented by an artificially generated IM:EDP relationship derived from the sample of real analysis data. These "assumed realities" are used as a reference point to assess the performance of the emulation-based approach and to compare it against the outcomes of the standard cloud analysis. A number of input configurations are tested, utilizing two sampling processes (i.e. random and stratified sampling) and varying the number of training inputs. The outcomes of the current work show that the proposed statistical emulation-based approach, when calibrated with high-fidelity analysis data and combined with the advanced IM INp, outperforms the standard cloud method in terms of coverage probability and average length, while insignificant differences are obtained in the mean squared error (MSE) estimates. The improved performance of the emulator over the cloud method is maintained in both "assumed realities" tested, showing the capability of the former approach to better estimate EDP|IM relationships that do not necessarily follow a favorable pattern (e.g. a power law).
Keywords: probabilistic seismic demand; statistical emulation; kriging
1. Introduction
Fragility functions constitute an essential component of the performance-based earthquake engineering (PBEE) framework [1]. In the current state of the art, there are three main methods for deriving fragility functions: empirical, analytical, and based on expert judgement [2]. The empirical approach is considered to be the most credible, while expert judgement is only used when neither of the other two approaches can be implemented [3]. However, empirical approaches have several limitations, including the difficulty of properly characterizing the levels of seismic intensity and of allocating damage states objectively, and, more importantly, the lack of sufficient seismic damage data in most areas of the world.
Analytical methods are becoming increasingly popular as they offer the user the flexibility to choose among analysis methodologies, structural parameters, characteristics of the input ground motions, and so on. Several approaches exist for the analytical fragility assessment of various building classes. The present study aims to advance methods and tools for the analytical fragility analysis of mid-rise RC buildings. This is achieved by investigating alternative statistical approaches for fitting the conditional distribution of Engineering Demand Parameter given Intensity Measure (EDP|IM) to be used for the derivation of fragility curves. In particular, emulation surrogate models are adopted to provide estimates, and the associated variances, for the conditional EDP|IM distribution that would be obtained if it were possible to run the analysis models at all
possible combinations of their inputs, i.e. any possible ground motion input. The emulation approach, which is a
version of kriging, uses a mean function as a first approximation of the mean conditional distribution and then
models the approximation errors as a Gaussian Process (GP). The main advantage is the flexibility associated
with this approach, as it does not impose a fixed mathematical form on the EDP|IM relationship as in the
standard cloud method (discussed below).
A new statistical emulation approach, which provides estimates and variances for the conditional EDP|IM distribution, is introduced here as part of a wider analytical fragility assessment framework. The capabilities of this new approach are investigated by attempting to predict two artificially generated 'true' functions, and are tested against the outcomes of the standard cloud analysis.
2. Cloud analysis
One of the most commonly used methods for characterizing the relationship between EDP and IM is the cloud method [4–6]. Within this method, a structure is subjected to a series of ground motion (GM) time-histories with known IM values in order to estimate the associated seismic response. Nonlinear dynamic analysis (e.g. nonlinear time-history analysis, NLTHA) is usually used to estimate the response of the building, expressed in terms of EDP, when subjected to the aforementioned GMs. The recorded peak values of EDP at the given IM levels form a scatter of points, the so-called "cloud". A statistical regression model, represented by a simple power law, is used to fit the cloud of data points. According to this model, the EDP is considered to vary roughly as a power law of the form EDP = a·IM^b such that, after taking logarithms, the relationship can be expressed as in equation (1):

ln EDP = ln a + b·ln IM + e    (1)
where EDP is the conditional median of the demand given the IM , a ,b are the parameters of the regression,
and e is a zero mean random variable representing the variability of ln EDP given the IM . However, some
situations exhibit substantial heteroskedasticity (i.e. non-constant variance), which needs to be modelled
explicitly within the fragility analysis framework [7]; for example by performing linear regressions locally in a
region of IM values of interest. The use of logarithmic transformation indicates that the EDPs are assumed to be
lognormally distributed conditional upon the values of the IMs; this is a common assumption that has been
confirmed as reasonable in many past studies [8,9].
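As an illustration of this fit, the cloud regression can be written in a few lines of R. The sketch below is a minimal example, assuming a data frame named cloud (a hypothetical name) with columns im and edp holding the IM values and the corresponding NLTHA peak demands:

# Minimal sketch of the cloud method: power-law fit in log-log space.
# 'cloud' is an assumed data frame with one row per ground motion and
# columns 'im' (intensity measure) and 'edp' (peak demand from NLTHA).
cloud_fit <- function(cloud) {
  fit <- lm(log(edp) ~ log(im), data = cloud)   # ln EDP = ln a + b ln IM + e
  list(a     = exp(coef(fit)[1]),               # power-law coefficient a
       b     = coef(fit)[2],                    # power-law exponent b
       sigma = summary(fit)$sigma,              # constant dispersion of ln EDP given IM
       model = fit)
}

# Median demand and a 95% prediction interval at new IM levels, back-transformed to the EDP scale
cloud_predict <- function(fit, im_new) {
  exp(predict(fit$model, newdata = data.frame(im = im_new),
              interval = "prediction", level = 0.95))
}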
Apart from the advantages of simplicity and rapidity over alternative demand fragility assessment methods,
discussed in the following sections, cloud analysis has several restrictions. Firstly, it assumes that the relation between IM and EDP is appropriately represented by a linear model in log space. This assumption may be valid over a short range of IM and EDP combinations but not over the entire cloud response space. Secondly, it assumes that the dispersion of the probability distribution of EDP given IM is constant, while several studies have shown that the dispersion of EDP may increase for increasing IM values [10]. Lastly, the accuracy of the cloud method is highly dependent
on the selection of earthquake records used as an input [11].
3. Statistical emulation
Complex mathematical models exist in all scientific and engineering areas, aiming to capture real-world processes and simulate the corresponding real systems with sufficient accuracy. These mathematical models are usually translated into
computer codes and may require significant computational resources. The mathematical functions and the related
computer codes may be referred to as simulators [12]. A simulator, as a function $f(\cdot)$, takes deterministic vectors of inputs $\mathbf{x}$ and generates unique outputs, $y = f(\mathbf{x})$, for each given input. However, the results are
subject to uncertainties – for example, due to uncertainties in the inputs and in the simulator construction itself
[13]. To assess the effect of these uncertainties, one option is to use uncertainty and sensitivity analysis methods
requiring many simulator runs, as discussed in [14]. However, this is impractical for complex simulators that are computationally demanding to run [12]. To overcome this, an approximation $\hat{f}(\mathbf{x})$ of the simulator $f(\mathbf{x})$,
known as a statistical emulator, may be introduced to act as a surrogate. If $\hat{f}(\mathbf{x})$ is a good proxy of the
simulator, it can be used to carry out uncertainty and sensitivity analysis but with significantly less effort.
An emulator is a simple statistical representation of the simulator and is used for the rapid estimation of the
simulation outputs. The emulator provides an approximation, $\hat{f}(\mathbf{x})$ say, to the true simulator output $f(\mathbf{x})$; and
also provides a standard deviation to quantify the associated approximation error. A small number of data points,
obtained by running the simulator at carefully chosen configurations of the inputs, is required to train the
emulator. The emulator developed for the present study is a Gaussian process (GP) model. GP models are
described in the next subsection; and contrasted with the standard approach of cloud analysis in Section 2.
In the standard GP approach to the modelling of simulator outputs, the output function $f(\mathbf{x})$ is regarded as a realized value of a random process such that the outputs at distinct values of $\mathbf{x}$ jointly follow a normal (or Gaussian) distribution. In most applications, $f(\mathbf{x})$ is a ‘smooth’ function in the sense that a small variation in the input $\mathbf{x}$ will result in a small perturbation in the output $f(\mathbf{x})$. Smoothness of a Gaussian process is ensured
by specifying an appropriate structure for the covariance between process values at distinct values of x. For
example, the use of a Gaussian covariance model (not to be confused with the “Gaussian” in the GP, which
relates to the distribution, rather than the covariance structure), results in smooth realizations that are infinitely
differentiable.
In the context of analytical fragility estimation, the input $\mathbf{x}$ represents a ground motion sequence and the output $f(\mathbf{x})$ represents the simulated EDP for that sequence. However, fragility estimates are rarely presented as functions of an entire ground motion sequence: rather, they are presented as a function of an intensity measure $t = t(\mathbf{x})$, say, that is used as a proxy for the complete sequence. In practice, the relationship between any such
scalar proxy measure and the EDP is imperfect so that the simulator outputs, regarded as a function of t rather
than x, are no longer smooth. As an example of this, two different ground motion records characterized by the
same value of t (i.e. having the same IM) will not in general generate identical outputs. As a result, the standard
emulation approach must be modified slightly to apply it in the context of fragility estimation.
The required modification is still to regard $f(\mathbf{x})$ as the realized value of a random process, but now such that the conditional distributions of $f(\mathbf{x})$ given $t(\mathbf{x})$ themselves form a Gaussian process. Specifically, write $f(\mathbf{x})$ as:

$f(\mathbf{x}) = \eta(t) + \varepsilon(\mathbf{x})$    (2)

where $\eta(t)$ represents the systematic variation of the output with the IM and $\varepsilon(\mathbf{x})$ is a discrepancy term with $E[\varepsilon(\mathbf{x})] = 0$, representing uncertainty due to the fact that $t$ does not fully capture the relevant information in $\mathbf{x}$. In the first instance it will be convenient to assume that $\varepsilon(\mathbf{x})$ is normally distributed with a constant variance, $\sigma_\varepsilon^2$ (this assumption can be checked when applying the methodology); also that $\varepsilon(\mathbf{x})$ is independent of $\eta(t)$, and that $\varepsilon(\mathbf{x}_1)$ is independent of $\varepsilon(\mathbf{x}_2)$ when $\mathbf{x}_1 \neq \mathbf{x}_2$. Note that in equation (2), the term $\varepsilon(\mathbf{x})$ is deliberately used instead of $\varepsilon(t)$: the reason is to allow two simulator outputs with different $\mathbf{x}$ but the same value of $t$ to be considered as separate, allowing for scatter about the overall mean curve in a plot of $f(\mathbf{x})$ against $t(\mathbf{x})$. Equation (2) then immediately yields the conditional distribution of EDP given IM as normal, with expected value $\eta(IM)$ and variance $\sigma_\varepsilon^2$.
To exploit GP emulation methodology now, a final additional assumption is that $\eta(t)$ varies smoothly with $t$ and hence can itself be considered as a realization of a GP. Specifically, denote the expected value of $\eta(t)$ by $m(t)$, denote its variance by $\sigma^2$, and specify a correlation function (such as the Gaussian) that ensures smooth variation with $t$. Under the Gaussian correlation function, the covariance between $\eta(t_1)$ and $\eta(t_2)$ is:
$\mathrm{Cov}\left[\eta(t_1), \eta(t_2)\right] = \sigma^2 \exp\left(-\frac{(t_1 - t_2)^2}{2\delta^2}\right)$    (3)
where $\delta$ is a parameter controlling the rate of decay of the correlation between function values at increasingly separated values of $t$. As an illustrative example, the mean function $m(t)$ could be specified as linear, i.e. $m(t) = \beta_0 + \beta_1 t$: this represents a first approximation to the function $\eta(t)$ in Eq. (2), with the correlation structure allowing for smooth variation in the “approximation error” $z(t) = \eta(t) - m(t)$, say.
Combining all of the above, in the case where the mean function $m(t)$ is linear we find that values of $f(\mathbf{x})$ at distinct values of $\mathbf{x}$ are jointly normally distributed such that:
- The expected value of $f(\mathbf{x})$ is $\beta_0 + \beta_1 t$;
- The variance of $f(\mathbf{x})$ is $\mathrm{Var}\left[\eta(t)\right] + \mathrm{Var}\left[\varepsilon(\mathbf{x})\right] = \sigma^2 + \sigma_\varepsilon^2$;
- For $\mathbf{x}_1 \neq \mathbf{x}_2$, the covariance between $f(\mathbf{x}_1)$ and $f(\mathbf{x}_2)$ is:

$\mathrm{Cov}\left[f(\mathbf{x}_1), f(\mathbf{x}_2)\right] = \mathrm{Cov}\left[\eta(t_1) + \varepsilon(\mathbf{x}_1),\ \eta(t_2) + \varepsilon(\mathbf{x}_2)\right] = \mathrm{Cov}\left[\eta(t_1), \eta(t_2)\right] = \sigma^2 \exp\left(-\frac{(t_1 - t_2)^2}{2\delta^2}\right)$    (4)

where $t_1 = t(\mathbf{x}_1)$ and $t_2 = t(\mathbf{x}_2)$.
Notice that these expressions depend on the full input vector $\mathbf{x}$ only through the IM $t(\mathbf{x})$. As a result, the
simulator outputs can be analyzed as though they are functions of t alone and, specifically, subjected to a
standard GP analysis to estimate the systematic variation $\eta(t)$. The only difference between this formulation and a standard GP emulation problem is that when $t_1 = t_2$, the covariance between the simulator outputs is $\sigma^2 + \sigma_\varepsilon^2$ rather than just $\sigma^2$. The additional $\sigma_\varepsilon^2$ term allows for the additional variance associated with the discrepancy term $\varepsilon(\mathbf{x})$ in equation (2).
Having formulated the problem in terms of Gaussian processes, the simulator can be run a few times at different
values of $\mathbf{x}$, and the resulting outputs can be used to estimate the parameters in the GP model which, as set out above, are the regression coefficients $\beta_0$ and $\beta_1$, the variances $\sigma^2$ and $\sigma_\varepsilon^2$, and the correlation decay parameter $\delta$. Having estimated the parameters, the results can be used to interpolate optimally between the existing simulator runs so as to construct an estimate of the demand curve $\eta(t)$ and associated standard deviation. The
optimal interpolation relies on standard results for calculating conditional distributions in a Gaussian framework.
The approach is used extensively in geostatistics to interpolate between observations made at different spatial
locations, where it is often referred to as “kriging” [15]. In geostatistics, the term $\sigma_\varepsilon^2$ is referred to as a ‘nugget’ and accounts for local-scale variation or measurement error. Nonetheless, in the present context the above
argument shows that it can be derived purely by considering the structure of the probabilistic seismic demand
problem.
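To make the above formulation concrete, the following base-R sketch (hypothetical function and argument names) builds the covariance of equations (3)-(4), with the nugget added on the diagonal, and returns the conditional (kriging) mean, standard deviation and 95% interval at new IM values; it assumes the parameters $\beta_0$, $\beta_1$, $\sigma^2$, $\sigma_\varepsilon^2$ and $\delta$ have already been estimated, and t denotes the IM (or its logarithm, depending on the chosen mean function):

# Gaussian covariance of eq. (3): sigma2 * exp(-(t1 - t2)^2 / (2 * delta^2))
gauss_cov <- function(t1, t2, sigma2, delta) {
  sigma2 * exp(-outer(t1, t2, "-")^2 / (2 * delta^2))
}

# Conditional (kriging) prediction at new IM values t_new, given training pairs (t_obs, y_obs)
# and covariance parameters that are assumed to have been estimated beforehand.
gp_predict <- function(t_obs, y_obs, t_new, beta0, beta1, sigma2, sigma2_eps, delta) {
  K     <- gauss_cov(t_obs, t_obs, sigma2, delta) + diag(sigma2_eps, length(t_obs))
  k     <- gauss_cov(t_new, t_obs, sigma2, delta)        # cross-covariance (new vs training)
  m_obs <- beta0 + beta1 * t_obs                          # linear mean at training points
  m_new <- beta0 + beta1 * t_new                          # linear mean at prediction points
  w     <- solve(K, y_obs - m_obs)
  mu    <- as.vector(m_new + k %*% w)                     # conditional mean
  # conditional variance: prior variance (sigma2 + nugget) minus the part explained by the data
  v     <- sigma2 + sigma2_eps - rowSums(k * t(solve(K, t(k))))
  s     <- sqrt(pmax(v, 0))
  list(mean = mu, sd = s, lower95 = mu - 1.96 * s, upper95 = mu + 1.96 * s)
}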
It is worth noting that, similarly to the cloud method, the emulator analysis is also carried out in terms of $\ln EDP$, since the assumptions of normality and constant variance are better justified on the log scale.
4. Methodology
4.1 Defining ‘true’ functions
In this study two different scenarios are considered, each corresponding to a different “reality” represented by an
artificially generated IM:EDP relationship derived from a sample of real analysis data, as described in Section
4.3. In each relationship, the expected value of some function of the EDP is given by a mean function $\eta(t)$ as in equation (2); and $\eta(t)$ is represented as a sum of a deterministic mean function $m(t)$ and an approximation error $z(t)$:

$\eta(t) = m(t) + z(t)$    (5)
The deterministic mean function $m(t)$ is represented by a simple regression model. Traditional cloud analysis corresponds to a situation in which $\eta(t)$ is the expected value of $\ln(EDP)$, the mean function $m(t)$ is a linear function of $\ln(t)$, and the approximation error $z(t)$ is zero. However, the framework considered here allows an
exploration of a wider range of situations.
Our two chosen “assumed realities” are as follows:
1. Reality 1: the expected value of $\ln(EDP)$ is an exact linear function of $\ln(IM)$ with no approximation error (i.e. with $z(t) = 0$ in equation (5)). This “assumed reality” is the situation for which the standard cloud method is designed.
2. Reality 2: the function $\eta(t)$ represents the expected value of $EDP$ rather than $\ln(EDP)$, and a non-zero approximation error $z(t)$ is included in equation (5), so that $\eta(t)$ is no longer an exact linear function of $\ln(t)$. The approximation error is generated as a Gaussian random field (making sure that the resultant “true” function is monotonically increasing); a minimal generation sketch is given after this list. This case can be used as an example where both models (standard and emulator) are wrong, and sheds light on the capability of these two models to estimate the EDP|IM relationship.
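A minimal sketch of how such an “assumed reality” can be generated is given below. All numerical values (IM grid, power-law coefficients, GP variance and range) are illustrative assumptions and not the settings actually used to produce Figure 1:

# Sketch: generate a monotone "true" function eta(t) = m(t) + z(t), where z(t) is a
# realization of a zero-mean GP with Gaussian covariance; resample until monotone.
set.seed(1)
t_grid <- seq(0.05, 2.0, length.out = 200)            # illustrative IM grid
m_t    <- 0.002 * t_grid^1.1                           # illustrative power-law mean

gen_reality2 <- function(t, m, sigma2 = 1e-7, delta = 0.5) {
  C <- sigma2 * exp(-outer(t, t, "-")^2 / (2 * delta^2)) + diag(1e-12, length(t))
  repeat {
    z   <- as.vector(t(chol(C)) %*% rnorm(length(t)))  # GP realization z(t)
    eta <- m + z
    if (all(diff(eta) > 0)) return(eta)                # keep only monotonically increasing functions
  }
}
eta_true <- gen_reality2(t_grid, m_t)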
The two generated true functions are illustrated in Figure 1. As it becomes evident from the description of each
“assumed reality”, the objective is to explore the capability of the methods to predict the EDP|IM relationship under favorable or less favorable situations. This is done by using a subset of the generated EDPs to calibrate both the traditional and emulator methods, and then using the results to predict the remainder of the generated EDPs. As
well as predicting the values themselves, 95% prediction intervals are computed as a way of quantifying the
uncertainty in the predictions.
To this aim, the coverage probability of the 95% prediction intervals (hereafter called coverage), the average
length of these intervals and the mean squared error (MSE) are the metrics used to assess the performance of the
emulation and standard approach. Coverage shows how accurate the uncertainty assessment is, by calculating the
proportion of intervals containing the actual EDP. If the uncertainty assessment is accurate, the coverages from a
method should be equal to their nominal value of 0.95.
The average interval length assesses the amount of uncertainty in the prediction intervals and is computed as:

$\text{Average Length} = \frac{1}{n}\sum_{i=1}^{n}\left(UL95_i - LL95_i\right)$    (6)

where $UL95_i$ and $LL95_i$ are respectively the upper and lower ends of the $i$th prediction interval. Ideally,
prediction intervals should be as short as possible subject to the correct coverage.
Figure 1 - Two artificially generated “true” functions for the high-fidelity analysis of the Special-Code case study.
The MSE assesses empirically the quality of the vector of $n$ predictions, $\hat{Y}$, with respect to the simulated EDPs, $Y$, and is defined as:

$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2$    (7)
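The three metrics can be computed directly from the test-set predictions. A minimal sketch with hypothetical argument names (y for the simulated EDPs in the test set, y_hat for the predictions, lower95/upper95 for the prediction interval bounds):

# Performance metrics for a set of n test predictions (cf. equations (6) and (7)).
coverage_95 <- function(y, lower95, upper95) {
  mean(y >= lower95 & y <= upper95)      # proportion of intervals containing the actual EDP
}
average_length <- function(lower95, upper95) {
  mean(upper95 - lower95)                # eq. (6): mean width of the 95% prediction intervals
}
mse <- function(y, y_hat) {
  mean((y - y_hat)^2)                    # eq. (7): mean squared prediction error
}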
4.2 Tailoring emulator – Preparation and training
A case study building design is analyzed at a high level of fidelity, as discussed in detail in Section 4.3. The
resultant dataset, expressed in terms of IM and EDP, is used as the input to train the emulator, which in turn provides estimates and variances for the conditional EDP|IM distribution. To this aim, a computer code
implementing the proposed framework is scripted in R [16].
The full sample is split into a training set and a test set. Four different cases are investigated: three utilizing training samples of 10, 25 and 50 inputs, representing a small, medium and large sample respectively, and one using the full training input sample, which consists of the points associated with the ground motions that push the structure into the nonlinear range [17]. For the reduced samples, two sampling strategies are considered: random sampling and stratified sampling. For the stratified sampling, the full input
sample is divided into 5 strata (bins) of equal intensity measure width, and then random sampling is applied to
select the user-specified number of inputs (e.g. 2, 5, 10 inputs from each bin).
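This stratified selection can be sketched as follows (hypothetical function and variable names; equal-width IM bins with a fixed number of inputs drawn at random from each bin):

# Stratified sampling of training inputs: split the IM range into 'n_bins' equal-width
# strata and draw 'per_bin' inputs at random from each stratum.
stratified_sample <- function(im, n_bins = 5, per_bin = 2) {
  breaks <- seq(min(im), max(im), length.out = n_bins + 1)
  bins   <- cut(im, breaks, include.lowest = TRUE)
  idx    <- lapply(split(seq_along(im), bins),
                   function(i) i[sample.int(length(i), min(per_bin, length(i)))])
  sort(unname(unlist(idx)))              # indices of the selected training inputs
}

# e.g. 10-, 25- and 50-point training sets from a vector 'im_full' of IM values (hypothetical name):
# train10 <- stratified_sample(im_full, per_bin = 2)
# train25 <- stratified_sample(im_full, per_bin = 5)
# train50 <- stratified_sample(im_full, per_bin = 10)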
Following the sampling procedure, the selected inputs are then used to train the emulator. As stated in previous sections, the setup of the emulation model allows one to implement a regression model, suited to the nature of the problem of interest, to describe the mean function $m(t)$. In this study, the authors decided to maintain a power-law regression model, as in the standard cloud method; however, alternative models, such as linear or higher-order polynomials, may also be used.
The covariance parameters, as well as the nugget, are estimated using the variofit() function included in the geoR package [18]. Ordinary least squares is used to fit the covariance model to an empirical variogram;
the Gaussian model is used in the present work. The resultant covariance parameters are then used to predict the
remaining EDPs and to calculate the 95% prediction intervals.
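A sketch of this estimation step is given below, assuming the geoR package is available; the vector names t (for ln IM) and y (for ln EDP) are hypothetical, the initial covariance values are arbitrary starting guesses, and the one-dimensional IM axis is padded with a dummy zero coordinate because geoR expects two-dimensional locations:

library(geoR)

# Residuals of the power-law mean function, against which the empirical variogram is computed.
mean_fit <- lm(y ~ t)                    # ln EDP regressed on ln IM for the training inputs
res      <- residuals(mean_fit)

# Empirical variogram of the residuals, then an ordinary-least-squares fit of a
# Gaussian covariance model with a free nugget term.
emp_vario <- variog(coords = cbind(t, 0), data = res)
cov_fit   <- variofit(emp_vario,
                      ini.cov.pars = c(0.1, 0.5),   # illustrative initial (partial sill, range) values
                      cov.model    = "gaussian",
                      fix.nugget   = FALSE,
                      weights      = "equal")       # equal weights, i.e. ordinary least squares

# Estimated partial sill and range: cov_fit$cov.pars; estimated nugget: cov_fit$nugget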
4.3 Case study structure and ground motion selection
A regular reinforced concrete (RC) moment resisting frame (MRF) is modelled and implemented as a case study
for the current research work. This structure is designed according to the latest Italian seismic code (or NIBC08;
[19]), fully consistent with Eurocode 8 (EC8; [20]), following the High Ductility Class (DCH) rules, and
represents the Special-Code vulnerability class, hereafter called the Special-Code building. Interstorey heights, bay spans and cross-section dimensions for the case-study building are reported in Figure 2. The
considered frame is regular (in plan and elevation). Details regarding the design and the modelling of the
building are available in [21] and [17].
Static pushover (PO) analysis is carried out by applying increments of lateral loads to the side nodes of the
structure. These lateral loads are proportionally distributed with respect to the interstorey heights (triangular
distribution). The PO analysis is carried out until a predefined target displacement is reached, corresponding to
the expected collapse state.
Figure 2 - Elevation dimensions and member cross-sections of the Special-Code RC frame.
Table 1 summarizes the structural and dynamic properties associated with the case-study building model, namely
mass of the system m, fundamental period T1 as well as the modal mass participation at the first mode of
vibration.
Table 1 - Structural and dynamic properties of the case-study building.
Building Type | Total mass, m (tn) | T1 (s) | Modal mass participation (1st mode)
Special-Code  | 172.9              | 0.506  | 92.8 %
Figure 3 shows the static PO curve associated with the studied building, and is reported in terms of top center-of-
mass displacement divided by the total height of the structure (i.e., the roof drift ratio, RDR) along the horizontal
axis of the diagram, and base shear divided by the building’s seismic weight along the vertical axis (i.e., base
shear coefficient).
A set of 150 unscaled ground motion records from the SIMBAD database (Selected Input Motions for
displacement-Based Assessment and Design; [22]), is used here. SIMBAD includes a total of 467 tri-axial
accelerograms, consisting of two horizontal (X-Y) and one vertical (Z) components, generated by 130 worldwide
seismic events (including main shocks and aftershocks). In particular, the database includes shallow crustal
earthquakes with moment magnitudes (Mw) ranging from 5 to 7.3 and epicentral distances R ≤ 35 km. A subset
of 150 records is considered here to provide a significant number of strong-motion records of engineering
relevance for the applications presented in this paper. These records are selected by first ranking the 467 records
in terms of their PGA values (by using the geometric mean of the two horizontal components) and then keeping
the component with the largest PGA value (for the 150 stations with highest mean PGA).
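The ranking step can be sketched as follows, assuming a hypothetical data frame records with one row per station and columns pga_x and pga_y holding the PGA of the two horizontal components:

# Rank the records by the geometric mean of the horizontal PGAs and keep the top 150.
records$pga_gm   <- sqrt(records$pga_x * records$pga_y)     # geometric mean of the two components
top150           <- head(records[order(records$pga_gm, decreasing = TRUE), ], 150)
# For each selected station, keep the horizontal component with the larger PGA:
top150$component <- ifelse(top150$pga_x >= top150$pga_y, "X", "Y")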
Nonlinear Dynamic Procedures (NDP), and particularly NLTHA, are utilized to estimate the seismic response of the studied frame, representing a high level of analysis fidelity (hereafter, high fidelity).
Figure 3 - Static PO curve for the Special-Code RC frame building.
In the current study, the focus is placed on a deformation-based EDP, defined as the maximum (over all storeys) peak interstorey drift ratio (denoted as MIDR).
With regard to the IM input, previous studies agree that more informative IMs (i.e. vector-valued and advanced scalar IMs) are more suitable for predicting the seismic response of a structure, and ultimately the seismic fragility [17,23], as they also perform well under most IM selection criteria. Therefore the advanced scalar IM INp [24] is used herein. INp, which is based on the spectral ordinate Sa(T1) and the parameter Np, is defined as:
$I_{Np} = S_a(T_1)\, N_p^{\alpha}$    (8)

where the $\alpha$ parameter is assumed to be $\alpha = 0.4$ based on the tests conducted by the authors, and $N_p$ is defined as:

$N_p = \frac{S_{a,avg}(T_1, \dots, T_N)}{S_a(T_1)}$    (9)
where $T_N$ corresponds to the maximum period of interest and lies within a range of $2T_1$ to $2.5T_1$, as suggested by the authors.
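Based on these definitions, a minimal sketch for computing INp from a response spectrum is given below; the spectrum accessor sa() is a hypothetical function, and Sa,avg is taken here as the geometric mean of the spectral ordinates over the period range, following [24]:

# Compute the advanced scalar IM I_Np of equations (8)-(9).
# 'sa' is an assumed function returning the spectral acceleration at a given period,
# e.g. obtained by interpolating a computed response spectrum.
compute_INp <- function(sa, T1, TN = 2 * T1, alpha = 0.4, n_periods = 10) {
  periods <- seq(T1, TN, length.out = n_periods)
  sa_vals <- sapply(periods, sa)
  sa_avg  <- exp(mean(log(sa_vals)))     # geometric mean Sa,avg(T1, ..., TN)
  Np      <- sa_avg / sa(T1)             # eq. (9)
  sa(T1) * Np^alpha                      # eq. (8): I_Np = Sa(T1) * Np^alpha
}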
5. Evaluation of emulation for probabilistic seismic demand against cloud analysis
All the steps discussed in the Methodology section are applied here in order to build the emulator, which provides estimates, and the associated variances, for the conditional EDP|IM distribution. The outcomes of the emulation approach are then compared to those of the standard approach, namely the cloud method.
The results of the nonlinear dynamic analysis of the Special-Code building, expressed in terms of MIDR and the advanced IM INp, are presented here. The reason for selecting this EDP:IM combination is based on previous studies carried out by the authors regarding the selection of optimal IMs for the fragility analysis of mid-rise RC buildings [17].
Figure 4 illustrates the mean estimations and the associated 95% confidence intervals of the emulation and cloud
approach, utilizing the full sample for the two “assumed realities” defined in Section 4.1. Table 2 shows the
training inputs of the emulator, namely the sample size (stratified sampling used here) and the covariance model,
in conjunction with the metrics used to assess the performance of the emulation and the cloud approach, for the
two “assumed realities” investigated. A close examination of the results presented in Table 2 shows that the coverage probability for the emulator always closely matches the nominal coverage probability, with ±0.5% difference for both generated true functions, while the corresponding difference for the cloud case reaches up to +3.3%. This means that the cloud approach produces conservative, i.e. wider, confidence intervals. As a result, the estimated average length of the emulation approach is always less
than the respective average length of the cloud method, indicating that the predictions of the emulator are less uncertain. The reduction in average length for the emulator (compared to the cloud method) is between 4.0-6.7% for the larger sample sizes (i.e. the full and 50-point samples), and between 10.5-13.6% for the smaller sample sizes (the 25- and 10-point samples). Note that the largest differences in average length are observed for Reality 2, showing the capability of the emulator to better estimate the EDP|IM relationship in cases that do not necessarily follow a favorable pattern (e.g. a power law). Lastly, insignificant differences are obtained in terms of the MSE computed for the two models, with the difference never exceeding ±1.7% for Reality 2 and ±3.3% for Reality 1.

Figure 4 - Mean estimations and associated 95% confidence intervals of the emulation (left panels) and cloud (right panels) approaches for Reality 1 (top row) and Reality 2 (bottom row).
Table 2 - Emulator’s inputs and metrics to assess the performance of the emulation and the cloud approach.
Sample size | Covariance model | Reality | MSE (Emulator) | MSE (Cloud) | Avg. Length (Emulator) | Avg. Length (Cloud) | Coverage % (Emulator) | Coverage % (Cloud)
Full | Gaussian | 1 | 0.397 | 0.410 | 1.657 | 1.722 | 95.00 | 98.33
50   | Gaussian | 1 | 1.004 | 1.019 | 1.653 | 1.743 | 95.18 | 97.44
25   | Gaussian | 1 | 4.245 | 4.159 | 1.689 | 1.866 | 94.60 | 96.72
10   | Gaussian | 1 | 4.352 | 4.238 | 1.696 | 1.886 | 94.89 | 97.08
Full | Gaussian | 2 | 4.717 | 4.692 | 1.989 | 2.087 | 95.00 | 98.33
50   | Gaussian | 2 | 5.050 | 5.035 | 1.982 | 2.114 | 95.14 | 97.44
25   | Gaussian | 2 | 8.931 | 9.086 | 1.996 | 2.268 | 94.54 | 97.01
10   | Gaussian | 2 | 8.555 | 8.513 | 2.026 | 2.284 | 94.45 | 96.65
6. Conclusions and future work
In this study, a new statistical emulation approach is introduced for estimating the mean and the associated variance of the conditional EDP|IM distribution, overcoming some limitations of the standard
cloud method. The capabilities of this new approach are investigated when trying to predict different “assumed
realities” and the results are compared against the standard cloud analysis results.
To this aim, a RC mid-rise building representing the Special-Code vulnerability class of the Italian building
stock is used as the case-study structure. Artificially generated IM:EDP relationships, derived from a sample of real analysis data obtained from the aforementioned case-study building, are considered. In addition, various sampling strategies and covariance structures are also explored in order to shed light on the capabilities of the emulator under different conditions. The random sampling strategy showed significant sensitivity to the selection of training inputs, especially for the smaller sample sizes; as a result, stratified sampling is used here.
In the presented case study, the coverage probability for the proposed emulation model always closely matches the nominal coverage probability, while also yielding a significant reduction of the average interval length compared to the cloud method. In particular, for the Reality 2 case the average length reduction reaches 13.6%, highlighting the flexibility of the emulator over the standard approach.
As part of an ongoing research effort, the proposed methodology has also been applied to case studies associated with numerous input variations. These variations include the use of buildings of a different vulnerability class (i.e. Pre-Code buildings), the choice of additional scalar IMs (e.g. peak ground acceleration (PGA) and spectral acceleration Sa(T1), amongst others), and the implementation of different levels of analysis fidelity (i.e. low-fidelity analysis, based on simplified analysis methods, such as the variant of the capacity spectrum method, FRACAS [25]). Kriging exploits the relationship between neighboring points of the data sample, which explains why this approach is better suited to advanced IMs, for which the scatter is reduced. However, applying the emulator to a wider scatter (e.g. low-fidelity results expressed in terms of peak ground parameters) may not always provide improvements that justify the additional work required. As a result, future steps of this work will explore the full potential of this approach by employing a Bayesian framework to account for the uncertainty related to the emulator’s covariance parameters. Furthermore, additional validation will be carried out, investigating more rigorous ground motion selection procedures and additional building case studies (e.g. RC buildings with infills).
Acknowledgements
Funding for this research work has been provided by the Engineering and Physical Sciences Research Council
(EPSRC) in the UK and the AIR Worldwide Ltd through the Urban Sustainability and Resilience program at
University College London.
7. References
[1] Deierlein GG, Krawinkler H, Cornell CA. A framework for performance-based earthquake engineering. Pacific
Conf Earthq Eng 2003;273:1–8. doi:10.1061/9780784412121.173.
[2] Porter KA, Farokhnia K, Cho IH, Grant D, Jaiswal K, Wald D. Global Vulnerability Estimation Methods for the Global Earthquake Model. 15th World Conf. Earthq. Eng., Lisbon, Portugal: 2012, p. 24–8.
[3] D’Ayala D, Meslem A, Vamvatsikos D, Porter K, Rossetto T, Crowley H, et al. Guidelines for Analytical
Vulnerability Assessment of low-mid-rise Buildings – Methodology. Pavia, Italy: GEM Foundation; 2013.
[4] Bazzurro P, Cornell CA, Shome N, Carballo JE. Three proposals for characterizing MDOF nonlinear seismic response. J Struct Eng 1998;124:1281–9. doi:10.1061/(ASCE)0733-9445(1998)124:11(1281).
[5] Luco N, Cornell CA. Seismic drift demands for two SMRF structures with brittle connections. Structural Engineering World Wide, Elsevier Science Ltd, Oxford, England: 1998, paper T158-3.
[6] Jalayer F. Direct probabilistic seismic analysis: implementing non-linear dynamic assessments. Stanford University,
2003.
[7] Modica A, Stafford PJ. Vector fragility surfaces for reinforced concrete frames in Europe. Bull Earthq Eng
2014;12:1725–53. doi:10.1007/s10518-013-9571-z.
[8] Ibarra LF, Krawinkler H. Global Collapse of Frame Structures under Seismic Excitations. Berkeley, California: 2005.
[9] Cornell CA, Jalayer F, Hamburger RO, Foutch DA. Probabilistic Basis for 2000 SAC Federal Emergency
Management Agency Steel Moment Frame Guidelines. J Struct Eng 2002;128:526–33. doi:10.1061/(ASCE)0733-
9445(2002)128:4(526).
[10] Vamvatsikos D, Cornell CA. Incremental dynamic analysis. Earthq Eng Struct Dyn 2002;31:491–514.
doi:10.1002/eqe.141.
[11] Jalayer F, De Risi R, Manfredi G. Bayesian Cloud Analysis: efficient structural fragility assessment using linear
regression. Bull Earthq Eng 2014;13:1183–203. doi:10.1007/s10518-014-9692-z.
[12] O’Hagan A. Bayesian analysis of computer code outputs: A tutorial. Reliab Eng Syst Saf 2006;91:1290–300.
doi:10.1016/j.ress.2005.11.025.
[13] Kennedy MC, O’Hagan A. Bayesian Calibration of Computer Models. J R Stat Soc Ser B (Statistical Methodology) 2001;63:425–64. doi:10.1111/1467-9868.00294.
[14] Saltelli A, Chan K, Scott E. Sensitivity analysis. Wiley series in probability and statistics. Chichester: Wiley; 2000.
[15] Sacks J, Welch WJ, Mitchell TJ, Wynn HP. Design and Analysis of Computer Experiments. Stat Sci 1989;4:409–23.
[16] R Core Team. R: A language and environment for statistical computing. 2014.
[17] Minas S, Galasso C, Rossetto T. Spectral Shape Proxies and Simplified Fragility Analysis of Mid-Rise Reinforced Concrete Buildings. 12th Int. Conf. Appl. Stat. Probab. Civ. Eng. ICASP12, Vancouver, Canada: 2015, p. 1–8.
[18] Ribeiro Jr P, Diggle P. geoR: A package for geostatistical analysis. R-NEWS 2001;1:15–8.
[19] Decreto Ministeriale del 14/01/2008. Norme Tecniche per le Costruzioni. Rome: Gazzetta Ufficiale della
Repubblica Italiana, 29.; 2008.
[20] EN 1998-1. Eurocode 8: Design of structures for earthquake resistance – Part 1: General rules, seismic actions and
rules for buildings. The European Union Per Regulation 305/2011, Directive 98/34/EC, Directive2004/18/EC; 2004.
[21] De Luca F, Elefante L, Iervolino I, Verderame GM. Strutture esistenti e di nuova progettazione : comportamento
sismico a confronto. Anidis 2009 XIII Convegno - L’ Ing. Sismica Ital., Bologna: 2009.
[22] Smerzini C, Galasso C, Iervolino I, Paolucci R. Ground motion record selection based on broadband spectral
compatibility. Earthq Spectra 2013;30:1427–48. doi:10.1193/052312EQS197M.
[23] Ebrahimian H, Jalayer F, Lucchini A, Mollaioli F, Manfredi G. Preliminary ranking of alternative scalar and vector
intensity measures of ground shaking. Bull Earthq Eng 2015;13:2805–40. doi:10.1007/s10518-015-9755-9.
[24] Bojórquez E, Iervolino I. Spectral shape proxies and nonlinear structural response. Soil Dyn Earthq Eng
2011;31:996–1008. doi:10.1016/j.soildyn.2011.03.006.
[25] Rossetto T, Gehl P, Minas S, Galasso C, Duffour P, Douglas J, et al. FRACAS: A capacity spectrum approach for
seismic fragility assessment including record-to-record variability. Eng Struct 2016;125:337–48.
doi:10.1016/j.engstruct.2016.06.043.