This study uses Relevance Vector Machine (RVM) regression to develop a probabilistic model for the average horizontal component of 5%-damped earthquake response spectra. Unlike conventional models, the proposed approach does not require a functional form; it constructs the model from a set of predictive variables and a set of representative ground motion records. The RVM uses Bayesian inference to determine the confidence intervals, instead of estimating them from the mean squared errors on the training set. An example application using three predictive variables (magnitude, distance, and fault mechanism) is presented for sites with shear wave velocities ranging from 450 m/s to 900 m/s. The predictions from the proposed model are compared to an existing parametric model. The results demonstrate the validity of the proposed model and suggest that it can be used as an alternative to conventional ground motion models. Future studies will investigate the effect of additional predictive variables on the predictive performance of the model.
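As a rough illustration of the kind of model the abstract describes (not the study's actual implementation), the sketch below builds an RVM-style sparse Bayesian regression: an RBF kernel basis over the training records is fitted with scikit-learn's ARDRegression, whose automatic relevance determination prior is the mechanism underlying the RVM, and predictions come with Bayesian standard deviations. The predictor ranges, kernel width, and synthetic data are assumptions for illustration only.

```python
# Minimal RVM-style sketch: sparse Bayesian regression over an RBF kernel basis.
# The predictor ranges and the synthetic targets are illustrative placeholders,
# not the study's ground motion data set.
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = np.column_stack([
    rng.uniform(5.0, 7.5, 200),    # magnitude (assumed range)
    rng.uniform(5.0, 100.0, 200),  # distance in km (assumed range)
    rng.integers(0, 2, 200),       # fault mechanism flag (assumed coding)
])
y_train = rng.normal(size=200)     # placeholder for log spectral acceleration

scaler = StandardScaler().fit(X_train)
Phi = rbf_kernel(scaler.transform(X_train), scaler.transform(X_train), gamma=0.5)

# The ARD prior prunes most kernel basis functions, leaving the "relevance vectors".
rvm = ARDRegression().fit(Phi, y_train)

# Bayesian predictive mean and standard deviation for a new record,
# rather than an interval derived from training-set MSE.
X_new = np.array([[6.5, 20.0, 1.0]])
phi_new = rbf_kernel(scaler.transform(X_new), scaler.transform(X_train), gamma=0.5)
mean, std = rvm.predict(phi_new, return_std=True)
print(mean[0], mean[0] - 1.96 * std[0], mean[0] + 1.96 * std[0])
```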
This document proposes a new model for segmenting overlapping, near-circular objects in images. It combines a multi-layer nonlocal phase field prior model that favors configurations of possibly overlapping near-circular shapes, with a new additive image likelihood model that accounts for intensities summing in overlapping regions. The full posterior energy is minimized using gradient descent to obtain a maximum a posteriori estimate of the object instances. The model is tested on synthetic and fluorescence microscopy cell nucleus image data.
This paper proposes a parameterized model order reduction technique for efficient global sensitivity analysis of coupled coils over a design space. It uses parameterized models of the electromagnetic matrices and Krylov matrices from the original and adjoint systems, derived using interpolation. Numerical results confirm the efficiency and accuracy of the proposed method for sensitivity analysis across the entire design space of interest.
11 Construction productivity and cost estimation using artificial neural networks
This chapter discusses using artificial neural networks (ANNs) to estimate construction project productivity and costs. ANNs can learn from previous examples to predict outputs like cost and schedule based on input data. The chapter provides an overview of ANNs and examples of their use in construction cost and duration estimation. It then presents a framework for developing ANNs for productivity and cost predictions, and provides a detailed case study applying ANNs to estimate productivity of precast installation activities. The case study ANN was able to predict installation times with an average error of around 20%, demonstrating the potential of ANNs for aiding construction cost and schedule estimates.
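A minimal sketch of how such an ANN estimator might be set up, assuming a small multilayer perceptron and invented predictor names (crew size, element weight, hoisting height); the chapter's actual case-study data and network are not reproduced.

```python
# Sketch of an ANN productivity estimator; the feature names and the synthetic
# data are assumptions for illustration, not the chapter's case-study data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(4, 12, 300),        # crew size
    rng.uniform(1.0, 10.0, 300),     # element weight (t)
    rng.uniform(3.0, 30.0, 300),     # hoisting height (m)
])
# Placeholder installation time (min) with noise, standing in for recorded data.
y = 5 + 0.8 * X[:, 1] + 0.2 * X[:, 2] - 0.3 * X[:, 0] + rng.normal(0, 1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mape = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100
print(f"mean absolute percentage error: {mape:.1f}%")
```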
Expert system design for elastic scattering neutrons optical model using BPNN
In this paper, an expert system is designed to obtain trained formulae for the optical model parameters used in the elastic scattering of neutrons from the light nucleus 7Li, over the energy range 1 to 20 MeV. A simple algorithm is used to design the expert system, while a multi-layer back-propagation neural network (BPNN) is applied to train and test the data used in the model. The resulting group of formulae constitutes a simple expert system, derived from the governing model, that predicts the critical parameters usually obtained from complicated computer codes. This expert system may be used for nuclear reaction yields of both fission and fusion nature, giving results close to those of the full model.
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The selection of inputs is one of the most substantial components of classification algorithms for data mining and pattern recognition problems, since even the best classifier will perform badly if the inputs are not selected well. Big data and computational complexity are the main causes of poor performance and low accuracy for classical classifiers; in other words, the complexity of a classifier method is inversely proportional to its classification efficiency. For this purpose, two hybrid classifiers have been developed by cascading type-1 and type-2 fuzzy c-means clustering with a classifier. In the proposed classifiers, a large number of data points are reduced by fuzzy c-means clustering before being applied to the classifier algorithm as inputs. The aim of this study is to investigate the effect of fuzzy clustering on well-known and useful classifiers such as artificial neural networks (ANN) and support vector machines (SVM). The positive effects of the proposed algorithms are then investigated on different data sets.
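The following sketch illustrates the underlying idea under stated assumptions: each class is compressed to a few prototypes with a small NumPy implementation of fuzzy c-means, and an SVM is then trained on the reduced set. The data set (Iris), the fuzzifier m = 2, and three prototypes per class are illustrative choices; the paper's type-2 variant is not reproduced.

```python
# Sketch of the hybrid idea: compress each class with fuzzy c-means, then train
# an SVM on the cluster centres. FCM is implemented directly in NumPy.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                     # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                    # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers

X, y = load_iris(return_X_y=True)
centers, labels = [], []
for cls in np.unique(y):                                  # reduce each class to 3 prototypes
    centers.append(fuzzy_cmeans(X[y == cls], c=3))
    labels.extend([cls] * 3)

svm = SVC(kernel="rbf").fit(np.vstack(centers), labels)
print("accuracy on full data:", svm.score(X, y))
```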
Black-box modeling of nonlinear system using evolutionary neural NARX model
Nonlinear systems with uncertainty and disturbance are very difficult to model using a mathematical approach, so a black-box modeling approach that requires no prior knowledge is necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network, formed by combining a neural network with a modified differential evolution algorithm, is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling approach are tested on a piezoelectric actuator SISO system and an experimental quadruple-tank MIMO system.
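A minimal sketch of the NARX structure only: lagged outputs and inputs form the regressor, and a small MLP maps them to the next output. The toy plant, the lag orders, and the use of the standard gradient-based MLP solver (rather than the paper's modified differential evolution) are assumptions.

```python
# Sketch of a neural NARX model: the next output is predicted from lagged
# outputs and inputs. The paper's modified differential evolution optimizer
# is not reproduced; the toy plant and lag orders (na=2, nb=2) are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N, na, nb = 1000, 2, 2
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(2, N):                                   # toy nonlinear plant
    y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + 0.5 * np.tanh(u[k-1]) + 0.01 * rng.normal()

# Build the NARX regressor matrix [y(k-1..k-na), u(k-1..k-nb)] -> y(k)
lag = max(na, nb)
Phi = np.column_stack([y[lag-i:N-i] for i in range(1, na+1)] +
                      [u[lag-i:N-i] for i in range(1, nb+1)])
target = y[lag:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(Phi[:800], target[:800])
print("one-step-ahead R^2 on held-out data:", model.score(Phi[800:], target[800:]))
```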
A Computationally Efficient Algorithm to Solve Generalized Method of Moments ...
The generalized method of moments (GMM) estimating function enables one to estimate regression parameters consistently and efficiently. However, it involves one major computational problem: in complex data settings, solving the GMM estimating function via the Newton-Raphson technique often gives rise to non-invertible Jacobian matrices, so parameter estimation becomes unreliable and computationally inefficient. To overcome this problem, we propose a secant method based on vector divisions, instead of the usual Newton-Raphson technique, to estimate the regression parameters. The new method demonstrates a decrease in the number of non-convergent iterations compared to the Newton-Raphson technique and provides reliable estimates.
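The sketch below only contrasts the two families of solvers on a stand-in estimating equation (the ordinary least squares normal equations); the paper's GMM setting and its specific vector-division secant update are not reproduced. Newton-Raphson forms and solves with the Jacobian at every step, while a Broyden-type secant solver from SciPy avoids it.

```python
# Newton-Raphson vs. a Jacobian-free secant-type solver on a simple estimating
# equation g(beta) = X'(y - X beta) = 0 (OLS, used here only as a stand-in).
import numpy as np
from scipy import optimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=200)

def g(beta):                          # estimating function
    return X.T @ (y - X @ beta)

# Newton-Raphson: requires the Jacobian J = -X'X and a linear solve each step.
beta = np.zeros(3)
J = -X.T @ X
for _ in range(20):
    beta = beta - np.linalg.solve(J, g(beta))

# Secant-type alternative: Broyden's method never forms or inverts the Jacobian.
sol = optimize.root(g, np.zeros(3), method="broyden1")
print(beta, sol.x)
```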
VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING
In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of its constituent parts. The proposed approach combines local information of individual body parts with spatial constraints imposed by neighboring parts. The movement of the parts of the articulated body is modeled with local displacement information from the Markov network and global information from neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked and the number of Monte Carlo cycles) on system accuracy and show that our variational Monte Carlo approach achieves better efficiency and effectiveness than other methods on a number of real-time video datasets containing single targets.
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
This paper presents an approach based on an aggregated predictor, formed by multiple versions of a multilayer neural network with a back-propagation optimization algorithm, to help the engineer obtain a list of the most appropriate well-test interpretation models for a given set of pressure/production data. The proposed method consists of three stages: (1) data decorrelation through principal component analysis, to reduce the covariance between the variables and the dimension of the input layer of the artificial neural network; (2) bootstrap replicates of the learning set, where the data are repeatedly sampled with a random split into training sets and these are used as new learning sets; and (3) automatic reservoir model identification through the aggregated predictor, formed by a plurality vote when predicting a new class. The method is described in detail to ensure successful replication of the results. The required training and test datasets were generated using analytical solution models; 600 samples were used: 300 for training, 100 for cross-validation, and 200 for testing. Different network structures were tested during this study to arrive at an optimum network design. We observe that the single-net methodology always brings about confusion in selecting the correct model, even though the training results for the constructed networks are close to 1. We also observe that principal component analysis is an effective strategy for reducing the number of input features, simplifying the network structure, and lowering the training time of the ANN. The results show that the proposed model provides better performance when predicting new data, with a coefficient of correlation of approximately 95% compared with 80% for a previous approach. The combination of PCA and ANN is more stable and determines more accurate results with less computational complexity than was previously feasible; clearly, the aggregated predictor is more stable and produces fewer misclassifications than the previous approach.
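A compact sketch of the three-stage pipeline using scikit-learn, with synthetic data standing in for the analytically generated well-test responses: PCA decorrelates and reduces the inputs, and a BaggingClassifier over an MLP provides the bootstrap replicates and the plurality vote.

```python
# Sketch of the three-stage pipeline: PCA decorrelation, bootstrap replicates of
# the learning set, and a plurality-vote aggregated predictor. The synthetic data
# merely stand in for the analytically generated well-test responses.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=200, random_state=0)

model = make_pipeline(
    PCA(n_components=10),                                # stage 1: decorrelate / reduce inputs
    BaggingClassifier(                                   # stages 2-3: bootstrap + plurality vote
        estimator=MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        n_estimators=10, random_state=0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```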
This document presents a new approach for multiclass image segmentation and categorization using Bayesian networks and spatial Markov kernels. It first constructs an over-segmented image and Bayesian network to model relationships between image elements. Interactive segmentation is performed to match pixels to an outline provided by the user. The segmented image is then categorized using a spatial Markov kernel algorithm based on visual keywords assigned to image blocks. The approach achieves 93.5% accuracy on test images. It provides a probabilistic way to model image segmentation and allows new knowledge to be incorporated through the Bayesian network framework.
The step construction of penalized spline in electrical power load data
Electricity is one of the most pressing needs of human life. It is required not only for lighting but also to carry out the daily social and economic activities of the community. The problem is that the supply of electricity is currently limited, resulting in an energy crisis. Electrical power cannot be stored, so a good electricity demand forecast is vital. Accordingly, we conducted an analysis based on the power load, applying penalized splines (P-splines), a powerful and widely applicable smoothing technique. In this paper, we show that a penalized spline of degree 1 (linear) with 8 knots is the best model, since it has the lowest GCV (Generalized Cross Validation) value. This model is a compelling choice for predicting the electric power load, as evidenced by a Mean Absolute Percentage Error (MAPE = 0.013) well below 10%.
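A minimal NumPy sketch of a degree-1 penalized spline with GCV-based smoothing selection, assuming a truncated-line basis with 8 interior knots and a simulated load series; it illustrates the mechanics only, not the paper's data or exact basis.

```python
# Degree-1 penalized spline with GCV selection: a truncated linear basis with
# 8 interior knots is penalized by a ridge term on the knot coefficients, and
# the smoothing parameter minimizing GCV is retained. The "load" series is
# simulated and stands in for the paper's electrical power load data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
load = 50 + 10 * np.sin(2 * np.pi * t) + 5 * np.sin(6 * np.pi * t) + rng.normal(0, 1.5, 200)

knots = np.linspace(0, 1, 10)[1:-1]                      # 8 interior knots
B = np.column_stack([np.ones_like(t), t] + [np.clip(t - k, 0, None) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))             # penalize knot coefficients only

def fit(lam):
    A = B.T @ B + lam * D
    beta = np.linalg.solve(A, B.T @ load)
    edf = np.trace(np.linalg.solve(A, B.T @ B))           # trace of the hat matrix
    rss = np.sum((load - B @ beta) ** 2)
    gcv = len(t) * rss / (len(t) - edf) ** 2
    return beta, gcv

lams = 10.0 ** np.arange(-4, 4)
best_lam = min(lams, key=lambda lam: fit(lam)[1])
beta, gcv = fit(best_lam)
mape = np.mean(np.abs(load - B @ beta) / load)
print(f"lambda={best_lam:g}, GCV={gcv:.3f}, MAPE={mape:.4f}")
```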
In this paper, block-oriented models with linear parts based on Laguerre functions are used to approximate the dynamics of a cone crusher. An adaptive recursive least squares algorithm is used to identify the Laguerre model. Various Hammerstein, Wiener, and Hammerstein-Wiener structures are tested, and the MATLAB simulation results are compared. The mean square error is used for model validation. It was found that the Hammerstein-Wiener model with orthonormal basis functions improves the quality of the approximation of the plant dynamics: its mean square error is 11% on average over the considered range of external disturbance amplitudes. The analysis also showed that the Wiener model cannot provide sufficient approximation accuracy for the cone crusher dynamics; during the process it is unstable due to its high sensitivity to disturbances at the output. The Hammerstein-Wiener model will be used in the design of a nonlinear model predictive control application.
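A sketch of the linear identification machinery under stated assumptions: a bank of discrete Laguerre filters (a first-order low-pass section followed by all-pass sections) generates the regressors, and recursive least squares estimates the output weights. The Laguerre pole, model order, and toy plant are invented, and the Hammerstein/Wiener static nonlinear blocks are not reproduced.

```python
# Discrete Laguerre filter bank plus recursive least squares (RLS) identification.
# The pole a=0.8, the order n=6, and the toy plant are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter

def laguerre_states(u, a, n):
    """Outputs of n orthonormal discrete Laguerre filters driven by u."""
    x = lfilter([np.sqrt(1 - a**2)], [1, -a], u)          # L1: low-pass section
    cols = [x]
    for _ in range(n - 1):                                 # Lk: cascaded all-pass sections
        x = lfilter([-a, 1], [1, -a], x)
        cols.append(x)
    return np.column_stack(cols)

def rls(Phi, y, lam=0.99):
    """Recursive least squares with forgetting factor lam."""
    theta, P = np.zeros(Phi.shape[1]), 1e4 * np.eye(Phi.shape[1])
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (yk - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 2000)
y = lfilter([0.2, 0.1], [1, -1.3, 0.4], u) + 0.01 * rng.normal(size=2000)  # toy plant

Phi = laguerre_states(u, a=0.8, n=6)
theta = rls(Phi, y)
print("weights:", np.round(theta, 3), " MSE:", np.mean((y - Phi @ theta) ** 2))
```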
Iterative Determinant Method for Solving Eigenvalue Problems
This document discusses resource allocation for dense millimeter wave (mmWave) cellular networks assisted by device-to-device (D2D) communication. It formulates the problem into two sub-problems: 1) joint access and backhaul resource allocation and 2) joint D2D access and forwarding link resource allocation. It proposes using game theory to model the sub-problems as non-cooperative games and develops centralized and decentralized algorithms to obtain resource allocation solutions. Simulation results show the algorithms can effectively mitigate the impact of blockages on network performance.
Robust Evolutionary Approach to Mitigate Low Frequency Oscillation in a Multi...
This paper proposes a new optimization algorithm, the Modified Shuffled Frog Leaping Algorithm (MSFLA), for the optimal design of power system stabilizer (PSS) controllers. The design problem of the proposed controller is formulated as an optimization problem, and MSFLA is employed to search for the optimal controller parameters. An eigenvalue-based objective function, reflecting a combination of damping factor and damping ratio, is optimized for different operating conditions. The proposed approach is applied to the optimal design of multimachine power system stabilizers. Three different power systems are considered: a Single Machine Infinite Bus (SMIB) system, the four-machine Kundur system, and the ten-machine New England system. The obtained results are evaluated and compared with results obtained by a Genetic Algorithm (GA). Eigenvalue analysis and nonlinear system simulations confirm the effectiveness and robustness of the proposed controller in providing good damping of system oscillations and enhancing the system's dynamic stability under different operating conditions and disturbances.
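As an illustration of what an eigenvalue-based objective of this kind can look like (the paper's exact formulation and the MSFLA search are not reproduced), the sketch below penalizes closed-loop modes whose damping factor or damping ratio violates assumed thresholds.

```python
# Eigenvalue-based objective: penalize modes whose damping factor (sigma) lies
# right of sigma0 or whose damping ratio (zeta) falls below zeta0. The state
# matrix, thresholds, and weighting are toy values, not the paper's setup.
import numpy as np

def pss_objective(A, sigma0=-1.0, zeta0=0.1):
    eig = np.linalg.eigvals(A)
    sigma = eig.real
    zeta = -eig.real / np.abs(eig)
    J1 = np.sum(np.maximum(sigma - sigma0, 0.0) ** 2)    # push modes left of sigma0
    J2 = np.sum(np.maximum(zeta0 - zeta, 0.0) ** 2)      # raise damping ratios above zeta0
    return J1 + 10.0 * J2

A = np.array([[0.0, 1.0], [-25.0, -0.5]])                # toy lightly damped oscillatory mode
print(pss_objective(A))
```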
IDENTIFICATION OF DELAMINATION SIZE AND LOCATION OF COMPOSITE LAMINATE FROM TIME DOMAIN DATA OF MAGNETOSTRICTIVE SENSOR AND ACTUATOR USING ARTIFICIAL NEURAL NETWORK.
This document summarizes an experimental investigation on circular hollow steel columns infilled with lightweight concrete, with and without glass fiber reinforced polymer (GFRP), under cyclic loading. Specimens of different steel tube lengths, cross-sections, thicknesses, and lightweight concrete grades were tested. An artificial neural network was used to predict the ultimate load capacity and axial shortening based on the experimental results. The neural network predictions were found to be in acceptable agreement with the experimental results based on linear regression analysis.
The document discusses modifications to the PC algorithm for constraint-based causal structure learning that remove its order-dependence, which can lead to highly variable results in high-dimensional settings; the modified algorithms are order-independent while maintaining consistency under the same conditions, and simulations and analysis of yeast gene expression data show they improve performance over the original PC algorithm in high-dimensional settings.
Metric Projections to Identify Critical Points in Electric Power Systems
The identification of weak nodes and branches has been addressed with different analysis techniques, such as sensitivity analysis, modal analysis, and the minimum singular value, applied to the Jacobian matrix of the load flow. We present an application of metric projections to identify the weak nodes and branches with the greatest participation in the electric power system.
EXPLOITING THE DISCRIMINATING POWER OF THE EIGENVECTOR CENTRALITY MEASURE TO ...
Graph isomorphism is one of the classical problems of graph theory for which no deterministic polynomial-time algorithm is currently known, but which has not been proven to be NP-complete either. Several heuristic algorithms have been proposed to determine whether or not two graphs are isomorphic (i.e., structurally the same). In this paper, we analyze the discriminating power of well-known centrality measures on real-world network graphs and propose to use the sequence (in either non-decreasing or non-increasing order) of eigenvector centrality (EVC) values of the vertices of two graphs as a precursor step in deciding whether or not to conduct further tests for graph isomorphism. The eigenvector centrality of a vertex in a graph is a measure of the degree of the vertex as well as the degrees of its neighbors. As the EVC values of the vertices are highly distinct, we hypothesize that if the non-increasing (or non-decreasing) orderings of the EVC values of the vertices of two test graphs are not the same, then the two graphs are not isomorphic. If two test graphs have an identical non-increasing EVC sequence, they are declared potentially isomorphic and are confirmed through additional heuristics. We test our hypothesis on random graphs (generated according to the Erdos-Renyi model) and observe the hypothesis to hold: graph pairs with the same non-increasing sequence of EVC values were confirmed to be isomorphic using the well-known Nauty software.
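A short sketch of the prescreening idea using NetworkX, with illustrative graph sizes: the sorted EVC sequences are compared first, and only matching pairs are passed to a full isomorphism test.

```python
# EVC-sequence prescreen: if the sorted eigenvector centrality sequences of two
# graphs differ (beyond rounding), the graphs cannot be isomorphic; if they
# match, a full isomorphism test is still run to confirm.
import networkx as nx

def evc_sequence(G, decimals=8):
    evc = nx.eigenvector_centrality(G, max_iter=1000)
    return tuple(sorted(round(v, decimals) for v in evc.values()))

def maybe_isomorphic(G1, G2):
    if evc_sequence(G1) != evc_sequence(G2):
        return False                        # cheap rejection
    return nx.is_isomorphic(G1, G2)         # confirm with a full test

G1 = nx.gnp_random_graph(30, 0.2, seed=1)
G2 = nx.relabel_nodes(G1, {i: (i * 7) % 30 for i in range(30)})   # isomorphic copy
G3 = nx.gnp_random_graph(30, 0.2, seed=2)
print(maybe_isomorphic(G1, G2), maybe_isomorphic(G1, G3))
```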
Fluoride Recognition of Amide- and Pyrrole-Based Receptors: A Theoretical Study
The novel amide-based receptors N-(anthracen-1-yl)-1H-pyrrole-2-carboxamide (1) and N-(8-(1H-pyrrole-2-carboxamido)anthracen-1-yl)-1H-pyrrole-2-carboxamide (2) have been designed and investigated for their halide ion recognition using density functional theory calculations in the gas and solvent phases. Electronic and thermodynamic properties of the halide-binding complexes of the receptors were investigated. Intermolecular interactions via hydrogen bonding are found in all the studied complexes. The designed receptors 1 and 2 are found to exhibit excellent selectivity for the fluoride ion in both the gas and solvent phases.
Using Adobe Photoshop to Scale the Rate of the Shape’s Deformation By Colour ...
Interior designers are well aware of the significance of colour as a basic element of design for achieving aesthetic and functional demands. This paper presents an empirical model for interpreting the relationship between colour contrast and the highlighting of foreground objects, by measuring deformation values using Adobe Photoshop software. The experiment consisted of practical steps to calculate and analyse the amount of chromatic deformation of the foreground objects, represented by a model of 6 samples. These samples of coloured spots were tested in two phases: with a coloured background based on the Itten colour wheel, and with a neutral “greyscale wheel” background; the results were compared by calculating the amount of distortion through measured angle values. The findings showed that the contrast application is useful as an empirical method for scaling the chromatic interaction between the foreground and the background. T-test analysis emphasized that colour contrast had a significant impact on highlighting or distorting the foreground shapes.
CS0: A Project Based, Active Learning Course
The recruitment and retention of students in early computer programming classes has been the focus of many Computer Science and Informatics programs. This paper describes an initiative underway at Indiana University South Bend to improve the retention rate in computer science and informatics. The approach described in this work is inspired by the SCALE-UP project, and describes the design and implementation of an instructor-guided, active learning environment which allows students to gradually acquire the necessary critical thinking, problem solving, and programming skills required for success in computer science and informatics.
An International Delphi Study to Build a Foundation for an Undergraduate Leve...
This paper is based on research that was conducted to identify and validate the competency areas included in the body of knowledge developed by a consortium of the Society of Manufacturing Engineers (SME), the Association for Manufacturing Excellence (AME), and the Shingo Prize for three levels of certification examinations in lean manufacturing, namely Bronze, Silver, and Gold. The focus of the paper is to delineate the results obtained from the Bronze level certification exam that can be applied to lay a foundation for developing an undergraduate-level curriculum in lean manufacturing. A modified Delphi technique that included a pre-Delphi round followed with three rounds of Delphi questionnaire iterations was used in the study. Seventy-six experts, from six different countries, selected to serve on the Delphi panel rated the importance of competency areas for testing at each level of lean certification using a 5-point Likert scale and provided additional comments. A convergence of opinion on the competency areas provided a basis for validating the body of knowledge. Forty-two prioritized competency areas that emerged from the study were grouped into five major domains: (a) Enablers for Lean, (b) Lean Core Operations, (c) Business Core Operations – Support Functions, (d) Quality, Cost and Delivery Measures, and (e) Business Results.
Cyclic Elastoplastic Large Displacement Analysis and Stability Evaluation of ...
This paper deals with the cyclic elastoplastic large displacement analysis and stability evaluation of steel tubular braces subjected to axial tension and compression. The inelastic cyclic performance of cold-formed steel braces made of circular hollow sections is examined through finite element analysis using the commercial computer program ABAQUS. First some of the most important parameters considered in the practical design and ductility evaluation of steel braces of tubular sections are presented. Then the details of finite element modeling and numerical analysis are described. Later the accuracy of the analytical model employed in the analysis is substantiated by comparing the analytical results with the available test data in the literature. Finally the effects of some important structural and material parameters on cyclic inelastic behavior of steel tubular braces are discussed and evaluated.
Career Road Strategy Model, Complementary of Competency Models and Strategic ...
This document summarizes a journal article that presents a model for strategic career road planning. It begins by discussing traditional job analysis and competency models, noting their limitations in directing career paths according to organizational strategic goals. It then reviews literature on career success and different approaches to career planning. Next, it compares traditional job analysis to competency models, highlighting key differences in their focus, purpose, and descriptive versus prescriptive nature. Finally, it proposes a model that considers an employee's independence and the roles of stakeholders to guide strategic career planning in line with an organization's objectives.
A Novel Finite Element Model for Annulus Fibrosus Tissue Engineering Using Ho...
In this work, a novel finite element model using the mechanical homogenization techniques of the human annulus fibrosus (AF) is proposed to accurately predict relevant moduli of the AF lamella for tissue engineering application. A general formulation for AF homogenization was laid out with appropriate boundary conditions. The geometry of the fibre and matrix were laid out in such a way as to properly mimic the native annulus fibrosus tissue’s various, location-dependent geometrical and histological states. The mechanical properties of the annulus fibrosus calculated with this model were then compared with the results obtained from the literature for native tissue. Circumferential, axial, radial, and shear moduli were all in agreement with the values found in literature. This study helps to better understand the anisotropic nature of the annulus fibrosus tissue, and possibly could be used to predict the structure-function relationship of a tissue-engineered AF.
PEA Analysis: A Perspective Approach to Entrepreneurship Analysis in Engineering
As our technological capabilities increase, engineers have a growing obligation to address market (societal) needs efficiently and sustainably. Such efficiency and sustainability derive from the entrepreneurial aspects of engineering solutions. Therefore, along with being proponents of scientific solutions to societal/market needs, engineers also have to be effective entrepreneurs. The effectiveness of an engineering solution is measured not only by its scientific sophistication but also by its usefulness and contribution towards market (societal) needs. However, engineers seldom undertake entrepreneurial thinking while developing technology solutions, with most effort being expended on scientific sophistication. This is mainly due to the lack of a suitable analysis technique that would enable engineers to undertake such an evaluation. In this paper, a quantified, perspective-based analysis technique for the evaluation of entrepreneurial engineering solutions, called the PEA Analysis method, is presented.
Bender’s Decomposition Method for a Large Two-stage Linear Programming Model
The Linear Programming (LP) method can solve many problems in operations research and obtain optimal solutions, but problems with uncertainties cannot be solved so easily. These uncertainties increase the complexity of the problems, turning them into large-scale LP models. The discussion starts with the mathematical models; the objective is to minimize the number of system variables subject to the decision variable coefficients and their slacks and surpluses. The problems are then formulated as a Two-stage Stochastic Linear (TSL) model incorporating Bender’s decomposition. In the final step, the matrix systems are set up to support the MATLAB implementation of the primal-dual simplex and Bender’s decomposition methods, which are applied to solve the example problem with four assumed numerical sets of decision variable coefficients simultaneously. The primal simplex method failed to determine the results and was computationally time-consuming. A comparison of the ordinary primal, primal-random, and dual methods revealed the advantage of the primal-random method. The results yielded by Bender’s decomposition were shown to be optimal solutions at a high level of confidence.
Fuzzy Logic Modeling Approach for Risk Area Assessment for Hazardous Material...
The assessment of areas at risk from HazMat transportation is very beneficial for planning the management of such areas. We prioritized the affected areas using a HazMat-Risk Area Index (HazMatRAI) developed on the basis of fuzzy logic. The purpose of this development is to reduce the limitations of the assessment criteria, which we found to exist when displaying HazMat-related data represented by the iceberg. We therefore categorized the types of membership function according to the fuzzy set method in order to match the existing criteria, both concrete and abstract. The conditions of fuzzy number and characteristic are used, respectively, so that all risk levels are covered. However, displaying the HazMat-Risk Area Index requires weighting each criterion used for the assessment, since the significance of each level varies. We used Saaty’s Analytic Hierarchy Process (AHP) to establish the weighting values used in the assessment. The result is beneficial for preparing areas with high HazMatRAI values, allowing proper management in critical situations.
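A minimal sketch of the AHP weighting step referred to above: criterion weights are taken as the normalized principal eigenvector of a pairwise comparison matrix, and the consistency ratio is checked against Saaty's random index. The 3x3 comparison matrix is an invented example, not the paper's expert judgments.

```python
# AHP weighting sketch: weights from the principal eigenvector of a pairwise
# comparison matrix, with a consistency check. The matrix below is invented.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],       # pairwise comparisons of three criteria
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```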
Detecting Urban Change of Salem City of Tamil Nadu, India from 1990 to 2010 U...
This document summarizes a study that analyzed urban change in Salem City, Tamil Nadu, India from 1990 to 2010 using geospatial technology. Satellite images from 1990 and 2010 were classified to map land use and detect changes. The results showed moderate urban growth, with expansion towards suburban areas likely due to lower land costs and proximity to industries. Specific land use changes included increases in mining, tanks, scrub forest, commercial/industrial areas, suburban areas, and roads. NDVI and principal component analyses helped validate changes in vegetation and identify new urban areas. Overall, the study highlights how remote sensing can effectively monitor urban development and inform planning.
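A minimal sketch of the NDVI computation used in such change-detection studies, with random arrays standing in for the red and near-infrared bands of the two scenes; the classification and PCA steps of the study are not reproduced.

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel for each date; the
# difference image highlights vegetation loss or gain between 1990 and 2010.
# The random arrays stand in for the actual satellite bands.
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(0)
red_1990, nir_1990 = rng.random((2, 100, 100))
red_2010, nir_2010 = rng.random((2, 100, 100))

change = ndvi(red_2010, nir_2010) - ndvi(red_1990, nir_1990)
print("pixels with NDVI decrease > 0.2:", int((change < -0.2).sum()))
```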
Callogenesis and Organogenesis from Inflorescence Segments of Curcuma Alismat...
This document summarizes research on callogenesis and organogenesis from inflorescence segments of two Curcuma plant species. The main points are:
1) Somatic embryos were induced from young inflorescences of both plant species when cultured in darkness on medium with various concentrations of 2,4-D, with the highest growth occurring on medium with 14 mg/l 2,4-D.
2) When these somatic embryos were cultured on medium with different sugar types to induce shoot organogenesis, the highest percentage of new shoot formation was achieved using 0.25 g/l maltose medium for both plant species.
3) The research aims to develop an efficient method for clonal
A Study of Hole Drilling on Stainless Steel AISI 431 by EDM Using Brass Tube ...drboon
When a deep hole is drilled by EDM, a taper occurs, which is not desired in the process. This research focused on the influence of EDM parameters on the material removal rate (MRR), electrode wear rate (EWR) and hole taper in martensitic stainless steel AISI 431. The factors considered are electrical current, on-time, duty factor, water pressure and servo rate. The experimental results reveal that MRR increases with increasing servo rate. The hole taper increases with increasing electrical current and servo rate, but is inversely proportional to water pressure and duty factor.
Daylighting Analysis of Pedentive Dome’s Mosque Design during Summer Solstice...drboon
In this study, the analysis measures the lighting performance of the single pendentive dome type in mosque designs built during the Ottoman Empire in Istanbul, Turkey. The selected case studies are the Firuzaga and Orhan Gazi Mosques. The study investigates whether the Turkish-style pendentive dome mosque design provides efficient indoor daylighting in the Orhan Gazi Mosque in comparison with the Firuzaga Mosque. The assessment is simulated during the summer solstice, which occurs when the sun reaches its most northern position along the Tropic of Cancer. The study applies simulation analysis using the Autodesk 3DStudio Max Design 2011 programme. A weather data file was used to provide weather information and climate data for the study area. The analysis shows that both mosques have a mostly evenly distributed illuminance level at Scales 3, 4 and 5. The Orhan Gazi Mosque has slightly higher illuminance levels than the Firuzaga Mosque. The study concludes that the pendentive dome mosque design has an effect on indoor daylighting. Having an excellent illuminance level distributed at all locations is one of the crucial reasons why mosques with pendentive dome roofs were built by Ottoman master builders.
Validating Measurements of Perceived Ease Comprehension and Ease of Navigatio...drboon
Many universities are realizing that the implementation and use of online learning tools has become a competitive advantage in addressing actual learning needs. The purpose of this study is to determine the factors that influence users' perceived ease of use of WebCT, an online learning tool. We administered a questionnaire to undergraduate students at a university in Quebec, Canada. The results tend to corroborate that ease of comprehension and ease of navigation are the key factors influencing the perceived ease of use of WebCT. More specifically, the terms used in educational web applications must be as simple and relevant as possible; jargon and technical terms in the wording of link text should be carefully avoided. This research extends the findings of IT adoption studies by specifying what makes an online tool easy to use.
Mathematical Modeling of Thin Layer Drying Kinetics of Tomato Influence of Ai...drboon
The thin-layer drying kinetics of tomato were experimentally investigated in a pilot-scale convective dryer. Experiments were performed at air temperatures of 40, 60 and 80ºC, at three relative humidities of 20%, 40% and 60%, and at a constant air velocity of 2 m/s. In order to select a suitable form of the drying curve, 9 different thin-layer drying models were fitted to the experimental data. The high values of the coefficient of determination and the low values of the sum of squared errors and root mean square error indicated that the Midilli et al. model could satisfactorily describe the drying curve of tomato. The Midilli et al. model had the highest R2 (0.9997), lowest SSE (0.22662) and lowest RMSE (0.0040912) for a relative humidity of 20% and air velocity of 2 m/s; the highest R2 (0.99946), lowest SSE (0.46702) and lowest RMSE (0.0051192) for a relative humidity of 40%; and the highest R2 (0.99952), lowest SSE (0.438982) and lowest RMSE (0.0050188) for a relative humidity of 60%. The Midilli et al. model was therefore found to satisfactorily describe the drying behavior of tomato.
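For illustration, the Midilli et al. form MR(t) = a·exp(−k·tⁿ) + b·t can be fitted with a few lines of Python; the moisture-ratio data, initial guesses and parameter bounds below are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli et al. thin-layer drying model: MR = a*exp(-k*t**n) + b*t."""
    return a * np.exp(-k * t**n) + b * t

# Synthetic drying data (hours vs. moisture ratio) -- illustrative only.
t_data  = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
mr_data = np.array([1.00, 0.78, 0.62, 0.40, 0.27, 0.18, 0.09, 0.05])

p0 = [1.0, 0.5, 1.0, 0.0]   # initial guess for a, k, n, b (assumed)
params, _ = curve_fit(midilli, t_data, mr_data, p0=p0,
                      bounds=([0.0, 0.0, 0.1, -1.0], [2.0, 5.0, 3.0, 1.0]))

mr_fit = midilli(t_data, *params)
rmse = np.sqrt(np.mean((mr_data - mr_fit) ** 2))
r2 = 1.0 - np.sum((mr_data - mr_fit) ** 2) / np.sum((mr_data - mr_data.mean()) ** 2)
print("a, k, n, b:", params)
print("RMSE:", rmse, " R^2:", r2)
```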
An Experimental Evaluation of Energy Saving in a Split-type Air Conditioner w...drboon
1) The study experimentally evaluates the energy saving potential of an air conditioner retrofitted with various evaporative cooling systems.
2) Adding evaporative cooling systems, such as a cellulose cooling pad, water curtain, or water spray, decreases the temperature of air entering the condenser unit.
3) This lower inlet air temperature improves the system performance significantly, increasing COPR by 6-48% and decreasing electrical consumption by 4-15%, compared to the air conditioner without evaporative cooling.
Oxygen Excess Control of Industrial Combustion Through The Use of Automotive ...drboon
The objective of this study is to present a simple and low-cost method of determining the oxygen concentration in flue gases. The method makes use of the Lambda sensor, a part of the fuel injection system of modern automobile engines. A combustion chamber was fitted with a heated Lambda sensor installed in its chimney. Residual oxygen concentrations in the flue gases were estimated using the Nernst equation and compared to a reference combustion analyser. The observed average deviation in the measurements was about 5%, which is within the range of interest for industrial combustion.
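The voltage-to-oxygen conversion can be sketched as follows, assuming the standard Nernst relation for a zirconia (Lambda) sensor, E = (R·T/4F)·ln(pO2,ref / pO2,exhaust), with ambient air as the reference gas; the voltage and temperature values are illustrative, not measurements from the study.

```python
import math

R = 8.314       # J/(mol*K), universal gas constant
F = 96485.0     # C/mol, Faraday constant

def exhaust_o2_fraction(emf_volts, sensor_temp_kelvin, ref_o2_fraction=0.209):
    """Estimate the O2 mole fraction in flue gas from a zirconia Lambda sensor EMF.

    Assumed Nernst relation: E = (R*T / (4*F)) * ln(pO2_ref / pO2_exhaust),
    so pO2_exhaust = pO2_ref * exp(-4*F*E / (R*T)).
    """
    return ref_o2_fraction * math.exp(-4.0 * F * emf_volts / (R * sensor_temp_kelvin))

# Example: a heated sensor at ~700 C reading 30 mV (illustrative values).
print("Estimated O2 in flue gas: %.2f %%" % (100.0 * exhaust_o2_fraction(0.030, 973.0)))
```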
Numerical Analysis of Turbulent Diffusion Combustion in Porous Mediadrboon
Turbulent methane-air combustion in a porous burner is numerically investigated. The computed field variables include temperature, stream function, and species mass fractions. The one-step reaction considered involves 4 species. The analysis was carried out through a comparison with gas-phase combustion. Porous combustion is found to lower the peak temperature while giving a more uniform temperature distribution throughout the domain. Porous combustion in a burner is shown to have wider flame stability limits and can sustain an extended range of firing capabilities due to energy recirculation.
This document discusses a method for predicting the dynamic response and flutter characteristics of structures using experimental modal parameters when the exact system properties like mass and stiffness are unknown. The method uses modal parameters obtained from ground vibration tests in finite element and computational fluid dynamics software to analyze transient response and flutter speeds. It was validated on a tapered aluminum plate structure by comparing results obtained using experimental modal data to those from a finite element model using the actual material properties. Close agreement was observed between the two methods, showing this approach can accurately analyze structures without prior knowledge of system configurations.
This document summarizes a research paper that estimates the scale parameter of the Nakagami distribution using Bayesian methods. The paper derives the posterior distributions of the scale parameter under different prior distributions, including uniform, inverse exponential, and Levy priors. It then finds the Bayesian estimators of the scale parameter under three loss functions: squared error loss function, quadratic loss function, and precautionary loss function. The paper uses Monte Carlo simulations to compare the performance of the different estimators.
This document provides a summary of a master's thesis that analyzes smart composite beams using a meshfree method. The objectives of the project are to define shape function construction for meshfree methods, derive the solution for smart composite beams, and compare the meshfree method results to exact solutions for three-layered and four-layered composite beams. The document introduces meshfree methods, describes the moving least squares shape functions used, and presents the analysis of smart composite beams by deriving the governing equations and compatible displacement fields for the substrate beam and piezoelectric layer.
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATIONorajjournal
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimize a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been successfully used for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the result of the application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are highly skewed.
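A minimal sketch of the lognormal-kriging idea is shown below: log-transform the skewed responses, interpolate on the log scale (approximated here with a Gaussian-process regressor standing in for ordinary kriging), and back-transform with the lognormal bias correction exp(μ + σ²/2). The data, kernel and settings are illustrative and do not reproduce the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Skewed (lognormal-like) responses observed at a few design points -- illustrative data.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([1.2, 3.5, 9.8, 30.0, 85.0, 240.0])

# 1) Work on the log scale, where the data are roughly Gaussian.
log_y = np.log(y)

# 2) Fit a GP (ordinary-kriging analogue) to the log-transformed data.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6, normalize_y=True)
gp.fit(X, log_y)

# 3) Predict on the log scale, then back-transform with the lognormal bias correction.
X_new = np.array([[2.5]])
mu_log, sd_log = gp.predict(X_new, return_std=True)
y_pred = np.exp(mu_log + 0.5 * sd_log**2)
print("prediction at x=2.5:", y_pred[0])
```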
APPLICATION OF PARTICLE SWARM OPTIMIZATION TO MICROWAVE TAPERED MICROSTRIP LINEScseij
This document discusses using Particle Swarm Optimization (PSO) to design a tapered microstrip transmission line to match an arbitrary load to a 50Ω line. PSO was used to optimize the impedances of a three section tapered line to minimize reflections. Simulations found impedances that gave good matching at 5GHz. PSO converged to solutions in under 1000 iterations. This demonstrates PSO's effectiveness in solving multi-objective microwave engineering optimization problems.
Application of particle swarm optimization to microwave tapered microstrip linescseij
The application of metaheuristic algorithms has been of continued interest in the field of electrical engineering because of their powerful features. In this work, a special design is carried out for a tapered transmission line used to match an arbitrary real load to a 50Ω line. The problem at hand is to match this arbitrary load to the 50Ω line using a three-section tapered transmission line with impedances in decreasing order from the load, so the problem becomes optimizing an equation with three unknowns under various conditions. The optimized values are obtained using Particle Swarm Optimization. It can easily be shown that PSO is very strong in solving this kind of multi-objective optimization problem.
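A generic particle swarm optimization loop of the kind used above is sketched below; the reflection objective is replaced by a placeholder cost over three section impedances, and the swarm coefficients are common textbook defaults rather than the papers' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(z):
    """Placeholder cost: distance of the three section impedances from an assumed ideal taper."""
    ideal = np.array([90.0, 70.0, 55.0])   # illustrative target impedances (ohms)
    return np.sum((z - ideal) ** 2)

n_particles, n_dims, n_iters = 30, 3, 200
lo, hi = 50.0, 120.0                        # impedance search range in ohms (assumed)

pos = rng.uniform(lo, hi, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive and social coefficients
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([objective(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best impedances:", gbest, " cost:", pbest_cost.min())
```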
Estimation of global solar radiation by using machine learning methodsmehmet şahin
In this study, global solar radiation (GSR) was estimated for 53 locations using the ELM, SVR, KNN, LR and NU-SVR methods. The methods were trained with a two-year data set, and their accuracy was tested with a one-year data set; the data set for each year consisted of 12 months. The values of month, altitude, latitude, longitude, vapour pressure deficit and land surface temperature were used as inputs for developing the models, and GSR was obtained as the output. The vapour pressure deficit and land surface temperature values were taken from the radiometry of the NOAA-AVHRR satellite. The estimated solar radiation data were compared with actual data obtained from meteorological stations. According to the statistical results, the most successful method was NU-SVR: its RMSE and MBE values were 1.4972 MJ/m2 and 0.2652 MJ/m2, respectively, and its R value was 0.9728. The worst prediction method was LR. For the other methods, RMSE values ranged between 1.7746 MJ/m2 and 2.4546 MJ/m2. The statistical results show that the ELM, SVR, k-NN and NU-SVR methods can be used for the estimation of GSR.
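A possible scikit-learn setup for the NU-SVR model is sketched below; the feature layout follows the abstract (month, altitude, latitude, longitude, vapour pressure deficit, land surface temperature), but the data are random placeholders and the hyperparameters are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Placeholder features: [month, altitude, latitude, longitude, VPD, LST] -> GSR (MJ/m^2).
# 53 locations x 12 months x 3 years = 1908 rows (values here are random, for illustration).
X = rng.random((1908, 6))
y = 10.0 + 15.0 * X[:, 5] - 5.0 * X[:, 4] + rng.normal(0.0, 0.5, 1908)  # synthetic GSR

X_train, X_test = X[:1272], X[1272:]     # two years for training, one year for testing
y_train, y_test = y[:1272], y[1272:]

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

rmse = np.sqrt(mean_squared_error(y_test, y_pred))
mbe = np.mean(y_pred - y_test)                          # mean bias error
r = np.corrcoef(y_test, y_pred)[0, 1]                   # Pearson correlation
print("RMSE: %.4f  MBE: %.4f  R: %.4f" % (rmse, mbe, r))
```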
GENERALIZED LEGENDRE POLYNOMIALS FOR SUPPORT VECTOR MACHINES (SVMS) CLASSIFIC...IJNSA Journal
In this paper, we introduce a set of new kernel functions derived from the generalized Legendre polynomials to obtain more robust and more accurate support vector machine (SVM) classification. The generalized Legendre kernel functions provide a measure of how similar two given vectors are by mapping their inner product into a higher-dimensional space. The proposed kernel functions satisfy Mercer's condition and orthogonality properties, reaching the optimal result with a low number of support vectors (SVs). The new set of Legendre kernel functions can therefore be utilized in classification applications as an effective substitute for commonly used kernels such as the Gaussian, polynomial and wavelet kernel functions. The suggested kernel functions are evaluated against existing kernels, such as the Gaussian, polynomial, wavelet and Chebyshev kernels, on various non-separable data sets with several attributes. The suggested kernel functions give competitive classification outcomes in comparison with other kernel functions. On the basis of the test outcomes, we show that the suggested kernel functions are more robust to kernel parameter changes and generally reach the minimal number of SVs for classification.
This summary provides an overview of the key points from the document:
1) The document presents the use of General Regression Neural Networks (GRNN) to predict propagation path loss in an urban environment based on measurements taken in Kavala, Greece.
2) Two neural network models are studied - one for path loss prediction and another using error control. Their performance is compared to measured path loss values based on error metrics.
3) For line-of-sight predictions, the GRNN model achieves better performance than empirical models due to using multiple input parameters and generalization. For non-line-of-sight, a third GRNN model including street orientation has the lowest error rates.
Application of support vector machines for prediction of anti hiv activity of...Alexander Decker
This document describes a study that used support vector machines (SVM) to develop a quantitative structure-activity relationship (QSAR) model to predict the anti-HIV activity of TIBO derivatives. The SVM model achieved high correlation (q2=0.96) and low error (RMSE=0.212), outperforming artificial neural networks and multiple linear regression models developed on the same data set. The results indicate that SVM is a valuable tool for QSAR modeling and predicting anti-HIV activity of chemical compounds.
New emulation based approach for probabilistic seismic demandSebastian Contreras
This document describes a new statistical emulation approach for probabilistic seismic demand assessment. The approach uses Gaussian process emulation to model the relationship between engineering demand parameter (EDP) and intensity measure (IM), as an alternative to the standard cloud method. The emulator is trained using data from nonlinear time-history analyses of a case study building. Two "assumed realities" are generated to test the emulator's performance compared to the cloud method. Results show the emulator approach provides improved coverage probability and average length over the cloud method, while maintaining similar accuracy. The emulator is more flexible than the cloud method and can better estimate EDP-IM relationships that do not follow a power law.
COMPARISON OF VOLUME AND DISTANCE CONSTRAINT ON HYPERSPECTRAL UNMIXINGcsandit
The document compares two algorithms for hyperspectral image unmixing - one based on minimum volume constraint and one based on sum of squared distances constraint. It analyzes the performance of the two algorithms under different conditions like flatness of the endmember simplex, effects of initialization, and robustness to noise. The analysis shows that the sum of squared distances constraint performs better than the volume constraint for non-regular simplex shapes and is more robust to random initialization and noise. The comparison provides guidance on which constraint is more suitable for specific hyperspectral unmixing tasks.
This document summarizes a method for calculating the sensitivity matrix that defines the linear relationship between circuit parameters and poles/response of an RLC network. The sensitivity matrix enables efficient statistical analysis and yield predictions. It is obtained by taking derivatives of the poles and transfer function, which are calculated from the eigenvalues and eigenvectors of the network's state equation. An example RLC circuit demonstrates calculating the sensitivity matrix and using it to predict yield based on Monte Carlo simulations.
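The pole-sensitivity idea can be illustrated with the standard first-order eigenvalue-perturbation formula dλ/dp = yᴴ(∂A/∂p)x / (yᴴx), where x and y are right and left eigenvectors of the state matrix; the small state matrix and its parameter dependence below are invented for illustration and are not the circuit from the document.

```python
import numpy as np

def pole_sensitivities(A, dA_dp):
    """First-order sensitivities of the eigenvalues (poles) of A to a parameter p.

    Uses dlambda/dp = y^H (dA/dp) x / (y^H x), with right eigenvectors x of A and
    left eigenvectors y obtained as eigenvectors of A^H paired by conjugate eigenvalue.
    """
    lam, X = np.linalg.eig(A)
    lam_left, Y = np.linalg.eig(A.conj().T)
    # Match left eigenvectors to the right ones: A^H y = conj(lambda) y.
    order = [int(np.argmin(np.abs(lam_left.conj() - l))) for l in lam]
    Y = Y[:, order]
    sens = np.array([(Y[:, k].conj() @ dA_dp @ X[:, k]) / (Y[:, k].conj() @ X[:, k])
                     for k in range(len(lam))])
    return lam, sens

# Illustrative 2x2 state matrix of an RLC-like system, with A depending on a parameter p.
p = 1.0
A = np.array([[0.0, 1.0], [-4.0 * p, -0.5]])
dA_dp = np.array([[0.0, 0.0], [-4.0, 0.0]])

lam, sens = pole_sensitivities(A, dA_dp)

# Finite-difference check of the analytical sensitivities (eigenvalues matched by proximity).
eps = 1e-6
lam_eps = np.linalg.eig(A + eps * dA_dp)[0]
matched = np.array([lam_eps[np.argmin(np.abs(lam_eps - l))] for l in lam])
print("poles        :", lam)
print("d(lambda)/dp :", sens)
print("finite diff  :", (matched - lam) / eps)
```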
Boosting ced using robust orientation estimationijma
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation obtained with a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. From the experiments it is found that the proposed scheme works much better in noisy environments compared to traditional Coherence Enhancement Diffusion.
Macromodel of High Speed Interconnect using Vector Fitting Algorithmijsrd.com
Efficient macromodeling of high-speed interconnects at high frequencies is an ever-challenging task. We present systematic methodologies for generating rational function approximations of high-speed interconnects using the vector fitting technique, for any type of termination condition, and for constructing an efficient multiport model that is easily and directly compatible with circuit simulators.
This document discusses recursive least-squares estimation when the observation data contain interval uncertainty, also known as imprecision, in addition to random variability. It introduces a recursive formulation of least-squares estimation that efficiently combines the most recent parameter estimate with new observation data. Overestimation, which must be rigorously avoided, is a key challenge for recursive formulations working with interval data. The paper also presents an illustrative example of estimating the state of a damped harmonic oscillation using the proposed recursive interval least-squares approach.
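For reference, a standard (non-interval) recursive least-squares update of the kind the paper extends is sketched below, estimating the AR(2) coefficients of a noisy damped harmonic oscillation; the signal, noise level and initial covariance are illustrative, and the interval (imprecision) handling described above is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a noisy damped harmonic oscillation y(t) = exp(-0.05 t) * cos(0.4 t).
t = np.arange(0, 80)
y = np.exp(-0.05 * t) * np.cos(0.4 * t) + 0.005 * rng.standard_normal(t.size)

# A sampled damped oscillation obeys an AR(2) recursion y_k = a1*y_{k-1} + a2*y_{k-2} (+ noise),
# which is linear in the parameters, so standard recursive least squares applies.
theta = np.zeros(2)              # parameter estimate [a1, a2]
P = 1e3 * np.eye(2)              # large initial estimate covariance (assumed)
lam = 1.0                        # forgetting factor (1.0 = ordinary least squares)

for k in range(2, t.size):
    phi = np.array([y[k - 1], y[k - 2]])          # regressor built from past observations
    K = P @ phi / (lam + phi @ P @ phi)           # gain vector
    theta = theta + K * (y[k] - phi @ theta)      # update with the newest observation
    P = (P - np.outer(K, phi) @ P) / lam          # covariance update

print("estimated AR(2) coefficients:", theta)
# True coefficients of the sampled model (dt = 1): [2*exp(-zeta)*cos(omega), -exp(-2*zeta)].
print("expected approximately      :", [2 * np.exp(-0.05) * np.cos(0.4), -np.exp(-0.1)])
```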
Adaptive Multiscale Stereo Images Matching Based on Wavelet Transform Modulus...CSCJournals
In this paper we propose a multiscale stereo correspondence matching method based on wavelet transform modulus maxima. Exploiting modulus maxima chains gives us the opportunity to refine the search for correspondences. Based on the wavelet transform, we construct maps of moduli and phases at different scales, extract the maxima, and then build chains of maxima. Points constituting the modulus maxima chains are considered points of interest in the matching process. The availability of this multiscale information allows searching, under geometric constraints, for the best corresponding point among the constituent chain points of the right image for each point of interest in the left image. The experimental results demonstrate that the number of correspondence candidates decreases clearly as the scale increases. In several tests we obtained unique correspondences by browsing from fine to coarse scales, and the computational cost remains very reasonable.
The document proposes a new method called the Brownian correlation metric prototypical network (BCMPN) for fault diagnosis of rotating machinery. The BCMPN uses a multi-scale mask preprocessing mechanism to improve model performance. It extracts multi-scale features using dilated convolution and an effective light channel attention module. For classification, it measures the difference between the joint feature function and the product of marginal distributions using the Brownian distance, unlike existing methods that use the Euclidean or cosine distance. Experiments on a gear dataset and laboratory data show that the BCMPN performs better than other methods for problems with few training samples and zero samples in the target domain.
Path Loss Prediction by Robust Regression Methodsijceronline
Relevance Vector Machines for Earthquake Response Spectra
2012 American Transactions on Engineering & Applied Sciences
American Transactions on Engineering & Applied Sciences
http://TuEngr.com/ATEAS, http://Get.to/Research

Relevance Vector Machines for Earthquake Response Spectra

Jale Tezcan a*, Qiang Cheng b

a Department of Civil and Environmental Engineering, Southern Illinois University Carbondale, Carbondale, IL 62901, USA
b Department of Computer Science, Southern Illinois University Carbondale, Carbondale, IL 62901, USA
ARTICLE INFO

Article history:
Received 23 August 2011
Received in revised form 23 September 2011
Accepted 26 September 2011
Available online 26 September 2011

Keywords:
Response spectrum
Ground motion
Supervised learning
Bayesian regression
Relevance Vector Machines

ABSTRACT

This study uses Relevance Vector Machine (RVM) regression to develop a probabilistic model for the average horizontal component of 5%-damped earthquake response spectra. Unlike conventional models, the proposed approach does not require a functional form, and constructs the model based on a set of predictive variables and a set of representative ground motion records. The RVM uses Bayesian inference to determine the confidence intervals, instead of estimating them from the mean squared errors on the training set. An example application using three predictive variables (magnitude, distance and fault mechanism) is presented for sites with shear wave velocities ranging from 450 m/s to 900 m/s. The predictions from the proposed model are compared to an existing parametric model. The results demonstrate the validity of the proposed model, and suggest that it can be used as an alternative to the conventional ground motion models. Future studies will investigate the effect of additional predictive variables on the predictive performance of the model.
2012 American Transactions on Engineering & Applied Sciences. Volume 1 No.1, ISSN 2229-1652, eISSN 2229-1660. Online available at http://TUENGR.COM/ATEAS/V01/25-39.pdf
*Corresponding author (J. Tezcan). Tel/Fax: +001-618-4536125. E-mail address: jale@siu.edu.
1. Introduction

Reliable prediction of ground motions from future earthquakes is one of the primary challenges in seismic hazard assessment. Conventional ground motion models are based on parametric regression, which requires a fixed functional form for the predictive model. Because the mechanisms governing ground motion processes are not fully understood, identification of the mathematical form of the underlying function is a challenge. Once a functional form is selected, the model is fit to the data and the model coefficients minimizing the mean squared errors between the model and the data are determined. This approach, when the selected mathematical form does not accurately represent the actual input-output relationship, is susceptible to overfitting. Indeed, using a sufficiently complex model, one can achieve a perfect fit to the training data, regardless of the selected mathematical form. However, a perfect fit to the training data does not indicate the predictive performance of the model for new data.
Kernel regression offers a convenient way to perform regression without a fixed parametric form, or any knowledge of the underlying probability distribution. A special form of kernel regression, called the Support Vector Regression (SVR) (Drucker et al., 1997), is characterized by its compact representation and its high generalization performance. In SVR, the training data is first transformed into a high dimensional kernel space, and linear regression is performed on the transformed data. The resulting model is a linear combination of nonlinear kernel functions evaluated at a subset of the training input. Combination weights are determined by minimizing a penalized residual function. The SVR has proved successful in many studies since its introduction in 1997. The effectiveness of SVR in ground motion modeling has been recently demonstrated (Tezcan and Cheng, 2011), (Tezcan et al., 2010). A well-known weakness of the SVR is the lack of probabilistic outputs. Although the confidence intervals can be constructed using the mean-squared errors, similar to the approach used in conventional ground motion models, the posterior probabilities, which produce the most reliable estimate of prediction intervals, are not given. The lack of probabilistic outputs in the SVR formulation has motivated the development of a new kernel regression model called Relevance Vector Machine (RVM) (Tipping, 2000), which operates in a Bayesian framework.
To overcome the limitations of parametric regression while obtaining probabilistic
26 Jale Tezcan and Qiang Cheng
3. predictions, this paper proposes a new ground motion model based on the RVM regression.
Unlike standard ground motion models, which make point estimates of the optimal value of the
weights by minimizing the fitting error, the RVM model treats the model coefficients as random
variables with independent variances and attempts to find the model that maximizes the likelihood
of the observations. This approach offers two main advantages over the conventional ground
motion models. First, the prediction uncertainty is explicitly determined using Bayesian
inference, as opposed to being estimated from the mean squared errors. Second, the complexity of
the RVM model is controlled by assigning suitable prior distributions over the model coefficients,
which reduces the model's susceptibility to overfitting.
The rest of the paper is organized as follows. In Section 2, the RVM regression algorithm is
described. Section 3 is devoted to the construction of the ground motion model. Starting with the
description of the ground motion data and the predictive and target variables, the training results
are presented, and the prediction procedure for new data is described. Section 4 demonstrates
computational results and compares the RVM predictions to an existing empirical parametric
model. Section 5 concludes the paper by presenting the main conclusions of this study, and
discusses the advantages and limitations of the proposed method.
2. The RVM Regression Algorithm
Given a set of input vectors $\{x_i\}$, $i = 1:N$, and corresponding real-valued targets $t_i$, the
regression task is to estimate the underlying input-output relationship. Using kernel representation
(Smola and Schölkopf, 2004), the regression function can be written as a linear combination of a
set of nonlinear kernel functions:

$$y(x) = \sum_{i=1}^{N} w_i \, K(x, x_i) + w_0, \qquad (1)$$

where $w_i$, $i = 1 \ldots N$, are the combination weights and $w_0$ is the bias term.
This study uses the radial basis function (RBF) kernel:

$$K(x_i, x_j) = \exp\!\left(-\frac{\|x_i - x_j\|^2}{\eta^2}\right), \qquad \eta > 0, \qquad (2)$$

where $\eta$ is the width parameter controlling the trade-off between model accuracy and
complexity. In this study, the width parameter has been determined using cross-validation.
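For concreteness, a minimal Python/NumPy sketch of the kernel in Eq. (2), together with the basis matrix used later in Eq. (4), could look as follows; the symbol eta and the function names are choices of this sketch, not notation from the paper.

```python
import numpy as np

def rbf_kernel(x1, x2, eta):
    """RBF kernel of Eq. (2): exp(-||x1 - x2||^2 / eta^2)."""
    diff = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return np.exp(-np.dot(diff, diff) / eta ** 2)

def basis_matrix(X, eta):
    """N x (N+1) basis matrix with a leading column of ones for the bias term."""
    N = len(X)
    Phi = np.ones((N, N + 1))
    for i in range(N):
        for j in range(N):
            Phi[i, j + 1] = rbf_kernel(X[i], X[j], eta)
    return Phi
```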
Assuming independent noise samples from a zero-mean Gaussian distribution,
i.e., $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$, the target values can be written as:

$$t_i = y(x_i) + \epsilon_i, \qquad i = 1, \ldots, N. \qquad (3)$$

Recast in matrix form, Equation (3) becomes:

$$\mathbf{t} = \Phi \mathbf{w} + \boldsymbol{\epsilon}, \qquad (4)$$

where $\mathbf{t} = [t_1, \ldots, t_N]^T$, $\boldsymbol{\epsilon} = [\epsilon_1, \ldots, \epsilon_N]^T$, and $\Phi$ is an $N \times (N+1)$ basis matrix with $\Phi_{i1} = 1$
and $\Phi_{i,j+1} = K(x_i, x_j)$. The likelihood of the entire data set, assuming independent observations, is
given by:

$$p(\mathbf{t} \mid \mathbf{w}, \sigma^2) = (2\pi\sigma^2)^{-N/2} \exp\!\left(-\frac{\|\mathbf{t} - \Phi \mathbf{w}\|^2}{2\sigma^2}\right), \qquad (5)$$

where $\mathbf{w} = [w_0, \ldots, w_N]^T$ is the vector containing the combination weights.
To control the complexity of the model, a zero-mean Gaussian prior is used where each weight is
assigned a different variance (MacKay, 1992):

$$p(\mathbf{w} \mid \boldsymbol{\alpha}) = \prod_{i=0}^{N} \mathcal{N}\!\left(w_i \mid 0, \, 1/\alpha_i\right). \qquad (6)$$
In Eq. (6), $\boldsymbol{\alpha} = [\alpha_0, \ldots, \alpha_N]^T$, where $1/\alpha_i$ is the variance of $w_i$. The posterior distribution
of the weights is obtained as:

$$p(\mathbf{w} \mid \mathbf{t}, \boldsymbol{\alpha}, \sigma^2) = (2\pi)^{-(N+1)/2} \, |C|^{-1/2} \exp\!\left(-\tfrac{1}{2}(\mathbf{w} - \boldsymbol{\mu})^T C^{-1} (\mathbf{w} - \boldsymbol{\mu})\right), \qquad (7)$$

where the mean vector $\boldsymbol{\mu}$ and covariance matrix $C$ are:

$$\boldsymbol{\mu} = \sigma^{-2} \, C \, \Phi^T \mathbf{t}, \qquad (8)$$

$$C = \left(\sigma^{-2} \Phi^T \Phi + A\right)^{-1}, \qquad (9)$$

with

$$A = \begin{bmatrix} \alpha_0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \alpha_N \end{bmatrix}. \qquad (10)$$

The marginal likelihood of the dataset can be determined by integrating out the weights (MacKay,
1992) as follows:

$$p(\mathbf{t} \mid \boldsymbol{\alpha}, \sigma^2) = (2\pi)^{-N/2} \, |B|^{-1/2} \exp\!\left(-\tfrac{1}{2} \mathbf{t}^T B^{-1} \mathbf{t}\right), \qquad (11)$$

where $B = \sigma^2 I + \Phi A^{-1} \Phi^T$ and $I$ is the identity matrix of size $N$. Ideal Bayesian inference
requires defining prior distributions over $\boldsymbol{\alpha}$ and $\sigma^2$, followed by marginalization. This process,
however, will not result in a closed-form solution. Instead, the $\boldsymbol{\alpha}$ and $\sigma^2$ values maximizing
Eq. (11) can be found iteratively as follows (MacKay, 1992):
$$\alpha_i^{new} = \frac{1 - \alpha_i C_{ii}}{\mu_i^2}, \qquad (12)$$

$$(\sigma^2)^{new} = \frac{\|\mathbf{t} - \Phi \boldsymbol{\mu}\|^2}{N - \sum_i \left(1 - \alpha_i C_{ii}\right)}. \qquad (13)$$

Because the numerator in Eq. (12) is a positive number with a maximum value of 1, an $\alpha_i$
value tending to infinity implies that the posterior distribution of $w_i$ is infinitely peaked at zero,
i.e., $w_i = 0$. As a consequence, the corresponding kernel function can be removed from the
model. The procedure for determining the weights and the noise variance can be summarized as
follows (a Python sketch of this loop is given after the list):
1) Select a width parameter $\eta$ for the kernel function and form the basis matrix $\Phi$.
2) Initialize $\boldsymbol{\alpha} = [\alpha_0, \ldots, \alpha_N]^T$ and $\sigma^2$.
3) Compute the matrix $A$ using Eq. (10).
4) Compute the covariance matrix $C$ using Eq. (9).
5) Compute the mean vector $\boldsymbol{\mu}$ using Eq. (8).
6) Update $\boldsymbol{\alpha}$ and $\sigma^2$ using Eq. (12) and Eq. (13).
7) If $\alpha_i \to \infty$, set $w_i = 0$ and remove the corresponding column of $\Phi$.
8) Go back to step 3 until convergence.
9) Set the remaining weights equal to $\boldsymbol{\mu}$.
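A compact Python/NumPy transcription of this loop is sketched below. The convergence tolerance, the pruning threshold standing in for $\alpha \to \infty$, and the small epsilon guarding the division in Eq. (12) are implementation choices of this sketch, not part of the paper's formulation; the basis matrix $\Phi$ would be built with the basis_matrix helper from the earlier sketch.

```python
import numpy as np

def train_rvm(Phi, t, max_iter=1000, alpha_init=1.0, sigma2_init=0.1,
              prune_threshold=1e9, tol=1e-6):
    """Iterative RVM training (Eqs. 8-13) with pruning of diverging alphas."""
    N, M = Phi.shape                      # M = N + 1 columns: bias + N kernels
    alpha = np.full(M, alpha_init)        # step 2: initialize alpha and sigma^2
    sigma2 = sigma2_init
    keep = np.arange(M)                   # original column indices still in the model

    for _ in range(max_iter):
        A = np.diag(alpha)                                   # Eq. (10)
        C = np.linalg.inv(Phi.T @ Phi / sigma2 + A)          # Eq. (9): posterior covariance
        mu = C @ Phi.T @ t / sigma2                          # Eq. (8): posterior mean

        gamma = 1.0 - alpha * np.diag(C)                     # each term lies in [0, 1]
        alpha_new = gamma / (mu ** 2 + 1e-12)                # Eq. (12)
        sigma2_new = np.sum((t - Phi @ mu) ** 2) / (N - np.sum(gamma))  # Eq. (13)

        # Step 7: prune columns whose alpha diverges (weight pinned at zero);
        # the bias column (original index 0) is always retained in this sketch.
        mask = (alpha_new < prune_threshold) | (keep == 0)
        if not np.all(mask):
            Phi, alpha, keep = Phi[:, mask], alpha_new[mask], keep[mask]
            sigma2 = sigma2_new
            continue

        converged = np.max(np.abs(alpha_new - alpha)) < tol  # step 8
        alpha, sigma2 = alpha_new, sigma2_new
        if converged:
            break

    # Step 9: final posterior mean and covariance over the retained columns.
    A = np.diag(alpha)
    C = np.linalg.inv(Phi.T @ Phi / sigma2 + A)
    mu = C @ Phi.T @ t / sigma2
    return mu, C, sigma2, keep
```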
The training input points corresponding to the remaining nonzero weights are called the
“relevance vectors”. After the weights and the noise variance are determined, the predictive mean
for a new input $x^*$ can be found as follows:

$$y^* = \boldsymbol{\mu}^T \Phi^*. \qquad (14)$$

In Eq. (14), $\Phi^* = \left[1, \; K(x^*, r_1), \; K(x^*, r_2), \; \ldots, \; K(x^*, r_{N_{RV}})\right]^T$, where $r_1, r_2, \ldots, r_{N_{RV}}$ are the
relevance vectors.
The total predictive variance can be found by adding the noise variance to the uncertainty due
to the variance of the weights, as follows:

$$\sigma_*^2 = \sigma^2 + \Phi^{*T} C \, \Phi^*. \qquad (15)$$
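Continuing the earlier sketch, and still assuming the reconstructed RBF kernel form, Eqs. (14) and (15) amount to a short prediction routine; relevance_vectors, mu, C, sigma2 and eta are assumed to come from the training step.

```python
import numpy as np

def predict(x_new, relevance_vectors, mu, C, sigma2, eta):
    """Predictive mean (Eq. 14) and total variance (Eq. 15) for a new input."""
    # Basis vector Phi* = [1, K(x*, r_1), ..., K(x*, r_Nrv)]^T
    phi = np.ones(len(relevance_vectors) + 1)
    for j, r in enumerate(relevance_vectors):
        diff = np.asarray(x_new, dtype=float) - np.asarray(r, dtype=float)
        phi[j + 1] = np.exp(-np.dot(diff, diff) / eta ** 2)
    mean = phi @ mu                     # Eq. (14)
    variance = sigma2 + phi @ C @ phi   # Eq. (15): noise + weight uncertainty
    return mean, variance
```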
3. Construction of the Ground Motion Model
In this section, the RVM regression algorithm will be used to construct a ground motion model. In
Section 4, the resulting model will be compared to an existing parametric model by Idriss (Idriss,
2008), which will be referred to as the “I08 model” in this paper. To enable a fair comparison, the
dataset and the predictive variables of the I08 model have been adopted in this study. The RVM
algorithm is independent of the size of the predictive variable set; additional variables can be
introduced, and the set of predictive variables can be customized to specific applications.
3.1 Ground Motion Data
The ground motion records used in the training have been obtained from the PEER-NGA
database (PEER, 2007). Consistent with the I08 model, a total of 942 free-field records have been
selected using the following criteria:
• Shear wave velocity at the top 30 m ranging from 450 m/s to 900 m/s,
• Magnitude larger than 4.5,
• Closest distance between the station and rupture surface (R) less than 200 km.
Detailed information regarding these records can be found in the paper by Idriss (Idriss, 2008).
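Purely as an illustration of these selection criteria (the file name and column labels below are hypothetical placeholders, not the actual PEER-NGA flatfile fields), the record filtering could be expressed as:

```python
import pandas as pd

# Hypothetical file and column names, used only to illustrate the three criteria.
records = pd.read_csv("nga_flatfile.csv")
selected = records[
    records["Vs30"].between(450, 900)        # shear wave velocity at top 30 m (m/s)
    & (records["magnitude"] > 4.5)           # moment magnitude
    & (records["rupture_distance"] < 200)    # closest distance to rupture (km)
]
```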
3.2 Predictive and Target Variables
The predictive variable set includes moment magnitude (M), the natural logarithm of the closest
distance between the station and the rupture surface in kilometers ($\ln R$), and fault mechanism (F).
Idriss finds that with the shear wave velocity ($V_{S30}$) constrained to the 450 m/s to 900 m/s range, it has
negligible effect on spectral values up to 1 second. Therefore, $V_{S30}$ was not used as a predictive
variable. Following the convention used in the I08 model, earthquakes assigned a fault
mechanism type of 0 or 1 in the PEER database were merged into a single “strike-slip” group, while
the rest were considered to be representative of “reverse” events. In the RVM model, strike-slip
and reverse earthquakes are assigned $F = -1$ and $F = 1$, respectively. The input vector
representing the $i$th record has the following form:

$$x_i = \left[M_i, \; \ln R_i, \; F_i\right]^T. \qquad (16)$$
A set of eight vibration periods ($N_T = 8$) ranging from 0.01 second to 4 seconds was used in
the RVM model. The output for the $i$th record at vibration period $T_j$ is defined as:

$$t_i(T_j) = \ln SA_i(T_j), \qquad \text{for } j = 1 \text{ to } N_T. \qquad (17)$$

In Equation (17), $\ln SA_i(T_j)$ is the natural logarithm of the average horizontal component of the 5%-
damped pseudo-acceleration response spectrum. The spectral values represent the median
value of the geometric mean of the two horizontal components, computed using non-redundant
rotations between 0 and 90 degrees (Boore et al., 2006).
3.3 Training of the RVM Regression Model
As a pre-processing step, the $M$ and $\ln R$ values were linearly scaled to $[-1, 1]$ to achieve
uniformity between the ranges of the predictive variables. There is no need to scale the fault
mechanism identifier ($F$), as it was already defined to take either -1 or 1. Because kernel functions
use Euclidean distances between pairs of input vectors, such scaling helps prevent numerical
problems due to large variations between the ranges of the values that the variables can take. In the
ground motion data used in this study, the ranges of the predictive variables are
$4.53 \le M \le 7.68$ and $0.32 \text{ km} \le R \le 199.27 \text{ km}$. Therefore, input scaling takes the
following form:
$$\bar{M} = \frac{2M - 12.21}{3.15}, \qquad \overline{\ln R} = \frac{2 \ln R - 4.16}{6.44}, \qquad \bar{F} = F. \qquad (18)$$
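A direct transcription of Eq. (18) is sketched below; the function name and the returned ordering (which follows Eq. 16) are choices of this sketch.

```python
import numpy as np

def scale_input(M, R, F):
    """Scale M and ln R to [-1, 1] per Eq. (18); F is already -1 (strike-slip) or +1 (reverse)."""
    M_bar = (2.0 * M - 12.21) / 3.15             # magnitude range 4.53 - 7.68
    lnR_bar = (2.0 * np.log(R) - 4.16) / 6.44    # ln R range for R = 0.32 - 199.27 km
    return np.array([M_bar, lnR_bar, float(F)])
```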
The optimal value of the kernel width parameter $\eta$ for each vibration period was
determined using 10-fold cross-validation (Webb, 2002). In 10-fold cross-validation, the training
data is randomly partitioned into 10 subsets of equal size; the model is trained using 9 subsets,
and the remaining subset is used to compute the validation error. This process is repeated 10 times,
each time with a different validation subset, and the average validation error for a particular $\eta$ is
computed. By computing the average validation error over a range of possible $\eta$ values, the
optimal $\eta$ with the smallest average validation error is determined. The resulting $\eta$ values for
each period are listed in Table 1, along with the standard deviation of the noise ($\sigma$), the mean value of
the bias term ($\mu_0$) and the number of relevance vectors. The relevance vectors and the
combination weights are listed in Table 2.
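A minimal sketch of this 10-fold cross-validation loop is shown below, reusing the hypothetical train_rvm, predict and basis_matrix helpers from the earlier sketches; the candidate grid of widths is an assumption of the illustration.

```python
import numpy as np

def select_width(X, t, eta_grid, n_folds=10, seed=0):
    """Return the kernel width with the smallest average validation error."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(t)), n_folds)
    avg_errors = []
    for eta in eta_grid:
        fold_errors = []
        for k in range(n_folds):
            val = folds[k]
            trn = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            mu, C, sigma2, keep = train_rvm(basis_matrix(X[trn], eta), t[trn])
            rvs = X[trn][keep[keep > 0] - 1]    # kernel columns -> relevance vectors
            preds = np.array([predict(x, rvs, mu, C, sigma2, eta)[0] for x in X[val]])
            fold_errors.append(np.mean((preds - t[val]) ** 2))
        avg_errors.append(np.mean(fold_errors))
    return eta_grid[int(np.argmin(avg_errors))]

# Example usage with a hypothetical grid of candidate widths:
# eta_opt = select_width(X_scaled, ln_sa, eta_grid=np.linspace(0.05, 0.5, 10))
```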
After the RVM models, one for each vibration period, were trained, standardized residuals
were computed. Figure 1 shows the distribution of the standardized residuals, corresponding to
T=1 second, with respect to $M$, $\ln R$ and $F$. The residual distribution patterns for other periods
were similar, not indicating any systematic bias.
Table 1: Kernel width parameter ($\eta$), logarithmic standard deviation of the noise ($\sigma$), mean value of
the bias term ($\mu_0$) and the number of relevance vectors ($N_{RV}$), for each period.

T (sec)   η      σ       μ0        N_RV
0.01      0.23   0.633   -3.069    7
0.05      0.32   0.666   -0.664    7
0.10      0.13   0.718    0.002    7
0.20      0.15   0.661   -15.042   6
0.50      0.25   0.695   -8.359    7
1.00      0.36   0.748   -4.670    5
2.00      0.28   0.869   -6.0548   5
4.00      0.26   0.983   -7.794    5
Figure 1: Standardized residuals for T=1 second.
Table 2: Mean values of the combination weights and the relevance vectors.

T = 0.01 s
i    Wi        ri
1     13.258   [-0.1937  0.2676  -1]
2     15.393   [ 0.5238 -0.2268   1]
3      0.4861  [ 0.8921  0.9414  -1]
4     -5.073   [ 0.9619 -1.0000   1]
5     -4.275   [ 0.9619 -0.6751   1]
6    -14.173   [-0.2889  0.7862  -1]
7     -8.086   [ 0.0603  0.9789   1]

T = 0.05 s
i    Wi        ri
1     -6.177   [ 0.7905 -0.4227   1]
2      6.355   [-0.3841 -0.1783  -1]
3     28.555   [ 0.5238  0.5856   1]
4     -7.930   [-0.5111  0.7896  -1]
5     -0.402   [ 0.7460 -0.4021  -1]
6    -12.622   [ 0.9619  0.9545   1]
7    -16.194   [ 0.0603  0.9789   1]

T = 0.1 s
i    Wi        ri
1     64.423   [ 0.4159 -0.1499   1]
2     -6.991   [ 0.9619  0.9545   1]
3    -36.297   [ 0.9619 -1.0000   1]
4     15.875   [ 1.0000  0.4559  -1]
5     -5.599   [-0.3143  0.0809   1]
6    -17.361   [ 0.6508  0.9961  -1]
7    -25.799   [-0.1302  0.9056   1]

T = 0.2 s
i    Wi        ri
1     29.569   [-0.8921 -0.0837  -1]
2      2.293   [ 0.7905 -0.4227   1]
3     35.440   [ 0.8921  0.6543  -1]
4      5.7412  [ 0.9619 -1.0000   1]
5      3.5036  [-0.8222  0.1385   1]
6    -48.496   [ 0.0603  0.4955  -1]

T = 0.5 s
i    Wi        ri
1      6.4551  [ 0.7905 -0.4227   1]
2     12.825   [-0.2317 -0.2931  -1]
3      0.0283  [-0.7714  0.1214   1]
4     -0.806   [ 0.8921 -0.0318  -1]
5      8.4335  [ 0.8921  0.9414  -1]
6     -0.089   [ 0.9619  0.9545   1]
7    -12.9     [ 0.0603  0.5786  -1]

T = 1.0 s
i    Wi        ri
1      1.9699  [ 0.7905 -0.4227   1]
2      4.8873  [ 0.0540 -0.2785  -1]
3     -4.1425  [-0.7524  0.7892   1]
4     -3.9593  [-0.7651  0.8672  -1]
5      3.7352  [-0.1302 -0.0121   1]

T = 2.0 s
i    Wi        ri
1      7.3574  [-0.2317 -0.2931  -1]
2      4.5548  [-0.0730  0.4691   1]
3      3.0086  [ 0.9619 -1.0000   1]
4     -6.4695  [-1.0000  0.5142  -1]
5     -5.3630  [-0.7524  0.7892   1]

T = 4.0 s
i    Wi        ri
1      0.4747  [ 0.7460 -0.4021  -1]
2     11.936   [ 0.7460  0.5118  -1]
3      6.8109  [ 0.3714 -0.0296   1]
4     -5.6050  [-0.7524  0.7892   1]
5    -10.180   [ 0.3778  1.0000  -1]
3.4 Prediction Phase
After training, the spectral values for a new input vector $x^* = [M, \ln R, F]^T$ can be determined
as follows (a worked numerical sketch is given after the list):
1. Scale the input to the range $[-1, 1]$ using Eq. (18);
2. Construct the basis vector $\Phi^* = \left[1, \; K(x^*, r_1), \; K(x^*, r_2), \; \ldots, \; K(x^*, r_{N_{RV}})\right]^T$ using the
relevance vectors from Table 2 and the kernel width parameter $\eta$ from Table 1;
3. Determine the median value of the spectral acceleration using Eq. (14);
4. Obtain the standard deviation of the noise ($\sigma$) from Table 1. Total uncertainty, if needed, can
be determined using Eq. (15).
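As a worked illustration of these four steps for a hypothetical scenario (M = 6.5, R = 30 km, strike-slip) at T = 1.0 s, the sketch below uses the $\eta$, $\sigma$ and $\mu_0$ values from Table 1, the full T = 1.0 s panel of Table 2, the scale_input helper from the earlier sketch, and the reconstructed RBF kernel form; the scenario itself is not taken from the paper.

```python
import numpy as np

# Step 1: scale the hypothetical input (M = 6.5, R = 30 km, F = -1 strike-slip), Eq. (18).
x_scaled = scale_input(M=6.5, R=30.0, F=-1)

# Step 2: values from Table 1 (T = 1.0 s row) and Table 2 (T = 1.0 s panel).
eta, sigma, mu0 = 0.36, 0.748, -4.670
weights = np.array([1.9699, 4.8873, -4.1425, -3.9593, 3.7352])
rvs = np.array([[ 0.7905, -0.4227,  1.0],
                [ 0.0540, -0.2785, -1.0],
                [-0.7524,  0.7892,  1.0],
                [-0.7651,  0.8672, -1.0],
                [-0.1302, -0.0121,  1.0]])
phi = np.array([np.exp(-np.sum((x_scaled - r) ** 2) / eta ** 2) for r in rvs])

# Step 3: median ln SA via Eq. (14) (bias mean mu0 plus weighted kernels).
ln_sa_median = mu0 + phi @ weights
sa_median = np.exp(ln_sa_median)

# Step 4: sigma from Table 1 gives the noise standard deviation; Eq. (15) would
# additionally require the posterior covariance C, which is not tabulated here.
print(f"median SA(T=1 s) = {sa_median:.4f}, ln-sigma = {sigma}")
```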
4. Computational Results
The RVM model was tested for different magnitudes, distances and fault mechanisms, and the
results were compared to the I08 model. Figure 2 shows the median spectral acceleration at T=1
second, along with the 16th and 84th percentile bounds, for strike-slip faults, for
M=5 (left) and M=7 (right). The circles in the figure show the spectral values from earthquakes
with the same fault mechanism and within ±0.25 magnitude units. Figure 3 shows the same
information for reverse faults. For periods about 1 second and longer, it was observed that the
median estimates from the RVM model were generally lower than those from the I08 model. At
very short distances, within ~20 km of the source, RVM estimates were higher for M=7, for both
strike-slip and reverse faulting earthquakes.
Figure 2: Median ±σ bounds for spectral acceleration at T=1 second, strike-slip faults.
Figure 3: Median ±σ bounds for spectral acceleration at T=1 second, reverse faults.
Figure 4 presents the results for vibration period T=0.2 second, for strike-slip earthquakes.
The results for the reverse faulting earthquakes were similar. For shorter vibration periods, and
M=7, RVM estimates were lower than those from the I08 model. For M=5, however, RVM
predictions equaled or exceeded the I08 predictions. Regarding the variation about the median (noise
variance), the predictions from the two models were in general agreement for all vibration periods.
Figure 4: Median ±σ bounds for spectral acceleration at T=0.2 second, strike-slip faults.
5. Conclusion
This paper proposes an RVM-based model for the average horizontal component of
earthquake response spectra. Given a set of predictive variables and a set of ground motion
records, the RVM model predicts the most likely spectral values together with their variability. An
example application has been presented where the predictions from the RVM model have been
compared to an existing, parametric ground motion model. The results demonstrate the validity of
the proposed model, and suggest that it can be used as an alternative to the conventional ground
motion models.
The RVM model offers the following advantages over its conventional counterparts: (1) There
is no need to select a fixed functional form. By determining the optimal variances associated with
the weights, the RVM automatically detects the most plausible model; (2) The resulting RVM
model has a simple mathematical structure (weighted average of exponential basis functions), and
is based on a small number of samples that carry the most relevant information. Samples that are
not well supported by the evidence (as measured by the increase in the marginal likelihood) are
automatically pruned. (3) Because the model complexity is controlled during the training stage, the
RVM has a lower risk of over-fitting.
One limitation of the proposed approach is that the resulting model may be difficult to
interpret. Because the RVM is not a physical model, it does not allow any user-defined physical
constraints, which prevents extrapolation of the model to scenarios not represented in the training
data set. However, in our opinion, this does not constitute a shortcoming, considering that the
reliability of such extrapolation is questionable in any regression model. Another potential limitation is that the RVM
requires a user-defined kernel width parameter, which does not have a very clear intuitive meaning,
especially when working with high dimensional input vectors. However, the optimal value of the
kernel width parameter can be determined using cross-validation, as has been done in this study.
Future studies will investigate the effect of using additional predictive variables on the
performance of the model.
6. Acknowledgements
This material is based in part upon work supported by the National Science Foundation under
Grant Number CMMI-1100735.
7. References
Boore, D.M., J. Watson-Lamprey, and N.A. Abrahamson. (2006). Orientation-independent
measures of ground motion. Bulletin of the Seismological Society of America, 96(4A),
1502-1511.
Bozorgnia, Y. and K. W. Campbell. (2004). The vertical-to-horizontal response spectral ratio and
tentative procedures for developing simplified V/H and vertical design spectra. Journal of
Earthquake Engineering, 8(2), 175-207.
Campbell, K. W. and Y. Bozorgnia. (2003). Updated Near-Source Ground-Motion (Attenuation)
Relations for the Horizontal and Vertical Components of Peak Ground Acceleration and
Acceleration Response Spectra. Bulletin of the Seismological Society of America, 93(1),
314-331.
Drucker, H., C. J. C. Burges, L. Kaufman, A. Smola and V. Vapnik. (1997). Support vector
regression machines, Advances in Neural Information Processing Systems 9, MIT Press.
Idriss, I. M. (2008). An NGA empirical model for estimating the horizontal spectral values
generated by shallow crustal earthquakes. Earthquake Spectra, 24(1), 217-242.
MacKay, D. J. C. (1992). Bayesian interpolation. Neural Computation, 4(3), 415-447.
MacKay, D. J. C. (1992). The evidence framework applied to classification networks. Neural
Computation, 4(5), 720-736.
PEER. (2007). PEER-NGA Database. http://peer.berkeley.edu/nga/index.html.
Smola, A. J. and B. Schölkopf. (2004). A tutorial on support vector regression. Statistics and
Computing, 14(3), 199-222.
Tezcan, J. and Q. Cheng. (2011). A Nonparametric Characterization of Vertical Ground Motion
Effects. Earthquake Engineering and Structural Dynamics (in press).
Tezcan, J., Q. Cheng and L. Hill. (2010). Response Spectrum Estimation using Support Vector
Machines, 5th International Conference on Recent Advances in Geotechnical Earthquake
Engineering and Soil Dynamics, San Diego, CA.
Tipping, M. (2000). The relevance vector machine. Advances in Neural Information Processing
Systems, MIT Press.
Webb, A. (2002). Statistical pattern recognition, New York, John Wiley and Sons.
Dr. Jale Tezcan is an Associate Professor in the Department of Civil and Environmental
Engineering at Southern Illinois University Carbondale. She earned her Ph.D. from Rice University,
Houston, TX in 2005. Dr. Tezcan's research interests include earthquake engineering, material
characterization, and numerical methods.
Dr. Qiang Cheng is an Assistant Professor in the Department of Computer Science at Southern
Illinois University Carbondale. He earned his Ph.D. from the University of Illinois at Urbana-
Champaign, IL in 2002. Dr. Cheng's research interests include pattern recognition, machine
learning and signal processing.
Peer Review: This article has been internationally peer-reviewed and accepted for publication
according to the guidelines given at the journal’s website.