When data for only a few lower modes are available to evaluate a large number of unknown parameters, it is difficult to acquire information about all of them. The challenge in this kind of updating problem is first to gain confidence in the parameters that are evaluated correctly from the available data, and second to obtain information about the remaining parameters. In this work, the first issue is resolved by employing the sensitivity of the modal data used for updating. Once it is established which parameters are evaluated satisfactorily from the available modal data, the remaining parameters are evaluated using the modal data of a virtual structure. This virtual structure is created by adding or removing a known stiffness to or from some of the stories of the original structure. A 12-story shear building is considered for the numerical illustration of the approach. The results show that the present approach is an effective tool for system identification when only a few data are available for updating.
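As a rough illustration of the sensitivity screening described above, the sketch below assembles the stiffness matrix of an n-story shear building and computes the eigenvalue sensitivities of the lower modes with respect to each story stiffness; stories whose sensitivity is comparatively small are the ones the available modes cannot identify well. The stiffness and mass values and the number of modes are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.linalg import eigh

def shear_building_K(k):
    """Tridiagonal stiffness matrix of a shear building; k[j] is the
    stiffness of story j (story 0 connects floor 0 to the ground)."""
    n = len(k)
    K = np.zeros((n, n))
    K[0, 0] = k[0]
    for j in range(1, n):
        K[j, j] += k[j]
        K[j - 1, j - 1] += k[j]
        K[j - 1, j] -= k[j]
        K[j, j - 1] -= k[j]
    return K

def modal_sensitivities(k, m, n_modes=3):
    """d(lambda_i)/d(k_j) for mass-normalised modes of the building."""
    lam, phi = eigh(shear_building_K(k), np.diag(m))  # generalized eigenproblem
    S = np.zeros((n_modes, len(k)))
    for i in range(n_modes):
        v = phi[:, i]
        S[i, 0] = v[0] ** 2                  # dK/dk_0 = e_0 e_0^T
        S[i, 1:] = (v[1:] - v[:-1]) ** 2     # inter-story drift terms
    return lam[:n_modes], S

k = np.full(12, 2.0e8)   # illustrative story stiffnesses (N/m)
m = np.full(12, 3.0e5)   # illustrative floor masses (kg)
lam, S = modal_sensitivities(k, m)

# Stories with small total sensitivity over the measured modes are poorly
# identified; these become candidates for the virtual-structure step, e.g.
# adding a known stiffness to those stories and re-measuring.
print("relative sensitivity per story:", np.round(S.sum(axis=0) / S.sum(), 3))
```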
Multimodal authentication is one of the prime concepts in current real-world applications, and various approaches have been proposed for it. In this paper, an intuitive strategy is proposed as a framework for providing a more secure key for biometric security. First, features are extracted from the chosen biometric patterns through PCA computed via SVD; key components are then extracted using the LU factorization technique, selected at different key sizes, and combined using a convolution kernel method (Exponential Kronecker Product, eKP) as a Context-Sensitive Exponent Associative Memory model (CSEAM). Verification proceeds in the same way and is assessed with the MSE measure. This model gives a better outcome than SVD factorization [1] used for feature selection. The process is computed for different key sizes and the results are presented.
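The linear-algebra skeleton of such a pipeline might look like the sketch below: PCA features via SVD, key components via LU factorization, and a Kronecker product standing in for the eKP combination step. The pattern, the retained dimensions and the key size are invented for illustration, and the CSEAM model itself is not reproduced here.

```python
import numpy as np
from scipy.linalg import lu, svd

rng = np.random.default_rng(0)
pattern = rng.random((64, 64))        # hypothetical biometric pattern block

# Step 1: PCA via SVD -- the leading singular directions act as features.
U, s, Vt = svd(pattern, full_matrices=False)
features = U[:, :16] * s[:16]         # 16 retained principal components

# Step 2: LU factorisation of the symmetrised feature matrix; entries of the
# upper-triangular factor serve as key components, from which a key of the
# chosen size is selected.
P, L, Umat = lu(features @ features.T)
key_size = 8
key = Umat[:key_size, :key_size][np.triu_indices(key_size)]

# Step 3: a Kronecker-product combination of two key halves, standing in for
# the eKP step of the paper.
half = len(key) // 2
combined = np.kron(key[:half], key[half:2 * half])
print(combined.shape)
```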
Multi-Dimensional Features Reduction of Consistency Subset Evaluator on Unsup...
This paper presents the application of multi-dimensional feature reduction with the Consistency Subset Evaluator (CSE) and Principal Component Analysis (PCA), together with the Unsupervised Expectation Maximization (UEM) classifier, for an imaging surveillance system. Recently, research in image processing has raised much interest in the security surveillance community, and weapon detection is one of the greatest challenges facing it. To address this issue, the UEM classifier is applied to the task of detecting dangerous weapons, while CSE and PCA are used to explore the usefulness of each feature and reduce the multi-dimensional features to simplified features with no underlying hidden structure. We take advantage of the simplified features and the classifier to categorize image objects with the aim of detecting dangerous weapons effectively. To validate the effectiveness of the UEM classifier, several classifiers are used to compare the overall accuracy of the system when complemented by the feature reduction of CSE and PCA. These unsupervised classifiers include the Farthest First, Density-based Clustering and k-Means methods. The final outcome of this research clearly indicates that UEM improves classification accuracy using the features extracted by the multi-dimensional feature reduction of CSE. It is also shown that PCA is able to speed up computation through the reduced dimensionality of the features, at the cost of a slight decrease in accuracy.
MULTI-PARAMETER BASED PERFORMANCE EVALUATION OF CLASSIFICATION ALGORITHMS
Diabetes is among the most common diseases in India. It affects the patient's health and also leads to other chronic diseases. Prediction of diabetes plays a significant role in saving lives and cost, and predicting diabetes in the human body is a challenging task because it depends on several factors. A few studies have reported the performance of classification algorithms in terms of accuracy, but their results are difficult for medical practitioners to understand and lack visual aids, as they are presented in pure text format. This survey uses the ROC and PRC graphical measures to improve the understanding of results. A detailed parameter-wise discussion of the comparison is also presented, which is lacking in other reported surveys. Execution time, accuracy, TP rate, FP rate, precision, recall and F-measure are used for the comparative analysis, and a confusion matrix is prepared for a quick review of each algorithm. Ten-fold cross-validation is used for estimating each prediction model. Different sets of classification algorithms are analyzed on a diabetes dataset acquired from the UCI repository.
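A minimal sketch of the evaluation protocol described above, using scikit-learn on stand-in data (the survey uses the UCI diabetes set): ten-fold cross-validation scored with accuracy, precision, recall, F-measure, ROC area and PRC area, plus a confusion matrix pooled over the folds.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict, cross_validate
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in; the survey would load the UCI Pima diabetes data here.
X, y = make_classification(n_samples=768, n_features=8, random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = DecisionTreeClassifier(random_state=0)

scores = cross_validate(clf, X, y, cv=cv,
                        scoring=["accuracy", "precision", "recall",
                                 "f1", "roc_auc", "average_precision"])
for name, vals in scores.items():
    if name.startswith("test_"):
        print(f"{name[5:]:18s} {vals.mean():.3f}")

# Confusion matrix pooled over the ten folds, for a quick per-class review.
y_pred = cross_val_predict(clf, X, y, cv=cv)
print(confusion_matrix(y, y_pred))
```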
Fault detection based on novel fuzzy modelling
Fault detection based on fuzzy modelling is investigated. A Takagi-Sugeno (TS) fuzzy model can be derived by structure and parameter identification when only the input-output data of the identified system are available. In the structure identification step, the Gustafson-Kessel clustering algorithm (GKCA) is used to detect clusters of different geometrical shapes in the data set and to obtain the point-wise membership functions of the premise. In the parameter identification step, the unscented Kalman filter (UKF) is used to estimate the parameters of the premise membership functions. In the consequent part, the Kalman filter (KF) algorithm is applied as a linear regression to estimate the parameters of the TS model using the input-output data set. The obtained fuzzy model is then used to detect the fault. Simulations demonstrate the effectiveness of the theoretical results.
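The consequent-part regression admits a compact sketch. Below, each rule's affine consequent is fitted by weighted least squares using the point-wise memberships from the clustering step; this stands in for the paper's KF recursion, and the premise identification (GKCA, UKF) is assumed to have already produced the membership matrix W.

```python
import numpy as np

def ts_consequents(X, y, W):
    """Estimate the linear consequent parameters of a TS model by weighted
    least squares, one local affine model per fuzzy rule.
    X : (N, d) inputs, y : (N,) outputs,
    W : (N, c) point-wise membership degrees from the clustering step."""
    Xe = np.hstack([X, np.ones((len(X), 1))])      # affine consequents
    theta = []
    for r in range(W.shape[1]):
        sw = np.sqrt(W[:, r])[:, None]             # per-sample rule weights
        beta, *_ = np.linalg.lstsq(Xe * sw, y * sw.ravel(), rcond=None)
        theta.append(beta)
    return np.array(theta)                          # shape (c, d+1)

def ts_predict(X, W, theta):
    """Weighted blend of the rule-wise local models."""
    Xe = np.hstack([X, np.ones((len(X), 1))])
    local = Xe @ theta.T                            # each rule's output
    Wn = W / W.sum(axis=1, keepdims=True)
    return (Wn * local).sum(axis=1)
```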
A HYBRID MODEL FOR MINING MULTI-DIMENSIONAL DATA SETS
This paper presents a hybrid data mining approach based on supervised and unsupervised learning to identify the closest data patterns in the database. The technique achieves a maximum accuracy rate with minimal complexity. The proposed algorithm is compared with traditional clustering and classification algorithms and is also evaluated on multidimensional datasets. The implementation results show better prediction accuracy and reliability.
BPSO&1-NN algorithm-based variable selection for power system stability ident...
Due to the very high nonlinearity of the power system, traditional analytical methods take a long time to solve, causing delays in decision-making; quickly detecting power system instability, so that the control system can make timely decisions, is therefore the key factor in ensuring stable operation. Power system stability identification encounters the problem of large data set size, so representative variables must be selected as inputs to the identifier. This paper proposes applying a wrapper method for variable selection in which the Binary Particle Swarm Optimization (BPSO) algorithm is combined with a K-NN (K=1) identifier to search for a good set of variables; the combination is named BPSO&1-NN. Test results on the IEEE 39-bus system show that the proposed method achieves the goal of reducing variables while maintaining high accuracy.
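A compact sketch of the BPSO&1-NN wrapper idea: each particle is a binary feature mask, fitness is the cross-validated accuracy of a 1-NN classifier on the masked variables, and the usual sigmoid rule binarizes the velocities. The swarm parameters and the synthetic data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)   # stand-in for the stability data
n_particles, n_iter, n_feat = 20, 30, X.shape[1]

def fitness(mask):
    """Cross-validated accuracy of 1-NN on the selected variables."""
    cols = mask.astype(bool)
    if not cols.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=1),
                           X[:, cols], y, cv=3).mean()

pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
vel = np.zeros((n_particles, n_feat))
pbest = pos.copy()
pfit = np.array([fitness(p) for p in pos])
gbest = pbest[pfit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_feat))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Sigmoid transfer function turns velocities into bit-flip probabilities.
    pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pfit
    pbest[improved], pfit[improved] = pos[improved], fit[improved]
    gbest = pbest[pfit.argmax()].copy()

print("selected variables:", np.flatnonzero(gbest), "fitness:", round(pfit.max(), 3))
```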
An Influence of Measurement Scale of Predictor Variable on Logistic Regressio...
Much real-world decision making is based on binary categories of information: agree or disagree, accept or reject, succeed or fail, and so on. Information of this kind is the output of classification methods from the domain of statistics (e.g. the logistic regression method) and machine learning (e.g. Learning Vector Quantization, LVQ). The input of a classification method has a crucial role in the resulting output. This paper investigates the influence of various types of input data measurement (interval, ratio and nominal) on the performance of logistic regression and LVQ in classifying an object. Logistic regression modelling is done in several stages until a model that passes the model suitability test is obtained; LVQ modelling is tested on several codebook sizes and the most optimal LVQ model is selected. The best model of each method is then compared on object classification performance using the hit ratio indicator. Logistic regression modelling yielded two models that meet the suitability test, with predictor variables on the interval and nominal scales, while LVQ modelling yielded three optimal models with different codebooks. On data with interval-scale predictor variables, the performance of both methods is the same; both perform equally poorly when the predictor variables are on the nominal scale. On data with ratio-scale predictor variables, the LVQ method produces moderate performance, while no logistic regression model meeting the suitability test is obtained. Thus, if the input dataset has interval- or ratio-scale predictor variables, it is preferable to use the LVQ method for modelling the object classification.
Among the many data clustering approaches available today, mixed data sets of numeric and categorical data pose a significant challenge because of the difficulty of choosing and employing appropriate distance/similarity functions for clustering and its verification. Unsupervised learning models for artificial neural networks offer an alternative means of data clustering and analysis. The objective of this study is to highlight an approach, and its associated considerations, for mixed-data-set clustering with the Adaptive Resonance Theory 2 (ART-2) artificial neural network model, with subsequent validation of the clusters through dimensionality reduction using an autoencoder neural network model.
Smooth Support Vector Machine for Suicide-Related Behaviours Prediction
Suicide-related behaviours in psychiatric patients need to be prevented, and predicting those behaviours from patient medical records would be very useful for prevention by the psychiatric hospital. This research focused on developing such a prediction at the only psychiatric hospital of Bali Province, using the Smooth Support Vector Machine method, a further development of the Support Vector Machine. The method used 30,660 patient medical records from the last five years. Data cleaning yielded 2,665 records relevant to this research, including 111 patients with suicide-related behaviours under active treatment. The cleaned data were then transformed into ten predictor variables and a response variable, and split into training and testing sets for building and evaluating the model. Based on the experiments, the best average accuracy of 63% is obtained by using 30% of the relevant data for testing and training data with a one-to-one ratio between patients with suicide-related behaviours and patients without them. In future work, accuracy improvement should be investigated using the Reduced Support Vector Machine method, a further development of the Smooth Support Vector Machine.
A survey of modified support vector machine using particle of swarm optimizat...
The main objective of this survey paper is to provide a detailed description of wireless sensor networks at the medium access control layer and the routing layer. In the medium access control layer, the Event-Driven Time Division Multiple Access protocol is studied, and in the network layer two routing protocols, Bellman-Ford and Dynamic Source Routing, are studied.
PERFORMANCE ASSESSMENT OF ANFIS APPLIED TO FAULT DIAGNOSIS OF POWER TRANSFORMER
Continuous monitoring of a power transformer is essential during its operation, and incipient faults inside the tank and winding insulation need careful attention. Traditional ratio methods and the Duval triangle can be employed to diagnose incipient faults, but a correct diagnosis is often not possible because of borderline problems and the existence of multiple faults. Artificial intelligence (AI) techniques could be the best solution for handling the nonlinearity and complexity of the input data. In the proposed work, an adaptive neuro-fuzzy inference system (ANFIS) is utilized to deal with nine incipient fault conditions, including the healthy condition, of a power transformer, using a sufficient number of DGA transformer-oil samples. A comparison of the diagnostic performance of the methods and their feasibility for the problem is presented. The diagnosis error in classifying the oil samples and the network structure are the main considerations of the present study.
Improved probabilistic distance based locality preserving projections method ...
In this paper, dimensionality reduction in large datasets is achieved using the proposed distance-based Non-integer Matrix Factorization (NMF) technique, which is intended to solve the data dimensionality problem. Here, NMF and distance measurement aim to resolve the non-orthogonality problem caused by increased dataset dimensionality. The method initially partitions the datasets, organizes them into a defined geometric structure, and captures the dataset structure through a distance-based similarity measurement. The proposed method is designed to fit dynamic datasets and includes the intrinsic structure using data geometry. The complexity of the data is further reduced using an Improved Distance-based Locality Preserving Projection. The proposed method is evaluated against existing methods in terms of accuracy, average accuracy, mutual information and average mutual information.
A chi-square-SVM based pedagogical rule extraction method for microarray data...
The Support Vector Machine (SVM) is currently an efficient classification technique due to its ability to capture nonlinearities in diagnostic systems, but it does not reveal the knowledge learnt during training. It is important to understand how a decision is reached in machine learning applications such as bioinformatics. A decision tree, on the other hand, has good comprehensibility; the process of converting such incomprehensible models into an understandable one is often called rule extraction. In this paper we propose an approach for extracting rules from an SVM for microarray data that combines the merits of the SVM and the decision tree. The approach consists of three steps: SVM-CHI-SQUARE is employed to reduce the feature set; the dataset with reduced features is used to obtain the SVM model and synthetic data are generated; and a Classification and Regression Tree (CART) is used to generate rules in the last phase. We use the breast masses dataset from the UCI repository, where comprehensibility is a key requirement. The experiments show that, with the reduced-feature dataset, the proposed approach extracts rules of smaller length, thereby improving the comprehensibility of the system. We obtained an accuracy of 93.53%, sensitivity of 89.58%, specificity of 96.70%, and a training time of 3.195 seconds. A comparative analysis with other algorithms is also carried out.
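The pedagogical extraction step can be sketched as follows: chi-square feature reduction, an SVM fitted on the reduced features, and a CART tree trained to mimic the SVM's predictions so that its branches read as rules. The scikit-learn breast cancer set stands in for the paper's breast-masses data, and the depth and k values are arbitrary choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for the breast-masses data used in the paper.
X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)        # chi2 requires non-negative inputs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: chi-square feature reduction.
sel = SelectKBest(chi2, k=8).fit(X_tr, y_tr)
X_tr_s, X_te_s = sel.transform(X_tr), sel.transform(X_te)

# Step 2: fit the SVM; its predictions become the pedagogical labels.
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr_s, y_tr)

# Step 3: CART learns to mimic the SVM, yielding readable rules.
cart = DecisionTreeClassifier(max_depth=3, random_state=0)
cart.fit(X_tr_s, svm.predict(X_tr_s))
print(export_text(cart))
print("fidelity on test:", (cart.predict(X_te_s) == svm.predict(X_te_s)).mean())
```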
Comparison on PCA ICA and LDA in Face Recognition
Face recognition is used in a wide range of applications, and in recent years it has become one of the most successful applications of image analysis and understanding. Different statistical methods and research groups have reported contradictory results when comparing the principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA) algorithms proposed in recent years. The goal of this paper is to compare and analyze the three algorithms and conclude which is best. The FERET dataset is used for consistency.
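A sketch of such a comparison with scikit-learn, using the freely available Olivetti faces in place of FERET (which is not freely redistributable): each subspace method feeds a 1-NN classifier and test accuracy is compared. The component counts are illustrative.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, stratify=faces.target, random_state=0)

candidates = {
    "PCA": PCA(n_components=40, whiten=True, random_state=0),
    "ICA": FastICA(n_components=40, max_iter=1000, random_state=0),
    "LDA": LinearDiscriminantAnalysis(n_components=30),  # <= n_classes - 1
}
for name, reducer in candidates.items():
    clf = make_pipeline(reducer, KNeighborsClassifier(n_neighbors=1))
    clf.fit(X_tr, y_tr)
    print(f"{name}: {clf.score(X_te, y_te):.3f}")
```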
KNOWLEDGE BASED ANALYSIS OF VARIOUS STATISTICAL TOOLS IN DETECTING BREAST CANCER
In this paper, we study the performance of machine learning tools in classifying breast cancer. We compare data mining tools such as Naïve Bayes, support vector machines, radial basis function neural networks, the J48 decision tree and simple CART. We used both binary and multi-class data sets, namely WBC, WDBC and Breast Tissue, from the UCI machine learning repository. The experiments are conducted in WEKA. The aim of this research is to find the best classifier with respect to accuracy, precision, sensitivity and specificity in detecting breast cancer.
A novel ensemble modeling for intrusion detection system
The vast increase in data carried by internet services has made computer systems more vulnerable and difficult to protect from malicious attacks, so intrusion detection systems (IDSs) must become more potent in monitoring intrusions. An effectual intrusion detection architecture is therefore built which employs a simple classification model and yields low false-alarm rates and high accuracy. Notably, IDSs endure enormous amounts of traffic that contain redundant and irrelevant features, which affect their performance negatively, while good feature selection approaches reduce unrelated and redundant features and attain better classification accuracy. This paper proposes a novel ensemble model for IDS based on two algorithms: Fuzzy Ensemble Feature Selection (FEFS) and Fusion of Multiple Classifiers (FMC). FEFS is a unification of five feature scores obtained from feature-class distance functions, aggregated by the fuzzy union operation. The FMC is a fusion of three classifiers and works on an ensemble decision function. Experiments on the KDD Cup 99 data set show that the proposed system is superior to well-known methods such as Support Vector Machines (SVMs), K-Nearest Neighbour (KNN) and Artificial Neural Networks (ANNs). Our examination clearly confirms the value of the ensemble methodology for modelling IDSs; the system is robust and efficient.
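The FEFS aggregation step reduces to an element-wise fuzzy union (maximum) over normalized per-feature scores, which the short sketch below illustrates on made-up score vectors; the paper's five distance functions are not reproduced here.

```python
import numpy as np

def fuzzy_union_scores(score_matrix):
    """Combine several per-feature relevance scores by the standard fuzzy
    union (element-wise maximum) after rescaling each score to [0, 1].
    score_matrix : (n_scores, n_features)."""
    s = np.asarray(score_matrix, dtype=float)
    mn, mx = s.min(axis=1, keepdims=True), s.max(axis=1, keepdims=True)
    s = (s - mn) / np.where(mx > mn, mx - mn, 1.0)   # per-score normalisation
    return s.max(axis=0)                             # fuzzy union

# Hypothetical distance-based scores for six features from three criteria.
scores = np.array([[0.2, 0.9, 0.4, 0.1, 0.7, 0.3],
                   [0.5, 0.8, 0.2, 0.2, 0.6, 0.1],
                   [0.1, 0.7, 0.9, 0.3, 0.5, 0.2]])
combined = fuzzy_union_scores(scores)
print(np.argsort(combined)[::-1])   # features ranked by combined relevance
```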
Hypothesis on Different Data Mining Algorithms
In this paper, different classification algorithms for data mining are discussed. Data mining is about explaining the past and predicting the future by means of data analysis. Classification is a data mining task that categorizes data based on numerical or categorical variables. Many algorithms have been proposed for classification; five of them are comparatively studied here. There are four different classification approaches, namely frequency table, covariance matrix, similarity functions and others. Algorithms such as Naive Bayes, K-Nearest Neighbours, Decision Tree, Artificial Neural Network and Support Vector Machine are studied and examined using benchmark datasets such as Iris and Lung Cancer.
EFFICIENT APPROACH FOR DESIGNING A PROTOCOL FOR IMPROVING THE CAPACITY OF ADH...
In ad hoc networks, the prime issues affecting the deployment, design and performance of an ad hoc wireless system are routing, the MAC scheme, TCP, multicasting, energy management, pricing schemes and self-organization, security, and deployment considerations. Routing protocols are designed to improve throughput and minimize packet loss. Another aspect is the efficient management of energy and the requirement of protracted network connectivity; the routing algorithm designed for such a network should monitor the energy of each node and route packets accordingly. Ad hoc networks in general have many limitations in bandwidth, memory and computational power. There are frequent path breaks due to mobility, time synchronization is difficult and consumes more bandwidth, and bandwidth reservation requires a complex medium access control protocol. Quantitative and qualitative metrics analysis has been done in this field, but an analysis of protocol performance for improving the capacity of ad hoc networks using probabilistic approaches is yet to be proposed. Our probabilistic approach covers the analysis of various computational parameters for different mobility structures. In our proposed method we distribute the mobile nodes using a Pareto distribution and formulate various energy models using regression statistics.
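The node-placement step might be sketched as below: radial distances drawn from a Pareto distribution place nodes with the intended heavy-tailed spread, and a toy linear regression stands in for the energy models. The shape, scale and energy relation are invented values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)
n_nodes, area = 50, 1000.0          # 50 nodes in a 1000 m x 1000 m field

# numpy's pareto() draws Lomax samples, so scale*(1 + draw) gives a
# classical Pareto distance with minimum `scale`; shape/scale are illustrative.
shape, scale = 2.0, 50.0
r = scale * (1 + rng.pareto(shape, n_nodes))
theta = rng.uniform(0, 2 * np.pi, n_nodes)
xy = np.clip(np.c_[r * np.cos(theta), r * np.sin(theta)] + area / 2, 0, area)

# Toy energy model fitted by linear regression on distance, as a stand-in
# for the paper's regression-derived energy models.
dist = np.hypot(*(xy - area / 2).T)
energy = 0.05 * dist + rng.normal(0, 1.0, n_nodes)   # synthetic measurements
slope, intercept = np.polyfit(dist, energy, 1)
print(f"fitted energy model: E = {slope:.3f} * d + {intercept:.3f}")
```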
PDE BASED FEATURES FOR TEXTURE ANALYSIS USING WAVELET TRANSFORM
In the present paper, a novel method of partial differential equation (PDE) based features for texture analysis using the wavelet transform is proposed. The aim is to investigate texture descriptors that perform better at low computational cost. The wavelet transform is applied to obtain directional information from the image, and anisotropic diffusion is used to find a texture approximation from the directional information. The texture approximation is then used to compute various statistical features, and LDA is employed to enhance class separability. A k-NN classifier with tenfold experimentation is used for classification. The proposed method is evaluated on the Brodatz dataset, and the experimental results demonstrate its effectiveness compared to other methods in the literature.
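A stripped-down version of the feature pipeline (omitting the anisotropic-diffusion stage, which is the paper's contribution) can be sketched with PyWavelets and scikit-learn: one-level wavelet statistics per sub-band, LDA for separability, and a tenfold-validated k-NN. The synthetic patches stand in for Brodatz textures.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def wavelet_stats(img, wavelet="db2"):
    """Statistical features from one level of a 2-D wavelet decomposition."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    feats = []
    for band in (cA, cH, cV, cD):
        feats += [band.mean(), band.std(), np.abs(band).mean()]
    return np.array(feats)

# Synthetic stand-in for Brodatz texture patches: two noise "textures".
rng = np.random.default_rng(0)
patches = np.concatenate([rng.normal(0, 1, (40, 32, 32)),
                          rng.normal(0, 2, (40, 32, 32))])
labels = np.repeat([0, 1], 40)
X = np.array([wavelet_stats(p) for p in patches])

# LDA for class separability, then 1-NN with tenfold cross-validation.
clf = make_pipeline(LinearDiscriminantAnalysis(), KNeighborsClassifier(1))
print(cross_val_score(clf, X, labels, cv=10).mean())
```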
Home Equity Diversification Plan
This long-term investment strategy uses the popular 'Smith Manoeuvre' technique to make your mortgage interest tax-deductible. It uses the power of dollar-cost averaging and leverage to potentially amplify gains over the long term. This strategy is long-term and not for the risk-averse.
SOFTWARE TOOL FOR TRANSLATING PSEUDOCODE TO A PROGRAMMING LANGUAGE
Pseudocode is an artificial and informal language that helps programmers develop algorithms. In this paper a software tool is described for translating pseudocode into a particular programming language. The tool takes pseudocode as input, compiles it and translates it into a concrete programming language. The scope of the tool is very wide, as it can be extended into a universal programming tool that produces any specified programming language from given pseudocode. Here we present a solution for translating pseudocode into a programming language by implementing the stages of a compiler.
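A toy translator conveys the idea: a small set of lexical rules maps a pseudocode dialect line by line to Python, tracking indentation for blocks. The dialect, keywords and target language here are invented for illustration; the paper's actual grammar is not specified in the abstract.

```python
import re

# Rules of a tiny, invented pseudocode dialect -> Python.
RULES = [
    (re.compile(r"^SET (\w+) TO (.+)$"), r"\1 = \2"),
    (re.compile(r"^IF (.+) THEN$"),      r"if \1:"),
    (re.compile(r"^WHILE (.+) DO$"),     r"while \1:"),
    (re.compile(r"^PRINT (.+)$"),        r"print(\1)"),
]

def translate(pseudocode: str) -> str:
    out, indent = [], 0
    for raw in pseudocode.strip().splitlines():
        line = raw.strip()
        if line.startswith("END"):       # ENDIF / ENDWHILE close a block
            indent -= 1
            continue
        for pattern, repl in RULES:
            if pattern.match(line):
                out.append("    " * indent + pattern.sub(repl, line))
                if line.endswith(("THEN", "DO")):
                    indent += 1          # the next lines belong to the block
                break
        else:
            raise SyntaxError(f"unrecognised pseudocode: {line!r}")
    return "\n".join(out)

print(translate("""
SET total TO 0
WHILE total < 5 DO
PRINT total
SET total TO total + 1
ENDWHILE
"""))
```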
A SECURE SCHEMA FOR RECOMMENDATION SYSTEMS
Recommender systems have become an important tool for the personalization of online services. Generating recommendations in online services depends on privacy-sensitive data collected from users. Traditional data protection mechanisms focus on access control and secure transmission, which provide security only against malicious third parties, not against the service provider; this creates a serious privacy risk for users. This paper aims to protect private data against the service provider while preserving the functionality of the system. It provides a general framework that, with the help of a preprocessing phase independent of the users' inputs, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. These techniques are used to implement a secure recommender system based on collaborative filtering that is more secure and significantly more efficient than previously known implementations of such systems.
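The two-server outsourcing idea can be sketched with additive secret sharing: each rating vector is split into two random shares so that neither server sees plaintext, and a similarity (dot product) decomposes into locally computable terms plus cross terms that would need an interactive protocol. The modulus and data are toy values, and the cross terms are computed in the clear here purely for illustration.

```python
import numpy as np

MOD = 2 ** 16                     # toy modulus; real systems use large fields
rng = np.random.default_rng(1)

def share(x):
    """Split x into two additive shares: share_A + share_B = x (mod MOD)."""
    r = rng.integers(0, MOD, size=x.shape)
    return r, (x - r) % MOD

u = np.array([5, 0, 3, 4, 0])     # one user's ratings (plaintext)
v = np.array([4, 1, 0, 5, 2])     # another user's ratings
uA, uB = share(u)
vA, vB = share(v)

local_A = uA @ vA % MOD           # computable by server A alone
local_B = uB @ vB % MOD           # computable by server B alone
# The cross terms require an interactive protocol (e.g. multiplication
# triples prepared in the preprocessing phase); computed openly here.
cross = (uA @ vB + uB @ vA) % MOD

similarity = (local_A + local_B + cross) % MOD
assert similarity == (u @ v) % MOD
print("reconstructed similarity:", similarity)
```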
CONVEX OPTIMIZATION BASED CONGESTION CONTROL IN LAYERED SATELLITE NETWORKS
A multi-layered satellite network consisting of geosynchronous and nano-satellites is well suited to space situational awareness. The nano-satellites collect information about space objects and transfer data to ground stations through the geosynchronous satellites. The dynamic topology of the network, large propagation delays and bulk data transfers result in a congested network. In this paper, we present a convex-optimization-based congestion control algorithm. Using snapshots of the network, operating parameters such as incoming and outgoing rates and buffer utilization are monitored. The operating parameters of a satellite are formulated as a convex function, and using convex optimization techniques the incoming data rates are computed to minimize congestion. A performance comparison of our algorithm with the Transmission Control Protocol congestion control mechanism is presented. The simulation results show that our algorithm reduces congestion while facilitating higher transmission rates.
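A per-snapshot rate allocation in this spirit can be written as a small convex program; the sketch below (using CVXPY) maximizes a concave log utility of the incoming rates subject to link-capacity constraints. The routing matrix, capacities and objective are illustrative assumptions, not the paper's formulation.

```python
import cvxpy as cp
import numpy as np

# Toy snapshot: which flows cross which satellite links, and link capacities.
routes = np.array([[1, 1, 0],     # link 0 carries flows 0 and 1
                   [0, 1, 1],     # link 1 carries flows 1 and 2
                   [1, 0, 1]])    # link 2 carries flows 0 and 2
capacity = np.array([10.0, 8.0, 12.0])   # Mbps per link in this snapshot

rate = cp.Variable(3, nonneg=True)
# Log utility is concave, so the problem is convex and solvable per snapshot;
# it also yields proportionally fair incoming rates.
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(rate))),
                     [routes @ rate <= capacity])
problem.solve()
print("optimal incoming rates:", np.round(rate.value, 2))
```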
COMPARATIVE ANALYSIS OF ANOMALY BASED WEB ATTACK DETECTION METHODS
In the present scenario, protecting websites from web-based attacks is a great challenge because of malicious users on the Internet, and researchers are trying to find the optimum solution for preventing these attacks. Several techniques are available to prevent web attacks, such as firewalls, but most firewalls are not designed to prevent attacks against websites and mostly work with signature-based detection. In this paper, we analyze different anomaly-based methods for detecting web attacks initiated by malicious users. These methods work in a different direction from signature-based detection, which only detects attacks for which a signature has previously been created. We introduce two methods based on attribute values: the Attribute Length Method (ALM) and the Attribute Character Distribution Method (ACDM). We then carry out a mathematical analysis of three different web attacks and compare the False Accept Rate (FAR) of both methods. The analysis reveals that ALM is more efficient than ACDM in detecting web-based attacks.
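A minimal sketch of an attribute-length check in the spirit of ALM: learn the mean and variance of an attribute's length from legitimate requests, then bound the probability of a new length with the Chebyshev inequality. The training strings and the 0.1 threshold are invented for illustration.

```python
import numpy as np

class AttributeLengthModel:
    """Learn the length statistics of one request attribute from legitimate
    traffic; score new values with the Chebyshev bound so that extreme
    lengths receive a low probability."""

    def fit(self, values):
        lengths = np.array([len(v) for v in values], dtype=float)
        self.mu, self.var = lengths.mean(), lengths.var() + 1e-9
        return self

    def score(self, value):
        d = abs(len(value) - self.mu)
        # Chebyshev: P(|L - mu| >= d) <= var / d^2, capped at 1.
        return 1.0 if d == 0 else min(1.0, self.var / d ** 2)

normal = ["id=17", "id=20355", "id=9"]           # legitimate attribute values
alm = AttributeLengthModel().fit(normal)
for probe in ["id=42", "id=<script>alert(1)</script>"]:
    flag = "ANOMALY" if alm.score(probe) < 0.1 else "ok"
    print(f"{probe!r}: p={alm.score(probe):.4f} {flag}")
```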
REAL TIME ERROR DETECTION IN METAL ARC WELDING PROCESS USING ARTIFICIAL NEURA...
Quality assurance on a production line demands reliable weld joints, and human error is a major cause of faulty production. Promptly identifying errors in the weld while welding is in progress decreases the post-inspection cost of the welding process. The electrical parameters generated during welding can characterize the process efficiently; parameter values are collected using a high-speed data acquisition system. Time-series analysis tasks such as filtering and pattern recognition are performed on the collected data. Filtering removes unwanted noisy signal components, and the pattern recognition task segregates error patterns in the time series based on similarity, performed by the self-organizing map clustering algorithm. A welder's quality is thus assessed by detecting and counting the number of error patterns appearing in the parametric time series. Moreover, the self-organizing map algorithm provides a database in which patterns are segregated into two classes, desirable or undesirable. The database thus generated is used to train classification algorithms, thereby automating the real-time error detection task. The multilayer perceptron and radial basis function networks are the two classification algorithms used, and their performance is compared on metrics such as specificity, sensitivity, accuracy and training time.
ARTIFICIAL NEURAL NETWORK FOR DIAGNOSIS OF PANCREATIC CANCER
Cancer is a malignant growth or tumour which forms due to uncontrolled division of cells in a part of the body and may even lead to death. Cancers are of different types depending upon the part of the body affected; if it is the pancreas, the disease is termed pancreatic cancer. This paper presents an artificial neural network model to diagnose pancreatic cancer based on a set of symptoms. The ANN model is created after analysing the actual procedure of disease diagnosis by a doctor. An approach to detect the various stages of cancer affecting the pancreas is presented. The results of the study suggest the advantage of using the ANN model instead of manual disease diagnosis.
DETECTING PACKET DROPPING ATTACK IN WIRELESS AD HOC NETWORK
In wireless ad hoc networks, packet loss is a serious issue; it is caused either by link errors or by malicious packet dropping. Malicious nodes on a route can intentionally drop packets during transmission from source to destination, and it is difficult to distinguish packet loss due to link errors from malicious dropping. Presented here is a mechanism that detects malicious packet dropping by using the correlation between packets. An auditing architecture based on a homomorphic linear authenticator can be used to ensure proof of reception of packets at each node, and a reputation mechanism based on indirect reciprocity can be used to ensure the forwarding of packets at each node.
Template matching is a basic method in image analysis for extracting useful information from images. In this paper, we suggest a new method for pattern matching. Our method transforms the template image from a two-dimensional image into a one-dimensional vector, and all sub-windows (of the same size as the template) in the reference image are likewise transformed into one-dimensional vectors. Three similarity measures, SAD, SSD and Euclidean distance, are used to compute the likeness between the template and all sub-windows in the reference image to find the best match. The experimental results show the superior performance of the proposed method over conventional methods on various templates of different sizes.
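The method as described maps naturally onto a vectorized sketch: every same-size sub-window of the reference image is flattened against the template, and SAD, SSD or Euclidean cost selects the best match. The images below are random stand-ins.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def match_template(ref, tpl, measure="sad"):
    """Compare the template with every same-size sub-window of the
    reference image and return the location of the best match."""
    wins = sliding_window_view(ref, tpl.shape)      # (H', W', h, w)
    diffs = wins - tpl                              # broadcast over windows
    if measure == "sad":
        cost = np.abs(diffs).sum(axis=(-2, -1))
    elif measure == "ssd":
        cost = (diffs ** 2).sum(axis=(-2, -1))
    else:                                           # Euclidean distance
        cost = np.sqrt((diffs ** 2).sum(axis=(-2, -1)))
    return np.unravel_index(cost.argmin(), cost.shape)

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
tpl = ref[20:28, 33:41].copy()                      # plant a known patch
print(match_template(ref, tpl, "sad"))              # expect (20, 33)
```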
QUANTUM CLUSTERING-BASED FEATURE SUBSET SELECTION FOR MAMMOGRAPHIC I...
In this paper, we present an algorithm for feature selection, labeled QC-FS (Quantum Clustering for Feature Selection), which performs the selection in two steps. First, the original feature space is partitioned using the Quantum Clustering algorithm in order to group similar features. Then a representative of each cluster is selected, using similarity measures such as the correlation coefficient (CC) and mutual information (MI); the feature that maximizes this information is chosen by the algorithm.
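A sketch of the two-step selection, with ordinary hierarchical clustering on feature correlation standing in for the Quantum Clustering step (which is the paper's contribution and is not reproduced here): cluster the features, then pick each cluster's representative by mutual information with the class.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)          # stand-in dataset

# Step 1 (stand-in): group correlated features by agglomerative clustering.
corr = np.corrcoef(X, rowvar=False)
dist = 1 - np.abs(corr)                  # similar features -> small distance
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
clusters = fcluster(Z, t=0.3, criterion="distance")

# Step 2: representative per cluster = feature with maximal MI with the class.
mi = mutual_info_classif(X, y, random_state=0)
selected = [int(np.flatnonzero(clusters == c)[np.argmax(mi[clusters == c])])
            for c in np.unique(clusters)]
print("representative features:", sorted(selected))
```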
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on...
Cost estimating at the schematic design stage, as the basis of project evaluation, engineering design and cost management, plays an important role in project decisions under a limited definition of scope, constraints on available information and time, and the presence of uncertainties. The purpose of this study is to compare the performance of cost estimation models built with two different hybrid artificial intelligence approaches: the regression analysis-adaptive neuro-fuzzy inference system (RANFIS) and case-based reasoning-genetic algorithm (CBR-GA) techniques. The models were developed from the same 50 low-cost apartment project datasets in Indonesia. Tested on another five testing data, the models were proven to perform very well in terms of accuracy. The CBR-GA model was found to be the best performer but suffered from the disadvantage of needing 15 cost drivers, compared to only 4 cost drivers required by RANFIS for on-par performance.
SENSITIVITY ANALYSIS IN A LIDAR-CAMERA CALIBRATION
In this paper, a variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR (Light Detection and Ranging) laser sensor. Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in both a local and a global way. The local approach analyses the output variance with respect to the input; only one parameter is varied at a time. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated over the total variation range of the input parameters. We quantify the uncertainty behaviour of the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, along with the sensitivity ratio of the laser-camera calibration data.
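The Sobol part of such a study can be sketched with the SALib package on a toy stand-in for the calibration model; the parameters, bounds and response function below are invented and serve only to show the sample-evaluate-analyze workflow, not the authors' implementation.

```python
import numpy as np
from SALib.analyze import sobol
from SALib.sample import saltelli

# Hypothetical 3-parameter stand-in for the calibration model: the real
# study propagates noise through intrinsic camera parameters.
problem = {
    "num_vars": 3,
    "names": ["focal", "cx", "noise"],
    "bounds": [[900, 1100], [310, 330], [0.0, 2.0]],
}

def model(p):
    focal, cx, noise = p
    return focal * 0.001 + 0.01 * cx + noise ** 2   # illustrative response

X = saltelli.sample(problem, 1024)       # Saltelli design for Sobol indices
Y = np.apply_along_axis(model, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:6s} first-order={s1:.3f} total={st:.3f}")
```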
On Confidence Intervals Construction for Measurement System Capability Indica...
Abstract: Many criteria have been proposed to determine the capability of a measurement system, all based on estimates of variance components; among them are the precision-to-tolerance ratio, the signal-to-noise ratio and the probabilities of misclassification. For most of these indicators there are no exact confidence intervals, since the exact distributions of the point estimators are not known. In such situations, two approaches are widely used to obtain approximate confidence intervals: the Modified Large Samples (MLS) methods initially proposed by Graybill and Wang, and the Generalized Confidence Intervals (GCI) introduced by Weerahandi. In this work we focus on constructing confidence intervals by the generalized approach in the context of gauge repeatability and reproducibility studies. Since GCIs are obtained by simulation procedures, we analyze the effect of the number of simulations on the variability of the confidence limits, as well as the effect of the size of the experiment designed to collect the data on the precision of the estimates. Both studies allowed us to derive practical implementation guidelines for the GCI approach. We finally present a real case study in which this technique was applied to evaluate the capability of a destructive measurement system.
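A small simulation conveys how a GCI is built: a generalized pivotal quantity for the repeatability variance is obtained by dividing the observed error sum of squares by chi-square draws, and percentile limits are read off; re-running with different simulation counts shows the variability the paper studies. The balanced one-way design and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 10, 3                         # parts, replicate measurements per part
data = rng.normal(0, 1, (p, r)) + rng.normal(0, 2, (p, 1))  # toy gauge study

# Within-part (repeatability) sum of squares and its degrees of freedom.
ss_e = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()
df_e = p * (r - 1)

# GPQ for the error variance: SS_e / chi-square draws; percentile limits.
n_sim = 20000
gpq_sigma2 = ss_e / rng.chisquare(df_e, n_sim)
lo, hi = np.percentile(gpq_sigma2, [2.5, 97.5])
print(f"95% GCI for repeatability variance: ({lo:.3f}, {hi:.3f})")

# Effect of the number of simulations on the variability of the limits.
for n in (500, 5000, 50000):
    draws = ss_e / rng.chisquare(df_e, n)
    print(n, np.round(np.percentile(draws, [2.5, 97.5]), 3))
```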
NEURAL NETWORKS WITH DECISION TREES FOR DIAGNOSIS ISSUES
This paper presents a new fault detection and isolation (FDI) technique applied to industrial systems. The technique is based on neural network fault-free and faulty behaviour models (NNFMs). NNFMs are used for residual generation, while a decision tree architecture is used for residual evaluation. The decision tree is built from data collected from the NNFMs' outputs and is used to isolate detectable faults depending on a computed threshold; each part of the tree corresponds to a specific residual. With the decision tree, it becomes possible to take the appropriate decision regarding the actual process behaviour by evaluating a small number of residuals. In comparison to the usual systematic evaluation of all residuals, the proposed technique requires less computational effort and can be used for on-line diagnosis. An application example is presented to illustrate and confirm the effectiveness and accuracy of the proposed approach.
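The residual-evaluation step can be sketched as follows: thresholded residuals become binary symptoms, and a shallow decision tree isolates the fault class. The residual patterns below are invented, and the NNFM residual generation is simulated by noise plus fault offsets.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 600
residuals = rng.normal(0, 0.05, (n, 3))     # 3 residuals, nominal noise
faults = rng.integers(0, 3, n)              # 0: fault-free, 1 & 2: two faults
residuals[faults == 1, 0] += 0.5            # fault 1 excites residual r0
residuals[faults == 2, 1:] += 0.4           # fault 2 excites r1 and r2

threshold = 0.2                             # the "computed threshold"
symptoms = (np.abs(residuals) > threshold).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(symptoms, faults, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=["r0", "r1", "r2"]))
print("isolation accuracy:", tree.score(X_te, y_te))
```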
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...IAEME Publication
This paper presents an approach based on applying an aggregated predictor, formed by multiple versions of a multilayer neural network with a back-propagation optimization algorithm, for helping the engineer obtain a list of the most appropriate well-test interpretation models for a given set of pressure/production data. The proposed method consists of three stages: (1) data decorrelation through principal component analysis to reduce the covariance between the variables and the dimension of the input layer of the artificial neural network, (2) bootstrap replicates of the learning set, where the data is repeatedly sampled with a random split into training sets used as new learning sets, and (3) automatic reservoir model identification through the aggregated predictor formed by a plurality vote when predicting a new class. This method is described in detail to ensure successful replication of the results. The required training and test datasets were generated using analytical solution models. In our case, 600 samples were used: 300 for training, 100 for cross-validation, and 200 for testing. Different network structures were tested during this study to arrive at an optimum network design. We notice that the single-net methodology always brings about confusion in selecting the correct model, even though the training results for the constructed networks are close to 1. We notice also that principal component analysis is an effective strategy for reducing the number of input features, simplifying the network structure, and lowering the training time of the ANN. The results obtained show that the proposed model provides better performance when predicting new data, with a coefficient of correlation of approximately 95%, compared to 80% for a previous approach. The combination of PCA and ANN is more stable and determines more accurate results with less computational complexity than was feasible previously. Clearly, the aggregated predictor is more stable and shows fewer misclassified cases compared to the previous approach.
Neural Network-Based Actuator Fault Diagnosis for a Non-Linear Multi-Tank SystemISA Interchange
The paper is devoted to the problem of robust actuator fault diagnosis of dynamic non-linear systems. In the proposed method, it is assumed that the diagnosed system can be modelled by a recurrent neural network, which can be transformed into a linear parameter varying form. Such a system description allows developing a design scheme for a robust unknown input observer within the H-infinity framework for a class of non-linear systems. The proposed approach is designed in such a way that a prescribed disturbance attenuation level is achieved with respect to the actuator fault estimation error, while guaranteeing the convergence of the observer. The application of the robust unknown input observer enables actuator fault estimation, which allows applying the developed approach to fault tolerant control tasks.
POSTERIOR RESOLUTION AND STRUCTURAL MODIFICATION FOR PARAMETER DETERMINATION IN BAYESIAN MODEL UPDATING
International Journal on Cybernetics & Informatics (IJCI) Vol. 5, No. 1, February 2016
DOI: 10.5121/ijci.2016.5118
POSTERIOR RESOLUTION AND STRUCTURAL
MODIFICATION FOR PARAMETER DETERMINATION
IN BAYESIAN MODEL UPDATING
Kanta Prajapat¹ and Samit Ray-Chaudhuri²
¹,² Department of Civil Engineering, IIT Kanpur, Kanpur, UP-208016, India
ABSTRACT
When only a few lower modes data are available to evaluate a large number of unknown parameters, it is
difficult to acquire information about all unknown parameters. The challenge in this kind of updation
problem is first to get confidence about the parameters that are evaluated correctly using the available
data and second to get information about the remaining parameters. In this work, the first issue is resolved
employing the sensitivity of the modal data used for updation. Once it is fixed that which parameters are
evaluated satisfactorily using the available modal data the remaining parameters are evaluated employing
modal data of a virtual structure. This virtual structure is created by adding or removing some known
stiffness to or from some of the stories of the original structure. A 12-story shear building is considered for
the numerical illustration of the approach. Results of the study show that the present approach is an
effective tool in system identification problem when only a few data is available for updation.
KEYWORDS
Bayesian statistics, Modal parameters, Eigen sensitivity, Structural modification, MCMC
1. INTRODUCTION
The non-uniqueness issue associated with the solution of inverse problems makes the probabilistic approach more reliable than the deterministic approach for system identification. In the last few decades, Bayesian model updating has rapidly emerged as a reliable and effective approach for solving system identification problems probabilistically. The efficiency of Bayesian model updating depends on various issues such as the efficiency of the simulation algorithm, the data used for updation, the prior distributions, the likelihood function, etc. Many of these issues have been successfully resolved in recent years [1-13]. A Bayesian probabilistic approach is applied in [24] to localize and quantify damage employing incomplete and noisy modal data. A novel approach for online health monitoring and damage assessment of structures using Bayesian probabilistic measures is presented in [25]; in this approach, the system is first identified in its undamaged state and then continuous monitoring cycles are run to detect damage in the structure. Appropriate model class selection using response measurements of structural systems, with examples of some linear and non-linear structural systems, is shown in [6]. A Bayesian approach for updation and model class selection for Masing hysteretic structural models is employed in [26]. A damage localization technique for structures under Bayesian inference using vibration measurements (modal data) on a steel cantilever beam is presented in [27]. Damage detection in plate-type structures is studied in [28]. Damage assessment of a slice of a 7-story RC building using Bayesian uncertainty quantification techniques is studied in [29].
Many times in Bayesian updation problems the available data falls short of giving complete information about all the unknown parameters. The reason is that the available data is not sensitive to all of the unknown structural parameters. Therefore, information can be acquired accurately only for those parameters to which the data used for updation is sensitive.
When modal data is used as evidence to update the structural model, many researchers have suggested different ways of assigning variances to the prediction error models of the frequency and mode shape data [14-17]. Most of these studies consider only two variances: one for the frequencies of all modes and another for the mode shape components of all modes. Only a few studies consider separate variances for the data of different modes. However, depending on various conditions, all frequencies and mode shape components of all modes may require separate variances in their prediction error models for efficient information extraction from these data points. This study employs a sensitivity-based approach recently given by the authors [23] to derive the variances of the prediction error models of the different data points so as to extract information from them efficiently.
In this work, a novel Bayesian approach is presented to determine those parameters to which the available updation data is not sensitive. The first task in this kind of problem is to identify the parameters that can be successfully resolved using the available data. For this purpose, a data-sensitivity-based term named "parameter impact" is introduced in this work. It is shown that this newly introduced term successfully separates the parameters that can be resolved using the available data from those that cannot. After this separation, the resolved parameters are treated as known. To resolve the remaining parameters, a virtual structure is created by adding some high stiffness to those stories whose stiffness has been successfully determined previously. It is observed that the modal data of this virtual structure is capable of giving information about the previously unresolved parameters. Ideally, the approach is only effective when the modal data of the virtual structure can be determined from the modal data of the original structure. Research by the authors is currently in progress for this purpose; the present study, however, assumes that the modal data of the virtual structure is known (obtained by eigenvalue analysis).
Since a shear building approximation represents most civil engineering structures appropriately, a 12-storey numerical shear building model is used to illustrate the approach. Only fundamental-mode data is used to update the stiffness parameters of the shear building model. A Markov chain Monte Carlo simulation technique with the Metropolis-Hastings algorithm is employed to simulate samples from the posterior distribution. The mean of the posterior distribution is taken as the point estimate of the unknown stiffness parameters. The results of this study show that the present approach is very efficient in resolving all unknown parameters, even when the available data is not sensitive to some of them.
2. BAYESIAN MODEL UPDATING WITH MODAL DATA
Predicting the response of a physical system to a future excitation requires a correct mathematical model of that system, so that proper retrofitting measures can be taken, if required, based on the response of the mathematical model. Bayesian model updating involves updating the parameters of an initially assumed crude mathematical model based on the response of the physical system. The updation process is considered satisfactory when the response of the mathematical model matches the response of the physical system for a given input.
This updation of model parameters is done using Bayes' theorem, as given below:
$$ p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)} \qquad (1) $$

where $\theta \in \mathbb{R}^{n}$ is the parameter vector to be updated and $D$ is the available evidence from the system. The expression $p(\theta)$ is known as the prior distribution of $\theta$, and $p(D \mid \theta)$ represents the probability of the evidence $D$ when a belief of $\theta$ is taken as true, called the likelihood of the evidence for that belief. The total probability of the evidence $D$ for the model is a constant, given by the sum of the likelihoods of the evidence over each and every belief of $\theta$, and is represented as $p(D)$. The expression $p(\theta \mid D)$ is known as the posterior distribution of the parameter vector $\theta$. When the evidence $D$ consists of modal data of the system, it can be shown that the likelihoods of the frequency and mode shape components can be expressed as:
$$ p(\omega_i \mid \theta, \sigma_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left( -\frac{\left( \omega_i - \omega_i(\theta) \right)^2}{2\sigma_i^2} \right) \qquad (2) $$

$$ p(\varphi_i \mid \theta, V_{\varphi_i}) = \frac{1}{(2\pi)^{n/2}\, |V_{\varphi_i}|^{1/2}} \exp\!\left( -\frac{1}{2} \left( \varphi_i - \varphi_i(\theta) \right)^T V_{\varphi_i}^{-1} \left( \varphi_i - \varphi_i(\theta) \right) \right) \qquad (3) $$
Here, $\omega_i$ and $\varphi_i$ represent the observed frequency and mode shape vector of the $i$-th mode of the system ($i = 1 \ldots m$), and $\omega_i(\theta)$ and $\varphi_i(\theta)$ represent the frequency and mode shape of the model for the $i$-th mode, respectively. In arriving at (2) and (3) it is assumed that the difference between the model and system responses is normally distributed with zero mean. The standard deviation of the deviation in the frequency of the $i$-th mode is taken as $\sigma_i$, and $V_{\varphi_i}$ represents the covariance matrix of the deviations of the $i$-th mode shape vector components. Now, the frequency and mode shape of a given mode are assumed to be statistically independent. Further, if each mode is independent of the other modes, then for $m$ modes the likelihood of the evidence $D$ can be given as:
$$ p(D \mid \theta) = \prod_{i=1}^{m} p(\omega_i \mid \theta, \sigma_i)\; p(\varphi_i \mid \theta, V_{\varphi_i}) \qquad (4) $$
Now, to evaluate the covariance matrix $V_{\varphi_i}$, the mode shape components are taken as uncorrelated with each other, reducing $V_{\varphi_i}$ to a diagonal matrix. Therefore, if $d$ is the length of the parameter vector $\theta$, the total number of unknown parameters in the updation problem increases to $d + m(1 + n)$, where $n$ is the number of observed degrees of freedom, and the augmented parameter vector can be expressed as:

$$ \theta = \left[\, \theta^T,\ \sigma_i^2,\ (V_{\varphi_i})_{11}, \ldots, (V_{\varphi_i})_{nn} \,\right]^T, \quad i = 1 \ldots m \qquad (5) $$
Equation (4) can now be rewritten as:

$$ p(D \mid \theta) = \prod_{i=1}^{m} \left[\, p(\omega_i \mid \theta) \prod_{j=1}^{n} p(\varphi_{ij} \mid \theta) \,\right] \qquad (6) $$
In a recent study [23], the authors showed that the sensitivity of the modal data towards the structural parameters can be used to evaluate the ratio of the variances of the error models of the frequency and mode shape components of the different modes. In this way, exhaustive information can be extracted from the data used for updation without increasing the number of unknown parameters (unknown variances for each data point) in the updation algorithm. The present study uses this approach to evaluate the unknown variances.
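As a concrete illustration, the following minimal sketch (ours, not the authors' code) evaluates the logarithm of the likelihood in Eq. (6) for a given set of modal data, assuming the Gaussian error models of Eqs. (2) and (3) with diagonal covariance matrices; all function and variable names are hypothetical.

    import numpy as np

    def log_likelihood(omega_obs, phi_obs, omega_model, phi_model,
                       sigma_omega, sigma_phi):
        """Log of Eq. (6): independent Gaussian error models for the
        frequencies and the observed mode-shape components of m modes.

        omega_obs, omega_model : (m,) modal frequencies
        phi_obs, phi_model     : (m, n) mode-shape components
        sigma_omega            : (m,) frequency error standard deviations
        sigma_phi              : (m, n) mode-shape error standard deviations
        """
        ll = 0.0
        for i in range(len(omega_obs)):
            # frequency term, Eq. (2)
            r = omega_obs[i] - omega_model[i]
            ll += -0.5 * (r / sigma_omega[i]) ** 2 \
                  - np.log(np.sqrt(2.0 * np.pi) * sigma_omega[i])
            # mode-shape terms, Eq. (3) with a diagonal covariance V_phi_i
            r = phi_obs[i] - phi_model[i]
            ll += np.sum(-0.5 * (r / sigma_phi[i]) ** 2
                         - np.log(np.sqrt(2.0 * np.pi) * sigma_phi[i]))
        return ll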
3. PROBLEM STATEMENT
Many times in a model updation problem the data used for updation falls short of acquiring information about all the unknown parameters of the problem. The Bayesian model updation algorithm works on the error minimization concept to extract information about the unknown parameters. In most Bayesian updation algorithms it is assumed that the unknown parameters are statistically independent. This makes each individual unknown parameter solely responsible for any information gained about it. Now, if a change in some unknown parameter is incapable of producing a change in the modal data used for updation under the adopted updation scheme, then no information about that parameter can be obtained in the updation process. This work presents a novel approach to obtain information about these relatively hard-to-resolve unknown parameters with the limited data available for updation. In order to acquire information about these parameters, the data used for updation must be made sensitive to changes in them. Therefore, in the first stage of the updation process, information is obtained about those unknown parameters to which the used modal data is sensitive. The next step of the updation process involves separating the parameters accurately acquired in the first stage from those not acquired. In this work, the sensitivity of the modal data towards the unknown parameters is first used for an efficient posterior resolution of the parameters. Then, a virtual structural modification based approach is used to make the available modal data sensitive to the parameters that were not acquired accurately in the first stage.
4. PROPOSED APPROACH
If modal data is considered for the updation of the unknown parameters $k_i$, $i = 1 \ldots t$, where $t$ is the total number of unknown parameters, then for the square of the frequency, $\omega_I^2$, of the $I$-th mode, its derivative with respect to a parameter $k_i$ is given by [18-22]:

$$ \frac{\partial \omega_I^2}{\partial k_i} = \frac{1}{C_I}\, \varphi_I^T \left[ \frac{\partial K}{\partial k_i} - \omega_I^2 \frac{\partial M}{\partial k_i} \right] \varphi_I \qquad (7) $$

where

$$ C_I = \varphi_I^T M\, \varphi_I \qquad (8) $$
and the derivative of the mode shape $\varphi_I$ of the $I$-th mode can be found using the expression [18-22]:

$$ \left[ K - \omega_I^2 M \right] \frac{\partial \varphi_I}{\partial k_i} = -\left[ \frac{\partial K}{\partial k_i} - \frac{\partial \omega_I^2}{\partial k_i}\, M - \omega_I^2 \frac{\partial M}{\partial k_i} \right] \varphi_I \qquad (9) $$
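For illustration, here is a minimal numerical sketch (ours, not the authors') of Eqs. (7)-(9), where $K$ and $M$ are the stiffness and mass matrices, under two assumptions: the mass matrix does not depend on the stiffness parameters ($\partial M / \partial k_i = 0$, as for the shear building of Section 5), and the eigenvector derivative of Eq. (9) is evaluated in the classical modal-superposition form of [18-22], which requires distinct eigenvalues. The function name and interface are hypothetical.

    import numpy as np
    from scipy.linalg import eigh

    def modal_sensitivities(K, M, dK):
        # Solve the generalized eigenproblem; eigh returns mass-normalised
        # mode shapes (phi.T @ M @ phi = I), so C_I = 1 in Eq. (7).
        lam, phi = eigh(K, M)
        n = len(lam)
        # Eq. (7) with dM/dk = 0: d(omega_I^2)/dk_i = phi_I^T (dK/dk_i) phi_I
        dlam = np.array([phi[:, I] @ dK @ phi[:, I] for I in range(n)])
        # Eq. (9), expanded on the modal basis (assumes distinct eigenvalues):
        # d(phi_I)/dk_i = sum_{J != I} (phi_J^T dK phi_I / (lam_I - lam_J)) phi_J
        dphi = np.zeros_like(phi)
        for I in range(n):
            for J in range(n):
                if J != I:
                    c = (phi[:, J] @ dK @ phi[:, I]) / (lam[I] - lam[J])
                    dphi[:, I] += c * phi[:, J]
        return dlam, dphi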
In Equations (7) and (9), $K$ and $M$ are the stiffness and mass matrices of the structure, respectively. After obtaining the first-stage values of the unknown parameters, these derivatives can be computed for the different unknown parameters. Note that, to evaluate these derivatives, the unknown stiffness matrix (the mass matrix being assumed known) is constructed using the first-stage estimates of the unknown parameters. In this approach, a novel term is introduced for the posterior resolution of the unknown parameters. Since the Bayesian model updating algorithm is based on minimizing the error between the response of the system and the response of the mathematical model defined by some parameters, the uncertainty in the value of an unknown parameter can be assumed to be inversely proportional to the ability of that parameter to change the response of the model through a change in the parameter itself under the adopted updation scheme. Now, if a term "parameter impact" ($\Pi_{k_i}$) is defined as the absolute sum of the first-order derivatives of each and every modal data point used for updation with respect to that parameter, it can be expressed as:
$$ \Pi_{k_i} = \sum_{I=1}^{m} \left| \frac{\partial \omega_I^2}{\partial k_i} \right| + \sum_{I=1}^{m} \sum_{j=1}^{n} \left| \frac{\partial \varphi_{jI}}{\partial k_i} \right|, \qquad i = 1 \ldots t \qquad (10) $$
where $m$ is the number of considered modes, $n$ is the number of observed degrees of freedom, and $t$ is the total number of unknown parameters. Therefore, if $\Delta k_i$ represents the uncertainty in parameter $k_i$ after the first stage of updation, it can be given as:

$$ \Delta k_i \propto \frac{1}{\Pi_{k_i}} \qquad (11) $$
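The parameter impact of Eq. (10) and the uncertainty ranking of Eq. (11) can then be sketched as follows, reusing the modal_sensitivities helper above, restricting the sums to the used modes and observed degrees of freedom, and normalizing with respect to the first parameter as in Tables 1 and 2. The names and the choice of observed degrees of freedom are our assumptions.

    import numpy as np

    def parameter_impact(K, M, dK_list, n_modes, n_obs):
        # dK_list[i] is dK/dk_i for the i-th unknown stiffness parameter;
        # only the first n_modes modes and the first n_obs degrees of
        # freedom are assumed to be observed.
        impact = np.zeros(len(dK_list))
        for i, dK in enumerate(dK_list):
            dlam, dphi = modal_sensitivities(K, M, dK)   # sketch above
            impact[i] = (np.abs(dlam[:n_modes]).sum()
                         + np.abs(dphi[:n_obs, :n_modes]).sum())   # Eq. (10)
        return impact / impact[0]    # normalized w.r.t. k_1, as in Table 1

    # Eq. (11): the post-stage-one uncertainty is taken as inversely
    # proportional to the impact, so parameters are ranked by 1 / impact.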
The above relation is used for a meaningful posterior resolution of the unknown parameters and to identify the most uncertain parameters after the first stage of this approach. Once the most uncertain parameters after the first stage are known, a local structural modification based approach is utilized to improve the parameter impact $\Pi_k$ of these parameters, so that the uncertainty in their values can be reduced in the second stage of updation. Note that the parameters with a relatively higher impact are already determined in the first stage and can be taken as known parameters. In the second stage, a virtual structural modification is made by adding high stiffness to those stories of the structure whose stiffness parameters were determined in the first stage. It is seen that this virtual structure has a higher parameter impact than the original structure for the parameters identified as most uncertain in the first stage. Therefore, the modal data of the original structure, along with the modal data of the virtual structure, can be utilized to determine the remaining unknown structural parameters. The approach can be repeated over further stages, using different virtual structures, until all the uncertain parameters are resolved.
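The virtual modification itself is straightforward for the shear building used later in Section 5. The sketch below (with hypothetical names, the story selection chosen purely for illustration, and the modal data of the virtual structure obtained by direct eigenvalue analysis, as this work assumes) doubles the stiffness of the stories resolved in the first stage.

    import numpy as np
    from scipy.linalg import eigh

    def shear_building_K(k):
        # Tridiagonal stiffness matrix of a shear building whose i-th
        # story stiffness k[i] connects floor i to the floor below it.
        n = len(k)
        K = np.zeros((n, n))
        for i in range(n):
            K[i, i] += k[i]
            if i > 0:
                K[i - 1, i - 1] += k[i]
                K[i - 1, i] -= k[i]
                K[i, i - 1] -= k[i]
        return K

    k = 4e8 * np.ones(12)          # nominal story stiffnesses (Section 5)
    M = 1e5 * np.eye(12)           # known story masses (Section 5)
    resolved = np.arange(5)        # stories resolved in stage 1 (illustrative)
    k_virtual = k.copy()
    k_virtual[resolved] *= 2.0     # add a known stiffness to resolved stories
    lam_v, phi_v = eigh(shear_building_K(k_virtual), M)  # virtual modal data

    # Since K is linear in the story stiffnesses, dK/dk_i = shear_building_K(e_i):
    dK_list = [shear_building_K(np.eye(12)[i]) for i in range(12)]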
The most challenging task in this approach is to evaluate the modal data of the virtual structure from the modal data of the original structure. Although some techniques are available in the literature to evaluate the modal data of the virtual structure, their accuracy depends on the number of available modes of the original structure, and most of them are applicable to small modifications only. When only a few lower modes' data are available, these techniques are hard to rely on for large modifications. In practice, however, only a few lower modes' data are available from the original structure, and in many cases a large modification is required to improve the parameter impact. The authors are currently working towards an effective approach for obtaining the modal data of the virtual structure; in this work, the modal data of the virtual structure is simply obtained by eigenvalue analysis rather than from the modal data of the original structure. Further work is needed for the practical implementation of this approach.
5. ILLUSTRATIVE EXAMPLE
A numerically simulated 12-storey shear building frame is adopted to illustrate the approach (Figure 1). The stiffness parameter of a story is defined as the multiplier of the assumed nominal stiffness of that story (4 × 10⁸ N/mm for each story). The mass of each story is assumed to be known (1 × 10⁵ kg), whereas all stiffness parameters are assumed to be unknown in the updation algorithm. These unknown stiffness parameters are found employing the present approach by taking only first-mode data (frequency and mode shape). This data is generated by taking known values of all the unknown parameters $k_i$, $i = 1 \ldots 12$. Mode shape data are normalized with respect to the response of the bottom story. To simulate a practical scenario, this data is contaminated by noise with a coefficient of variation of 5%. A total of 15 such contaminated data sets are then used to find the unknown stiffness parameters. To avoid any bias in the algorithm, an exponential prior with a mean value of 2 is adopted as the prior distribution of all the unknown stiffness parameters. The total unknowns in the algorithm are the unknown stiffness parameters, the multiplication factor of the normalized variances, and the shape parameter of the proposal distribution of this multiplication factor. The prior distributions of these two additional unknown parameters are taken as uniform over the ranges 0.00001 to 10 for the multiplication factor and 1 to 1,000,000 for the shape parameter. A Gamma distribution is adopted as the proposal distribution for all the unknown parameters. The Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm is employed to draw samples from the high-dimensional posterior distribution.
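A minimal Metropolis-Hastings sketch consistent with this setup is given below: an exponential prior of mean 2 on the stiffness multipliers and a Gamma proposal centred on the current state. The likelihood function (which can be assembled from the log_likelihood and shear_building_K sketches above), the fixed proposal shape, and all names are our assumptions for illustration; the paper additionally samples a variance multiplication factor and the proposal shape parameter, which are omitted here for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(theta, log_likelihood_fn):
        # Exponential prior with mean 2 on each stiffness multiplier,
        # combined with the modal-data likelihood of Eq. (6); constants
        # and the normalization p(D) of Eq. (1) are dropped.
        if np.any(theta <= 0.0):
            return -np.inf
        return -np.sum(theta) / 2.0 + log_likelihood_fn(theta)

    def metropolis_hastings(theta0, log_likelihood_fn, n_samples=20000, a=200.0):
        # Gamma proposal of shape `a` and scale theta/a, so its mean is
        # the current state; `a` controls the proposal spread.
        def log_q(x, mean):
            # log density of Gamma(shape=a, scale=mean/a) at x, up to
            # log Gamma(a), which cancels in the acceptance ratio
            return np.sum((a - 1.0) * np.log(x) - a * x / mean
                          - a * np.log(mean / a))

        theta = np.asarray(theta0, dtype=float)
        lp = log_posterior(theta, log_likelihood_fn)
        chain = np.empty((n_samples, theta.size))
        for s in range(n_samples):
            prop = rng.gamma(a, theta / a)                 # E[prop] = theta
            lp_prop = log_posterior(prop, log_likelihood_fn)
            # Hastings correction for the asymmetric Gamma proposal
            log_alpha = lp_prop - lp + log_q(theta, prop) - log_q(prop, theta)
            if np.log(rng.uniform()) < log_alpha:
                theta, lp = prop, lp_prop
            chain[s] = theta
        return chain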
Figure 1: Schematic diagram of adopted shear building frame
Table 1 shows the results of the first stage for all unknown parameters in terms of the posterior mean and variance and the percentage deviation from the actual value. The first stage is defined as the updation using the modal data of the original structure. The subsequent stages are updations using the modal data of the original as well as the modified structure, the modified structure being the structure with a known stiffness added to some of its stories. Table 1 also shows the parameter impact $\Pi_k$ of the different parameters, normalized with respect to $\Pi_{k_1}$. It can be observed from Table 1 that the parameters with higher $\Pi_k$ are successfully determined in the first stage, whereas the parameters with lower $\Pi_k$ cannot be resolved successfully. It can also be observed that the posterior variance is a good measure of parameter certainty. However, for highly noisy data (not presented here), the authors have observed that the posterior variance may give a false depiction of parameter accuracy. Therefore, based on $\Pi_k$, the parameters $k_i$, $i = 6 \ldots 12$ are identified as the most uncertain unknown parameters after the first stage, which cannot be resolved using the modal data of the original structure. Figure 2 shows the Markov chains of the different parameters, and it can be observed from this figure that the chains have not converged for the parameters $k_i$, $i = 6 \ldots 12$. Therefore, to improve the parameter impact of these parameters, the original structure is virtually modified by adding stiffness to those stories whose stiffness parameters were successfully found in the first stage.
The stiffness of these stories is increased to twice its current value. The modal data of this modified structure, along with that of the original structure, is then used to determine the remaining unknown parameters that were not resolved in the first stage. In this work, the modal data of the virtual structure is not obtained from the modal data of the original structure but directly from eigenvalue analysis, contaminated with noise.
Table 1: Posterior statistics at the first stage (original structure)

Unknown parameter (actual value)   Mean     Variance   Deviation (%)   Normalized parameter impact ($\Pi_{k_i}$)
k1  = 1.0                          0.9697   0.0006       3.03          1.00
k2  = 1.0                          1.0406   0.0011       4.06          0.45
k3  = 1.0                          0.8991   0.0024      10.09          0.44
k4  = 1.0                          1.0322   0.0084       3.22          0.35
k5  = 1.0                          0.8587   0.0094      14.13          0.37
k6  = 1.0                          1.2045   0.0758      20.45          0.22
k7  = 1.0                          0.8390   0.0236      16.10          0.27
k8  = 1.0                          1.5538   0.3277      55.38          0.12
k9  = 1.0                          0.9370   0.0573       6.30          0.15
k10 = 1.0                          1.3531   0.3951      35.31          0.08
k11 = 1.0                          1.1558   0.3537      15.58          0.06
k12 = 1.0                          2.4023   0.8146     140.23          0.01
Figure 2: Markov chains of different parameters in the first stage. (a) Well-determined parameters; (b) undetermined parameters.
The results of stage 2 are shown in Table 2. It can be observed from this table that the parameters $k_i$, $i = 6 \ldots 9$ are successfully resolved in this stage, having a higher $\Pi_k$ than in the previous stage. Note that in all stages $\Pi_k$ is normalized with respect to $\Pi_{k_1}$ only. Figure 3 shows the Markov chains and posterior distributions of some of the parameters for stage 2. Results after two more modifications are shown in Table 3. It can therefore be concluded that the present approach is quite efficient in determining the unknown parameters using data of only the first mode.
Table 2: Posterior statistics at the second stage (modified structure)

Unknown parameter (actual value)   Mean     Variance   Deviation (%)   Normalized parameter impact ($\Pi_{k_i}$)
k6  = 1.0                          1.0126    0.0042      1.26          0.45
k7  = 1.0                          0.9810    0.0156      1.90          0.40
k8  = 1.0                          1.0469    0.0306      4.69          0.31
k9  = 1.0                          1.0691    0.0567      6.91          0.24
k10 = 1.0                          1.4114    0.4685     41.14          0.14
k11 = 1.0                          0.8308    0.1343     16.92          0.15
k12 = 1.0                          3.5123   14.2292    251.23          0.02
Figure 3: Second-stage statistics. (a) Markov chains; (b) well-determined parameters; (c) ill-determined parameters.
Table 3: Posterior statistics at the final stage

Unknown parameter (actual value)   Mean     Deviation (%)
k1  = 1.0                          0.9697     3.03
k2  = 1.0                          1.0406     4.06
k3  = 1.0                          0.8991    10.09
k4  = 1.0                          1.0322     3.22
k5  = 1.0                          0.8587    14.13
k6  = 1.0                          1.0126     1.26
k7  = 1.0                          0.9810     1.90
k8  = 1.0                          1.0469     4.69
k9  = 1.0                          1.0691     6.91
k10 = 1.0                          1.0370     3.70
k11 = 1.0                          1.0440     4.40
k12 = 1.0                          1.1912    19.12
6. CONCLUSIONS
A novel sensitivity-based term is introduced for the posterior resolution of unknown parameters. It is observed that the present approach is highly effective and efficient in resolving the unknown parameters under Bayesian inference. The results of the study show that, using the present approach, even those parameters of a system can be found to which the data available for updation is not very sensitive. However, the current approach is yet to be tested in a practical scenario, and future research is needed for its complete implementation on real structures. For civil engineering structures, the approach can be a useful tool for system identification or damage detection when little data is available for updation. It is also observed that the posterior variance can be used as a good measure of parameter accuracy; however, for highly noisy data the reliability of variance-based accuracy measures suffers.
REFERENCES
[1] Beck, J. L. and Katafygiotis, L. S. (1991) “Updating of a model and its uncertainties utilizing
dynamic test data.” Computational Stochastic Mechanics, Springer, pp125–136.
[2] Beck, J. L. and Katafygiotis, L. S. (1998) “Updating models and their uncertainties. i: Bayesian
statistical framework.” Journal of Engineering Mechanics, Vol. 124, No. 4, pp455–461.
[3] Chen, L., Qin, Z., and Liu, J. S. (2001) “Exploring hybrid monte carlo in bayesian computation.”
sigma, Vol. 2, pp2–5.
[4] Papadimitriou, C., Beck, J. L., and Katafygiotis, L. S. (2001) “Updating robust reliability using
structural test data.” Probabilistic Engineering Mechanics, Vol. 16, No. 2, pp103–113.
[5] Beck, J. L. and Au, S. K. (2002) “Bayesian updating of structural models and reliability using markov
chain monte carlo simulation.” Journal of Engineering Mechanics, Vol. 128, No. 4, pp380–391.
[6] Beck, J. L. and Yuena, K. V. (2004) “Model selection using response measurements: Bayesian
probabilistic approach.” Journal of Engineering Mechanics, Vol. 130, No. 2, pp192–203.
[7] Ching, J., Muto, M., and Beck, J. L. (2005) “Bayesian linear structural model updating using gibbs
sampler with modal data.” Proceedings of the 9th International Conference on Structural Safety and
Reliability, Millpress, pp2609–2616.
[8] Marwala, T. and Sibisi, S. (2005) “Finite element model updating using bayesian approach.” In
Proceedings of the International Modal Analysis Conference, Orlando, Florida, USA.
[9] Ching, J. and Chen, Y. C. (2007) “Transitional markov chain monte carlo method for bayesian model
updating, model class selection, and model averaging.” Journal of Engineering Mechanics, Vol. 133,
No. 7, pp816–832.
[10] Mthembu, L., Marwala, T., Friswell, M. I., and Adhikari, S. (2008) “Bayesian evidence for finite
element model updating.” arXiv preprint arXiv:0810.2643.
[11] Cheung, S. H. and Beck, J. L. (2009) “Bayesian model updating using hybrid monte carlo simulation
with application to structural dynamic models with many uncertain parameters.” Journal of
Engineering Mechanics, Vol. 135, No. 4, pp234–255.
[12] Cheung, S. H. and Beck, J. L. (2010) “Calculation of posterior probabilities for bayesian model class
assessment and averaging from posterior samples based on dynamic system data.” Computer-Aided
Civil and Infrastructure Engineering, Vol. 25, No. 5, pp304–321.
[13] Boulkaibet, I., Marwala, T., Mthembu, L., Friswell, M. I., and Adhikari, S. (2011) “Sampling
techniques in bayesian finite element model updating.” arXiv preprint arXiv:1110.3382.
[14] Christodoulou, K. and Papadimitriou, C. (2007) “Structural identification based on optimally
weighted modal residuals” Mechanical Systems and Signal Processing, Vol. 21, No. 1, pp4–23.
[15] Goller, B., and Schueller, G.I. (2011) “Investigation of model uncertainties in bayesian structural
model updating,” Journal of sound and vibration, Vol. 330, No. 25, pp6122–6136.
[16] Papadimitriou, C., Argyris, C., Papadioti, D. C., and Panetsos, P. (2014) “Uncertainty calibration of
large-order models of bridges using ambient vibration measurements,” in Proc. EWSHM-7th
European Workshop on Structural Health Monitoring.
[17] Behmanesh, I., Moaveni, B., Lombaert, G., and Papadimitriou, C. (2015) “Hierarchical bayesian
model updating for structural identification,” Mechanical Systems and Signal Processing.
[18] Fox, R. L., and Kapoor, M. P. (1968) “Rates of change of eigenvalues and eigenvectors,” AIAA
journal, Vol. 6, No. 12, pp2426–2429.
[19] Rogers, Lynn C. (1970) “Derivatives of eigenvalues and eigenvectors,” AIAA journal, Vol. 8, No. 5,
pp943–944.
[20] Nelson, R. B. (1976) “Simplified calculation of eigenvector derivatives” AIAA journal, Vol. 14, No.
9, pp1201–1205.
[21] Adhikari S. (1999) “Rates of change of eigenvalues and eigenvectors in damped dynamic system”
AIAA journal, Vol. 37, No. 11, pp1452–1458.
[22] Smith, D. E., and Siddhi, V. (2006) “A generalized approach for computing eigenvector design
sensitivities” in Proc. SEM annual conference and exposition on experimental and applied
mechanics.
[23] Prajapat, K., and Ray-Chaudhuri S. “Prediction Error Variances in Bayesian Model Updating
Employing Data Sensitivity” Journal of Engineering Mechanics, (under review).
[24] Sohn, H. and Law, K. H. (1997) “A bayesian probabilistic approach for structure damage detection.”
Earthquake engineering and structural dynamics, Vol. 26, No. 12, pp1259–1281.
[25] Vanik, M. W., Beck, J. L., and Au, S. K. (2000) “Bayesian probabilistic approach to structural health
monitoring.” Journal of Engineering Mechanics, Vol. 126, No. 7, pp738–745.
[26] Muto, M. and Beck, J. L. (2008) “Bayesian updating and model class selection for hysteretic
structural models using stochastic simulation.” Journal of Vibration and Control, Vol. 14 (1-2), 7–34.
[27] Huhtala, A. and Bossuyt, S. (2011) “A bayesian approach to vibration based structural health
monitoring with experimental verification.” Journal of Structural Mechanics, Vol. 44, No. 4, 330–
344.
[28] Kurata, M., Lynch, J. P., Law, K. H., and Salvino, L. W. (2012) Bayesian Model Updating Approach
for Systematic Damage Detection of Plate-Type Structures. Springer.
[29] Simoen, E., Moaveni, B., Conte, J. P., and Lombaert, G. (2013) “Uncertainty quantification in the
assessment of progressive damage in a 7-story full-scale building slice.” Journal of Engineering
Mechanics, Vol. 139, No. 12, pp1818–1830.
AUTHORS
Kanta Prajapat is a research scholar in the Department of Civil Engineering, Indian Institute of Technology Kanpur, working with Dr. Samit Ray-Chaudhuri. She received her Master of Technology degree from the same institute in 2011 and her Bachelor of Engineering degree from M.B.M. Engineering College, Jodhpur, in 2009.
Dr. Samit Ray-Chaudhuri is a professor in the Department of Civil Engineering, Indian Institute of Technology Kanpur. Prior to joining IITK, he worked as a postdoctoral researcher with Professor Masanobu Shinozuka in the Department of Civil and Environmental Engineering at the University of California, Irvine. He received his Doctor of Philosophy degree from the University of California at Irvine, his Master of Technology degree from the Indian Institute of Technology Kanpur (IITK), and his Bachelor of Engineering degree from Bengal Engineering College, now known as Bengal Engineering and Science University, all in civil engineering.