This document compares three models for forecasting daily stock prices: linear regression, Theil's incomplete method, and a multilayer perceptron (MLP). Principal component analysis was used to reduce the input price variables (high, low, and open) into one principal component, which was then used to predict closing prices. Linear regression and Theil's method produced similar results, with slightly lower error than the MLP on MAE, MAPE, and SMAPE. The linear regression and Theil's models achieved a near-perfect R-squared of 0.9977, versus 0.9974 for the MLP. Overall, the simple linear and Theil's models performed best at forecasting closing prices on this single stock-index dataset.
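The summarized pipeline can be sketched as follows. This is a minimal illustration on synthetic prices (the original index data is not available here), using scikit-learn's PCA and LinearRegression:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# synthetic trending price level; high/low/open are noisy copies of it
base = rng.normal(100, 5, 500).cumsum() * 0.01 + 100
X = np.column_stack([base + rng.normal(0, 0.5, 500) for _ in range(3)])
y = base + rng.normal(0, 0.5, 500)       # closing price

pca = PCA(n_components=1)
z = pca.fit_transform(X)                 # single principal component
model = LinearRegression().fit(z, y)
r2 = model.score(z, y)                   # in-sample R-squared
```

Because the input columns are strongly collinear, one component carries nearly all their variance, which is why a single-component regression can reach the near-perfect R-squared reported above.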
A Prediction Model for Taiwan Tourism Industry Stock Index (ijcsit)
Investors and scholars pay continuous attention to the stock market, and each day many investors attempt to use different methods to predict stock price trends. However, because stock prices are affected by the economy, politics, domestic and foreign situations, emergencies, human factors, and other unknown factors, it is difficult to establish an accurate prediction model. This study used a back-propagation neural network (BPN) as the research approach and input 29 variables, such as international exchange rates, indices of international stock markets, Taiwan stock market analysis indicators, and overall economic indicators, to predict Taiwan's monthly tourism industry stock index. The empirical findings show that the BPN prediction model has good predictive accuracy: the Absolute Relative Error is 0.090058 and the correlation coefficient is 0.944263. The model has low error and high correlation, and can serve as a reference for investors and relevant industries.
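The BPN setup described above can be approximated in scikit-learn. This is a hedged sketch in which random standardized features stand in for the 29 real indicator series, and `MLPRegressor` stands in for the paper's back-propagation network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 29))                 # 120 months x 29 mock indicators
y = 50 + X @ rng.normal(size=29) + rng.normal(scale=0.5, size=120)

Xs = StandardScaler().fit_transform(X)
bpn = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                   max_iter=2000, random_state=1).fit(Xs, y)
pred = bpn.predict(Xs)
corr = np.corrcoef(pred, y)[0, 1]              # cf. the paper's correlation metric
are = np.mean(np.abs((pred - y) / y))          # cf. Absolute Relative Error
```

The hidden-layer size and solver here are arbitrary illustration choices, not the paper's configuration.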
A LINEAR REGRESSION APPROACH TO PREDICTION OF STOCK MARKET TRADING VOLUME: A ... (ijmvsc)
Predicting the daily behavior of the stock market is a serious challenge for investors and corporate stockholders, and it can help them invest with more confidence by taking risks and fluctuations into consideration. In this paper, by applying linear regression to predict the behavior of the S&P 500 index, we show that the proposed method performs well in comparison to real volumes, so stockholders can invest confidently based on it.
Soft Computing Stock Market Price Prediction for the Nigerian Stock Exchange (INFOGAIN PUBLICATION)
Forecasting price movements in the stock market has been a major challenge for common investors, businesses, brokers, and speculators, because stock prices are very dynamic and susceptible to quick changes. As more and more money is invested, investors grow anxious about the future trends of stock prices in the market, creating a highly desirable need for a more 'intelligent' prediction model. Two soft computing models, an Artificial Neural Network (ANN) and a Fuzzy-Neuro hybrid model, were used to forecast the next day's closing price. The historical trading data was obtained from the Nigerian Stock Exchange for Dangote Sugar Refinery Plc. The results showed the power of Soft Computing (SC) techniques in stock price prediction.
THE IMPLICATION OF STATISTICAL ANALYSIS AND FEATURE ENGINEERING FOR MODEL BUI... (ijcseit)
Prediction is at the heart of modern statistics, where accuracy matters most. Pairing algorithms with a sound statistical implementation yields better outcomes in terms of accurate prediction on a given dataset. Wide use of such algorithms simplifies the underlying mathematical models and reduces manual calculation. Prediction is the essence of data science and machine learning applications, giving practitioners control over situations. Implementing any such method requires proper feature extraction, which supports sound model building and, in turn, precision. This paper is predominantly based on different statistical analyses, including correlation significance and proper categorical data distribution via feature engineering techniques, which reveal the accuracy of different machine learning models.
Influence over the Dimensionality Reduction and Clustering for Air Quality Me... (IJAEMSJORNAL)
The current trend in industry is to analyze large data sets and apply data mining and machine learning techniques to identify patterns. But the challenge with huge data sets is the high dimensionality associated with them. Sometimes in data analytics applications, large amounts of data produce worse performance. Also, most data mining algorithms are implemented column-wise, and too many columns restrict performance and make them slower. Therefore, dimensionality reduction is an important step in data analysis. Dimensionality reduction is a technique that converts high-dimensional data into a much lower dimension, such that maximum variance is explained within the first few dimensions. This paper focuses on multivariate statistical and artificial neural network techniques for data reduction. Each method has a different rationale for preserving the relationship between input parameters during analysis. Principal Component Analysis, a multivariate technique, and the Self-Organising Map, a neural network technique, are presented in this paper. Also, a hierarchical clustering approach has been applied to the reduced data set. A case study of air quality measurement is used to evaluate the performance of the proposed techniques.
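A minimal sketch of the reduce-then-cluster idea, assuming synthetic correlated "sensor" columns in place of the real air-quality measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# two latent factors drive ten correlated "sensor" columns
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(200, 10))

pca = PCA(n_components=2).fit(X)
reduced = pca.transform(X)
var_kept = pca.explained_variance_ratio_.sum()   # close to 1 by construction

# Ward hierarchical clustering on the reduced coordinates
labels = fcluster(linkage(reduced, method="ward"), t=3, criterion="maxclust")
```

The number of components and clusters are illustrative choices; in practice they would come from the explained-variance profile and a dendrogram cut.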
Financial time series forecasting has received tremendous interest from both individual and institutional investors, and hence from researchers. But the high noise and complexity residing in financial data make this job extremely challenging. Over the years, many researchers have used support vector regression (SVR) quite successfully to tackle this challenge. As the latent high noise in the data impairs performance, reducing the noise can be effective when constructing the forecasting model. To accomplish this task, an integration of principal component analysis (PCA) and SVR is proposed in this research work. In the first step, a set of technical indicators is calculated from the daily transaction data of the target stock, and PCA is then applied to these values to extract the principal components. After filtering the principal components, a model is finally constructed to forecast the future price of the target stocks. The performance of the proposed approach is evaluated on 16 years of daily transactional data for three leading stocks from different sectors listed on the Dhaka Stock Exchange (DSE), Bangladesh. Empirical results show that the proposed approach enhances the performance of the prediction model, and that short-term prediction gains more accuracy than long-term prediction.
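The PCA-SVR combination can be sketched as follows. The indicator matrix is synthetic (the DSE data is not reproduced here), and the pipeline settings are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# four latent "market factors" drive twelve mock technical indicators
f = rng.normal(size=(300, 4))
X = f @ rng.normal(size=(4, 12)) + 0.1 * rng.normal(size=(300, 12))
y = f[:, 0] + 0.05 * rng.normal(size=300)      # synthetic next-day target

model = make_pipeline(StandardScaler(),
                      PCA(n_components=4),      # filter down to the main components
                      SVR(kernel="rbf", C=10))
model.fit(X[:250], y[:250])
r2 = model.score(X[250:], y[250:])             # out-of-sample R-squared
```

The point of the pipeline order is that PCA discards the low-variance directions, which carry mostly noise, before the SVR ever sees them.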
In the time series prediction literature, autoregressive integrated moving average (ARIMA) models have been explained clearly. Using the ARIMA model, this paper elaborates the process of building a stock trend predictive model. Published stock price data were obtained from the National Stock Exchange (NSE) for the period from Jan 2007 to Dec 2011. The results revealed that the ARIMA model has strong prospects for short-term prediction and can compete positively with existing techniques for stock price prediction.
This short note describes a relatively simple methodology, procedure or approach to increase the performance of already-installed industrial models used for optimization, control, simulation and/or monitoring purposes. The method is called Excess or X-Model Regression (XMR), where the concept of "excess modeling" or an X-model is taken from the field of thermodynamics to describe the departure or residual behaviour of real (non-ideal) gases and liquids from their ideal state (Kyle, 1999; Poling et al., 2001; Smith et al., 2001). It has also been applied to model the non-ideal or nonlinear behaviour of blending motor gasoline octanes with its synergistic and antagonistic interactional effects (Muller, 1992).
The fundamental idea of XMR is to calibrate, train, fit or estimate, using actual data and multiple linear regression (MLR) or ordinary least squares (OLS), the deviations of the measured responses from the existing model responses. The existing model may be a glass-, grey- or black-box model (known or unknown, linear or nonlinear, implicit/open or explicit/closed) depending on the use of the model. That is, for optimization and control the model structure and parameters are available, given that derivative information is required, whereas for simulation and monitoring the model may only be observed through the dependent output variables given the necessary independent input variables.
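The XMR idea reduces to fitting OLS on the excess (measured minus model response) and adding the correction back at prediction time. A sketch under stated assumptions: `existing_model` is a hypothetical installed model, and the square-root basis for the excess is purely illustrative:

```python
import numpy as np

def existing_model(x):
    # hypothetical installed model, treated as a black box
    return 2.0 * x

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 200)
# the "plant" has an unmodeled sqrt term plus measurement noise
y_meas = 2.0 * x + 0.5 * np.sqrt(x) + rng.normal(0, 0.05, 200)

# fit OLS to the excess on an illustrative basis [1, sqrt(x)]
excess = y_meas - existing_model(x)
A = np.column_stack([np.ones_like(x), np.sqrt(x)])
coef, *_ = np.linalg.lstsq(A, excess, rcond=None)

def xmr_predict(x_new):
    # installed model plus the regressed excess correction
    basis = np.column_stack([np.ones_like(x_new), np.sqrt(x_new)])
    return existing_model(x_new) + basis @ coef
```

The attraction of the approach is that the installed model stays untouched; only the additive residual correction is re-estimated from plant data.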
Classification of mathematical modeling,
Classification based on Variation of Independent Variables,
Static Model,
Dynamic Model,
Rigid or Deterministic Models,
Stochastic or Probabilistic Models,
Comparison Between Rigid and Stochastic Models
A COMPARISON STUDY OF ESTIMATION METHODS FOR GENERALIZED JELINSKI-MORANDA MOD... (ijseajournal)
In this paper, three methods for estimating the parameters of the generalized Jelinski-Moranda (GJM) model are compared. The mathematical formulas needed to resolve the estimates are derived. Because of the lack of varied real data with different input data sizes, several simulation scenarios are given to help achieve our goals. Illustrative algorithms for the simulation studies are given. First, the accuracy of the GJM model's estimators is checked against two evaluation criteria. Moreover, several models generated from the GJM general formula are evaluated using three different methods of comparison. Useful results for the software reliability modelling area are drawn.
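For orientation, the parameters of the classic (non-generalized) Jelinski-Moranda model can be estimated by profile maximum likelihood; this sketch is not the paper's derivation, only an illustration on simulated inter-failure times:

```python
import numpy as np

rng = np.random.default_rng(6)
N_true, phi_true, n = 30, 0.02, 20
# JM model: the i-th inter-failure gap is exponential with rate phi*(N - i)
t = np.array([rng.exponential(1.0 / (phi_true * (N_true - i))) for i in range(n)])

def profile_loglik(N):
    k = N - np.arange(n)              # faults remaining before each failure
    phi = n / np.sum(k * t)           # closed-form MLE of phi given N
    return np.sum(np.log(phi * k)) - phi * np.sum(k * t)

grid = np.arange(n, 200)              # candidate total-fault counts
N_hat = grid[np.argmax([profile_loglik(N) for N in grid])]
```

For a fixed N the rate phi has a closed-form MLE, so the two-parameter problem collapses to a one-dimensional search over N; the JM estimate of N is known to be unstable on short failure histories, which is part of what motivates comparison studies like the one above.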
Multi-criteria decision making (MCDM) techniques in today's organizations, as a key to performance measurement, come more to the foreground with advances in high technology. During recent years, many studies have been conducted to obtain a ranking among many alternatives by measuring the performance of each against many criteria. Managerial decision-making problems like supplier selection, weapon selection, project selection, site selection, etc. are dealt with by many multi-criteria decision-making methods such as TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), AHP-TOPSIS, PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), ELECTRE, VIKOR, etc. in crisp form throughout the literature. In this work, we first compare several MCDM methodologies to validate their consistency on a standard dataset of the plant layout problem. We propose an M-TOPSIS and A-TOPSIS procedure to select a suitable layout for the comparative study. The results of M-TOPSIS and A-TOPSIS have been employed to build an unsupervised artificial neural network (ANN) to obtain a new ranking of alternatives. This study proposes an approach for deriving the rank value, in order to get the optimal configuration, from the average of more than one set of rank results obtained through the deployment of MCDM methodologies.
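The TOPSIS ranking step itself is compact. A sketch on a made-up decision matrix, treating all criteria as benefit-type (the paper's datasets and its M-/A-TOPSIS variants are not reproduced):

```python
import numpy as np

# rows: alternatives, cols: criteria (made-up values, all benefit-type)
X = np.array([[250., 16., 12.],
              [200., 18.,  8.],
              [300., 14., 16.]])
w = np.array([0.5, 0.3, 0.2])            # criteria weights, summing to 1

R = X / np.linalg.norm(X, axis=0)        # vector normalization per criterion
V = R * w                                # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)      # higher = closer to the ideal solution
ranking = np.argsort(-closeness)         # best alternative first
```

Cost-type criteria would flip the ideal/anti-ideal selection (min becomes ideal); the variants mentioned above mainly change how the separation measures are aggregated.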
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
PREDICTING GROWTH OF URBAN AGGLOMERATIONS THROUGH FRACTAL ANALYSIS OF GEO-SPATIAL DATA
Location Analytics is one of the fastest-emerging fields in the broad area of Business Intelligence/Data Science. By some industry estimates, almost 80% of all data has a location dimension to it. Consequently, identification of trends and patterns in spatially distributed information has far-reaching applications ranging from urban planning to logistics and supply chain management, location-based marketing, sales territory planning, and retail store location. In view of this, we present an approach based on Fractal Analysis (FA) of highly granular geo-spatial data. Specifically, we use proprietary data available at approximately the 1 square km level for New Delhi, India, provided by Indicus Analytics (India's leading economic data analytics firm, based in New Delhi). We compare and contrast the patterns and insights generated using the FA approach with more traditional approaches such as spatial correlation and structural similarity indices. Preliminary results indicate that there are indeed "self-similar" local patterns, completely missed by spatial correlation, that are accurately captured by the more sophisticated FA approach. These patterns provide deep insights into the underlying socio-economic and demographic processes and can be used to predict the spatial distribution of these variables in the future. For example, questions such as where the pockets of population growth in a city are, and how businesses and government will respond to that growth, can be answered using the proposed approach.
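The usual machinery behind fractal analysis of spatial data is box counting. A sketch on a filled unit square, whose box-counting dimension should come out near 2 (the proprietary Delhi data is of course not reproduced):

```python
import numpy as np

# a dense 64x64 point grid filling the unit square stands in for spatial data
pts = np.array([(x, y) for x in range(64) for y in range(64)]) / 64.0

sizes, counts = [], []
for k in range(1, 6):
    eps = 1.0 / (2 ** k)                 # box side length
    boxes = set(map(tuple, np.floor(pts / eps).astype(int)))
    sizes.append(eps)
    counts.append(len(boxes))            # occupied boxes at this scale

# slope of log(count) vs log(1/eps) estimates the box-counting dimension
dim = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
# dim is approximately 2.0 for a filled planar region
```

Self-similar point patterns (e.g. clustered settlement data) yield a non-integer slope, which is the signal the FA approach exploits and plain spatial correlation misses.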
UNIT - 5 : 20ACS04 – PROBLEM SOLVING AND PROGRAMMING USING PYTHON (Nandakumar P)
UNIT-V INTRODUCTION TO NUMPY, PANDAS, MATPLOTLIB
Exploratory Data Analysis (EDA), Data Science life cycle, Descriptive Statistics, Basic tools (plots, graphs and summary statistics) of EDA, Philosophy of EDA. Data Visualization: Scatter plot, bar chart, histogram, boxplot, heat maps, etc.
MULTI-PARAMETER BASED PERFORMANCE EVALUATION OF CLASSIFICATION ALGORITHMS (ijcsit)
Diabetes is amongst the most common diseases in India. It affects patients' health and also leads to other chronic diseases. Prediction of diabetes plays a significant role in saving lives and cost. Predicting diabetes in the human body is a challenging task because it depends on several factors. Few studies have reported the performance of classification algorithms in terms of accuracy. The results in these studies are difficult and complex for medical practitioners to understand, and they also lack visual aids, as they are presented in pure text format. This survey uses ROC and PRC graphical measures to improve understanding of the results. A detailed parameter-wise discussion of the comparison is also presented, which is lacking in other reported surveys. Execution time, accuracy, TP rate, FP rate, precision, recall, and F-measure are used for comparative analysis, and a confusion matrix is prepared for a quick review of each algorithm. Ten-fold cross-validation is used for estimation of the prediction model. Different sets of classification algorithms are analyzed on the diabetes dataset acquired from the UCI repository.
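The evaluation protocol (ten-fold cross-validation plus a confusion matrix) can be sketched with scikit-learn. A synthetic dataset stands in for the UCI diabetes data, and the decision tree is an arbitrary stand-in classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# synthetic binary-classification stand-in for the diabetes dataset
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# ten-fold cross-validated predictions for every sample
pred = cross_val_predict(DecisionTreeClassifier(random_state=0), X, y, cv=10)

cm = confusion_matrix(y, pred)     # rows: actual class, cols: predicted class
acc = accuracy_score(y, pred)
```

TP/FP rates, precision, recall, and F-measure all derive from the cells of `cm`, which is why the survey uses it as the quick-review summary per algorithm.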
Forecasting S&P 500 Index Using Backpropagation Neural Network Based on Princ... (Ahmet Kaplan)
Taras Hoshovskyi and I present here a method that integrates principal component analysis (PCA) into a Backpropagation Neural Network for forecasting. Our initial aim is to show empirically that using a Backpropagation Neural Network improves the forecasting results compared to simple regression based on the PCA.
A Predictive Stock Data Analysis with SVM-PCA Model (Divya Joseph and Vinai George Biju)
HOV-kNN: A New Algorithm to Nearest Neighbor Search in Dynamic Space (Mohammad Reza Abbasifard, Hassan Naderi and Mohadese Mirjalili)
A Survey on Mobile Malware: A War without End (Sonal Mohite and Prof. R. S. Sonar)
An Efficient Design Tool to Detect Inconsistencies in UML Design Models (Mythili Thirugnanam and Sumathy Subramaniam)
An Integrated Procedure for Resolving Portfolio Optimization Problems using Data Envelopment Analysis, Ant Colony Optimization and Gene Expression Programming (Chih-Ming Hsu)
Emerging Technologies: LTE vs. WiMAX (Mohammad Arifin Rahman Khan and Md. Sadiq Iqbal)
Introducing E-Maintenance 2.0 (Abdessamad Mouzoune and Saoudi Taibi)
Detection of Clones in Digital Images (Minati Mishra and Flt. Lt. Dr. M. C. Adhikary)
The Significance of Genetic Algorithms in Search, Evolution, Optimization and Hybridization: A Short Review
Initial Optimal Parameters of Artificial Neural Network and Support Vector Re... (IJECEIAES)
This paper presents the architecture of backpropagation Artificial Neural Network (ANN) and Support Vector Regression (SVR) models in a supervised learning process for a cement demand dataset. The study aims to identify the effectiveness of each parameter using mean square error (MSE) indicators for the time series dataset. It varies different random samples in each demand parameter in the ANN network and the support vector function as well. For the ANN, variations in the percentage of the dataset, the activation function (sigmoid and purelin), learning rate, hidden layers, neurons, and training function are applied. Furthermore, the SVR is varied in kernel function, loss function, and insensitivity to obtain the best result from its simulation. The best ANN configuration uses the sigmoid activation function, 100% of the data input (96 data points), a learning rate of 150, one hidden layer, the trainlm training function, 15 neurons, and 3 total layers. The best SVR results use six variables run in optimal condition: a linear kernel function, an ε-insensitive loss function, and an insensitivity of 1. Both methods perform better with six variables. The contribution of this study is to obtain the optimal parameters for specific variables of the ANN and SVR.
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on... (IJERA Editor)
Cost estimation at the schematic design stage, as the basis of project evaluation, engineering design, and cost management, plays an important role in project decisions under a limited definition of scope, constraints on available information and time, and the presence of uncertainties. The purpose of this study is to compare the performance of cost estimation models from two different hybrid artificial intelligence approaches: regression analysis-adaptive neuro-fuzzy inference system (RANFIS) and case-based reasoning-genetic algorithm (CBR-GA) techniques. The models were developed on the same 50 low-cost apartment project datasets in Indonesia. Tested on another five testing datasets, the models proved to perform very well in terms of accuracy. The CBR-GA model was found to be the best performer, but suffered from the disadvantage of needing 15 cost drivers, compared to only 4 cost drivers required by RANFIS for on-par performance.
The decentralized data fusion approach is one in which features are extracted and processed individually and finally fused to obtain global estimates. The paper presents a decentralized data fusion algorithm using a factor analysis model. Factor analysis is a statistical method used to study the effect and interdependence of various factors within a system. The proposed algorithm fuses accelerometer and gyroscope data in an inertial measurement unit (IMU). Simulations are carried out on the Matlab platform to illustrate the algorithm.
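The factor-analysis step alone can be sketched as follows (the paper's full decentralized IMU fusion algorithm is not reproduced): mock accelerometer and gyroscope channels share one latent motion signal, which `FactorAnalysis` recovers from the stacked measurements.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)
# one shared latent motion signal drives both mock sensors
latent = np.sin(np.linspace(0, 8 * np.pi, 500))[:, None]
accel = latent @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))
gyro = latent @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))

# "fuse" the six channels by extracting their common factor
fused = FactorAnalysis(n_components=1).fit_transform(np.hstack([accel, gyro]))
corr = abs(np.corrcoef(fused[:, 0], latent[:, 0])[0, 1])   # recovery quality
```

The recovered factor is defined only up to sign and scale, hence the absolute correlation as the quality measure.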
Tech Transfer: Making It a Risk-Free Approach in Pharmaceutical and Biotech (iaemedu)
Tech transfer is a common methodology for transferring new products, or an existing commercial product, to R&D or to another manufacturing site. Transferring product knowledge to the manufacturing floor is crucial and is an ongoing process in the pharmaceutical and biotech industry. Without adopting this process, no company can manufacture its niche products, let alone market them. Technology transfer is a complicated process because it is highly cross-functional. Due to this cross-functional dependence, these projects face numerous risks and failures. If an idea cannot be successfully brought out in the form of a product, there is no customer benefit or satisfaction. Moreover, high emphasis is placed on sustaining manufacturing with the highest quality each and every time. It is vital that tech transfer projects be executed flawlessly. To accomplish this goal, risk management is crucial, and the project team needs to use the risk management approach seamlessly.