Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on... (IJERA Editor)
Cost estimating at the schematic design stage, as the basis of project evaluation, engineering design, and cost management, plays an important role in project decisions made under a limited definition of scope, constraints on available information and time, and the presence of uncertainties. The purpose of this study is to compare the performance of cost estimation models built with two different hybrid artificial intelligence approaches: regression analysis combined with an adaptive neuro-fuzzy inference system (RANFIS), and case-based reasoning combined with a genetic algorithm (CBR-GA). Both models were developed from the same 50 low-cost apartment project datasets in Indonesia. Tested on another five projects, both models proved to perform very well in terms of accuracy. The CBR-GA model was the best performer, but it suffered from the disadvantage of needing 15 cost drivers, compared with only 4 cost drivers required by RANFIS for on-par performance.
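The case-based reasoning side of the comparison above can be illustrated with a minimal sketch: retrieve the most similar past project under attribute weights, which in CBR-GA a genetic algorithm would tune. All names, weights, and data here are hypothetical, not taken from the study.

```python
# Minimal weighted nearest-neighbor case retrieval, the core step of
# case-based reasoning (CBR). In CBR-GA the weights would be tuned by
# a genetic algorithm; here they are fixed, illustrative values.

def retrieve_case(query, case_base, weights):
    """Return the stored case whose (normalized) cost drivers are
    closest to the query under a weighted Euclidean distance."""
    def distance(case):
        return sum(w * (q - c) ** 2
                   for w, q, c in zip(weights, query, case["drivers"])) ** 0.5
    return min(case_base, key=distance)

# Hypothetical case base: cost drivers already scaled to [0, 1].
case_base = [
    {"drivers": [0.2, 0.5, 0.1], "cost": 1.2e6},
    {"drivers": [0.8, 0.4, 0.9], "cost": 3.4e6},
    {"drivers": [0.3, 0.6, 0.25], "cost": 1.5e6},
]
weights = [0.5, 0.2, 0.3]          # would come from GA optimization
best = retrieve_case([0.25, 0.55, 0.15], case_base, weights)
print(best["cost"])                 # cost of the most similar project
```

The retrieved cost would then be adapted to the new project; the sketch stops at retrieval, which is the part the GA-optimized weights affect.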
Training of the 2016-17 Rotaract District Representatives, a pre-event of the 2015 Rio de Janeiro Rotary Institute, held at the Copacabana Praia Hotel on 22/08/15. Theme: People Management and Conflict Administration
Lecture for the group of 2016-2017 Rotaract District Representatives, a pre-event of the 2015 Rio de Janeiro Rotary Institute, held at the Copacabana Praia Hotel, on the theme People Management and Conflict Administration.
The Legend of Tarzan, a film by David Yates with Alexander Skarsgård and Margot Robbie. The Legend of Tarzan is an upcoming 2016 American epic action-adventure film.
Executive coaching situations are explored based on the author's experience in organizations. Paradoxes in real business and executive coaching cases are presented.
Leadership skills in the healthcare professions (1) (Felix B. Lecce)
LEADERSHIP skills are transversal relational competencies that healthcare professionals particularly need to develop if they want to manage the relational side of their work effectively and thereby gain greater success and better personal and professional satisfaction.
Among the main advantages of developing these skills are: a better perception by others of you and your professionalism; increased effectiveness in dealing with temperamentally "difficult" people; and a significant reduction in relational stress at work and, consequently, a reduced vulnerability to the risk of burnout (to which everyone working in healthcare is inevitably exposed).
Performance Comparison of Machine Learning Algorithms (Dinusha Dilanka)
This paper compares the performance of two classification algorithms. It is useful to differentiate algorithms based on computational performance rather than classification accuracy alone, because even when classification accuracy is similar, computational performance can differ significantly and can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, namely k-nearest neighbor classification and logistic regression. A large dataset of 7981 data points and 112 features is considered, and the performance of the two algorithms is examined. The processing time and accuracy of the two techniques are estimated on the collected dataset, using 60% of the data for training and the remaining 40% for testing. The paper is organized as follows. Section I includes the introduction and background analysis of the research; Section II states the problem. Section III briefly describes our application, the data analysis process, the testing environment, and the methodology of our analysis. Section IV presents the results of the two algorithms. Finally, the paper concludes with a discussion of future research directions for eliminating the problems in the current research methodology.
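As a concrete illustration of one of the two algorithms compared above, here is a minimal pure-Python k-nearest-neighbor classifier. The toy data are invented for the example; the paper's 7981-point dataset is not reproduced here.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (squared Euclidean distance). `train` is a
    list of (feature_vector, label) pairs."""
    neighbors = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)),
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy two-class data: class 0 near the origin, class 1 near (5, 5).
train = [([0, 0], 0), ([1, 0], 0), ([0, 1], 0),
         ([5, 5], 1), ([6, 5], 1), ([5, 6], 1)]
print(knn_predict(train, [0.5, 0.5]))  # -> 0
print(knn_predict(train, [5.5, 5.5]))  # -> 1
```

The full sort makes prediction cost grow with training-set size, which is exactly the kind of computational difference, as opposed to accuracy difference, that the paper's comparison is about.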
A novel hybrid deep learning model for price prediction (IJECEIAES)
Price prediction has become a major task due to the explosive increase in the number of investors. The price prediction task has various types, such as shares, stocks, foreign exchange instruments, and cryptocurrency. The literature includes several models for price prediction that can be classified, based on the methods used, into three main classes: deep learning, machine learning, and statistical. In this context, we propose several model architectures for price prediction. Among them is a hybrid one that incorporates long short-term memory (LSTM) and convolutional neural network (CNN) architectures, which we call CNN-LSTM. The proposed CNN-LSTM model makes use of the ability of convolution layers to extract useful features embedded in the time series data and the ability of the LSTM architecture to learn long-term dependencies. The proposed architectures are thoroughly evaluated and compared against state-of-the-art methods on three different types of financial product datasets, for stocks, foreign exchange instruments, and cryptocurrency. The obtained results show that the proposed CNN-LSTM has the best performance on average over the evaluation metrics used. Moreover, the proposed deep learning models were dominant in comparison to the state-of-the-art methods, machine learning models, and statistical models.
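The role the convolution layers play in the CNN-LSTM above, extracting local features by sliding a kernel over the series, can be sketched without any deep learning library. The kernel values below are illustrative, not the model's learned weights.

```python
def conv1d(series, kernel):
    """Valid 1-D convolution (really cross-correlation, as in most
    deep learning frameworks): slide the kernel along the series and
    take a dot product at each position."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

prices = [10, 12, 11, 13, 15, 14]
# A difference kernel reacts to local price changes, a crude "feature".
print(conv1d(prices, [-1, 1]))  # -> [2, -1, 2, 2, -1]
```

In the hybrid model, feature maps like this (with learned kernels, and many of them) would be fed to the LSTM, which models how the extracted local patterns evolve over time.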
Building a Classifier Employing Prism Algorithm with Fuzzy Logic (IJDKP)
Classification in data mining has received immense interest in recent times. Since knowledge is based on historical data, classification of data is essential for discovering knowledge. To decrease classification complexity, the quantitative attributes of the data need to be split, but splitting using classical logic is less accurate. This can be overcome by the use of fuzzy logic. This paper illustrates how to build classification rules using fuzzy logic. The fuzzy classifier is built using the Prism decision-tree algorithm and produces more realistic results than the classical one. The effectiveness of the method is demonstrated on a sample dataset.
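The key idea above, replacing crisp splits of a quantitative attribute with fuzzy ones, comes down to membership functions. A minimal sketch follows; the attribute name and breakpoints are invented for illustration, as the paper's actual membership functions are not given here.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from 0 at `a` to 1 at `b`,
    falls back to 0 at `c`."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzy sets for an attribute such as "age": a crisp split at 40 would
# call 39 "young" and 41 "middle-aged"; fuzzy membership is gradual,
# so 39 belongs partly to both sets.
age = 39
print(triangular(age, 0, 25, 45))    # membership in "young"
print(triangular(age, 35, 50, 65))   # membership in "middle-aged"
```

A fuzzy rule learner such as the fuzzified Prism described above would then build rules over these graded memberships instead of hard attribute intervals.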
This paper presents a review and a comparative evaluation of several well-known machine learning algorithms in terms of their suitability and code performance on a given dataset of any size. We describe our Machine Learning ToolBox, built using the Python programming language. The algorithms in the toolbox consist of supervised classification algorithms such as Naïve Bayes, decision trees, SVM, k-nearest neighbors, and neural networks (backpropagation). The algorithms are tested on the iris and diabetes datasets and compared on the basis of their accuracy under different conditions; using our tool, however, one can apply any of the implemented ML algorithms to any dataset of any size. The main goal of building the toolbox is to provide users with a platform to test their datasets on different machine learning algorithms and use the accuracy results to determine which algorithm fits the data best. The toolbox allows users to choose a dataset of their choice, in either structured or unstructured form, and then select the features they want to use for training. We give concluding remarks on the performance of the implemented algorithms based on experimental analysis.
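The accuracy comparison such a toolbox performs rests on a simple loop: train each classifier, predict on held-out data, count hits. A minimal sketch of that evaluation harness follows; the stand-in classifier and data are hypothetical, not the toolbox's code.

```python
def accuracy(classifier, test_set):
    """Fraction of test examples the classifier labels correctly.
    `classifier` maps a feature vector to a label; `test_set` is a
    list of (feature_vector, true_label) pairs."""
    hits = sum(1 for features, label in test_set
               if classifier(features) == label)
    return hits / len(test_set)

# Stand-in "classifier": a threshold rule on the first feature.
threshold_rule = lambda features: 1 if features[0] > 0.5 else 0

test_set = [([0.9], 1), ([0.2], 0), ([0.7], 1), ([0.6], 0)]
print(accuracy(threshold_rule, test_set))  # -> 0.75
```

Running the same harness over several trained classifiers on the same held-out split is what lets their accuracies be compared fairly.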
Proposing an Appropriate Pattern for Car Detection by Using Intelligent Algor... (Editor IJCATR)
Nowadays, the automotive industry has attracted the attention of consumers, and product quality is considered an essential element in today's competitive markets. Security and comfort are the main criteria and parameters for selecting a car. Therefore, the standard CAR dataset, comprising six features and 1728 instances, has been used. In this paper, we try to select a car with the best characteristics by using intelligent algorithms (Random Forest, J48, SVM, NaiveBayes) and by combining these algorithms with ensemble classifiers such as Bagging and AdaBoostM1. In this study, the speed and accuracy of the intelligent algorithms in identifying the best car are taken into account.
Applying K-Means Clustering Algorithm to Discover Knowledge from Insurance Da... (theijes)
Data mining extracts previously unknown information from enormous quantities of data, which can lead to knowledge; it provides information that helps in making good decisions. The effectiveness of data mining lies in its access to knowledge, the goal being the discovery of hidden facts contained in databases through the use of multiple technologies. Clustering organizes data into clusters or groups such that there is high intra-cluster similarity and low inter-cluster similarity. This paper deals with the k-means clustering algorithm, which groups data based on its characteristics and attributes and performs clustering by reducing the distances between data points and cluster centers. The algorithm is applied using the open-source tool WEKA, with an insurance dataset as its input.
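The k-means procedure described above, assign each point to its nearest center and then move each center to the mean of its points, can be sketched in a few lines. The toy 1-D data are invented; WEKA and the insurance dataset are not reproduced here.

```python
def kmeans(points, centers, iterations=10):
    """Plain 1-D k-means: alternate between assigning points to
    their nearest center and recomputing each center as the mean
    of its assigned cluster."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Keep a center unchanged if its cluster went empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans(points, [0.0, 10.0]))  # centers converge to ~[1.0, 9.0]
```

Real implementations (WEKA's included) add details such as multidimensional distances, smarter initialization, and a convergence test instead of a fixed iteration count, but the alternating structure is the same.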
A MODEL-BASED MACHINE LEARNING APPROACH TO SCALABLE PORTFOLIO SELECTION (IJCI JOURNAL)
This study proposes a scalable asset selection and allocation approach using machine learning that integrates clustering methods into portfolio optimization models. The methodology applies the Uniform Manifold Approximation and Projection (UMAP) method and ensemble clustering techniques to preselect assets from the Ibovespa and S&P 500 indices. The research compares three allocation models and finds that the Hierarchical Risk Parity model outperformed the others, with a Sharpe ratio of 1.11. Despite the pandemic's impact on the portfolios, with drawdowns close to 30%, they recovered in 111 to 149 trading days. The portfolios outperformed the indices in cumulative returns, with similar annual volatilities of 20%. Preprocessing with UMAP allowed clusters with higher discriminatory power to be found, as evaluated through internal cluster validation metrics, helping to reduce the problem's size during optimal portfolio allocation. Overall, this study highlights the potential of machine learning in portfolio optimization, providing a useful framework for investment practitioners.
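The Sharpe ratio used above to rank the allocation models is a simple annualized statistic. A minimal sketch with made-up daily returns follows; the study's 1.11 figure is not reproduced here.

```python
def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio: mean excess return divided by the
    sample standard deviation of excess returns, scaled by the
    square root of the number of periods per year."""
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    return mean / var ** 0.5 * periods_per_year ** 0.5

daily = [0.001, -0.002, 0.003, 0.0005, 0.002]   # hypothetical returns
print(round(sharpe_ratio(daily), 2))
```

With only five observations the estimate is meaningless in practice; the study computes it over full backtest periods, where the same formula applies.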
Experimental study of Data clustering using k-Means and modified algorithms (IJDKP)
The k-means clustering algorithm is an old algorithm that has been intensely researched owing to its ease and simplicity of implementation. Clustering algorithms have broad appeal and usefulness in exploratory data analysis. This paper presents the results of an experimental study of different approaches to k-means clustering, comparing results on different datasets using the original k-means and other modified algorithms implemented in MATLAB R2009b. The results are reported on several performance measures, such as the number of iterations, the number of points misclassified, accuracy, the Silhouette validity index, and execution time.
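One of the performance measures listed above, the Silhouette validity index, can be computed directly from its definition. A minimal 1-D sketch follows, on toy data rather than the paper's datasets.

```python
def silhouette(clusters):
    """Mean silhouette value over all points. `clusters` is a list of
    lists of 1-D points. For each point, a is the mean distance to the
    rest of its own cluster and b is the mean distance to the nearest
    other cluster; its score is s = (b - a) / max(a, b)."""
    scores = []
    for ci, cluster in enumerate(clusters):
        for pi, p in enumerate(cluster):
            a = sum(abs(p - q) for qi, q in enumerate(cluster)
                    if qi != pi) / (len(cluster) - 1)
            b = min(sum(abs(p - q) for q in other) / len(other)
                    for cj, other in enumerate(clusters) if cj != ci)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two tight, well-separated clusters: mean silhouette close to 1.
print(round(silhouette([[1.0, 1.2, 0.8], [9.0, 9.5, 8.5]]), 3))
```

Values near 1 indicate compact, well-separated clusters, near 0 overlapping clusters, and negative values likely misassigned points, which is why the paper uses it to compare clustering variants.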
A Survey on the Clustering Algorithms in Sales Data Mining (Editor IJCATR)
This paper discusses different clustering techniques that can be used on sales databases. The advancement of digital data collection and the build-up of data in data banks, resulting from modernization in the sales disciplines, have created great data-processing challenges, since better and more meaningful results must be drawn from massive data deposits. Clustering techniques are therefore necessary so that senior management in a sales department can access processed data as they engage in decision-making. In this paper, I focus on retail sales data mining and on classification and clustering techniques. In this study I analyze the attributes for predicting buyer behavior and purchase performance using various classification methods such as decision trees, the C4.5 algorithm, and the ID3 algorithm.
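The decision-tree methods mentioned above (ID3, C4.5) both choose splits by information gain, which is worth seeing concretely. A minimal sketch with a made-up label column follows, not the paper's sales data.

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((labels.count(c) / total) * log2(labels.count(c) / total)
                for c in set(labels))

def information_gain(labels, partitions):
    """Entropy reduction achieved by splitting `labels` into the
    given `partitions` (ID3's splitting criterion)."""
    total = len(labels)
    remainder = sum(len(part) / total * entropy(part)
                    for part in partitions)
    return entropy(labels) - remainder

# "Buys" labels split by a hypothetical attribute with two values.
labels = ["yes", "yes", "no", "no"]
partitions = [["yes", "yes"], ["no", "no"]]   # perfect split
print(information_gain(labels, partitions))   # -> 1.0
```

ID3 picks the attribute with the highest gain at each node; C4.5 refines this with the gain ratio to avoid favoring many-valued attributes.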
SCHOOL OF ENGINEERING
COMPUTER ENGINEERING & INFORMATICS DEPARTMENT
Stock Market Analysis using Data Mining and Machine Learning Algorithms
DIPLOMA THESIS
Grivas G. Panagiotis
griva@ceid.upatras.gr
Advisor: Professor Vasileios Megalooikonomou
Patra, September 2014
Abstract
The huge volume of economic data today has created the need for technical analysis and processing of information that will help investors make correct decisions. The subject of this diploma thesis is the extraction of useful information from financial data. For the purposes of this work, historical data were taken from the daily S&P500 index. The basic data mining tasks studied are the following: preprocessing, extraction of technical features, clustering, classification, lag correlation, and forecasting. The thesis is organized into seven chapters.
The first chapter is the introduction, stating the aim and motivation of the thesis. The second chapter presents the basic market analysis techniques, which use charts and indicators. The third chapter examines the data mining methods and learning algorithms aimed at discovering patterns in the data and constructing useful models that are close to the characteristics studied. The fourth chapter presents the way data mining techniques are applied to the analysis of shares, while highlighting the importance of each data mining algorithm for the stock market. The fifth chapter describes the environments, MATLAB and Weka, in which we run the data mining algorithms in order to analyze stock market data.
The sixth chapter contains the experimental part of the present work. In the first section of the chapter, preprocessing techniques are applied to improve the quality of the share data, removing errors and incorrect attribute values. The second section examines the clustering problem, where the k-means and hierarchical algorithms are implemented in order to detect 'similar' shares. Initially we evaluate the performance of the hierarchical clustering algorithm with Euclidean and DTW distance metrics, for various types of linkage between the clusters. Then we evaluate the performance of the k-means and hierarchical (with Ward linkage criterion) clustering algorithms for various numbers of clusters. Finally, we apply the clustering algorithms for a fixed number of clusters and assess the quality of the classes created using intra/inter-cluster distances and silhouette values. The third section applies the k-nearest neighbors classification algorithm, so that each new stock entering the market is classified into one of the predefined groups obtained through clustering. The classification method is then evaluated by checking whether the shares are categorized into the appropriate class. In the fourth section we use the Pearson index to find lag correlation between shares. First we detect shares with proportional or inverse temporal association at non-zero lag, and examine whether these shares belong to the same or different classes defined at the outset by the hierarchical-DTW clustering process. We also identify the shares with proportional or inverse correlation at zero lag. Finally, we apply the lag correlation algorithm and check for correlation between stocks not only over their entire length, but also over a window that starts at a specified time. In the fifth section we run forecasting algorithms on a set of stocks, constructing a suitable prediction model (using the first 225 closing values as the training set) to forecast the last 20 closing values of the shares. The forecasting methods applied are the following: the statistical technique ARIMA, artificial neural networks (multilayer perceptron), decision trees (M5P tree), support vector machines (SMOreg), linear regression, and instance-based learning algorithms (k-nearest neighbors). Finally, we evaluate the performance of the forecasting algorithms using both the mean absolute percentage error (MAPE) between actual and predicted values and the prediction accuracy for the investment reliability of the shares over a 20-day horizon (trend prediction).
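The forecasting evaluation described above leans on MAPE, which is easy to state concretely. A minimal sketch with invented actual and predicted closing values follows, not the thesis's S&P500 results.

```python
def mape(actual, predicted):
    """Mean absolute percentage error between two equal-length
    series, expressed as a percentage. Undefined when an actual
    value is zero."""
    return 100.0 / len(actual) * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted))

actual = [100.0, 102.0, 101.0, 105.0]      # hypothetical closing values
predicted = [101.0, 101.0, 102.0, 104.0]
print(round(mape(actual, predicted), 2))
```

Being scale-free, MAPE lets forecasts for differently priced shares be compared on one scale, which is why the thesis pairs it with trend-prediction accuracy rather than a raw error measure.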
The seventh chapter presents both the conclusions reached after executing the experiments and future extensions that could be applied to the financial data mining models we constructed.