F.Stoica, L.F.Cacovean, Using genetic algorithms and simulation as decision support in marketing strategies and long-term production planning, Proceedings of the 9th WSEAS International Conference on SIMULATION, MODELLING AND OPTIMIZATION (SMO ‘09), Budapest Tech, Hungary, September 3-5, ISSN: 1790-2769 ISBN:978-960-474-113-7, pp. 435-439, 2009
Optimization is one of the pillars of statistical learning and plays a major role in the design and development of intelligent systems such as search engines, recommender systems, and speech and image recognition software. Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed: a computer is said to learn from experience with respect to a specified task if its performance on that task improves with experience. Machine learning algorithms are applied to problems to reduce manual effort; they manipulate data and predict outputs for new data with high precision and low uncertainty, while optimization algorithms are used to make rational decisions in environments of uncertainty and imprecision. This paper presents a methodology for using an efficient optimization algorithm as an alternative to gradient descent as the optimization step in machine learning.
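For reference, the baseline being replaced can be sketched as plain batch gradient descent on a least-squares objective. This is a minimal illustration with made-up toy data, not code from the paper:

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=1000):
    """Batch gradient descent for least-squares linear regression.
    Learning rate and step count are illustrative assumptions."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        grad = (2.0 / n) * X.T @ (X @ w - y)  # gradient of the mean squared error
        w -= lr * grad
    return w

# Fit y = 2*x on toy data; w should converge to about 2
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
w = gradient_descent(X, y)
```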
The document proposes a hybrid algorithm combining genetic algorithm and cuckoo search optimization to solve job shop scheduling problems. It aims to minimize makespan (completion time of all jobs) by scheduling jobs on machines. The genetic algorithm is used to explore the search space but can get trapped in local optima. Cuckoo search optimization performs local search faster than genetic algorithm and helps avoid local optima. Experimental results on benchmark problems show the hybrid algorithm yields better solutions in terms of makespan and runtime compared to genetic algorithm and ant colony optimization algorithms.
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable... - Amir Ziai
This document describes a study on modifying the Pareto Set Pursuing (PSP) method to solve multi-objective optimization problems with mixed continuous and discrete variables. The PSP method was originally developed for problems with only continuous variables. The modifications allow it to handle mixed variable problems. The performance of the modified PSP method is compared to other multi-objective algorithms based on metrics like efficiency, robustness, and closeness to the true Pareto front with a limited number of function evaluations. Preliminary results on benchmark problems and two engineering design examples show that the modified PSP is competitive when the number of function evaluations is limited, but its performance decreases as the number of design variables increases.
Performance Analysis of GA and PSO over Economic Load Dispatch Problem - IOSR Journals
This document presents a performance analysis of genetic algorithm (GA) and particle swarm optimization (PSO) for solving the economic load dispatch (ELD) problem in power systems. The ELD problem aims to minimize total generation cost subject to constraints, by optimizing the power output of generators. The document implements GA and PSO to solve sample ELD problems with 6 generators, comparing the results between the two algorithms under scenarios with and without transmission losses. PSO was shown to perform better, finding lower cost solutions with better convergence than GA. The document concludes PSO is more efficient for the ELD problem due to its convergence properties.
Real Estate Investment Advising Using Machine Learning - IRJET Journal
This document presents a comparative study of machine learning algorithms for real estate investment advising using property price prediction. It analyzes Linear Regression using gradient descent, K-Nearest Neighbors regression, and Random Forest regression on quarterly Mumbai real estate data from 2005-2016. Features like area, rooms, distance to landmarks, amenities are used to predict prices. Random Forest regression achieved the lowest errors in predicting testing data, making it the most feasible algorithm according to the study. The authors conclude it is a promising approach for real estate trend forecasting and developing an investment advising tool.
SCA: A Sine Cosine Algorithm for Solving Optimization Problems - laxmanLaxman03209
The document proposes a new population-based optimization algorithm called the Sine Cosine Algorithm (SCA) for solving optimization problems. SCA creates multiple random initial solutions and uses sine and cosine functions to fluctuate the solutions outward or toward the best solution, emphasizing exploration and exploitation. The performance of SCA is evaluated on test functions, qualitative metrics, and by optimizing the cross-section of an aircraft wing, showing it can effectively explore, avoid local optima, converge to the global optimum, and solve real problems with constraints.
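The fluctuation scheme described above can be sketched roughly as follows. The control parameter a=2, the bounds, and the population settings are commonly used defaults assumed here, not values taken from the paper:

```python
import numpy as np

def sca(f, dim, n=20, iters=200, lb=-10.0, ub=10.0, a=2.0, seed=0):
    """Sketch of the Sine Cosine Algorithm: solutions fluctuate toward or
    away from the best solution found so far via sine/cosine terms."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))            # multiple random initial solutions
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        r1 = a - t * a / iters                   # shrinks from a to 0: exploration -> exploitation
        for i in range(n):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.random(dim)
            step = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
            X[i] = np.clip(X[i] + step * np.abs(r3 * best - X[i]), lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

# Sphere test function, global optimum at the origin
best = sca(lambda x: np.sum(x**2), dim=2)
```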
Software reliability models (SRMs) are very important for estimating and predicting software reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a historical review of the Gompertz SRM is given. Then, based on several software failure data sets, the parameters of the Gompertz software reliability model are estimated using two estimation methods: the traditional maximum likelihood method and the least squares method. The estimation methods are evaluated using the MSE and R-squared criteria. The results show that least squares estimation is an attractive method in terms of predictive performance and can be used when the maximum likelihood method fails to give good prediction results.
IRJET - House Price Prediction using Machine Learning and RPA - IRJET Journal
This document discusses using machine learning and robotic process automation (RPA) to predict house prices. Specifically, it proposes using the CatBoost algorithm and RPA to extract real-time data for house price prediction. RPA involves using software robots to automate data extraction, while CatBoost will be used to predict prices based on the extracted dataset. The system aims to reduce problems faced by customers by providing more accurate price predictions compared to relying solely on real estate agents. It will extract data using RPA, clean the data, then apply machine learning algorithms like CatBoost to predict house prices based on various attributes.
A Hybrid Fuzzy-ANN Approach for Software Effort Estimation - ijfcstjournal
This document presents a study that develops a software effort estimation model using an Adaptive Neuro Fuzzy Inference System (ANFIS). The study evaluates the proposed ANFIS model using COCOMO81 datasets and compares its performance to an Artificial Neural Network (ANN) model and the intermediate COCOMO model. The results show that the ANFIS model provides better estimates than the ANN and COCOMO models, with lower values for metrics like the Root Mean Square Error and Magnitude of Relative Error.
The document is a report on using artificial neural networks (ANNs) to predict stock market returns. It discusses how ANNs have been applied to problems like stock exchange index prediction. It also discusses support vector machines (SVMs), a supervised learning method that can perform linear and non-linear classification. SVMs have been used for stock market prediction by analyzing training data to build a model that assigns categories or predicts values for new data points. The report includes code screenshots showing the import of libraries for SVM regression and plotting the predicted versus actual prices.
This document summarizes a research paper that proposes using a genetic algorithm to solve the travelling salesman problem (TSP). It begins by defining the TSP and explaining that it is NP-hard. The document then reviews various existing approaches that have used genetic algorithms and other metaheuristics to solve TSP. It proposes a genetic algorithm with tournament selection, two-point crossover, and interchange mutation operators. The algorithm is tested on sample problems with 15 cities and is shown to find optimal or near-optimal solutions. In conclusion, the document argues that genetic algorithms can efficiently find good solutions to TSP, especially when combined with knowledge from heuristic methods.
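The operators named above can be sketched as follows. Since plain two-point crossover can produce invalid tours, this sketch substitutes an order-preserving two-point variant; the population settings and the four-city test instance are illustrative assumptions, not taken from the paper:

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tournament(pop, dist, k=3):
    # Tournament selection: the fittest of k random tours wins
    return min(random.sample(pop, k), key=lambda t: tour_length(t, dist))

def ox_crossover(p1, p2):
    # Order-preserving two-point crossover: copy a slice of p1,
    # fill the remaining positions with the missing cities in p2's order
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = iter(c for c in p2 if c not in p1[a:b])
    return [c if c is not None else next(fill) for c in child]

def interchange_mutation(tour, rate=0.2):
    # Interchange (swap) mutation: exchange two random cities
    tour = tour[:]
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def ga_tsp(dist, pop_size=50, gens=200, seed=1):
    random.seed(seed)
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    for _ in range(gens):
        pop = [interchange_mutation(ox_crossover(tournament(pop, dist),
                                                 tournament(pop, dist)))
               for _ in range(pop_size)]
        best = min(pop + [best], key=lambda t: tour_length(t, dist))
    return best

# Four cities on a unit square: the optimal tour has length 4
coords = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in coords] for a in coords]
best = ga_tsp(dist)
```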
The document presents a methodology for predicting stock market prices using support vector machine regression (SVR) with different windowing techniques. It involves collecting historical stock market data, preprocessing the data using various windowing approaches to convert the time series to a supervised learning format, training SVR models on the windowed data with different parameters, and evaluating the models' ability to predict stock prices on testing data. The results show that de-flattening and 5-day windows achieved the lowest prediction errors compared to the actual stock prices in the testing period.
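The core preprocessing step, converting a time series into a supervised learning format, might look like this. This is a generic sketch; the paper's specific variants such as de-flattening are not reproduced:

```python
import numpy as np

def windowed(series, window=5, horizon=1):
    """Turn a 1-D price series into (X, y) pairs: each row of X holds
    `window` consecutive values, y is the value `horizon` steps later."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

prices = np.arange(10.0)          # stand-in for historical closing prices
X, y = windowed(prices, window=5)
```

Each (X, y) pair can then be fed to any regressor, such as the SVR models the methodology trains.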
Stock Market Analysis Using Supervised Machine Learning - Priyanshu Gandhi
This document summarizes a paper on using machine learning algorithms to predict stock prices. It discusses using open source libraries to build prediction models from historical stock data, including attributes like open, high, low, close prices and volume. Linear regression is used to identify relationships between attributes and predict future prices. The model is trained and tested on preprocessed data, and accuracy is evaluated using metrics like R^2 and RMSE. Common mistakes like data leakage and overfitting are also discussed.
PERFORMANCE ANALYSIS and PREDICTION of NEPAL STOCK MARKET (NEPSE) for INVESTM... - Hari KC
The document presents research analyzing stock market performance and predicting stock prices of four Nepali companies using regression techniques. It finds that radial basis function regression most accurately predicts prices, with prediction errors ranging from 0.74-13.65%. Analysis of stock sentiment, moving averages, and interest rate correlations is also performed. Based on the analysis, Agricultural Development Bank Ltd. is identified as having the best indicators and lowest prediction error, making it the recommended investment priority.
This document discusses machine learning algorithms and concepts such as:
- Supervised and unsupervised learning are introduced, as well as other learning paradigms like semi-supervised and reinforcement learning.
- The concepts of training error, generalization error, underfitting, overfitting and model capacity are explained.
- Hyperparameters are defined as parameters that control model capacity and are optimized using validation sets to avoid overfitting.
- Cross-validation techniques like k-fold cross-validation are introduced to better estimate model performance when data is limited.
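The k-fold splitting just mentioned can be sketched in a few lines (a minimal illustration, not tied to any particular model):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal folds; each fold serves
    once as the validation set while the remaining folds train."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

splits = list(kfold_indices(10, 5))   # 5 folds over 10 samples
```

Averaging a model's validation score over the k splits gives a less noisy performance estimate than a single train/validation split when data is limited.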
A LINEAR REGRESSION APPROACH TO PREDICTION OF STOCK MARKET TRADING VOLUME: A ... - ijmvsc
Predicting the daily behavior of the stock market is a serious challenge for investors and corporate stockholders, and it can help them invest with more confidence by taking risks and fluctuations into consideration. In this paper, by applying linear regression to predict the behavior of the S&P 500 index, we show that the proposed method performs well in comparison to real volumes, so that stockholders can invest confidently based on it.
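A linear-regression fit of the kind described can be sketched with ordinary least squares on toy data (the paper's actual features and index data are not reproduced here):

```python
import numpy as np

def fit_line(t, v):
    """Ordinary least squares for v ~ a*t + b, e.g. trading volume
    against time. Illustrative only."""
    A = np.column_stack([t, np.ones_like(t)])
    (a, b), *_ = np.linalg.lstsq(A, v, rcond=None)
    return a, b

t = np.array([0.0, 1.0, 2.0, 3.0])
v = np.array([1.0, 3.0, 5.0, 7.0])   # perfectly linear toy "volumes"
a, b = fit_line(t, v)                # recovers slope 2, intercept 1
```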
System Model: Chapter One (GEOFFREY GORDON) - Towfiq218
This document provides an overview of system modeling concepts. It defines what a system is and basic system components like entities, attributes, and activities. It discusses different types of systems like open vs closed systems, stochastic vs deterministic activities, and continuous vs discrete systems. It also covers various types of models like physical, mathematical, static, and dynamic models. Specific examples are provided to illustrate concepts like static and dynamic physical and mathematical models. Principles of modeling like block-building, relevance, accuracy, and aggregation are also covered.
This document describes a final year project by four students at Himalaya College of Engineering in Nepal to analyze and predict stock market prices using artificial neural networks. The project aims to develop a neural network model to forecast stock prices on the Nepal Stock Exchange. Various technical, fundamental, and statistical analysis methods are currently used to predict stock prices but with limited success due to the complex nature of financial markets. The project outlines the design of the neural network, selection of input parameters, data collection, model training and testing. The goal is to apply neural networks to help forecast stock prices in Nepal's stock market.
The document discusses different types of mathematical models, including deterministic and probabilistic models. It provides examples of each. It also discusses building, verifying, and refining mathematical models. Additionally, it covers optimization models, their components including objective functions and constraints. Finally, it discusses specific types of optimization models like linear programming, network flow programming, and integer programming.
This document presents a new 3-level approach to simultaneously select the best surrogate model type, kernel function, and hyper-parameters for approximation models. The approach uses Regional Error Estimation of Surrogates (REES) to evaluate error and select models. It compares a cascaded technique that performs sequential optimization versus a one-step technique. Numerical examples on benchmark problems show the one-step technique reduces maximum and median errors by at least 60% with lower computational cost compared to the cascaded approach. Future work involves applying the one-step method to more complex problems and developing an online platform for collaborative surrogate model selection.
APPROACHES IN USING EXPECTATION MAXIMIZATION ALGORITHM FOR MAXIMUM LIKELIHOOD ... - cscpconf
The EM algorithm is popular for maximum likelihood estimation of the parameters of state-space models. However, existing approaches to realizing the EM algorithm are still not able to handle the identification of systems that have external inputs and constrained parameters. In this paper, we propose new approaches for both the initial guess and the MLE of the parameters of a constrained state-space model with an external input. Using weighted least squares for the initial guess and partial differentiation of the joint log-likelihood function for the EM algorithm, we estimate the parameters and compare the estimated values with the “actual” values used to generate the simulation data. Moreover, asymptotic variances of the estimated parameters are calculated when the sample size is large, while statistics of the estimated parameters are obtained through bootstrapping when the sample size is small. The results demonstrate that the estimated values are close to the “actual” values. Consequently, our approaches are promising and can be applied in future research.
This document discusses key concepts in statistics including point estimation, bias, variance, consistency, maximum likelihood estimation (MLE), and Bayesian statistics. It provides definitions and explanations of these terms. For example, it defines bias as the difference between the expected value of an estimator and the true parameter value, and explains that MLE chooses parameters that maximize the likelihood of observing the sample data. It also compares frequentist and Bayesian approaches to estimation.
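The bias definition above can be made concrete with the classic example that the MLE of a Gaussian variance (dividing by n) is biased low by a factor (n-1)/n, while dividing by n-1 removes the bias. A quick simulation, not from the document:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200000
samples = rng.normal(0.0, 1.0, (trials, n))    # true variance = 1
mle = samples.var(axis=1, ddof=0).mean()       # E[MLE] = (n-1)/n = 0.8
unbiased = samples.var(axis=1, ddof=1).mean()  # Bessel's correction: E = 1.0
```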
An Intelligent Framework Using Hybrid Social Media and Market Data for Stock... - Eslam Nader
The document proposes an intelligent framework that uses hybrid social media and market data for stock prediction analysis. It motivates the need for such a framework to increase liquidity in the Egyptian stock market. The proposed framework includes 5 stages: 1) ontology building, 2) data collection from stock market exchanges, Twitter, and Google, 3) data analysis and cleaning, 4) data clustering and filtering using K-means, genetic algorithms, and a hybrid approach, 5) building a rule engine using deep learning. The framework will be experimentally evaluated by predicting stock prices over 10 working days and comparing results to actual market prices.
An Intelligent Scalable Stock Market Prediction System - Harshit Agarwal
A comparative study of stock market prediction systems using ANN and GONN. Sentiment analysis is also performed on the Yahoo news feed, and deployment is done on a Hadoop cluster.
Cuckoo Search: Recent Advances and Applications - Xin-She Yang
This document summarizes recent advances and applications of the cuckoo search algorithm, a nature-inspired metaheuristic optimization algorithm developed in 2009. Cuckoo search mimics the brood parasitism breeding behavior of some cuckoo species. It uses a combination of local and global search achieved through random walks and Levy flights to efficiently explore the search space. Studies show cuckoo search often finds optimal solutions faster than genetic algorithms and particle swarm optimization. The algorithm has been applied to diverse optimization problems and continues to be improved and extended to multi-objective optimization.
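The Levy-flight ingredient mentioned above is often implemented with Mantegna's method; a minimal sketch, with beta=1.5 assumed as the usual default:

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Draw a heavy-tailed Levy-distributed step via Mantegna's method,
    the global random-walk ingredient of cuckoo search."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)               # heavy-tailed numerator
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

step = levy_step(3, rng=np.random.default_rng(0))
```

Most steps are small (local search), but occasional very large jumps let the algorithm escape local optima, which is the behavior the summary credits for its efficiency.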
Turnover Prediction of Shares Using Data Mining Techniques: A Case Study - csandit
Predicting the total turnover of a company in the ever-fluctuating stock market has always proved to be a precarious situation and most certainly a difficult task at hand. Data mining is a well-known sphere of computer science that aims at extracting meaningful information from large databases. However, despite the existence of many algorithms for the purpose of predicting future trends, their efficiency is questionable, as their predictions suffer from a high error rate. The objective of this paper is to investigate various existing classification algorithms to predict the turnover of different companies based on the stock price. The authorized dataset for predicting the turnover was taken from www.bsc.com and included the stock market values of various companies over the past 10 years. The algorithms were investigated using the ‘R’ tool. The feature selection algorithm Boruta was run on this dataset to extract the important and influential features for classification. With these extracted features, the total turnover of the company was predicted using various algorithms like Random Forest, Decision Tree, SVM and Multinomial Regression. This prediction mechanism was implemented to predict the turnover of a company on an everyday basis and hence could help navigate through dubious stock market trades. An accuracy rate of 95% was achieved by the above prediction process. Moreover, the importance of the stock market attributes was established as well.
The document summarizes several papers presented at a conference on evolutionary computation and representations. It provides brief 1-2 sentence summaries of the main topics, objectives, and conclusions of 14 different papers presented at the conference. The papers covered a wide range of topics including ant algorithms, genetic programming, evolutionary multi-objective optimization, reinforcement learning, and hybrid evolutionary algorithms.
A General Frame for Building Optimal Multiple SVM Kernels - infopapers
Dana Simian, Florin Stoica, A General Frame for Building Optimal Multiple SVM Kernels, Large-Scale Scientific Computing, Lecture Notes in Computer Science, 2012, Volume 7116/2012, 256-263, DOI: 10.1007/978-3-642-29843-1_29
Algebraic Approach to Implementing an ATL Model Checker - infopapers
Laura Florentina Stoica, Florian Mircea Boian, Algebraic Approach to Implementing an ATL Model Checker, STUDIA Univ. Babes Bolyai, INFORMATICA, Volume LVII, Number 2, 2012, pp. 73-82
IRJET - House Price Prediction using Machine Learning and RPAIRJET Journal
This document discusses using machine learning and robotic process automation (RPA) to predict house prices. Specifically, it proposes using the CatBoost algorithm and RPA to extract real-time data for house price prediction. RPA involves using software robots to automate data extraction, while CatBoost will be used to predict prices based on the extracted dataset. The system aims to reduce problems faced by customers by providing more accurate price predictions compared to relying solely on real estate agents. It will extract data using RPA, clean the data, then apply machine learning algorithms like CatBoost to predict house prices based on various attributes.
A hybrid fuzzy ann approach for software effort estimationijfcstjournal
This document presents a study that develops a software effort estimation model using an Adaptive Neuro Fuzzy Inference System (ANFIS). The study evaluates the proposed ANFIS model using COCOMO81 datasets and compares its performance to an Artificial Neural Network (ANN) model and the intermediate COCOMO model. The results show that the ANFIS model provides better estimates than the ANN and COCOMO models, with lower values for metrics like the Root Mean Square Error and Magnitude of Relative Error.
The document is a report on using artificial neural networks (ANNs) to predict stock market returns. It discusses how ANNs have been applied to problems like stock exchange index prediction. It also discusses support vector machines (SVMs), a supervised learning method that can perform linear and non-linear classification. SVMs have been used for stock market prediction by analyzing training data to build a model that assigns categories or predicts values for new data points. The report includes code screenshots showing the import of libraries for SVM regression and plotting the predicted versus actual prices.
This document summarizes a research paper that proposes using a genetic algorithm to solve the travelling salesman problem (TSP). It begins by defining the TSP and explaining that it is NP-hard. The document then reviews various existing approaches that have used genetic algorithms and other metaheuristics to solve TSP. It proposes a genetic algorithm with tournament selection, two-point crossover, and interchange mutation operators. The algorithm is tested on sample problems with 15 cities and is shown to find optimal or near-optimal solutions. In conclusion, the document argues that genetic algorithms can efficiently find good solutions to TSP, especially when combined with knowledge from heuristic methods.
The document presents a methodology for predicting stock market prices using support vector machine regression (SVR) with different windowing techniques. It involves collecting historical stock market data, preprocessing the data using various windowing approaches to convert the time series to a supervised learning format, training SVR models on the windowed data with different parameters, and evaluating the models' ability to predict stock prices on testing data. The results show that de-flattening and 5-day windows achieved the lowest prediction errors compared to the actual stock prices in the testing period.
Stock market analysis using supervised machine learningPriyanshu Gandhi
This document summarizes a paper on using machine learning algorithms to predict stock prices. It discusses using open source libraries to build prediction models from historical stock data, including attributes like open, high, low, close prices and volume. Linear regression is used to identify relationships between attributes and predict future prices. The model is trained and tested on preprocessed data, and accuracy is evaluated using metrics like R^2 and RMSE. Common mistakes like data leakage and overfitting are also discussed.
PERFORMANCE ANALYSIS and PREDICTION of NEPAL STOCK MARKET (NEPSE) for INVESTM...Hari KC
The document presents research analyzing stock market performance and predicting stock prices of four Nepali companies using regression techniques. It finds that radial basis function regression most accurately predicts prices, with prediction errors ranging from 0.74-13.65%. Analysis of stock sentiment, moving averages, and interest rate correlations is also performed. Based on the analysis, Agricultural Development Bank Ltd. is identified as having the best indicators and lowest prediction error, making it the recommended investment priority.
This document discusses machine learning algorithms and concepts such as:
- Supervised and unsupervised learning are introduced, as well as other learning paradigms like semi-supervised and reinforcement learning.
- The concepts of training error, generalization error, underfitting, overfitting and model capacity are explained.
- Hyperparameters are defined as parameters that control model capacity and are optimized using validation sets to avoid overfitting.
- Cross-validation techniques like k-fold cross-validation are introduced to better estimate model performance when data is limited.
A LINEAR REGRESSION APPROACH TO PREDICTION OF STOCK MARKET TRADING VOLUME: A ...ijmvsc
Predicting daily behavior of stock market is a serious challenge for investors and corporate stockholders and it can help them to invest with more confident by taking risks and fluctuations into consideration. In this paper, by applying linear regression for predicting behavior of S&P 500 index, we prove that our proposed method has a similar and good performance in comparison to real volumes and the stockholders can invest confidentially based on that.
System model.Chapter One(GEOFFREY GORDON)Towfiq218
This document provides an overview of system modeling concepts. It defines what a system is and basic system components like entities, attributes, and activities. It discusses different types of systems like open vs closed systems, stochastic vs deterministic activities, and continuous vs discrete systems. It also covers various types of models like physical, mathematical, static, and dynamic models. Specific examples are provided to illustrate concepts like static and dynamic physical and mathematical models. Principles of modeling like block-building, relevance, accuracy, and aggregation are also covered.
This document describes a final year project by four students at Himalaya College of Engineering in Nepal to analyze and predict stock market prices using artificial neural networks. The project aims to develop a neural network model to forecast stock prices on the Nepal Stock Exchange. Various technical, fundamental, and statistical analysis methods are currently used to predict stock prices but with limited success due to the complex nature of financial markets. The project outlines the design of the neural network, selection of input parameters, data collection, model training and testing. The goal is to apply neural networks to help forecast stock prices in Nepal's stock market.
The document discusses different types of mathematical models, including deterministic and probabilistic models. It provides examples of each. It also discusses building, verifying, and refining mathematical models. Additionally, it covers optimization models, their components including objective functions and constraints. Finally, it discusses specific types of optimization models like linear programming, network flow programming, and integer programming.
This document presents a new 3-level approach to simultaneously select the best surrogate model type, kernel function, and hyper-parameters for approximation models. The approach uses Regional Error Estimation of Surrogates (REES) to evaluate error and select models. It compares a cascaded technique that performs sequential optimization versus a one-step technique. Numerical examples on benchmark problems show the one-step technique reduces maximum and median errors by at least 60% with lower computational cost compared to the cascaded approach. Future work involves applying the one-step method to more complex problems and developing an online platform for collaborative surrogate model selection.
APPROACHES IN USING EXPECTATION-MAXIMIZATION ALGORITHM FOR MAXIMUM LIKELIHOOD ... - cscpconf
The EM algorithm is popular in maximum likelihood estimation of parameters for state-space models. However, extant approaches to the realization of the EM algorithm are still not able to handle the identification of systems which have external inputs and constrained parameters. In this paper, we propose new approaches for both the initial guess and the MLE of the parameters of a constrained state-space model with an external input. Using weighted least squares for the initial guess and partial differentiation of the joint log-likelihood function for the EM algorithm, we estimate the parameters and compare the estimated values with the “actual” values, which are set to generate the simulation data. Moreover, asymptotic variances of the estimated parameters are calculated when the sample size is large, while statistics of the estimated parameters are obtained through bootstrapping when the sample size is small. The results demonstrate that the estimated values are close to the “actual” values. Consequently, our approaches are promising and can be applied in future research.
This document discusses key concepts in statistics including point estimation, bias, variance, consistency, maximum likelihood estimation (MLE), and Bayesian statistics. It provides definitions and explanations of these terms. For example, it defines bias as the difference between the expected value of an estimator and the true parameter value, and explains that MLE chooses parameters that maximize the likelihood of observing the sample data. It also compares frequentist and Bayesian approaches to estimation.
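The bias definition above can be made concrete with a standard example: the MLE of a normal variance divides by n and is therefore biased downward by σ²/n. A small Monte Carlo check in plain Python (the sample size, trial count, and seed are arbitrary illustrative choices):

```python
import random

def mle_variance(sample):
    """MLE of a normal variance: divides by n, hence E[estimator] = (n-1)/n * var."""
    n = len(sample)
    m = sum(sample) / n
    return sum((x - m) ** 2 for x in sample) / n

rng = random.Random(42)
true_var = 4.0                                   # sampling from N(0, 2^2)
n, trials = 5, 20000
avg = sum(mle_variance([rng.gauss(0.0, 2.0) for _ in range(n)])
          for _ in range(trials)) / trials       # Monte Carlo E[estimator]
bias = avg - true_var                            # theory predicts -true_var / n = -0.8
print(round(bias, 2))
```

The simulated bias should sit close to the theoretical value of -0.8, illustrating the "expected value of the estimator minus the true parameter" definition numerically.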
An intelligent framework using hybrid social media and market data, for stock... - Eslam Nader
The document proposes an intelligent framework that uses hybrid social media and market data for stock prediction analysis. It motivates the need for such a framework to increase liquidity in the Egyptian stock market. The proposed framework includes 5 stages: 1) ontology building, 2) data collection from stock market exchanges, Twitter, and Google, 3) data analysis and cleaning, 4) data clustering and filtering using K-means, genetic algorithms, and a hybrid approach, 5) building a rule engine using deep learning. The framework will be experimentally evaluated by predicting stock prices over 10 working days and comparing results to actual market prices.
An intelligent scalable stock market prediction system - Harshit Agarwal
Comparative study of stock market prediction systems using ANN and GONN. Sentiment analysis was also performed on the Yahoo news feed, and deployment was done on a Hadoop cluster.
Cuckoo Search: Recent Advances and Applications - Xin-She Yang
This document summarizes recent advances and applications of the cuckoo search algorithm, a nature-inspired metaheuristic optimization algorithm developed in 2009. Cuckoo search mimics the brood parasitism breeding behavior of some cuckoo species. It uses a combination of local and global search achieved through random walks and Levy flights to efficiently explore the search space. Studies show cuckoo search often finds optimal solutions faster than genetic algorithms and particle swarm optimization. The algorithm has been applied to diverse optimization problems and continues to be improved and extended to multi-objective optimization.
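A heavily simplified 1-D sketch of the cuckoo search scheme described above (heavy-tailed "Levy-style" moves plus abandonment of the worst nests); the step scale, abandonment fraction, and the power-law surrogate for the Levy flight are illustrative assumptions, not the original algorithm's exact formulas:

```python
import random

def levy_step(rng, alpha=1.5):
    """Heavy-tailed step length: a simple power-law surrogate for a Levy flight."""
    u = 1.0 - rng.random()                      # u in (0, 1]
    return (u ** (-1.0 / alpha) - 1.0) * rng.choice((-1.0, 1.0))

def cuckoo_search(f, lo, hi, n_nests=15, iters=200, pa=0.25, seed=1):
    """Toy 1-D cuckoo search minimizing f on [lo, hi]."""
    rng = random.Random(seed)
    nests = [rng.uniform(lo, hi) for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i, x in enumerate(nests):
            trial = min(hi, max(lo, x + 0.1 * levy_step(rng)))
            if f(trial) < f(x):                 # keep the better egg
                nests[i] = trial
        nests.sort(key=f)                       # abandon a fraction pa of the worst
        for i in range(int(pa * n_nests)):
            nests[-1 - i] = rng.uniform(lo, hi)
        best = min(best, nests[0], key=f)
    return best

x = cuckoo_search(lambda v: (v - 2.0) ** 2, -10.0, 10.0)
print(round(x, 2))                              # lands near the minimum at v = 2
```

The occasional very large steps from the heavy-tailed distribution are what give the method its global-search character, while the small typical steps act as local search.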
Turnover Prediction of Shares Using Data Mining Techniques: A Case Study - csandit
Predicting the total turnover of a company in the ever-fluctuating stock market has always proved to be a precarious and most certainly difficult task. Data mining is a well-known sphere of computer science that aims at extracting meaningful information from large databases. However, despite the existence of many algorithms for predicting future trends, their efficiency is questionable as their predictions suffer from a high error rate. The objective of this paper is to investigate various existing classification algorithms to predict the turnover of different companies based on the stock price. The authorized dataset for predicting the turnover was taken from www.bsc.com and included the stock market values of various companies over the past 10 years. The algorithms were investigated using the ‘R’ tool. The feature selection algorithm Boruta was run on this dataset to extract the important and influential features for classification. With these extracted features, the total turnover of the company was predicted using algorithms such as Random Forest, Decision Tree, SVM and Multinomial Regression. This prediction mechanism was implemented to predict the turnover of a company on an everyday basis and hence could help navigate dubious stock market trades. An accuracy rate of 95% was achieved by the above prediction process. Moreover, the importance of the stock market attributes was established as well.
The document summarizes several papers presented at a conference on evolutionary computation and representations. It provides brief 1-2 sentence summaries of the main topics, objectives, and conclusions of 14 different papers presented at the conference. The papers covered a wide range of topics including ant algorithms, genetic programming, evolutionary multi-objective optimization, reinforcement learning, and hybrid evolutionary algorithms.
A general frame for building optimal multiple SVM kernels - infopapers
Dana Simian, Florin Stoica, A General Frame for Building Optimal Multiple SVM Kernels, Large-Scale Scientific Computing, Lecture Notes in Computer Science, 2012, Volume 7116/2012, 256-263, DOI: 10.1007/978-3-642-29843-1_29
Algebraic Approach to Implementing an ATL Model Checker - infopapers
Laura Florentina Stoica, Florian Mircea Boian, Algebraic Approach to Implementing an ATL Model Checker, STUDIA Univ. Babes Bolyai, INFORMATICA, Volume LVII, Number 2, 2012, pp. 73-82
Intelligent agents in ontology-based applications - infopapers
The document describes the development of an intelligent agent using JADE that is linked to a knowledge base system implemented with Protege and Algernon. The agent delivers useful information to users from the web or other agents based on their preferences. The knowledge base contains ontologies defined in Protege and facts that can be queried using if-then rules in Algernon. The example application was developed in Java Studio Creator to demonstrate an intelligent information agent.
Using the Breeder GA to Optimize a Multiple Regression Analysis Model - infopapers
Florin Stoica, Cornel Gheorghe Boitor, Using the Breeder genetic algorithm to optimize a multiple regression analysis model used in prediction of the mesiodistal width of unerupted teeth, International Journal of Computers, Communications & Control, Vol 9, No 1, pp. 62-70, ISSN 1841-9836, february 2014
This document describes building a bridge between JADE agent applications and web frontends using JavaServer Faces (JSF) technology. It connects a JSF web application to a proxy agent in JADE. The proxy agent handles user requests by retrieving agent information from the JADE Directory Facilitator and updating the JSF user interface. Business objects like AgentInfo are used to share data between the JSF application and JADE agents. The proxy agent has a cyclic behavior that processes incoming requests by launching one-shot behaviors to interface with the Directory Facilitator and notify the JSF application.
A new Evolutionary Reinforcement Scheme for Stochastic Learning Automata - infopapers
F. Stoica, E. M. Popa, A new Evolutionary Reinforcement Scheme for Stochastic Learning Automata, Proceedings of the 12th WSEAS International Conference on COMPUTERS, Heraklion, Greece, July 23-25, ISBN: 978-960-6766-85-5, ISSN: 1790-5109, pp. 268-273, 2008
Optimization of Complex SVM Kernels Using a Hybrid Algorithm Based on Wasp Be... - infopapers
Dana Simian, Florin Stoica, Corina Simian, Optimization of Complex SVM Kernels Using a Hybrid Algorithm Based on Wasp Behaviour, Lecture Notes in Computer Science, LNCS 5910 (2010), I. Lirkov, S. Margenov, and J. Wasniewski (Eds.), Springer-Verlag Berlin Heidelberg, pp. 361-368
Implementing an ATL Model Checker tool using Relational Algebra concepts - infopapers
This document describes the implementation of an ATL (Alternating-Time Temporal Logic) model checker using relational algebra concepts. The model checker is implemented in a client-server paradigm, with the client allowing interactive construction of ATL models as directed multi-graphs. The server embeds an ATL model checking algorithm using ANTLR and relational algebra expressions translated to SQL queries. A key contribution is the implementation of the Pre(A,Θ) function, which computes the set of states from which the agents in A can enforce the system into a state in Θ in one move, using relational algebra expressions and SQL. The tool was developed to improve the applicability of ATL model checking for general-purpose software design.
Generic Reinforcement Schemes and Their Optimization - infopapers
Dana Simian, Florin Stoica, Generic Reinforcement Schemes and Their Optimization, Proceedings of the 5th European Computing Conference (ECC ’11), Paris, France, April 28-30, 2011, pp. 332-337
F. Stoica, D. Simian, C. Simian, A new co-mutation genetic operator, Proceedings of the 9th WSEAS International Conference on Evolutionary Computing, Sofia, Bulgaria, ISBN 978-960-6766-58-9, ISSN 1790-5109, pp. 76-81, May 2008
An Executable Actor Model in Abstract State Machine Language - infopapers
The document presents an actor model implemented in AsmL (Abstract State Machine Language). Key aspects include:
- Actors communicate asynchronously via message passing and have mailboxes to receive messages. They process messages sequentially.
- The AsmL implementation models actors as classes with mailboxes and message sending methods. Multiple actors can update mailboxes concurrently without conflicts.
- An example multicast protocol is modeled, where a director actor sends messages to other actors and waits for acknowledgments.
Modeling the Broker Behavior Using a BDI Agent - infopapers
The document discusses modeling the behavior of a broker agent using a Belief-Desire-Intention (BDI) framework. It presents an algebraic description of computation tree logic (CTL) to model the broker's mental state. A BDI logic is then defined using operators for beliefs, desires, and intentions. Finally, a case study is described where a BDI agent model represents a broker whose behavior is determined by market conditions, demands, offers, prices, and number of operations. The agent's mental state and behavior are formally specified and can be model checked using the described CTL and BDI logics.
Building a new CTL model checker using Web Services - infopapers
Florin Stoica, Laura Stoica, Building a new CTL model checker using Web Services, Proceeding The 21th International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2013), At Split-Primosten, Croatia, 18-20 September, pp. 285-289, 2013
DOI: 10.1109/SoftCOM.2013.6671858, http://dx.doi.org/10.1109/SoftCOM.2013.6671858
An AsmL model for an Intelligent Vehicle Control System - infopapers
F. Stoica, An AsmL model for an Intelligent Vehicle Control System, Proceedings of the 11th WSEAS Int. Conf. on COMPUTERS: Computer Science and Technology, vol. 4, Crete Island, Greece, ISBN: 978-960-8457-92-8, pp. 323-328, July 2007
Laura Florentina Stoica, Florian Mircea Boian, Florin Stoica, A Distributed CTL Model Checker, Proceeding of 10th International Conference on e-Business, ICE-B 2013, Reykjavik Iceland, paper 33, 29-31 July, pp. 379-386, ISBN: 978-989-8565-72-3, 2013
A new Reinforcement Scheme for Stochastic Learning Automata - infopapers
F. Stoica, E. M. Popa, I. Pah, A new reinforcement scheme for stochastic learning automata – Application to Automatic Control, Proceedings of the International Conference on e-Business, Porto, Portugal, ISBN 978-989-8111-58-6, pp. 45-50, July 2008
Deliver Dynamic and Interactive Web Content in J2EE Applications - infopapers
F. Stoica, Deliver dynamic and interactive Web content in J2EE applications, Proceedings of the Central and East European Conference in Business Information Systems, Cluj-Napoca, Romania, ISBN 973-656-648-X, pp. 780-789, 2004
Optimizing a New Nonlinear Reinforcement Scheme with Breeder genetic algorithm - infopapers
Florin Stoica, Dana Simian, Optimizing a New Nonlinear Reinforcement Scheme with Breeder genetic algorithm, Proceedings of the Recent Advances in Neural Networks, Fuzzy Systems & Evolutionary Computing,13-15 June 2010, Iasi, Romania, ISSN: 1790-2769, ISBN: 978-960-474-194-6, pp. 273-278
Performance Comparison of Machine Learning Algorithms - Dinusha Dilanka
In this paper we compare the performance of two classification algorithms. It is useful to differentiate algorithms based on computational performance rather than classification accuracy alone: although classification accuracy between the algorithms is similar, computational performance can differ significantly and can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, namely K-Nearest Neighbor classification and Logistic Regression. We consider a large dataset of 7981 data points and 112 features and examine the performance of the above-mentioned machine learning algorithms. The processing time and accuracy of the different machine learning techniques are estimated on the collected dataset, using 60% of it for training and the remaining 40% for testing. The paper is organized as follows. Section I contains the introduction and background analysis of the research, and Section II the problem statement. Section III briefly describes our application, the data analysis process, the testing environment, and the methodology of our analysis. Section IV comprises the results of the two algorithms. Finally, the paper concludes with a discussion of future directions for research that would eliminate the problems existing in the current research methodology.
This document discusses business process analysis, simulation, and optimization. It provides an overview of structural and statistical analysis, capacity analysis using dynamic simulation methods, visualization and numeric simulation techniques in business process simulation. Optimization techniques are discussed for selecting optimal scenarios. The benefits of simulation over real-world testing are speed and low cost. Process simulation best practices and caveats are also covered, such as ensuring the right model, parameters, and expertise for the intended goals.
Modeling and simulation is the use of models as a basis for simulations to develop data utilized for managerial or technical decision making. In the computer application of modeling and simulation a computer is used to build a mathematical model which contains key parameters of the physical model.
STOCK PRICE PREDICTION USING ML TECHNIQUES - IRJET Journal
This document discusses using machine learning techniques like LMS and LSTM algorithms to predict stock prices. It summarizes previous research on stock price prediction that used techniques like artificial neural networks, support vector machines, and recurrent neural networks. The document then describes the proposed system for stock price prediction, which involves preprocessing data, splitting it into training and test sets, analyzing the data with LMS and LSTM algorithms, and outputting predictions in graph and report formats. It concludes that combining multiple algorithms into hybrid models can improve prediction accuracy while reducing computational complexity compared to single models.
This document provides an overview of modeling and simulation. It defines modeling as representing a system to enable predicting the effects of changes. Simulation involves running experiments on a model. The key steps in modeling and simulation projects are: 1) identifying the problem, 2) formulating and developing the model, 3) validating the model, 4) designing simulation experiments, 5) performing simulations, and 6) analyzing and presenting results. Modeling and simulation can be used for a variety of purposes including education, design evaluation, forecasting, and risk assessment.
Performance Comparisons among Machine Learning Algorithms based on the Stock ... - IRJET Journal
This document compares the performance of various machine learning algorithms for predicting stock market performance based on stock market data and news data. It applies algorithms like linear regression, random forest, decision tree, K-nearest neighbors, logistic regression, linear discriminant analysis, XGBoost classifier, and Gaussian naive Bayes to datasets containing stock market values, news articles, and Reddit posts. It evaluates the algorithms based on metrics like accuracy, recall, precision and F1 score. The results suggest that linear discriminant analysis achieved the best performance at predicting stock market values based on the given datasets and evaluation metrics.
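The evaluation metrics mentioned in this summary (accuracy, precision, recall, F1 score) can be computed directly from a binary confusion matrix; the labels below are made up for illustration:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0   # guard against empty denominators
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))
```

Comparing algorithms on all four numbers rather than accuracy alone matters most when the two classes (e.g. "market up" vs "market down") are imbalanced.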
The document proposes developing an artificial intelligence-based stock trading system using particle swarm optimization. It finds that using 30 neural networks with a 100-day moving time interval to select the top 3 stock picks daily based on the highest recommendations produces the most stable and profitable results. The system uses swarm intelligence to search for the globally best-performing neural network each day to make trading decisions.
Understanding the Applicability of Linear & Non-Linear Models Using a Case-Ba... - ijaia
This paper uses a case-based study, “product sales estimation”, on real-time data to help understand the applicability of linear and non-linear models in machine learning and data mining. A systematic approach has been used to address the given problem statement of sales estimation for a particular set of products in multiple categories, by applying both linear and non-linear machine learning techniques on a data set of selected features from the original data set. Feature selection is a process that reduces the dimensionality of the data set by excluding those features which contribute minimally to the prediction of the dependent variable. The next step in this process is training the model, which is done using multiple techniques from the linear and non-linear domains, among the best ones in their respective areas. Data remodeling has then been done to extract new features from the data set by changing its structure, and the performance of the models is checked again. Data remodeling often plays a crucial role in boosting classifier accuracy by changing the properties of the given dataset. We then explore and analyze the various reasons why one model performs better than the other, and hence try to develop an understanding of the applicability of linear and non-linear machine learning models. With this primary goal in mind, we also aim to find the classifier with the best possible accuracy for product sales estimation in the given scenario.
A Defect Prediction Model for Software Product based on ANFIS - IJSRD
Artificial intelligence techniques are becoming involved in all classification- and prediction-based processes, such as environmental monitoring, stock exchange conditions, biomedical diagnosis, and software engineering. However, the challenges of selecting training criteria for the design of artificial intelligence models used for prediction are yet to be simplified. This work focuses on developing a defect prediction mechanism using the KC1 software metric data. A subtractive clustering approach is taken for generation of a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the developed rules are further refined by the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
Analysis of selection schemes for solving job shop scheduling problem using g... - eSAT Journals
Abstract: Scheduling problems receive standard consideration in the field of manufacturing. Among the various types of scheduling problems, the job shop scheduling problem is one of the most interesting NP-hard problems. As job shop scheduling is an optimization problem, a Genetic algorithm was selected to solve it in this study. The selection scheme is one of the important operators of a Genetic algorithm, and the choice of selection method plays a wide role in the Genetic algorithm process. The speed of convergence towards the optimum solution for the chosen problem is largely determined by the selection mechanism used. Depending upon the selection scheme applied, the population fitness over successive generations can be improved. Various selection schemes are available in genetic algorithms, each with its own feasibility for solving a particular problem. In this study, the selection schemes Stochastic Universal Sampling (SUS), Roulette Wheel Selection (RWS), Rank Based Roulette Wheel Selection (RRWS) and Binary Tournament Selection (BTS) were chosen for implementation, and the characteristics of these selection mechanisms for solving the job shop scheduling problem were analyzed. The Genetic algorithm with the four different selection schemes was tested on instances of 7 benchmark problems of different sizes. The results show that each of the four selection schemes was successfully applied to job shop scheduling problems, and that the Stochastic Universal Sampling selection method performs better than the other selection schemes. Keywords: Genetic Algorithm, Makespan, Selection schemes
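Two of the selection schemes named in this abstract, Roulette Wheel Selection and Stochastic Universal Sampling, can be sketched as follows (the fitness values are toy numbers for illustration; this is not the paper's implementation):

```python
import random

def roulette_wheel(fitness, rng):
    """Spin once: pick an index with probability proportional to its fitness."""
    total = sum(fitness)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1

def stochastic_universal_sampling(fitness, n, rng):
    """Pick n indices with n evenly spaced pointers over the fitness wheel."""
    total = sum(fitness)
    step = total / n
    start = rng.uniform(0.0, step)          # single random spin
    pointers = [start + i * step for i in range(n)]
    picks, acc, i = [], fitness[0], 0
    for p in pointers:
        while acc < p:                      # advance to the slot containing p
            i += 1
            acc += fitness[i]
        picks.append(i)
    return picks

rng = random.Random(0)
fit = [1.0, 2.0, 3.0, 4.0]                  # individual 3 is fittest
print(stochastic_universal_sampling(fit, 4, rng))
```

Because SUS uses one spin and evenly spaced pointers, its sampling spread is lower than repeated roulette spins, which is a common explanation for its better convergence behavior.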
Airline Revenue Management by using Genetic Algorithm - PRATHAMESH REGE
The document discusses using a genetic algorithm to optimize airline revenue management. It proposes applying genetic algorithms to determine which ticket booking terminals should remain open to maximize profit. The algorithm would consider historical booking data, customer demand forecasts, and ticket prices to select the terminals where booking higher fare tickets would generate the most revenue. It provides an example case study demonstrating how the genetic algorithm evaluates multiple booking requests to choose the terminal that contributes most to overall profit. The results show the genetic algorithm approach can increase total revenue compared to a simple first-come, first-served booking system.
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT... - IAEME Publication
Close-range photogrammetry network design refers to the process of placing a set of cameras in order to achieve photogrammetric tasks. The main objective of this paper is to find the best locations for two or three camera stations. Genetic algorithm optimization and Particle Swarm Optimization are developed to determine the optimal camera stations for computing the three-dimensional coordinates. In this research, a mathematical model representing genetic algorithm optimization and Particle Swarm Optimization for the close-range photogrammetry network is developed. This paper also gives the sequence of field operations and computational steps for this task. A test field is included to reinforce the theoretical aspects.
Comparison between the genetic algorithms optimization and particle swarm opt... - IAEME Publication
The document compares the genetic algorithms optimization and particle swarm optimization methods for designing close range photogrammetry networks. It presents the genetic algorithm and particle swarm optimization as two popular meta-heuristic algorithms inspired by natural evolution and collective animal behavior, respectively. The document develops mathematical models representing the genetic algorithm and particle swarm optimization for close range photogrammetry network design and evaluates them in a test field to reinforce the theoretical aspects.
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION - ijaia
Function approximation is a popular engineering problem used in system identification and equation optimization. Due to the complex search space it requires, AI techniques have been used extensively to spot the best curves that match the real behavior of the system. Genetic algorithms are known for their fast convergence and their ability to find an optimal structure for the solution. We propose using a genetic algorithm as a function approximator, focusing on the polynomial form of the approximation. After implementing the algorithm, we report our results and compare them with the real function output.
A Comparison of Traditional Simulation and MSAL (6-3-2015) - Bob Garrett
This document compares traditional simulation approaches to the Model-Simulation-Analysis-Looping (MSAL) approach. It provides background information on system modeling and simulation basics, including conceptual models, simulation programs, sensitivity analysis, Monte Carlo methods, and simulation optimization. It then discusses risk and uncertainty, modeling systems of systems, and the current state of modeling and simulation in systems engineering. Finally, it introduces the MSAL approach, which uses graphs, analytics, and repeated simulation loops to address the increased complexity and uncertainty in systems of systems compared to traditional approaches. The MSAL approach aims to provide benefits like improved handling of uncertainty and complexity.
IRJET- Machine Learning: Survey, Types and Challenges - IRJET Journal
This document provides an overview of machine learning, including its types and challenges. It discusses supervised and unsupervised machine learning algorithms. Supervised learning uses labeled training data to predict discrete or continuous output values, while unsupervised learning finds hidden patterns in unlabeled data through clustering. Common supervised algorithms are logistic regression, decision trees, and k-nearest neighbors; a common unsupervised technique is clustering. The document also gives examples to explain machine learning concepts and algorithms.
Visualizing and Forecasting Stocks Using Machine Learning - IRJET Journal
This document discusses using machine learning techniques like regression and LSTM models to predict stock market returns. It first provides background on the challenges of predicting the stock market due to its unpredictable nature. It then describes obtaining stock price data from Yahoo Finance to use as the dataset. The document outlines using regression analysis to build a relationship between stock prices and time and using LSTM due to its ability to learn from sequence data. It then reviews related work applying machine learning like neural networks and genetic algorithms to optimize stock prediction. The methodology section provides more detail on preprocessing the dataset and using regression and LSTM models to make predictions and compare results.
Similar to Using genetic algorithms and simulation as decision support in marketing strategies and long-term production planning (20)
Laura F. Cacovean, Florin Stoica, Dana Simian, A New Model Checking Tool, Proceedings of the 5th European Computing Conference (ECC ’11), Paris, France, pp. 358-363, April 28-30, 2011
CTL Model Update Implementation Using ANTLR Tools - infopapers
This document summarizes a research paper that presents an algorithm for updating CTL (Computational Tree Logic) models. The algorithm is implemented using ANTLR (ANother Tool for Language Recognition) tools. The paper first provides background on CTL syntax, semantics and model checking. It then defines five primitive update operations for CTL models including adding/removing states and relations. Several semantic characterizations are presented to achieve admissible updates. The algorithm works by selecting paths that do not satisfy a formula and applying the primitive operations. Finally, a case study on updating a model of an elevator control system is described to illustrate the algorithm.
Generating JADE agents from SDL specifications - infopapers
This document provides information about an international journal on computers, communications, and control. It lists the editorial organization, editorial board members, and details of a paper presented at ICCCC 2006. The paper is titled "Generating JADE agents from SDL specifications" and discusses how to automatically generate JADE agents from specifications written using the Specification and Description Language (SDL).
An evolutionary method for constructing complex SVM kernels - infopapers
D. Simian, F. Stoica, An Evolutionary Method for Constructing Complex SVM Kernels, Recent Advances in Mathematics and Computers in Biology and Chemistry, Proceedings of the 10th International Conference on Mathematics and Computers in Biology and Chemistry, MCBC’09, Prague, Czech Republic, WSEAS Press, ISBN 978-960-474-062-8, ISSN 1790-5125, pp. 172-178, 2009
Evaluation of a hybrid method for constructing multiple SVM kernelsinfopapers
Dana Simian, Florin Stoica, Evaluation of a hybrid method for constructing multiple SVM kernels, Recent Advances in Computers, Proceedings of the 13th WSEAS International Conference on Computers, Recent Advances in Computer Engineering Series, WSEAS Press, Rodos, Greece, July 23-25, 2009, ISSN: 1790-5109, ISBN: 978-960-474-099-4, pp. 619-623
Interoperability issues in accessing databases through Web Servicesinfopapers
Florin Stoica, Laura Florentina Cacovean, Interoperability Issues in Accessing Databases through Web Services, Proceedings of the Recent Advances in Neural Networks, Fuzzy Systems & Evolutionary Computing, 13-15 June 2010, Iaşi, Romania, ISSN: 1790-2769, ISBN: 978-960-474-194-6, pp. 279-284
Using Ontology in Electronic Evaluation for Personalization of eLearning Systemsinfopapers
I. Pah, F. Stoica, L. F. Cacovean, E. M. Popa, Using Ontology in Electronic Evaluation for Personalization of eLearning Systems, Proceedings of the 8th WSEAS International Conference on APPLIED INFORMATICS and COMMUNICATIONS (AIC’08), Rhodes, Greece, August 20-22, ISSN: 1790-5109, ISBN: 978-960-6766-94-7, pp. 332-337, 2008
Models for a Multi-Agent System Based on Wasp-Like Behaviour for Distributed ...infopapers
D. Simian, F. Stoica, C. Simian, Models for a Multi-Agent System Based on Wasp-like Behaviour for Distributed Patients Repartition, Proceedings of the 9th WSEAS International Conference on Evolutionary Computing, Sofia, Bulgaria, ISBN 978-960-6766-58-9, ISSN 1790-5109, pp. 82-86, May 2008
A New Nonlinear Reinforcement Scheme for Stochastic Learning Automatainfopapers
Dana Simian, Florin Stoica, A New Nonlinear Reinforcement Scheme for Stochastic Learning Automata, Proceedings of the 12th WSEAS International Conference on AUTOMATIC CONTROL, MODELLING & SIMULATION, 29-31 May 2010, Catania, Italy, ISSN 1790-5117, ISBN 978-954-92600-5-2, pp. 450-454
Automatic control based on Wasp Behavioral Model and Stochastic Learning Auto...infopapers
F. Stoica, D. Simian, Automatic control based on Wasp Behavioral Model and Stochastic Learning Automata, Proceedings of the 10th International Conference on Mathematical Methods, Computational Techniques & Intelligent Systems, Corfu Island, Greece, ISBN 978-960-474-012-3, pp. 289-294, October 2008
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Using genetic algorithms and simulation as decision support in marketing strategies and long-term production planning
Proceedings of the 9th WSEAS International Conference on SIMULATION, MODELLING AND OPTIMIZATION
Using genetic algorithms and simulation as decision support in
marketing strategies and long-term production planning
FLORIN STOICA
Computer Science Department
Faculty of Sciences
University “Lucian Blaga” Sibiu
Str. Dr. Ion Ratiu 5-7, 550012, Sibiu
ROMANIA
florin.stoica@ulbsibiu.ro
LAURA FLORENTINA CACOVEAN
Computer Science Department
Faculty of Sciences
University “Lucian Blaga” Sibiu
Str. Dr. Ion Ratiu 5-7, 550012, Sibiu
ROMANIA
laura.cacovean@ulbsibiu.ro
Abstract: - This paper presents an approach that uses simulation models and genetic algorithms to generate an aggregate production plan maximizing the total profit of the firm. The described methodology provides a tool which assists company management in deciding which of the suitable solutions will become the production plan. The entire system is composed of a business information system (a database), a simulation model and a genetic algorithm. The purpose of the integrated system is to help operative management personnel take decisions with respect to long-term production planning and marketing strategies.
Key-Words: - Production planning, Genetic algorithms, Co-mutation
1 Introduction
Most real-life optimization and scheduling problems are too complex to be solved completely. The complexity of real-life problems often exceeds the ability of classic methods. In such cases decision-makers prepare and execute a set of scenarios on the simulation model and hope that at least one scenario will be good enough to be used as a production plan.
A long-standing goal of scheduling optimization research has been to find an approach that leads to high-quality solutions in a relatively short computational time. The development of decision-making methodologies is currently headed in the direction of integrating simulation and search algorithms. This leads to a new approach which successfully joins simulation and optimization. The proposed approach supports man-machine interaction in operational planning.
Genetic algorithms (GA) are a widely known group of meta-heuristic search algorithms. Evolutionary algorithms are a very effective tool for solving complicated practical optimization problems. An important characteristic of evolutionary algorithms is their simplicity and versatility. Their main drawback is a long calculation time; with today's advanced computer technology, however, this is no longer a serious disadvantage and does not limit their use for finding near-optimal solutions when there are no real-time restrictions [2].
In this computer imitation of simplified and idealized evolution, an individual chromosome represents a possible solution to our problem. After being evaluated with a fitness function, each chromosome in the population receives its fitness value.
The optimization is based on a genetic algorithm which uses a new co-mutation operator called LR-Mijn, capable of operating on a set of adjacent bits in a single step.
At present there is major interest in the design of powerful mutation operators, in order to solve practical problems which cannot be efficiently solved using standard genetic operators. These new operators are called co-mutation operators. In [7] a co-mutation operator called Mijn was presented, capable of operating on a set of adjacent bits in a single step. In [12] we introduced and studied a new co-mutation operator, denoted LR-Mijn, and proved that it offers superior performance to the Mijn operator.
The paper is organized as follows. In Section 2 we give a brief presentation of the architecture of our decision-support system, called GA-SIM, and present the basic idea of the evolutionary method adopted for the optimization process. Section 3 introduces the co-mutation operator LR-Mijn. The evolutionary algorithm based on the LR-Mijn operator, used within the decision-support system, is presented in Section 4. Section 5 contains the description of the simulation model and its integration in the GA-SIM system. Conclusions and further directions of study can be found in Section 6.
ISSN: 1790-2769 435 ISBN: 978-960-474-113-7
2 The architecture of the decision-support
system
Companies need to be flexible to compete for market share and to adapt to market demands by offering competitive prices and quality. This calls for a wide assortment of products or product types, small production costs, etc. Simulation is a powerful interactive tool that helps decision-makers improve the efficiency of enterprise actions. The ability of simulation to reproduce a real process on the computer while taking uncertainty into consideration is a big advantage when analyzing system behavior in complex situations [15].
Our system is composed of a business information system (a database), a simulation model and a genetic algorithm, and is called in the following the GA-SIM system.
A simulation model is used to compute the fitness function of the genetic algorithm, as well as for the visual representation of the qualitative evaluation of a production plan chosen following genetic algorithm optimization.
The value of the fitness function is computed by the simulation model, which uses the respective chromosome as input data. The value returned by the simulation model is used to evaluate that chromosome, which encodes a possible production plan.
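The chromosome-to-profit mapping can be made concrete with a small sketch. The paper does not specify the encoding of a production plan, so the gene layout, the names `decode_plan` and `simulated_profit`, and the toy profit formula below are all hypothetical stand-ins for the Excel simulation model:

```python
# Illustrative sketch only: the paper does not describe the chromosome
# layout, so this 4-products-by-8-bits encoding and the toy profit
# function are assumptions standing in for the real Excel model.

def decode_plan(chromosome, n_products=4, bits_per_gene=8, max_qty=1000):
    """Split a binary chromosome into genes and scale each gene to a
    production quantity in [0, max_qty]."""
    plan = []
    for g in range(n_products):
        gene = chromosome[g * bits_per_gene:(g + 1) * bits_per_gene]
        raw = int(gene, 2)                  # 0 .. 2**bits_per_gene - 1
        plan.append(raw * max_qty // (2 ** bits_per_gene - 1))
    return plan

def simulated_profit(plan, unit_margin=(5, 3, 8, 2), capacity=2500):
    """Toy stand-in for the simulation model: profit is margin times
    quantity, with a penalty when total output exceeds capacity."""
    profit = sum(m * q for m, q in zip(unit_margin, plan))
    overflow = max(0, sum(plan) - capacity)
    return profit - 10 * overflow

chromosome = "11111111" + "00000000" + "10000000" + "00000001"
plan = decode_plan(chromosome)          # [1000, 0, 501, 3]
```

In GA-SIM the role of `simulated_profit` is played by the Excel model, invoked from Java as described in Section 5.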
By applying the genetic operations to the members of a population of decisional rules at moment t, a new population of rules results, to be used at moment t+1. The population at the initial moment t = 0 is generated randomly, and the genetic operations are applied iteratively until the moment T at which the stopping condition of the algorithm is met.
The above iterative process may be interpreted economically as follows. The objective of the evolution process is to find successful individuals. The binary strings (chromosomes) of individuals with high fitness values (a high profit) will be the basis for building a new generation (population). Strings with lower fitness values, which represent decisions to produce with a low profit, find few successors (or none) in the following generation.
The purpose of the integrated system is to aid operative management personnel in production planning and marketing strategies (incorporated in the simulation model). The main advantage of the presented system is that it enhances man-machine interaction in production planning, since the computer is able to produce several acceptable schedules using the given data and a set of criteria. The user then selects the most suitable schedule and modifies it, if necessary.
After completion of the optimization process, the most suitable production plans are simulated on the visual model of the system. Using the chosen parameters and according to the defined criteria, the decision-maker is motivated to search for the results which will have the most advantageous influence on the whole production process. The result selected after simulation on the visual simulation model becomes the proposed production plan.
Fig. 1 The architecture of the decision-support system (the Business Information System supplies the data; each new individual of the kth generation, produced by evolution through co-mutation, is evaluated by the simulation model, whose profit result becomes the individual's fitness value and drives selection; the last generation yields the optimal production plan)
Every chromosome encodes a possible production plan. The quality of a chromosome is given by the profit that results from the simulation model when it receives as input the production plan encoded in that chromosome.
Note that, by modifying the simulation model, a more varied general framework is obtained, leading to more diversified suggestions concerning decisions about the quantities of company products to be offered on the market.
3 The LR-Mijn operator
In this section we define the co-mutation operator called LR-Mijn. The LR-Mijn operator finds the longest sequence of σp elements situated to the left or to the right of position p. If the longest sequence is to the left of p, LR-Mijn behaves as Mijn; otherwise LR-Mijn operates on the set of bits starting from p and going to the right.
Let us consider a generic alphabet A = {a_1, a_2, …, a_s} composed of s ≥ 2 different symbols. The set of all sequences of length l over the alphabet A will be denoted by Σ = A^l.
In the following we shall denote by σ a generic string, σ = σ_{l-1}…σ_0 ∈ Σ = A^l, where σ_q ∈ A for all q ∈ {0, …, l-1}. By σ(q,i) we denote that position q within the sequence σ holds the symbol a_i of the alphabet A, and by σ^z_{p,j} the presence of z symbols a_j within the sequence σ, starting from position p and going left. Finally, σ^{left,m}_{right,n}(p,i) specifies the presence of the symbol a_i at position p within the sequence σ, between `right` symbols a_n on its right and `left` symbols a_m on its left. Around the application point p we therefore suppose that σ = σ(l-1)…σ(p+left+1,m) σ^{left,i}_{right,i}(p,i) σ(p-right-1,n)…σ(0), i.e., position p lies in a run of the symbol a_i, with `left` further copies of a_i on its left (bounded by a_m) and `right` further copies on its right (bounded by a_n).
The Mijn operator is the mutation operator defined in [7]:
Mijn: σ ∈ Σ, p ∈ {0, …, l-1} → σ' ∈ Σ' ⊂ Σ, where p is randomly chosen:
(i) σ = σ_{l-1}…σ_{p+n} σ(p+n-1,i) σ^{n-1}_{p,j} σ_{p-1}…σ_0 →(Mijn) σ' = σ_{l-1}…σ_{p+n} σ(p+n-1,j) σ^{n-1}_{p,i} σ_{p-1}…σ_0, for n < l − p + 1,
and
(ii) σ = σ^{l-p}_{p,j} σ_{p-1}…σ_0 →(Mijn) σ' = σ^{l-p}_{p,k} σ_{p-1}…σ_0, for n = l − p + 1, with a_k ≠ a_j randomly chosen in A.
In [12], we introduced and studied the properties of the LR-Mijn co-mutation operator.
Definition 3.1 Formally, the LR-Mijn operator is defined as follows:
(i) If p ≠ right and p ≠ l − left − 1, then
LR-Mijn(σ) = σ(l-1)…σ(p+left+1,i) σ^{left,m}_{right,i}(p,m) σ(p-right-1,n)…σ(0), for left > right,
LR-Mijn(σ) = σ(l-1)…σ(p+left+1,m) σ^{left,i}_{right,n}(p,n) σ(p-right-1,i)…σ(0), for left < right,
or, for left = right, one of the two strings above, each with probability 0.5.
(ii) If p = right and p ≠ l − left − 1, then σ = σ(l-1)…σ(p+left+1,m) σ^{left,i}_{right,i}(p,i) and
LR-Mijn(σ) = σ(l-1)…σ(p+left+1,m) σ^{left,i}_{right,k}(p,k), where a_k ≠ a_i is randomly chosen.
(iii) If p ≠ right and p = l − left − 1, then σ = σ^{left,i}_{right,i}(p,i) σ(p-right-1,n)…σ(0) and
LR-Mijn(σ) = σ^{left,k}_{right,i}(p,k) σ(p-right-1,n)…σ(0), where a_k ≠ a_i is randomly chosen.
(iv) If p = right and p = l − left − 1, then σ = σ^{left,i}_{right,i}(p,i) and
LR-Mijn(σ) = σ^{left,k}_{right,i}(p,k), for left > right,
LR-Mijn(σ) = σ^{left,i}_{right,k}(p,k), for left < right (in both cases a_k ≠ a_i randomly chosen),
or, for left = right, one of the two strings above, each with probability 0.5.
As an example, let us consider the binary case, the string σ = 11110000 and the randomly chosen application point p = 2. In this case σ_2 = 0, so we have to find the longest sequence of 0s within string σ, starting from position p. This sequence goes to the right and reaches the end of the string without meeting any occurrence of 1, so the new string obtained after the application of LR-Mijn is 11110111.
The co-mutation operator LR-Mijn allows long jumps, so the search can reach points very far from the current search position. We proved in [12] that the LR-Mijn operator performs more long jumps than Mijn, which leads to better convergence of an evolutionary algorithm based on LR-Mijn in comparison with an algorithm based on the Mijn operator.
In the following, we will consider that A is the binary
alphabet, A = {0, 1}.
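Assuming the binary alphabet, the behaviour of LR-Mijn can be sketched as follows. This is an illustrative implementation, not the authors' code; ties between the two run lengths are resolved deterministically here, whereas Definition 3.1 chooses a side with probability 0.5:

```python
def lr_mijn(bits, p):
    """Sketch of the binary LR-Mijn operator. Positions follow the
    paper's convention sigma_{l-1}...sigma_0, so position 0 is the
    rightmost character of `bits`. The run of bits equal to sigma_p is
    followed to the left and to the right of p; the longer run is
    flipped together with its bounding bit, or without it when the run
    reaches an end of the string. Ties are resolved here by flipping
    to the right (the definition chooses with probability 0.5)."""
    l = len(bits)
    s = list(bits)
    idx = l - 1 - p                 # string index of position p
    v = s[idx]                      # symbol sigma_p

    left = 0                        # run length towards sigma_{l-1}
    while idx - left - 1 >= 0 and s[idx - left - 1] == v:
        left += 1
    right = 0                       # run length towards sigma_0
    while idx + right + 1 < l and s[idx + right + 1] == v:
        right += 1

    flip = lambda c: "1" if c == "0" else "0"
    if left > right:
        # flip positions p..p+left, plus the bounding bit if present
        lo = idx - left - (1 if idx - left - 1 >= 0 else 0)
        for k in range(lo, idx + 1):
            s[k] = flip(s[k])
    else:
        # flip positions p-right..p, plus the bounding bit if present
        hi = idx + right + (1 if idx + right + 1 < l else 0)
        for k in range(idx, hi + 1):
            s[k] = flip(s[k])
    return "".join(s)

# The example above: sigma = 11110000, p = 2 -> 11110111
```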
4 The evolutionary algorithm based on
LR-Mijn operator
The basic scheme for our algorithm, called in the
following LR-MEA, is described as follows:
Procedure LR-MEA
begin
  t = 0
  Initialize population P(t) randomly with P elements;
  Evaluate P(t) by using the fitness function;
  while not Terminated
    for j = 1 to P-1 do
      - select randomly one element among the best T% of P(t);
      - mutate it using LR-Mijn;
      - evaluate the obtained offspring;
      - insert it into P'(t).
    end for
    Choose the best element from P(t) and insert it into P'(t)
    P(t+1) = P'(t)
    t = t + 1
  end while
end
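The scheme above can be sketched in Python. This is an illustration, not the authors' implementation: the OneMax fitness below is a toy stand-in for the profit returned by the Excel simulation model, and the placeholder mutation is a single bit flip rather than the full LR-Mijn operator; only the population scheme (selection from the best T%, P−1 mutated offspring, elitism) follows the LR-MEA procedure.

```python
import random

def lr_mea(fitness, l=16, pop_size=30, top_percent=20,
           generations=50, seed=1):
    """Sketch of the LR-MEA loop: each generation produces P-1 offspring
    by mutating parents drawn from the best T% of P(t), then copies the
    best element of P(t) unchanged into P(t+1) (elitism)."""
    rng = random.Random(seed)

    def mutate(chrom):
        # Placeholder single-bit flip; the paper mutates with LR-Mijn.
        p = rng.randrange(len(chrom))
        flipped = "1" if chrom[p] == "0" else "0"
        return chrom[:p] + flipped + chrom[p + 1:]

    pop = ["".join(rng.choice("01") for _ in range(l))
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        pool = ranked[:max(1, pop_size * top_percent // 100)]
        pop = [mutate(rng.choice(pool)) for _ in range(pop_size - 1)]
        pop.append(ranked[0])  # elitism: keep the best of P(t)
    return max(pop, key=fitness)

# Toy fitness (count of 1-bits) standing in for simulated profit.
best = lr_mea(fitness=lambda c: c.count("1"))
```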
5 The simulation model
The simulation model is implemented as an Excel application. It is based on real data provided by the Business Information System, but also on forecasted data (e.g. sales quantities and values for the following months). The model takes into account many data items and variables: sales (values and quantities), raw material costs, discounts, packaging costs, direct & indirect labor costs, energy costs, depreciation from the rate of exchange, warehousing and logistic costs, transport costs, advertising & promotions, etc. The main workbook contains a few sheets and a VBA module, and the results are synthesized in the following pivot table:
Fig. 2 A pivot table from the simulation model
The role of the simulation model in the genetic algorithm is presented in Figure 3. In fact, the simulation model represents the implementation of the fitness function of the genetic algorithm, needed in the procedure LR-MEA to evaluate each member (chromosome) of the population.
The probability of selecting a certain chromosome (possible production plan) for the next generation is related to its performance in simulated conditions (the profit provided by the simulation model). That profit determines the fitness value of the respective possible solution of our optimization problem.
Fig. 3 Implementation of the fitness function through the simulation model (the chromosome, i.e. a possible production plan, is passed by the Java evaluation module through JExcelAPI to the Excel simulation model, which returns the profit used as fitness value)
Because the GA-SIM system is implemented mainly in Java, a bridge was necessary between the Java code and the simulation model implemented in Excel. Our choice for this purpose was JExcelAPI (http://jexcelapi.sourceforge.net/), a mature, open-source Java API enabling developers to read, write, and modify Excel spreadsheets dynamically [14].
6 Conclusions and further directions of
study
The purpose of the GA-SIM integrated system is to aid
operative management personnel in production planning
and marketing strategies (incorporated in simulation
model). The main advantage of the presented system is
to enhance man-machine interaction in production
planning, since the computer is able to produce several
acceptable schedules using the given data and a set of
criteria.
The GA-SIM system is fully implemented and is currently under evaluation at a large company in Sibiu, Romania, which has expressed interest in acquiring it.
As a further direction of study we want to compare the results obtained by using different genetic operators, and to evaluate real-valued encodings of the variables instead of the current binary one.
References:
[1] Jaber A. Q., Hidehiko Y., Ramli R., Machine Learning in Production Systems Design Using Genetic Algorithms, International Journal of Computational Intelligence, No. 4, 2008, pp. 72-79.
[2] Kofjač D., Kljajić M., Application of Genetic
Algorithms and Visual Simulation in a Real-Case
Production Optimization, WSEAS TRANSACTIONS
on SYSTEMS and CONTROL, Issue 12, Volume 3,
December 2008, pp. 992-1001.
[3] Radhakrishnan P., Prasad V. M., Gopalan M.R.,
Optimizing Inventory Using Genetic Algorithm for
Efficient Supply Chain Management, Journal of
Computer Science 5 (3), 2009, pp. 233 - 241.
[4] Lo Chih-Yao, Advance of Dynamic Production-
Inventory Strategy for Multiple Policies Using
Genetic Algorithm, Information Technology Journal
7 (4), 2008, pp. 647-653.
[5] De Falco I., An introduction to Evolutionary
Algorithms and their application to the Aerofoil
Design Problem – Part I: the Algorithms, von
Karman Lecture Series on Fluid Dynamics,
Bruxelles, April 1997
[6] De Falco I., Del Balio R, Della Cioppa A.,
Tarantino E., A Comparative Analysis of
Evolutionary Algorithms for Function Optimisation,
Research Institute on Parallel Information Systems,
National Research Council of Italy, 1998
[7] De Falco I, A. Iazzetta, A. Della Cioppa, Tarantino
E., The Effectiveness of Co-mutation in Evolutionary
Algorithms: the Mijn operator, Research Institute on
Parallel Information Systems, National Research
Council of Italy, 2000
[8] De Falco I., Iazzetta A., Della Cioppa A., Tarantino
E., Mijn Mutation Operator for Aerofoil Design
Optimisation, Research Institute on Parallel
Information Systems, National Research Council of
Italy, 2001
[9] Chen J., Using Genetic Algorithms to Solve a
Production-Inventory Model, International Journal
of Business and Management, Vol. 2, No. 2, 2007,
pp. 38-41.
[10] Krishnakumar K., Goldberg D., Control system
optimization using genetic algorithm, Journal of
Guidance, Control, and Dynamics, no. 15(3), 1992,
pp. 735-740.
[11] Zhu X., Huang Y., Doyle J., Genetic algorithms
and simulated annealing for robustness analysis,
Proceedings of the American Control Conference,
Albuquerque, New Mexico, 1997, pp. 3756-3760.
[12] Stoica F., Simian D., Simian C., A new co-mutation
genetic operator, Proceedings of the 9th
WSEAS International Conference on Evolutionary
Computing, Sofia, Bulgaria, May 2008, pp. 76-81.
[13] Vapnik V., The Nature of Statistical Learning
Theory, Springer Verlag, 1995.
[14] Java Excel API, http://jexcelapi.sourceforge.net
[15] Kljajić M., Bernik I., Škraba A., Leskovar R.,
Integral simulation approach to decision assessment
in enterprises, Shaping future with simulation:
proceedings of the 4th International Eurosim 2001
Congress, Delft University of Technology, 2001.