Multi-objective predictive control: a solution using metaheuristics — ijcsit
The application of multi-objective model predictive control is significantly limited by the computation time of the underlying optimization algorithms. Metaheuristics are general-purpose heuristics that have been used successfully to solve difficult optimization problems in reasonable computation time. In this work, we use and compare two multi-objective metaheuristics, Multi-Objective Particle Swarm Optimization (MOPSO) and the Multi-Objective Gravitational Search Algorithm (MOGSA), to generate a set of approximately Pareto-optimal solutions in a single run. Two examples are studied: a nonlinear system consisting of two mobile robots tracking trajectories while avoiding obstacles, and a linear multivariable system. The computation times and the quality of the solutions, in terms of the smoothness of the control signals and the precision of tracking, show that MOPSO can be an alternative for real-time applications.
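The abstract does not include code; as a rough illustration of the Pareto-archive mechanism that MOPSO-style algorithms rely on, here is a minimal, self-contained sketch. The one-dimensional Schaffer test problem, the parameter values, and the leader-selection rule are illustrative assumptions, not taken from the paper:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated (solution, objectives) pairs."""
    _, f_c = candidate
    if any(dominates(f, f_c) for _, f in archive):
        return archive                          # candidate is dominated: discard
    archive = [(x, f) for x, f in archive if not dominates(f_c, f)]
    archive.append(candidate)
    return archive

def mopso(objectives, n_particles=20, iters=100, lo=-2.0, hi=4.0):
    """Minimal one-dimensional MOPSO: the global guide is drawn from the archive."""
    random.seed(0)
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    archive = []
    for x in xs:
        archive = update_archive(archive, (x, objectives(x)))
    for _ in range(iters):
        for i in range(n_particles):
            guide = random.choice(archive)[0]   # leader from the Pareto archive
            vs[i] = 0.5 * vs[i] + random.random() * (guide - xs[i])
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            archive = update_archive(archive, (xs[i], objectives(xs[i])))
    return archive

# Schaffer's bi-objective test problem: the Pareto set is x in [0, 2].
schaffer = lambda x: (x * x, (x - 2.0) ** 2)
pareto = mopso(schaffer)
```

The archive returned after a single run approximates the Pareto front, which is what both MOPSO and MOGSA produce in the paper's setting.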
This document describes a deep reinforcement learning method called DQN that achieved human-level performance on 49 Atari 2600 games. The DQN uses a convolutional neural network to learn successful policies for playing games directly from raw pixel inputs. It outperformed existing reinforcement learning methods on 43 of the 49 games and achieved over 75% of a human tester's score on 29 games. The DQN was able to stably train large neural networks using reinforcement learning and stochastic gradient descent to learn policies from high-dimensional visual inputs with minimal prior knowledge.
A PSO-Based Subtractive Data Clustering Algorithm — IJORCS
There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast and high-quality clustering algorithms play an important role in helping users effectively navigate, summarize, and organize this information. Recent studies have shown that partitional clustering algorithms such as the k-means algorithm are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and are prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data. The cluster estimates can be used to initialize iterative optimization-based clustering methods and model identification methods. In this paper, we present a hybrid Subtractive + (PSO) clustering algorithm that performs fast clustering. For comparison purposes, we applied the Subtractive + (PSO), PSO, and Subtractive clustering algorithms to three different datasets. The results illustrate that the Subtractive + (PSO) clustering algorithm generates the most compact clustering results compared to the other algorithms.
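Subtractive clustering, as described above, scores every point by a density-like "potential" and repeatedly picks the highest-potential point as a center. The sketch below follows the standard Chiu-style formulation; the radii, stopping ratio, and toy data are illustrative assumptions, not values from the paper:

```python
import math

def subtractive_clustering(points, ra=1.0, stop_ratio=0.15):
    """Chiu-style subtractive clustering: pick high-density points as centers."""
    rb = 1.5 * ra                        # squash radius, conventionally 1.5 * ra
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    # Potential of each point = sum of Gaussian contributions from all points.
    pot = [sum(math.exp(-alpha * d2(p, q)) for q in points) for p in points]
    first_peak = max(pot)
    centers = []
    while True:
        c = max(range(len(points)), key=pot.__getitem__)
        if pot[c] < stop_ratio * first_peak:
            break
        centers.append(points[c])
        # Subtract the chosen center's influence so nearby points lose potential.
        pc = pot[c]
        pot = [p - pc * math.exp(-beta * d2(points[i], points[c]))
               for i, p in enumerate(pot)]
    return centers

# Two well-separated blobs: the algorithm should find one center per blob.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centers = subtractive_clustering(points, ra=1.0)
```

The returned centers (and their count) are exactly the estimates the abstract says can seed an iterative method such as PSO or k-means.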
This document discusses genomic meta-analysis and summarization techniques. It introduces MetaQC for quality control, MetaDE for detecting differentially expressed genes through meta-analysis, and MetaPCA for integrative visualization of multiple genomic studies. MetaQC uses quality measures to determine inclusion/exclusion of studies in meta-analysis. MetaDE detects biomarkers statistically significant across studies using Fisher's and adaptive weighting methods. MetaPCA integrates multiple genomic datasets by finding a common principal component space.
- The document describes a reinforcement learning method using deep neural networks called DQN that was able to learn successful policies to play 49 Atari 2600 games directly from raw pixel inputs, outperforming prior methods on 43 games.
- DQN trained large neural networks using a reinforcement learning signal and stochastic gradient descent in a stable manner. Its performance was comparable to human-level performance on over half the games.
- The method took high-dimensional video game inputs and used a convolutional neural network architecture to learn policies without additional domain knowledge beyond the inputs, actions, and rewards.
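DQN itself trains a convolutional network on raw pixels; as a self-contained stand-in, the sketch below shows the tabular Q-learning rule whose TD target the DQN loss approximates with stochastic gradient descent. The tiny deterministic MDP and all parameter values are invented for illustration:

```python
import random

def q_learning(transitions, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a deterministic MDP given as
    transitions[(state, action)] = (next_state, reward, done)."""
    random.seed(0)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            a = random.randrange(n_actions) if random.random() < eps \
                else max(range(n_actions), key=Q[s].__getitem__)
            s2, r, done = transitions[(s, a)]
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])   # TD update (SGD-like step)
            s = s2
    return Q

# Tiny 3-state chain: action 1 moves right toward a terminal reward,
# action 0 moves back to the start.
T = {(0, 0): (0, 0.0, False), (0, 1): (1, 0.0, False),
     (1, 0): (0, 0.0, False), (1, 1): (2, 1.0, True)}
Q = q_learning(T, n_states=3, n_actions=2)
```

After training, the greedy policy prefers action 1 in both non-terminal states, i.e. it has learned the rewarding path; DQN does the same with a network in place of the table.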
Improved Parallel Algorithm for Time Series Based Forecasting Using OTIS-Mesh — IDES Editor
Forecasting plays an important role in business, technology, and many other areas; it helps organizations increase profits, reduce lost sales, and plan production more efficiently. A parallel algorithm for forecasting on OTIS-Mesh was reported recently [9]. That parallel algorithm requires 5(√n − 1) electronic steps and 4 optical steps. In this paper we present an improved parallel algorithm for short-term time series forecasting using OTIS-Mesh. This parallel algorithm requires 5(√n − 1) electronic steps and 1 optical step using the same number of I/O ports as in [9], and is shown to be an improvement over the parallel algorithm for time series forecasting using OTIS-Mesh [9].
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Bat Algorithm is Better Than Intermittent Search Strategy — Xin-She Yang
This document compares the bat algorithm to the intermittent search strategy for balancing exploration and exploitation in metaheuristic optimization algorithms. It reviews several metaheuristic algorithms and analyzes the theoretical basis for optimal balancing of exploration and exploitation phases. Equations are presented for the optimal ratio of exploration and exploitation phases in 2D problems based on the intermittent search strategy. The bat algorithm is described and its ability to achieve near-optimal balancing is demonstrated through numerical experiments on test functions. The document concludes higher dimensional problems require more exploration effort to find global optima with limited computations.
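The abstract describes the bat algorithm only in outline; the following is a minimal, illustrative sketch of its exploration/exploitation balance. The sphere objective, all parameter values, and the constant pulse rate are simplifying assumptions, not details from the paper:

```python
import random

def bat_algorithm(obj, dim=2, n_bats=15, iters=200, lo=-5.0, hi=5.0,
                  fmin=0.0, fmax=2.0, alpha=0.97):
    """Minimal bat algorithm: frequency-tuned velocity updates explore,
    a shrinking random walk around the current best exploits."""
    random.seed(1)
    clip = lambda z: min(max(z, lo), hi)
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    A, r = 1.0, 0.3                      # loudness and (constant) pulse rate
    best = min(x, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * random.random()
            v[i] = [vd + (xd - bd) * f for vd, xd, bd in zip(v[i], x[i], best)]
            cand = [clip(xd + vd) for xd, vd in zip(x[i], v[i])]
            if random.random() > r:      # exploit: local walk scaled by loudness
                cand = [clip(bd + 0.1 * A * random.gauss(0, 1)) for bd in best]
            if obj(cand) < obj(x[i]) and random.random() < A:
                x[i] = cand
            if obj(cand) < obj(best):
                best = cand[:]
        A *= alpha                       # loudness decays, narrowing the search
    return best

sphere = lambda p: sum(z * z for z in p)
best = bat_algorithm(sphere)
```

The decaying loudness is what shifts the run from exploration toward exploitation, which is the balancing act the paper compares against the intermittent search strategy.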
Increasing the Action Gap: New Operators for Reinforcement Learning — Ryo Iwaki
The document introduces new operators called consistent Bellman operators for reinforcement learning. These operators aim to increase the "action gap" or difference in value between the optimal action and suboptimal actions at each state. Increasing the action gap makes value function approximation and estimation errors less impactful on the induced greedy policy. The consistent Bellman operator incorporates a notion of local policy consistency to devalue suboptimal actions while preserving optimal values, providing a first-order solution to inconsistencies from function approximation. Experiments showed these operators achieve overwhelming performance on Atari 2600 games and other tasks.
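The action-gap effect can be shown on a tiny example. The sketch below applies the consistent Bellman operator, specialized to deterministic transitions, alongside the standard Bellman operator; the two-state MDP and discount factor are invented for illustration:

```python
def actions(mdp, s):
    """Actions available in state s of a deterministic MDP
    given as mdp[(state, action)] = (next_state, reward)."""
    return [a for (s_, a) in mdp if s_ == s]

def bellman(Q, mdp, gamma=0.9):
    """Standard Bellman optimality operator."""
    return {(s, a): r + gamma * max(Q[(s2, b)] for b in actions(mdp, s2))
            for (s, a), (s2, r) in mdp.items()}

def consistent_bellman(Q, mdp, gamma=0.9):
    """Consistent Bellman operator: on self-transitions, a suboptimal action is
    valued by its own Q instead of the state's max, widening the action gap."""
    out = {}
    for (s, a), (s2, r) in mdp.items():
        v = max(Q[(s2, b)] for b in actions(mdp, s2))
        if s2 == s:                      # local policy-consistency correction
            v -= max(Q[(s, b)] for b in actions(mdp, s)) - Q[(s, a)]
        out[(s, a)] = r + gamma * v
    return out

# One state with a self-loop: action 0 loops (small reward), action 1 exits
# to an absorbing zero-reward state with reward 1.
mdp = {(0, 0): (0, 0.05), (0, 1): (1, 1.0), (1, 0): (1, 0.0)}
Qb = {k: 0.0 for k in mdp}
Qc = {k: 0.0 for k in mdp}
for _ in range(100):
    Qb = bellman(Qb, mdp)
    Qc = consistent_bellman(Qc, mdp)
gap_b = Qb[(0, 1)] - Qb[(0, 0)]
gap_c = Qc[(0, 1)] - Qc[(0, 0)]
```

Both operators assign the optimal action the same value, but the consistent operator devalues the looping action, so the gap grows from 0.05 to 0.5: approximation errors smaller than that gap can no longer flip the greedy policy.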
Hyperparameter optimization with approximate gradient — Fabian Pedregosa
This document discusses hyperparameter optimization using approximate gradients. It introduces the problem of optimizing hyperparameters along with model parameters. While model parameters can be estimated from data, hyperparameters require methods like cross-validation. The document proposes using approximate gradients to optimize hyperparameters more efficiently than costly methods like grid search. It derives the gradient of the objective with respect to hyperparameters and presents an algorithm called HOAG that approximates this gradient using inexact solutions. The document analyzes HOAG's convergence and provides experimental results comparing it to other hyperparameter optimization methods.
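HOAG's contribution is using *inexact* inner solutions; as a transparent illustration of the underlying hypergradient computation itself, here is the exact implicit-differentiation calculation for one-feature ridge regression, checked against finite differences. The data and the regularization value are invented for the example:

```python
def ridge_w(x, y, lam):
    """Closed-form one-feature ridge solution: w = <x,y> / (<x,x> + lam)."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)

def val_loss(xv, yv, w):
    """Validation loss 0.5 * sum (w*x - y)^2."""
    return 0.5 * sum((w * a - b) ** 2 for a, b in zip(xv, yv))

def hypergradient(x, y, xv, yv, lam):
    """dL_val/dlam via the implicit function theorem: the inner optimality
    condition (<x,x> + lam) w = <x,y> gives dw/dlam = -w / (<x,x> + lam),
    and the chain rule through the validation loss does the rest."""
    sxx = sum(a * a for a in x)
    w = ridge_w(x, y, lam)
    dw = -w / (sxx + lam)
    g = sum((w * a - b) * a for a, b in zip(xv, yv))   # dL_val/dw
    return g * dw

# Train/validation split of a noisy linear relation y ~ 2x.
xtr, ytr = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
xva, yva = [1.5, 2.5], [3.1, 4.8]
lam = 0.5
hg = hypergradient(xtr, ytr, xva, yva, lam)
```

With this gradient in hand, the hyperparameter can be tuned by plain gradient descent instead of grid search; HOAG replaces the exact inner solve with an approximate one and still converges.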
FAST ALGORITHMS FOR UNSUPERVISED LEARNING IN LARGE DATA SETS — csandit
The ability to mine and extract useful information automatically from large datasets has been a common concern for organizations over the last few decades. Data on the internet is increasing rapidly, and consequently the capacity to collect and store very large datasets is growing significantly. Existing clustering algorithms are not always efficient and accurate in solving clustering problems for large datasets, and the development of accurate and fast data classification algorithms for very large-scale datasets remains a challenge. In this paper, various algorithms and techniques, especially an approach using a non-smooth optimization formulation of the clustering problem, are proposed for solving the minimum sum-of-squares clustering problem in very large datasets. This research also develops an accurate and real-time L2-DC algorithm based on the incremental approach to solve the minimum sum-of-squares clustering problem.
The document provides an overview of self-organizing maps (SOM). It defines SOM as an unsupervised learning technique that reduces the dimensions of data through the use of self-organizing neural networks. SOM is based on competitive learning where the closest neural network unit to the input vector (the best matching unit or BMU) is identified and adjusted along with neighboring units. The algorithm involves initializing weight vectors, presenting input vectors, identifying the BMU, and updating weights of the BMU and neighboring units. SOM can be used for applications like dimensionality reduction, clustering, and visualization.
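The SOM steps listed above (initialize weights, present an input, find the BMU, update the BMU and its neighbours) can be sketched directly. The grid size, decay schedules, and toy data below are illustrative choices, not from the document:

```python
import math
import random

def train_som(data, grid=5, iters=400, lr0=0.5, sigma0=2.0):
    """Minimal SOM with a 1-D line of `grid` units, each holding a weight vector."""
    random.seed(0)
    dim = len(data[0])
    W = [[random.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        x = random.choice(data)                       # present an input vector
        lr = lr0 * (1 - t / iters)                    # decaying learning rate
        sigma = max(sigma0 * (1 - t / iters), 0.5)    # shrinking neighbourhood
        # Best matching unit: the unit whose weights are closest to the input.
        bmu = min(range(grid),
                  key=lambda i: sum((w - v) ** 2 for w, v in zip(W[i], x)))
        for i in range(grid):
            # Gaussian neighbourhood: the BMU moves most, neighbours less.
            h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
            W[i] = [w + lr * h * (v - w) for w, v in zip(W[i], x)]
    return W

# Two clusters in 2-D: the trained map should place units near both.
data = [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0),
        (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]
W = train_som(data)
```

Because neighbouring units are dragged along with the BMU, nearby units end up representing nearby inputs, which is what makes SOM useful for dimensionality reduction and visualization.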
Nowadays, an enormous amount of data is generated through the Internet of Things (IoT) as technologies advance and people use them in day-to-day activities; this data is termed Big Data, with its own characteristics and challenges. Frequent itemset mining algorithms aim to discover frequent itemsets in a transactional database, but as dataset sizes increase, traditional frequent itemset mining cannot handle them. The MapReduce programming model addresses large datasets, but its large communication cost reduces execution efficiency. This paper proposes a new pre-processing k-means technique applied to the BigFIM algorithm. ClustBigFIM uses a hybrid approach: k-means clustering to generate clusters from huge datasets, and Apriori and Eclat to mine frequent itemsets from the generated clusters using the MapReduce programming model. Results show that the execution efficiency of the ClustBigFIM algorithm is increased by applying k-means clustering before the BigFIM algorithm as a pre-processing technique.
This document discusses tracking multiple objects in video using probabilistic distributions. It proposes using particle filters to represent object positions with random particles. The method initializes particles randomly, updates their positions each frame based on probabilistic distributions, and uses maximum likelihood estimation to compute the distribution parameters. It models object motion using a beta distribution and estimates the distribution's alpha and beta parameters from each frame to predict object positions. The results show this approach can effectively track multiple moving objects, especially when there are occlusions.
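The tracking summary above hinges on fitting beta-distribution parameters from each frame. The document says maximum likelihood; as a simpler closed-form stand-in, the sketch below estimates alpha and beta by the method of moments (the true parameters and sample size are invented for the check):

```python
import random

def beta_moments_fit(samples):
    """Method-of-moments estimates for Beta(alpha, beta) from samples in (0, 1):
    with sample mean m and variance v, k = m(1-m)/v - 1,
    alpha = m*k, beta = (1-m)*k."""
    n = len(samples)
    m = sum(samples) / n
    v = sum((s - m) ** 2 for s in samples) / n
    k = m * (1 - m) / v - 1      # v < m(1-m) holds for any beta distribution
    return m * k, (1 - m) * k

random.seed(0)
samples = [random.betavariate(2.0, 5.0) for _ in range(20000)]
a_hat, b_hat = beta_moments_fit(samples)
```

In the tracking setting, the fitted alpha and beta summarize the particle positions in one frame and parameterize the proposal for the next.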
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti... — Xin-She Yang
This document discusses applying an eagle strategy inspired by nature to engineering optimization problems. The eagle strategy uses a two-stage approach combining global exploration with local exploitation. Global exploration uses Lévy flights for random walks to diversify solutions. Promising solutions are then locally optimized using an efficient local search algorithm like particle swarm optimization. The document analyzes random walk models like Lévy flights and how they can maintain diversity in swarm intelligence algorithms. It applies the eagle strategy to four engineering design problems, finding Lévy flights can effectively reduce computational efforts.
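Lévy-flight steps are commonly generated with Mantegna's algorithm; the sketch below shows that construction and its heavy-tailed behaviour. The stability index and sample count are illustrative choices, not from the document:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step length via Mantegna's algorithm:
    step = u / |v|^(1/beta), u ~ N(0, sigma^2), v ~ N(0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = sorted(abs(levy_step()) for _ in range(5000))
```

Most steps are small, but the heavy tail occasionally produces very long jumps; those rare long jumps are what let the exploration stage escape local regions and maintain diversity.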
Expert system design for elastic scattering neutrons optical model using BPNN — ijcsa
In the present paper, a proposed expert system is designed to obtain trained formulae for the optical model parameters used in elastic scattering of neutrons from light nuclei (7Li), at energies between 1 and 20 MeV. A simple algorithm is used to design this expert system, while a multi-layer back-propagation neural network (BPNN) is applied for training and testing the data used in this model. This group of formulae yields a simple expert system derived from the governing model formulae, and predicts the critical parameters usually obtained from complicated computer-coding methods. This expert system may be used for nuclear reaction yields of both fission and fusion nature, giving results close to the real model.
This document describes a proposed image indexing and retrieval algorithm using Texture Local Tetra Pattern (LTrP) with Gabor Transform.
The algorithm first finds the direction of each pixel and divides patterns into four parts based on the center pixel direction. It then calculates tetra patterns and separates them into binary patterns. Histograms are constructed from the binary patterns to form a feature vector.
The feature vectors of images in a medical image database are compared to a query image to retrieve similar images. Examples show a heart image used as the query to successfully retrieve related heart images from the database. Performance of the combined Gabor Transform and LTrP approach is analyzed.
An Improved Adaptive Multi-Objective Particle Swarm Optimization for Disassem... — IJRESJOURNAL
With the development of productivity and the fast growth of the economy, environmental pollution, resource utilization and low product recovery rates have emerged, so more and more attention has been paid to the recycling and reuse of products. However, since the complexity of the disassembly line balancing problem (DLBP) increases with the number of parts in the product, finding the optimal balance is computationally intensive. In order to improve the computational ability of the particle swarm optimization (PSO) algorithm in solving the DLBP, this paper proposes an improved adaptive multi-objective particle swarm optimization (IAMOPSO) algorithm. First, an evolution-factor parameter is introduced to judge the state of evolution using the idea of fuzzy classification, and the feedback from the evolutionary environment is then used to dynamically adjust the inertia weight and acceleration coefficients. Finally, a dimensional learning strategy based on information entropy is used, in which each learning object is uncertain. Results from tests on a series of instances of different sizes verify the effectiveness of the proposed algorithm.
USING CUCKOO ALGORITHM FOR ESTIMATING TWO GLSD PARAMETERS AND COMPARING IT WI... — ijcsit
This study introduces and compares different methods for estimating the two parameters of the generalized logarithmic series distribution (GLSD): cuckoo search optimization, maximum likelihood estimation, and the method of moments. All the required derivations and the basic steps of each algorithm are explained. The algorithms are applied in simulations using different sample sizes (n = 15, 25, 50, 100), and the results are compared using the mean square error.
The document describes a novel approach called Enhanced Ant Colony Optimization (EACO) for scheduling tasks in a grid computing environment. EACO aims to improve task scheduling by minimizing makespan time compared to existing algorithms like Modified Ant Colony Optimization, MAX-MIN, and Resource Aware Scheduling Algorithm. It does this by considering system and network performance in dynamic grids and selecting resources according to their availability. The document presents the procedures of EACO and the existing algorithms, experimental results showing EACO achieves lower makespan, and concludes EACO is effective for task scheduling in grids.
INVERSION OF MAGNETIC ANOMALIES DUE TO 2-D CYLINDRICAL STRUCTURES – BY AN ARTIF... — ijsc
An application of an Artificial Neural Network Committee Machine (ANNCM) to the inversion of magnetic anomalies caused by a long 2-D horizontal circular cylinder is presented. Although the subsurface targets are of arbitrary shape, they are assumed to have regular geometrical shapes for convenience of mathematical analysis. ANNCM inversion extracts the parameters of the causative subsurface targets, including the depth to the centre of the cylinder (Z), the inclination of the magnetic vector (Ɵ), and the constant term (A) comprising the radius (R) and the intensity of the magnetic field (I). The inversion method is demonstrated on a theoretical model, with and without random noise, in order to study the effect of noise on the technique, and is then extended to real field data. It is noted that the method ensures fairly accurate results even in the presence of noise. ANNCM analysis of a vertical magnetic anomaly near Karimnagar, Telangana, India, has shown satisfactory results in comparison with other inversion techniques currently in vogue. The statistics of the predicted parameters relative to the measured data show a lower sum error (<9.58%) and a higher correlation coefficient (R>91%), indicating that good matching and correlation are achieved between the measured and predicted parameters.
Another Adaptive Approach to Novelty Detection in Time Series — csandit
This paper introduces a novel approach to novelty detection in time series data. The approach uses a neural network model to predict individual samples in a time series. Novelty is detected based on both the prediction error and the changes to the neural network weights from gradient descent learning. The relationship between prediction error and weight changes is key to the approach. The method is demonstrated on both artificial and real ECG time series data, showing it can detect small perturbations in the data even when noise is present. The approach is computationally efficient and could be useful for online novelty detection applications.
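The key idea above, scoring novelty from both the prediction error and the induced weight change, can be sketched with a simple online linear predictor. The predictor order, learning rate, score definition, and the sine-wave test signal are illustrative assumptions, not the paper's model:

```python
import math

class NoveltyDetector:
    """One-step autoregressive predictor trained online by gradient descent;
    the novelty score combines the prediction error with the magnitude of
    the weight change that the error induces."""
    def __init__(self, order=3, lr=0.05):
        self.w = [0.0] * order
        self.lr = lr

    def step(self, window, target):
        pred = sum(w * x for w, x in zip(self.w, window))
        err = target - pred
        delta = [self.lr * err * x for x in window]      # SGD weight update
        self.w = [w + d for w, d in zip(self.w, delta)]
        dw = math.sqrt(sum(d * d for d in delta))
        return abs(err) * dw                             # novelty score

# A sine wave with a small perturbation injected at t = 150.
series = [math.sin(0.2 * t) for t in range(200)]
series[150] += 0.5
det = NoveltyDetector()
scores = [det.step(series[t - 3:t], series[t]) for t in range(3, len(series))]
```

Once the predictor has learned the regular signal, both factors of the score are small; the perturbation spikes both at once, so their product gives a sharp, localized alarm.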
Enhancing Academic Event Participation with Context-aware and Social Recommen... — Dejan Kovachev
The plethora of talks and presentations taking place at academic conferences makes it difficult, especially for young researchers, to attend the right talks or to discuss with participants and potential collaborators with similar interests. Participants may not have the a priori knowledge that allows them to select the right talks or informal interactions with other participants. In this paper we present context-aware mobile recommendation services (CAMRS) based on the current context (whereabouts at the venue, popularity and activities of talks and presentations) sensed at the conference venue. Additionally, we augment the current context with the academic-community context of conference participants, which is inferred using social network analysis and link prediction on large-scale co-authorship and citation networks of participants. By combining the dynamic and social context of participants, we are able to recommend talks and people that may be interesting to a particular participant. We evaluated CAMRS using data from two large digital libraries (DBLP and CiteSeerX) and participants from two conferences (ICWL 2010 and EC-TEL 2011). The results show that the new approach can recommend novel talks and helps participants establish new connections at the conference venue.
The document proposes extending the IMS Learning Design information model to support mobile and contextual learning. It presents a Context-Aware m-Learning Design model that incorporates context elements like location and environment. The model is evaluated through usability testing and server log file analysis. The goal is to facilitate adoption of mobile learning tools that are compliant with the IMS Learning Design specification.
Context-aware preference modeling with factorization — Balázs Hidasi
- The document outlines Balázs Hidasi's research on context-aware recommendation models using factorization techniques.
- It introduces context-aware algorithms like iTALS and iTALSx that estimate preferences using ALS learning and scale linearly with data.
- Methods for speeding up ALS through approximate solutions like ALS-CG and ALS-CD are described, providing significant speed gains.
- A General Factorization Framework (GFF) is presented that allows experimenting with novel context-aware preference models beyond traditional approaches.
[CARS2012@RecSys] Optimal Feature Selection for Context-Aware Recommendation u... — YONG ZHENG
This document summarizes a research paper on optimal feature selection for context-aware recommendation systems using differential relaxation. The paper proposes a differential context relaxation (DCR) model that applies different context relaxations to different components of a recommendation algorithm to maximize their contributions. It uses binary particle swarm optimization to efficiently find optimal context relaxations and outperforms exhaustive search. Experimental results on a food preference dataset show the effects of different contexts and context-linked features. The paper discusses limitations and opportunities for future work to address sparsity issues.
Introduction
Foreign Object Damage
– An aviation perspective
Health, Safety and Environment – a holistic approach
Engaging the human element
Culture
Leadership’s role
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Bat Algorithm is Better Than Intermittent Search StrategyXin-She Yang
This document compares the bat algorithm to the intermittent search strategy for balancing exploration and exploitation in metaheuristic optimization algorithms. It reviews several metaheuristic algorithms and analyzes the theoretical basis for optimal balancing of exploration and exploitation phases. Equations are presented for the optimal ratio of exploration and exploitation phases in 2D problems based on the intermittent search strategy. The bat algorithm is described and its ability to achieve near-optimal balancing is demonstrated through numerical experiments on test functions. The document concludes higher dimensional problems require more exploration effort to find global optima with limited computations.
increasing the action gap - new operators for reinforcement learningRyo Iwaki
The document introduces new operators called consistent Bellman operators for reinforcement learning. These operators aim to increase the "action gap" or difference in value between the optimal action and suboptimal actions at each state. Increasing the action gap makes value function approximation and estimation errors less impactful on the induced greedy policy. The consistent Bellman operator incorporates a notion of local policy consistency to devalue suboptimal actions while preserving optimal values, providing a first-order solution to inconsistencies from function approximation. Experiments showed these operators achieve overwhelming performance on Atari 2600 games and other tasks.
Hyperparameter optimization with approximate gradientFabian Pedregosa
This document discusses hyperparameter optimization using approximate gradients. It introduces the problem of optimizing hyperparameters along with model parameters. While model parameters can be estimated from data, hyperparameters require methods like cross-validation. The document proposes using approximate gradients to optimize hyperparameters more efficiently than costly methods like grid search. It derives the gradient of the objective with respect to hyperparameters and presents an algorithm called HOAG that approximates this gradient using inexact solutions. The document analyzes HOAG's convergence and provides experimental results comparing it to other hyperparameter optimization methods.
FAST ALGORITHMS FOR UNSUPERVISED LEARNING IN LARGE DATA SETScsandit
The ability to mine and extract useful information automatically, from large datasets, is a
common concern for organizations (having large datasets), over the last few decades. Over the
internet, data is vastly increasing gradually and consequently the capacity to collect and store
very large data is significantly increasing.
Existing clustering algorithms are not always efficient and accurate in solving clustering
problems for large datasets.
However, the development of accurate and fast data classification algorithms for very large
scale datasets is still a challenge. In this paper, various algorithms and techniques especially,
approach using non-smooth optimization formulation of the clustering problem, are proposed
for solving the minimum sum-of-squares clustering problems in very large datasets. This
research also develops accurate and real time L2-DC algorithm based with the incremental
approach to solve the minimum
The document provides an overview of self-organizing maps (SOM). It defines SOM as an unsupervised learning technique that reduces the dimensions of data through the use of self-organizing neural networks. SOM is based on competitive learning where the closest neural network unit to the input vector (the best matching unit or BMU) is identified and adjusted along with neighboring units. The algorithm involves initializing weight vectors, presenting input vectors, identifying the BMU, and updating weights of the BMU and neighboring units. SOM can be used for applications like dimensionality reduction, clustering, and visualization.
Nowadays an enormous amount of data is being generated through the Internet of Things (IoT) as technologies advance and people use them in day-to-day activities; this data is termed Big Data, with its own characteristics and challenges. Frequent itemset mining algorithms aim to discover frequent itemsets in a transactional database, but as dataset size increases, traditional frequent itemset mining can no longer handle it. The MapReduce programming model solves the problem of large datasets, but its large communication cost reduces execution efficiency. The proposed ClustBigFIM technique applies k-means pre-processing to the BigFIM algorithm. ClustBigFIM uses a hybrid approach: k-means clustering to generate clusters from huge datasets, and Apriori and Eclat to mine frequent itemsets from the generated clusters using the MapReduce programming model. Results show that the execution efficiency of the BigFIM algorithm is increased by applying k-means clustering as a pre-processing step.
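The k-means pre-processing step mentioned above is plain Lloyd's algorithm. A self-contained sketch, assuming a toy 2-D numeric representation of the data (the real system clusters transaction-derived data before feeding each cluster to Apriori/Eclat under MapReduce):

```python
import random

random.seed(1)

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm: alternate assignment and center update."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centers, clusters

# Two well-separated toy groups; the 2-D features are hypothetical stand-ins.
pts = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(50)] + \
      [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(50)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))
```

In the ClustBigFIM pipeline each resulting cluster would then be mined independently, which is what reduces the MapReduce communication cost.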
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses tracking multiple objects in video using probabilistic distributions. It proposes using particle filters to represent object positions with random particles. The method initializes particles randomly, updates their positions each frame based on probabilistic distributions, and uses maximum likelihood estimation to compute the distribution parameters. It models object motion using a beta distribution and estimates the distribution's alpha and beta parameters from each frame to predict object positions. The results show this approach can effectively track multiple moving objects, especially when there are occlusions.
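As a rough 1-D illustration of the cycle described (random particles, per-frame re-estimation of the motion distribution's parameters), the sketch below uses a method-of-moments fit for the beta parameters as a simpler stand-in for the MLE step, and every number in it is invented:

```python
import random

random.seed(2)

def beta_mom(samples):
    """Method-of-moments fit of Beta(a, b) to samples in (0, 1) --
    a simpler stand-in for the MLE step described in the document."""
    m = sum(samples) / len(samples)
    v = max(sum((s - m) ** 2 for s in samples) / len(samples), 1e-12)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# 1. initialise particles randomly over a normalised image coordinate (0, 1)
particles = [random.random() for _ in range(500)]

true_pos = 0.3                          # hypothetical object position this frame
obs = true_pos + random.gauss(0, 0.02)  # noisy observation
# 2. weight particles by closeness to the observation
weights = [1.0 / (1e-6 + abs(p - obs)) for p in particles]
# 3. resample particles in proportion to their weights
particles = random.choices(particles, weights=weights, k=500)
# 4. fit the beta motion model; its mean predicts the next position
a, b = beta_mom(particles)
print(round(a / (a + b), 2))
```

A multi-object tracker would maintain one such particle set per object, which is what lets the estimates survive occlusions.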
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti... (Xin-She Yang)
This document discusses applying an eagle strategy inspired by nature to engineering optimization problems. The eagle strategy uses a two-stage approach combining global exploration with local exploitation. Global exploration uses Lévy flights for random walks to diversify solutions. Promising solutions are then locally optimized using an efficient local search algorithm like particle swarm optimization. The document analyzes random walk models like Lévy flights and how they can maintain diversity in swarm intelligence algorithms. It applies the eagle strategy to four engineering design problems, finding Lévy flights can effectively reduce computational efforts.
Expert system design for elastic scattering neutrons optical model using bpnn (ijcsa)
In this paper, a proposed expert system is designed to obtain trained formulae for the optical model parameters used in elastic scattering of neutrons from light nuclei (7Li), in the energy range from 1 to 20 MeV. A simple algorithm is used to design the expert system, while a multi-layer back-propagation neural network (BPNN) is applied for training and testing the data used in the model. This group of formulae yields a simple expert system derived from the governing model formulae, and predicts the critical parameters usually obtained from complicated computer codes. The expert system may be used for nuclear reaction yields of both fission and fusion nature, giving results close to the real model.
This document describes a proposed image indexing and retrieval algorithm using Texture Local Tetra Pattern (LTrP) with Gabor Transform.
The algorithm first finds the direction of each pixel and divides patterns into four parts based on the center pixel direction. It then calculates tetra patterns and separates them into binary patterns. Histograms are constructed from the binary patterns to form a feature vector.
The feature vectors of images in a medical image database are compared to a query image to retrieve similar images. Examples show a heart image used as the query to successfully retrieve related heart images from the database. Performance of the combined Gabor Transform and LTrP approach is analyzed.
An Improved Adaptive Multi-Objective Particle Swarm Optimization for Disassem... (IJRESJOURNAL)
With the development of productivity and the fast growth of the economy, environmental pollution, poor resource utilization and low product recovery rates have emerged, so more and more attention has been paid to the recycling and reuse of products. However, since the complexity of the disassembly line balancing problem (DLBP) increases with the number of parts in the product, finding the optimal balance is computationally intensive. In order to improve the ability of the particle swarm optimization (PSO) algorithm to solve the DLBP, this paper proposes an improved adaptive multi-objective particle swarm optimization (IAMOPSO) algorithm. Firstly, an evolution factor parameter is introduced to judge the state of evolution using the idea of fuzzy classification, and the feedback from the evolutionary environment is then used to dynamically adjust the inertia weight and acceleration coefficients. Finally, a dimensional learning strategy based on information entropy is used, in which each learning object is uncertain. Results from tests on a series of instances of different sizes verify the effectiveness of the proposed algorithm.
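The core adaptive mechanism, an inertia weight that reacts to the swarm's evolutionary state, can be sketched as follows. The evolution factor here is a simplified proxy (rescaled mean distance of particles to the best particle) rather than the paper's fuzzy classification, the objective is a toy stand-in for a DLBP fitness, and all constants are illustrative:

```python
import random

random.seed(3)

def sphere(x):                 # toy objective standing in for a DLBP fitness
    return sum(v * v for v in x)

n, dim = 20, 5
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)[:]

for it in range(150):
    # Evolution factor: average particle distance to the best particle,
    # rescaled to [0, 1] -- a simplified proxy for the fuzzy-classified state.
    d = [sum(abs(a - b) for a, b in zip(p, gbest)) / dim for p in pos]
    f = (sum(d) / n - min(d)) / (max(d) - min(d) + 1e-12)
    w = 0.4 + 0.3 * f          # explore while spread out, exploit once converged
    for i in range(n):
        for k in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][k] = (w * vel[i][k]
                         + 1.5 * r1 * (pbest[i][k] - pos[i][k])
                         + 1.5 * r2 * (gbest[k] - pos[i][k]))
            pos[i][k] += vel[i][k]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)[:]

print(round(sphere(gbest), 6))
```

The design intent is that a spread-out swarm (large factor) keeps a high inertia weight for exploration, while a converged swarm (small factor) drops it for fine-grained exploitation.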
USING CUCKOO ALGORITHM FOR ESTIMATING TWO GLSD PARAMETERS AND COMPARING IT WI... (ijcsit)
This study introduces and compares different methods for estimating the two parameters of generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
The document describes a novel approach called Enhanced Ant Colony Optimization (EACO) for scheduling tasks in a grid computing environment. EACO aims to improve task scheduling by minimizing makespan time compared to existing algorithms like Modified Ant Colony Optimization, MAX-MIN, and Resource Aware Scheduling Algorithm. It does this by considering system and network performance in dynamic grids and selecting resources according to their availability. The document presents the procedures of EACO and the existing algorithms, experimental results showing EACO achieves lower makespan, and concludes EACO is effective for task scheduling in grids.
INVERSION OF MAGNETIC ANOMALIES DUE TO 2-D CYLINDRICAL STRUCTURES – BY AN ARTIF... (ijsc)
Application of an Artificial Neural Network Committee Machine (ANNCM) to the inversion of magnetic anomalies caused by a long 2-D horizontal circular cylinder is presented. Although the subsurface targets are of arbitrary shape, they are assumed to have regular geometrical shapes for convenience of mathematical analysis. The ANNCM inversion extracts the parameters of the causative subsurface targets, including the depth to the centre of the cylinder (Z), the inclination of the magnetic vector (θ), and the constant term (A) comprising the radius (R) and the intensity of the magnetic field (I). The inversion method is demonstrated on a theoretical model with and without random noise, in order to study the effect of noise on the technique, and is then extended to real field data. It is noted that the method ensures fairly accurate results even in the presence of noise. ANNCM analysis of a vertical magnetic anomaly near Karimnagar, Telangana, India, has shown satisfactory results in comparison with other inversion techniques in vogue. The statistics of the predicted parameters relative to the measured data show a lower sum error (<9.58%) and a higher correlation coefficient (R>91%), indicating good matching between the measured and predicted parameters.
Another Adaptive Approach to Novelty Detection in Time Series (csandit)
This paper introduces a novel approach to novelty detection in time series data. The approach uses a neural network model to predict individual samples in a time series. Novelty is detected based on both the prediction error and the changes to the neural network weights from gradient descent learning. The relationship between prediction error and weight changes is key to the approach. The method is demonstrated on both artificial and real ECG time series data, showing it can detect small perturbations in the data even when noise is present. The approach is computationally efficient and could be useful for online novelty detection applications.
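A minimal online version of this idea, a predictor trained by gradient descent whose novelty score combines the prediction error with the size of the weight update that error causes, might look like this. The linear predictor, the error-times-update score, and the test signal are all illustrative simplifications of the paper's neural-network method:

```python
import math

# Linear predictor of sample t from the previous `order` samples, trained
# online by gradient descent; novelty is flagged when BOTH the prediction
# error and the resulting weight change are large.
order, lr = 4, 0.05
w = [0.0] * order

def score_stream(signal):
    scores = []
    for t in range(order, len(signal)):
        x = signal[t - order:t]
        err = signal[t] - sum(wi * xi for wi, xi in zip(w, x))
        grad = [err * xi for xi in x]
        for i in range(order):
            w[i] += lr * grad[i]                        # learning step
        dw = lr * math.sqrt(sum(g * g for g in grad))   # weight-change size
        scores.append(abs(err) * dw)
    return scores

# Clean sinusoid with a small perturbation injected at t = 150.
sig = [math.sin(0.2 * t) for t in range(300)]
sig[150] += 0.5
scores = score_stream(sig)

# Skip the initial training transient, then locate the novelty peak
# (it should land near the injected perturbation).
burn_in = 100
peak_j = max(range(burn_in, len(scores)), key=scores.__getitem__)
peak_t = peak_j + order
print(peak_t)
```

Once the predictor has settled, routine samples produce both small errors and small updates, so the product stays near zero; a genuine perturbation spikes both factors at once.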
Enhancing Academic Event Participation with Context-aware and Social Recommen... (Dejan Kovachev)
The plethora of talks and presentations taking place at academic conferences makes it difficult, especially for young researchers, to attend the right talks or to discuss with participants and potential collaborators with similar interests. Participants may not have the a priori knowledge that would allow them to select the right talks or informal interactions with other participants. In this paper we present context-aware mobile recommendation services (CAMRS) based on the current context (whereabouts at the venue, popularity and activities of talks and presentations) sensed at the conference venue. Additionally, we augment the current context with the academic community context of conference participants, inferred by using social network analysis and link prediction on large-scale co-authorship and citation networks of participants. By combining the dynamic and social context of participants, we are able to recommend talks and people that may be interesting to a particular participant. We evaluated CAMRS using data from two large digital libraries, DBLP and CiteSeerX, and participants from two conferences, ICWL 2010 and EC-TEL 2011. The results show that the new approach can recommend novel talks and helps participants establish new connections at the conference venue.
The document proposes extending the IMS Learning Design information model to support mobile and contextual learning. It presents a Context-Aware m-Learning Design model that incorporates context elements like location and environment. The model is evaluated through usability testing and server log file analysis. The goal is to facilitate adoption of mobile learning tools that are compliant with the IMS Learning Design specification.
Context-aware preference modeling with factorization (Balázs Hidasi)
- The document outlines Balázs Hidasi's research on context-aware recommendation models using factorization techniques.
- It introduces context-aware algorithms like iTALS and iTALSx that estimate preferences using ALS learning and scale linearly with data.
- Methods for speeding up ALS through approximate solutions like ALS-CG and ALS-CD are described, providing significant speed gains.
- A General Factorization Framework (GFF) is presented that allows experimenting with novel context-aware preference models beyond traditional approaches.
[CARS2012@RecSys] Optimal Feature Selection for Context-Aware Recommendation u... (YONG ZHENG)
This document summarizes a research paper on optimal feature selection for context-aware recommendation systems using differential relaxation. The paper proposes a differential context relaxation (DCR) model that applies different context relaxations to different components of a recommendation algorithm to maximize their contributions. It uses binary particle swarm optimization to efficiently find optimal context relaxations and outperforms exhaustive search. Experimental results on a food preference dataset show the effects of different contexts and context-linked features. The paper discusses limitations and opportunities for future work to address sparsity issues.
Introduction
Foreign Object Damage
– An aviation perspective
Health, Safety and Environment – a holistic approach
Engaging the human element
Culture
Leadership’s role
Semantic Technologies in Learning Environments - Promises and Challenges - (Dragan Gasevic)
The document discusses the promises and challenges of semantic technologies in learning environments. It describes how semantics can improve search, reusability, and provide richer metadata for learning objects. However, challenges remain in ontology development, integrating different approaches, motivating contribution from students, and addressing usability and privacy concerns in personalized learning systems.
Slides from the presentation of TFMAP at SIGIR 2012.
TFMAP is a Collaborative Filtering model that directly maximizes Mean Average Precision with the aim of creating an optimally ranked list of items for individual users under a given context. TFMAP uses tensor factorization to model implicit feedback data (e.g., purchases, clicks) along with contextual information.
Review: Google I/O 2015 Building context aware apps (Empatika)
The document discusses building context-aware apps using human sensors as a metaphor. It explains that context-aware apps have three main components: sensors, algorithms, and user experience (UX). The document also provides examples of Google use cases for context-aware apps like finding a car or tracking exercise, and emphasizes the importance of energy efficiency and providing what users truly need. It asks for ideas on how these concepts could apply to an "App in the Air" and includes a link to a relevant video.
Filip Maertens - Artificial Intelligence: Building Emotion & Context aware Re... (BAQMaR)
Argus Labs has created a sensor fusion platform that can understand the context, behavior, and mood of mobile users in real-time using deep learning. The platform can detect emotions, activities, and habits based on sensor data from devices. Argus Labs works with industries like insurance, media, and healthcare to apply contextual insights about users for applications like personalized recommendations, usage-based insurance, and diagnostic support.
The right learning delivered at the right time can have a significant impact on productivity, but when wrong, it can be a costly distraction. Organizations looking to create a competitive advantage have to deliver differentiated solutions to their people. As jobs become more specialized and the workforce increasingly diverse, the time for cookie-cutter learning programs has passed. To really increase the productivity of people in your organization, their experience needs to be personalized — the information and recommendations they receive and the actions they take must all be relevant and helpful.
Context-aware learning combines situational and environmental information with other information to proactively offer enriched, usable content, functions and experiences that are hyper-personalized and relevant to the receiver. By leveraging a broad range of information about an individual, this tailored learning creates significantly greater value.
Join Steve Parker, SPHR, vice president at SumTotal Systems, the world’s leader in learning and the first provider of context-aware HR technology, to learn how to drive greater impact with your learning programs through context. You’ll learn:
How data drives context, and how to get the right data fast.
Why the right technology is important, and why you don’t have to get rid of what you already have.
How to put people, not process, at the center of your learning programs.
Join us for an exciting webinar on this significant breakthrough in learning delivery and break free from one-size-fits-all learning. With SumTotal, talent is boundless.
Empirical Evaluation of Active Learning in Recommender Systems (University of Bergen)
The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor-quality data during training. Active learning aims to remedy this problem by focusing on obtaining better-quality data that more aptly reflects a user’s preferences. In an attempt to do so, an active learning strategy selects the best items to present to the user in order to acquire her ratings and hence improve the output of the RS.
In this seminar, I present a set of active learning strategies with different characteristics and the evaluation results with respect to several evaluation measures (i.e., MAE, NDCG, Precision, Coverage, Recommendation Quality, and, Quantity of the acquired ratings and contextual conditions).
The traditional evaluation of active learning strategies has two major flaws: (1) performance has been evaluated for each user independently (ignoring system-wide improvements), and (2) active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition). Addressing these flaws, I show that an elicited rating has effects across the system, so a typical user-centric evaluation which ignores any changes in the rating predictions of other users also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system-centric). Hence, I present a novel offline evaluation methodology and use it to evaluate some novel and state-of-the-art rating elicitation strategies.
While the first set of experiments was done offline, the true value of active learning must be evaluated in an online setting. Hence, in the second part of the seminar, I present a novel active learning approach that exploits some additional information of the user (i.e. the user’s personality) to deal with the cold start problem in an up-and-running mobile context-aware RS called STS, that provides users with recommendations for places of interest (POIs). The results of live user studies, have shown that the proposed AL approach significantly increases the quantity of the ratings and contextual conditions acquired from the user as well as the recommendation accuracy.
Mobile teaching and learning in higher education is approaching a tipping point. One of the most significant promises of mobile learning is the ability for faculty members, teachers, and students to use their own mobile computing devices. In the US, 75% of American teens have cell phones and almost 30% have smartphones with Internet capabilities. In universities, the numbers appear to be much higher. It seems instructionally sound and fiscally prudent for institutions and faculty members to leverage the existing devices in which students are most comfortable. The purpose of this paper is to (1) critically examine the definitions and affordances of mobile learning in higher education, (2) identify the ways mobile teaching and learning have been and could be accomplished in higher education, (3) identify the challenges to implementing mobile teaching and learning in higher education.
This document discusses the concept of mobile learning in context. It describes how computers and mobile devices are becoming ubiquitous and context-aware. Sensors in environments and on mobile devices can provide contextual information to enhance learning experiences. However, mobile phones are still often seen only as toys in classrooms rather than learning tools. The document advocates for leveraging context through ubiquitous computing to design new approaches to mobile and ambient learning.
[Decisions2013@RecSys] The Role of Emotions in Context-aware Recommendation (YONG ZHENG)
The document discusses the role of emotions in context-aware recommender systems (CARS). It explores two classes of CARS algorithms: context-aware splitting approaches and differential context modeling. For context-aware splitting approaches, it examines which emotional contexts are most frequently used to split items or users. For differential context modeling, it analyzes which emotional dimensions are selected or weighted most highly for different algorithm components. The experimental results found that end emotion and dominant emotion were the most influential across approaches. User splitting also generally outperformed item splitting.
Context-Aware Access Control and Presentation of Linked Data (Luca Costabello)
My PhD Thesis defence slideshow. The work discusses the influence of mobile context in accessing Linked Data from handheld devices. The work dissects this issue into two research questions: how to enable context-aware adaptation for Linked Data consumption, and how to protect access to RDF stores from context-aware devices.
Context-Aware Points of Interest Suggestion with Dynamic Weather Data Management (Matthias Braunhofer)
Weather plays an important role in tourists’ decision-making; for instance, some places or activities must not even be suggested under dangerous weather conditions. In this paper we present a context-aware recommender system, named STS, that computes recommendations suited to the weather conditions at the recommended places of interest (POIs) by exploiting a novel model-based context-aware recommendation technique. In a live user study we compared the performance of the system with a variant that does not exploit weather data when generating recommendations. The results of our experiment show that the proposed approach obtains a higher perceived recommendation quality and choice satisfaction.
A Context-Aware Retrieval System for Mobile Applications (marcopavan83)
We present a prototype recommendation system for mobile applications that exploits a rather general description of the user’s context. One of the main features of the proposed solution is the proactive and completely automated procedure of querying the apps marketplace, able to retrieve a set of apps and to rank them on the basis of the current situation of the user. We also present a first experimental evaluation that confirms the effectiveness of the general design and implementation choices and sheds some light on the peculiar features and critical issues of recommendation systems for mobile applications.
1. Context-aware computing uses information about a user's environment and situation to provide tailored services, with the goal of delivering the right service at the right moment.
2. Context includes information such as location, identity, activity, schedule, nearby resources and more. It comes from various sources and changes over time.
3. Designing context-aware applications and systems requires acquiring context information, reasoning about it, and using it intelligently to benefit users or services while maintaining user privacy and control. Many technical and research challenges remain open.
The internet of things: the next technology revolution (usman sarwar)
This presentation provides an overview of IoT technology from multiple perspectives. It illustrates IoT technological development areas, market trends, platforms, and IoT research and application trends.
The slides from the Machine Learning Summers School 2015 in Sydney on Machine Learning for Recommender Systems. Collaborative filtering algorithms, Context-aware methods, Restricted Boltzmann Machines, Recurrent Neural Networks, Tensor Factorization, etc.
Metaheuristic Optimization: Algorithm Analysis and Open Problems (Xin-She Yang)
This document analyzes metaheuristic optimization algorithms and discusses open problems in their analysis. It reviews convergence analyses that have been done for simulated annealing and particle swarm optimization. It also provides a novel convergence analysis for the firefly algorithm, showing that it can converge for certain parameter values but also exhibit chaos which can be advantageous for exploration. The document outlines the need for further mathematical analysis of convergence and efficiency in metaheuristics.
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT... (IAEME Publication)
Close-range photogrammetry network design refers to the process of placing a set of cameras in order to achieve photogrammetric tasks. The main objective of this paper is to find the best locations for two or three camera stations. Genetic algorithm optimization and Particle Swarm Optimization are developed to determine the optimal camera stations for computing three-dimensional coordinates. In this research, a mathematical model representing genetic algorithm optimization and Particle Swarm Optimization for the close-range photogrammetry network is developed. The paper also gives the sequence of field operations and computational steps for this task. A test field is included to reinforce the theoretical aspects.
Comparison between the genetic algorithms optimization and particle swarm opt... (IAEME Publication)
The document compares the genetic algorithms optimization and particle swarm optimization methods for designing close range photogrammetry networks. It presents the genetic algorithm and particle swarm optimization as two popular meta-heuristic algorithms inspired by natural evolution and collective animal behavior, respectively. The document develops mathematical models representing the genetic algorithm and particle swarm optimization for close range photogrammetry network design and evaluates them in a test field to reinforce the theoretical aspects.
Two-Stage Eagle Strategy with Differential Evolution (Xin-She Yang)
The document describes a two-stage optimization strategy called the Eagle Strategy (ES) that combines global and local search algorithms to improve search efficiency. It evaluates applying ES to differential evolution (DE), a popular evolutionary algorithm. ES first uses randomization like Levy flights for global exploration, then switches to DE for intensive local search around promising solutions. The authors validate ES-DE on test functions, finding it requires only 9.7-24.9% of the function evaluations of pure DE. They also apply it to real-world pressure vessel and gearbox design problems, achieving solutions with 14.9-17.7% fewer function evaluations than pure DE.
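The two-stage structure is easy to sketch: Lévy-flight jumps explore globally, then a local optimizer refines the best find. In the sketch below a simple coordinate descent stands in for DE, the Lévy step uses the common Mantegna approximation, and the objective and all constants are illustrative:

```python
import math, random

random.seed(4)

def levy_step(beta=1.5):
    """Mantegna's algorithm for approximating a Levy-stable step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def f(x):                         # toy objective with its minimum (0) at the origin
    return sum(v * v for v in x)

dim = 2
best = [random.uniform(-10, 10) for _ in range(dim)]

# Stage 1: global exploration with Levy flights around the current best.
for _ in range(200):
    cand = [b + levy_step() for b in best]
    if f(cand) < f(best):
        best = cand

# Stage 2: intensive local search (coordinate descent standing in for DE).
step = 0.5
while step > 1e-6:
    improved = False
    for k in range(dim):
        for s in (step, -step):
            cand = best[:]
            cand[k] += s
            if f(cand) < f(best):
                best, improved = cand, True
    if not improved:
        step /= 2

print(f(best) < 1e-8)   # prints True
```

The occasional long jumps of the Lévy stage are what keep the search from stagnating, which is why the paper reports far fewer function evaluations than pure DE.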
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
This document discusses using particle swarm optimization (PSO) to design optimal close-range photogrammetry networks. PSO is introduced as a heuristic optimization algorithm inspired by bird flocking behavior that can be used to solve complex optimization problems. The document then provides an overview of close-range photogrammetry network design and the four design stages. It explains that PSO will be used to optimize the first stage of determining optimal camera station positions. Mathematical models of PSO for close-range photogrammetry network design are developed. Experimental tests are carried out to develop a PSO algorithm that can determine optimum camera positions and evaluate the accuracy of the developed network.
Performance Comparison of Machine Learning Algorithms (Dinusha Dilanka)
In this paper we compare the performance of two classification algorithms. It is useful to differentiate algorithms based on computational performance rather than classification accuracy alone: although classification accuracy between the algorithms is similar, computational performance can differ significantly and can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, namely K-Nearest Neighbor classification and Logistic Regression. We consider a large dataset of 7981 data points and 112 features and examine the performance of the two algorithms on it. The processing time and accuracy of the different machine learning techniques are estimated on the collected dataset, using 60% of the data for training and the remaining 40% for testing. The paper is organized as follows. Section I includes the introduction and background analysis of the research; Section II states the problem. Section III briefly describes our application, the data analysis process, the testing environment, and the methodology of our analysis. Section IV comprises the results of the two algorithms. Finally, the paper concludes with a discussion of future research directions that would eliminate the problems in the current methodology.
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing S... (Pooyan Jamshidi)
https://arxiv.org/abs/1606.06543
Finding optimal configurations for Stream Processing Systems (SPS) is a challenging problem due to the large number of parameters that can influence their performance and the lack of analytical models to anticipate the effect of a change. To tackle this issue, we consider tuning methods where an experimenter is given a limited budget of experiments and needs to carefully allocate this budget to find optimal configurations. We propose in this setting Bayesian Optimization for Configuration Optimization (BO4CO), an auto-tuning algorithm that leverages Gaussian Processes (GPs) to iteratively capture posterior distributions of the configuration spaces and sequentially drive the experimentation. Validation based on Apache Storm demonstrates that our approach locates optimal configurations within a limited experimental budget, with an improvement of SPS performance typically of at least an order of magnitude compared to existing configuration algorithms.
COMPARISON OF WAVELET NETWORK AND LOGISTIC REGRESSION IN PREDICTING ENTERPRIS... (ijcsit)
Enterprise financial distress or failure prediction includes bankruptcy prediction, financial distress, corporate performance prediction, and credit risk estimation. The aim of this paper is to use wavelet networks in non-linear combination prediction to address a problem of the ARMA (Auto-Regressive and Moving Average) model: the ARMA model requires estimating the values of all parameters in the model, which involves a large amount of computation. With this aim, the paper provides an extensive review of wavelet networks and logistic regression. It discusses the wavelet neural network structure, the wavelet network model training algorithm, and accuracy and error rates (accuracy of classification, Type I error, and Type II error). The main research opportunity is a proposed business failure prediction model (wavelet network model and logistic regression model). The empirical research compares the wavelet network and logistic regression on training and forecasting samples; the results show that the wavelet network model is highly accurate, and that in overall prediction accuracy, Type I error, and Type II error, the wavelet network model is better than the logistic regression model.
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE... (gerogepatton)
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed model is that different efficient support vectors are obtained by changing the weighting values. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
Solving multiple sequence alignment problems by using a swarm intelligent op...IJECEIAES
In this article, the alignment of multiple sequences is examined through an improved, swarm-intelligence-based particle swarm optimization (PSO). PSO is a random heuristic technique for solving discrete optimization problems with realistic estimation; it is a nature-inspired technique based on intelligence and swarm movement. Each solution is encoded in the manner of the "chromosomes" of a genetic algorithm (GA). Based on the optimization of the objective function, the fitness function is designed to maximize the suitable components of the sequence and reduce the unsuitable components. A public benchmark data set, BAliBASE, is used to assess the performance of the proposed system, showing the potential of PSO to adapt to better performance. The proposed system is compared with existing approaches such as DNA/RNA alignment (DIALIGN), PILEUP8, hidden Markov model training (HMMT), the rubber-band-technique genetic algorithm (RBT-GA), and ML-PIMA. In many cases, the experimental results of the proposed system compare favourably with those of the other existing approaches.
Cuckoo Search: Recent Advances and ApplicationsXin-She Yang
This document summarizes recent advances and applications of the cuckoo search algorithm, a nature-inspired metaheuristic optimization algorithm developed in 2009. Cuckoo search mimics the brood parasitism breeding behavior of some cuckoo species. It uses a combination of local and global search achieved through random walks and Levy flights to efficiently explore the search space. Studies show cuckoo search often finds optimal solutions faster than genetic algorithms and particle swarm optimization. The algorithm has been applied to diverse optimization problems and continues to be improved and extended to multi-objective optimization.
Security constrained optimal load dispatch using hpso technique for thermal s...eSAT Publishing House
Abstract: This paper presents a Hybrid Particle Swarm Optimization (HPSO) technique to solve Optimal Load Dispatch (OLD) problems with line flow constraints, bus voltage limits, and generator operating constraints. The proposed HPSO method incorporates features of both evolutionary programming (EP) and PSO, so the combined algorithm may be more effective at finding optimal solutions. The proposed Hybrid PSO, PSO, and EP techniques have been tested on the IEEE 14- and 30-bus systems. Numerical simulation results show that the Hybrid PSO algorithm outperforms the standard PSO algorithm and the evolutionary programming method on the same problem and can save considerable Optimal Load Dispatch cost.
This document describes a study on enhancing the serial estimation of discrete choice models through the use of standardization, warm starting, and early stopping techniques. It first reviews literature on accelerating discrete choice model estimation and quasi-Newton optimization methods. It then details the three techniques used: standardizing variables, initializing subsequent models with previous solutions, and stopping optimization early based on log-likelihood trends. Two sequences of 100 discrete choice models are tested to evaluate the effectiveness of the techniques. Results show that warm starting parameters and the Hessian matrix from previous solutions significantly reduces estimation time compared to estimating models separately.
This document discusses using particle swarm optimization based on variable neighborhood search (PSO-VNS) to attack classical cryptography ciphers. PSO is a population-based optimization algorithm inspired by bird flocking behavior. VNS is a metaheuristic algorithm that explores neighborhoods of solutions to escape local optima. The paper proposes improving PSO with VNS to find better solutions. It evaluates PSO-VNS on substitution and transposition ciphers, finding it recovers keys better than standard PSO and other variants.
This document summarizes literature on using bio-inspired algorithms to optimize fuzzy clustering. It describes the general architecture of how bio-inspired optimization algorithms can be applied to optimize parameters of fuzzy clustering algorithms and improve clustering quality. The document reviews several popular bio-inspired optimization algorithms and analyzes literature on optimization fuzzy clustering, identifying China, India, and the United States as the top publishing countries. Network analysis is applied to literature on the topic to identify clusters in the research.
Analytical study of feature extraction techniques in opinion miningcsandit
Although opinion mining is at a nascent stage of development, the ground is set for dense growth of research in the field. One of the important activities of opinion mining is to extract people's opinions based on characteristics of the object under study. Feature extraction in opinion mining can be done in various ways, such as clustering and support vector machines. This paper is an attempt to appraise the various techniques of feature extraction. The first part discusses various techniques and the second part makes a detailed appraisal of the major techniques used for feature extraction.
Cuckoo Search Algorithm: An IntroductionXin-She Yang
This presentation explains the fundamental ideas of the standard Cuckoo Search (CS) algorithm, which also contains the links to the free Matlab codes at Mathswork file exchanges and the animations of numerical simulations (video at Youtube). An example of multi-objective cuckoo search (MOCS) is also given with link to the Matlab code.
Metaheuristic Algorithms: A Critical AnalysisXin-She Yang
The document discusses metaheuristic algorithms and their application to optimization problems. It provides an overview of several nature-inspired algorithms including particle swarm optimization, firefly algorithm, harmony search, and cuckoo search. It describes how these algorithms were inspired by natural phenomena like swarming behavior, flashing fireflies, and bird breeding. The document also discusses applications of these algorithms to engineering design problems like pressure vessel design and gear box design optimization.
Nature-Inspired Optimization Algorithms Xin-She Yang
This document discusses nature-inspired optimization algorithms. It begins with an overview of the essence of optimization algorithms and their goal of moving to better solutions. It then discusses some issues with traditional algorithms and how nature-inspired algorithms aim to address these. Several nature-inspired algorithms are described in detail, including particle swarm optimization, firefly algorithm, cuckoo search, and bat algorithm. These are inspired by behaviors in swarms, fireflies, cuckoos, and bats respectively. Examples of applications to engineering design problems are also provided.
A Biologically Inspired Network Design ModelXin-She Yang
This document summarizes a biologically inspired network design model based on the foraging behavior of the slime mold Physarum polycephalum. The model uses a gravity model to estimate traffic flows between cities and simulates the slime mold's development of a protoplasmic network to connect food sources. It applies this approach to design transportation networks for Mexico and China, comparing the results to existing networks. The networks are evaluated based on cost, efficiency, and robustness. The model converges to solutions that balance these factors in a flexible and optimized way inspired by biological networks.
Multiobjective Bat Algorithm (demo only)Xin-She Yang
The document describes a Bat Algorithm used for multi-objective optimization. It includes the pseudo code for the Bat Algorithm and describes how it generates potential solutions and updates them over iterations to find optimal trade-offs between two objectives. It also includes two objective functions used as examples to generate a Pareto front of optimal solutions.
This document contains code for a bat-inspired algorithm for continuous optimization. It includes a function that implements the bat algorithm to minimize an objective function. The bat algorithm is a metaheuristic algorithm that simulates the echolocation behavior of bats. It initializes a population of bats with random solutions and velocities, then iteratively updates the solutions and tracks the best solution found based on the objective function value.
This document contains Matlab code that implements the firefly algorithm to solve constrained optimization problems. The firefly algorithm is used to minimize an objective function with bounds on the variables. It initializes a population of fireflies randomly within the bounds, calculates their light intensities based on the objective function, and iteratively moves the fireflies towards more intense ones while enforcing the bounds.
Flower Pollination Algorithm (matlab code)Xin-She Yang
This document describes the flower pollination algorithm (FPA), a nature-inspired metaheuristic algorithm for optimization problems. It contains the basic components of FPA implemented in a demo program for single objective optimization of unconstrained functions. FPA mimics the pollination process of flowers, where pollen can be transported over long distances by insects or animals, and reproduced by local pollination among neighboring flowers of the same species. The demo program initializes a population of solutions, evaluates their fitness, and then iteratively updates the solutions using either long distance global pollination or local pollination until a maximum number of iterations is reached.
Nature-Inspired Metaheuristic AlgorithmsXin-She Yang
This chapter introduces optimization problems and nature-inspired metaheuristics. Optimization problems involve minimizing or maximizing objective functions subject to constraints. Nature-inspired metaheuristics are computational algorithms inspired by natural phenomena, such as simulated annealing, genetic algorithms, particle swarm optimization, and ant colony optimization. They provide near-optimal solutions to complex optimization problems.
Metaheuristics and Optimiztion in Civil EngineeringXin-She Yang
This document provides an overview of metaheuristic algorithms that have been applied to optimization problems in civil engineering. It discusses several commonly used metaheuristic algorithms, including genetic algorithms, simulated annealing, ant colony optimization, and particle swarm optimization. The document also provides examples of applications of these algorithms to problems in areas such as structural engineering, transportation engineering, and geotechnical engineering.
Memetic Firefly algorithm for combinatorial optimizationXin-She Yang
The document proposes a memetic firefly algorithm (MFFA) for solving combinatorial optimization problems, specifically graph 3-coloring problems. The MFFA represents solutions as real-valued vectors whose elements determine the order vertices are colored. A local search heuristic is also incorporated. The results of the MFFA were compared to other algorithms on random graphs, showing it performs comparably or better at finding solutions. The structure of the paper outlines the graph 3-coloring problem, describes the MFFA approach, and presents experimental results.
Bat Algorithm for Multi-objective OptimisationXin-She Yang
This document proposes a multi-objective bat algorithm (MOBA) to solve multi-objective optimization problems. MOBA extends the previously developed bat algorithm for single objective optimization problems. MOBA uses Pareto dominance to evaluate non-dominated solutions and find an approximation of the true Pareto front. It initializes a population of bats and updates their positions and velocities over iterations to explore the search space. The best current solutions are used to guide the bats towards non-dominated regions.
Are motorways rational from slime mould's point of view?Xin-She Yang
The document discusses an experiment where slime mold Physarum polycephalum was used to approximate real-world motorway networks in 14 geographical regions. Researchers represented major urban areas with food sources and inoculated the slime mold in capital cities to observe how its network of protoplasmic tubes developed. They found the slime mold networks matched the motorway networks to some degree and used various measures to determine which regions had networks best approximated by the slime mold.
Review of Metaheuristics and Generalized Evolutionary Walk AlgorithmXin-She Yang
This document provides an overview of nature-inspired metaheuristic algorithms for optimization. It discusses the main components of metaheuristic algorithms, including intensification and diversification. It then reviews the history and development of several important metaheuristic algorithms from the 1960s to the 1990s, including genetic algorithms, evolutionary strategies, simulated annealing, ant colony optimization, particle swarm optimization, and differential evolution. The document aims to analyze why these algorithms work and provide a unified view of metaheuristics.
This document provides a list of commonly used test functions for validating new optimization algorithms. It describes 24 test functions, including functions originally developed by De Jong, Griewank, Rastrigin, and Rosenbrock. The test functions have various properties like being unimodal, multimodal, convex, or stochastic. They serve as benchmarks for comparing how well new algorithms can find the optimal value for problems with different characteristics.
Engineering Optimisation by Cuckoo SearchXin-She Yang
This document summarizes a research paper that proposes a new metaheuristic optimization algorithm called Cuckoo Search (CS). CS is inspired by the breeding behavior of some cuckoo species. The paper describes the rules and steps of the CS algorithm, compares its performance to other algorithms on standard test functions and engineering design problems, and discusses unique features of CS like Lévy flights that make it promising for further research.
A New Metaheuristic Bat-Inspired AlgorithmXin-She Yang
This document proposes a new metaheuristic optimization algorithm called the Bat Algorithm (BA) which is inspired by the echolocation behavior of microbats. Microbats use echolocation to detect prey and navigate in darkness by emitting ultrasonic pulses and analyzing the echo. The BA idealizes these behaviors to develop rules for how "bats" can search for the optimal solution. Key behaviors include adjusting pulse rates and loudness based on proximity to the target solution. The BA shows potential to combine advantages of other algorithms like PSO and is shown to perform well in simulations.
Eagle Strategy Using Levy Walk and Firefly Algorithms For Stochastic Optimiza...Xin-She Yang
This document proposes a new two-stage hybrid search method called the Eagle Strategy for solving stochastic optimization problems. The Eagle Strategy combines random search using Lévy walk with intensive local search using the Firefly Algorithm. It first uses Lévy walk to randomly explore the search space, then switches to the Firefly Algorithm to intensively search locally around good solutions. Numerical results suggest the Eagle Strategy is efficient for stochastic optimization problems.
Accelerated Particle Swarm Optimization and Support Vector Machine for Business Optimization and Applications

Xin-She Yang (1), Suash Deb (2), and Simon Fong (3)

(1) Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK. xy227@cam.ac.uk
(2) Department of Computer Science & Engineering, C.V. Raman College of Engineering, Bidyanagar, Mahura, Janla, Bhubaneswar 752054, India. suashdeb@gmail.com
(3) Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Taipa, Macau. ccfong@umac.mo
Abstract. Business optimization is becoming increasingly important because all business activities aim to maximize the profit and performance of products and services, under limited resources and appropriate constraints. Recent developments in support vector machine and metaheuristics show many advantages of these techniques. In particular, particle swarm optimization is now widely used in solving tough optimization problems. In this paper, we use a combination of a recently developed Accelerated PSO and a nonlinear support vector machine to form a framework for solving business optimization problems. We first apply the proposed APSO-SVM to production optimization, and then use it for income prediction and project scheduling. We also carry out some parametric studies and discuss the advantages of the proposed metaheuristic SVM.

Keywords: Accelerated PSO, business optimization, metaheuristics, PSO, support vector machine, project scheduling.
S. Fong et al. (Eds.): NDT 2011, CCIS 136, pp. 53–66, 2011. © Springer-Verlag Berlin Heidelberg 2011

1 Introduction

Many business activities often have to deal with large, complex databases. This is partly driven by information technology, especially the Internet, and partly driven by the need to extract meaningful knowledge by data mining. To extract useful information among a huge amount of data requires efficient tools for processing vast data sets. This is equivalent to trying to find an optimal solution to a highly nonlinear problem with multiple, complex constraints, which is a challenging task. Various techniques for such data mining and optimization have been developed over the past few decades. Among these techniques, support vector machine is one of the best techniques for regression, classification and data mining [5,9,16,19,20,24].
On the other hand, metaheuristic algorithms have also become powerful for solving tough nonlinear optimization problems [1,7,8,27,32]. Modern metaheuristic algorithms have been developed with an aim to carry out global search; typical examples are genetic algorithms [6], particle swarm optimisation (PSO) [7], and Cuckoo Search [29,30]. The efficiency of metaheuristic algorithms can be attributed to the fact that they imitate the best features in nature, especially the selection of the fittest in biological systems which have evolved by natural selection over millions of years. Since most data have noise or associated randomness, most of these algorithms cannot be used directly. In this case, some form of averaging or reformulation of the problem often helps. Even so, most algorithms become difficult to implement for such types of optimization.
In addition to the above challenges, business optimization often concerns a large amount of data that is often incomplete and evolves dynamically over time. Certain tasks cannot start before other required tasks are completed; such complex scheduling is often NP-hard and no universally efficient tool exists. Recent trends indicate that metaheuristics can be very promising, in combination with other tools such as neural networks and support vector machines [5,9,15,21].
In this paper, we intend to present a simple framework for business optimization using a combination of support vector machine with accelerated PSO. The paper is organized as follows: We will first briefly review particle swarm optimization and accelerated PSO, and then introduce the basics of support vector machines (SVM). We then use three case studies to test the proposed framework. Finally, we discuss its implications and possible extensions for further research.
2 Accelerated Particle Swarm Optimization

2.1 PSO

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 [7,8], based on swarm behaviour such as fish and bird schooling in nature. Since then, PSO has generated much wider interest, and forms an exciting, ever-expanding research subject called swarm intelligence. PSO has been applied to almost every area in optimization, computational intelligence, and design/scheduling applications. There are at least two dozen PSO variants, and hybrid algorithms combining PSO with other existing algorithms are also increasingly popular.

PSO searches the space of an objective function by adjusting the trajectories of individual agents, called particles, as the piecewise paths formed by positional vectors in a quasi-stochastic manner. The movement of a swarming particle consists of two major components: a stochastic component and a deterministic component. Each particle is attracted toward the position of the current global best g* and its own best location x*_i in history, while at the same time it has a tendency to move randomly.
Let x_i and v_i be the position vector and velocity for particle i, respectively. The new velocity vector is determined by the following formula

\[ v_i^{t+1} = v_i^t + \alpha \epsilon_1 [g^* - x_i^t] + \beta \epsilon_2 [x_i^* - x_i^t], \tag{1} \]

where ε1 and ε2 are two random vectors, with each entry taking a value between 0 and 1. The parameters α and β are the learning parameters or acceleration constants, which can typically be taken as, say, α ≈ β ≈ 2.

There are many variants which extend the standard PSO algorithm, and the most noticeable improvement is probably to use an inertia function θ(t) so that v_i^t is replaced by θ(t) v_i^t:

\[ v_i^{t+1} = \theta v_i^t + \alpha \epsilon_1 [g^* - x_i^t] + \beta \epsilon_2 [x_i^* - x_i^t], \tag{2} \]

where θ ∈ (0, 1) [2,3]. In the simplest case, the inertia function can be taken as a constant, typically θ ≈ 0.5 ∼ 0.9. This is equivalent to introducing a virtual mass to stabilize the motion of the particles, and thus the algorithm is expected to converge more quickly.
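To make the update rules concrete, the following is a minimal Python sketch of the standard PSO with an inertia weight, i.e. update rule (2). The sphere objective, the random seed, and the parameter values (θ = 0.7, α = β = 1.5, chosen on the low side for stable convergence) are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200,
        theta=0.7, alpha=1.5, beta=1.5, bounds=(-5.0, 5.0)):
    """Standard PSO with an inertia weight, following update rule (2)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions x_i
    v = np.zeros((n_particles, dim))              # velocities v_i
    x_best = x.copy()                             # individual bests x*_i
    f_best = np.array([objective(p) for p in x])
    g = x_best[np.argmin(f_best)].copy()          # global best g*
    for _ in range(n_iter):
        eps1 = rng.random((n_particles, dim))     # random vector eps1 in [0, 1]
        eps2 = rng.random((n_particles, dim))     # random vector eps2 in [0, 1]
        v = theta * v + alpha * eps1 * (g - x) + beta * eps2 * (x_best - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < f_best                     # update individual bests
        x_best[improved], f_best[improved] = x[improved], f[improved]
        g = x_best[np.argmin(f_best)].copy()      # update global best
    return g, float(f_best.min())

sphere = lambda p: float(np.sum(p ** 2))
best, fval = pso(sphere, dim=5)
print(fval)  # close to zero for the sphere function
```

Note that the per-particle bests x*_i are exactly what the accelerated variant in the next subsection removes.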
2.2 Accelerated PSO

The standard particle swarm optimization uses both the current global best g* and the individual best x*_i. The reason for using the individual best is primarily to increase the diversity in the quality solutions; however, this diversity can be simulated using some randomness. Subsequently, there is no compelling reason for using the individual best, unless the optimization problem of interest is highly nonlinear and multimodal.
A simplified version which could accelerate the convergence of the algorithm is to use the global best only. Thus, in the accelerated particle swarm optimization (APSO) [27,32], the velocity vector is generated by a simpler formula

\[ v_i^{t+1} = v_i^t + \alpha \epsilon_n + \beta (g^* - x_i^t), \tag{3} \]

where ε_n is drawn from N(0, 1) to replace the second term. The update of the position is simply

\[ x_i^{t+1} = x_i^t + v_i^{t+1}. \tag{4} \]
In order to increase the convergence even further, we can also write the update of the location in a single step

\[ x_i^{t+1} = (1 - \beta) x_i^t + \beta g^* + \alpha \epsilon_n. \tag{5} \]

This simpler version will give the same order of convergence. Typically, α = 0.1L ∼ 0.5L where L is the scale of each variable, while β = 0.1 ∼ 0.7 is sufficient for most applications. It is worth pointing out that velocity does not appear in equation (5), and there is no need to deal with initialization of velocity vectors. Therefore, APSO is much simpler. Compared with many PSO variants, APSO uses only two parameters, and the mechanism is simple to understand.
A further improvement to the accelerated PSO is to reduce the randomness as iterations proceed. This means that we can use a monotonically decreasing function such as

\[ \alpha = \alpha_0 e^{-\gamma t}, \tag{6} \]

or

\[ \alpha = \alpha_0 \gamma^t, \quad (0 < \gamma < 1), \tag{7} \]

where α0 ≈ 0.5 ∼ 1 is the initial value of the randomness parameter, t is the number of iterations or time steps, and 0 < γ < 1 is a control parameter [32]. For example, in our implementation, we will use

\[ \alpha = 0.7^t, \tag{8} \]

where t ∈ [0, tmax] and tmax is the maximum number of iterations.
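The single-step update (5) with the decreasing randomness schedule (7)-(8) can be sketched as follows. The sphere objective, seed, bounds, and the choices β = 0.5 and α0 = 1 are illustrative assumptions; only the update rule itself comes from the text.

```python
import numpy as np

def apso(objective, dim, n_particles=30, n_iter=100,
         beta=0.5, alpha0=1.0, gamma=0.7, bounds=(-5.0, 5.0)):
    """Accelerated PSO: single-step update (5) with alpha = alpha0 * gamma^t, cf. (7)-(8)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    f = np.array([objective(p) for p in x])
    g = x[np.argmin(f)].copy()                    # global best g* (no individual bests)
    f_g = float(f.min())
    for t in range(n_iter):
        alpha = alpha0 * gamma ** t               # monotonically decreasing randomness
        eps = rng.standard_normal((n_particles, dim))  # eps_n drawn from N(0, 1)
        x = np.clip((1 - beta) * x + beta * g + alpha * eps, lo, hi)
        f = np.array([objective(p) for p in x])
        if f.min() < f_g:                         # track the global best
            g, f_g = x[np.argmin(f)].copy(), float(f.min())
    return g, f_g

sphere = lambda p: float(np.sum(p ** 2))
best, fval = apso(sphere, dim=5)
print(fval)  # close to zero for the sphere function
```

Note how no velocity array is needed: each particle contracts toward g* while the shrinking noise term α ε_n provides early exploration, which is exactly the simplification the text describes.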
3 Support Vector Machine

Support vector machine (SVM) is an efficient tool for data mining and classification [25,26]. Due to the vast volumes of data in business, especially e-commerce, efficient use of data mining techniques becomes a necessity. In fact, SVM can also be considered as an optimization tool, as its objective is to maximize the separation margins between data sets. The proper combination of SVM with metaheuristics could be advantageous.

3.1 Support Vector Machine

A support vector machine essentially transforms a set of data into a significantly higher-dimensional space by nonlinear transformations so that regression and data fitting can be carried out in this high-dimensional space. This methodology can be used for data classification, pattern recognition, and regression, and its theory was based on statistical machine learning theory [21,24,25].
For classifications with the learning examples or data (xi, yi) where i =
1, 2, ..., n and yi ∈ {−1, +1}, the aim of the learning is to find a function φα(x)
from allowable functions {φα : α ∈ Ω} such that φα(xi) → yi for (i = 1, 2, ..., n)
and that the expected risk E(α) is minimal. That is the minimization of the risk
$E(\alpha) = \frac{1}{2} \int |\phi_\alpha(x) - y| \; dQ(x, y)$,  (9)
where Q(x, y) is an unknown probability distribution, which makes it impossible
to calculate E(α) directly. A simple approach is to use the so-called empirical
risk
$E_p(\alpha) \approx \frac{1}{2n} \sum_{i=1}^{n} |\phi_\alpha(x_i) - y_i|$.  (10)
PSO and SVM for Business Optimization
However, the main flaw of this approach is that a small risk or error on the
training set does not necessarily guarantee a small error on prediction if the
number n of training data is small [26].
For a given probability of at least 1 − p, the Vapnik bound for the errors can
be written as
$E(\alpha) \le R_p(\alpha) + \Psi\left(\frac{h}{n}, \frac{\log(p)}{n}\right)$,  (11)
where
$\Psi\left(\frac{h}{n}, \frac{\log(p)}{n}\right) = \sqrt{\frac{1}{n}\left[h\left(\log\frac{2n}{h} + 1\right) - \log\frac{p}{4}\right]}$.  (12)
Here h is a parameter, often referred to as the Vapnik-Chervonenkis dimension
or simply VC-dimension [24], which describes the capacity for prediction of the
function set φα.
In essence, a linear support vector machine constructs two hyperplanes that are
as far apart as possible, with no samples between these two planes.
Mathematically, this is equivalent to two equations
w · x + b = ±1, (13)
and a main objective of constructing these two hyperplanes is to maximize the
distance (between the two planes)
$d = \frac{2}{\|w\|}$.  (14)
Such maximization of d is equivalent to the minimization of $\|w\|$, or more conveniently $\|w\|^2$. From the optimization point of view, the maximization of margins
can be written as
minimize $\frac{1}{2}\|w\|^2 = \frac{1}{2}(w \cdot w)$.  (15)
This essentially becomes an optimization problem
minimize $\Psi = \frac{1}{2}\|w\|^2 + \lambda \sum_{i=1}^{n} \eta_i$,  (16)
subject to $y_i(w \cdot x_i + b) \ge 1 - \eta_i$,  (17)
$\eta_i \ge 0, \quad (i = 1, 2, ..., n)$,  (18)
where λ > 0 is a parameter to be chosen appropriately. Here, the term $\sum_{i=1}^{n} \eta_i$ is essentially a measure of the upper bound of the number of misclassifications on the training data.
3.2 Nonlinear SVM and Kernel Tricks
The so-called kernel trick is an important technique, transforming data dimensions while simplifying computation. By using Lagrange multipliers αi ≥ 0, we can rewrite the above constrained optimization into an unconstrained version, and we have
$L = \frac{1}{2}\|w\|^2 + \lambda \sum_{i=1}^{n} \eta_i - \sum_{i=1}^{n} \alpha_i \left[y_i(w \cdot x_i + b) - (1 - \eta_i)\right]$.  (19)
From this, we can write the Karush-Kuhn-Tucker conditions
$\frac{\partial L}{\partial w} = w - \sum_{i=1}^{n} \alpha_i y_i x_i = 0$,  (20)
$\frac{\partial L}{\partial b} = -\sum_{i=1}^{n} \alpha_i y_i = 0$,  (21)
$y_i(w \cdot x_i + b) - (1 - \eta_i) \ge 0$,  (22)
$\alpha_i \left[y_i(w \cdot x_i + b) - (1 - \eta_i)\right] = 0, \quad (i = 1, 2, ..., n)$,  (23)
$\alpha_i \ge 0, \quad \eta_i \ge 0, \quad (i = 1, 2, ..., n)$.  (24)
From the first KKT condition, we get
$w = \sum_{i=1}^{n} y_i \alpha_i x_i$.  (25)
It is worth pointing out here that only the nonzero αi contribute to the overall solution. This comes from the KKT condition (23), which implies that when $\alpha_i \neq 0$, the inequality (17) must be satisfied exactly (as an equality), while $\alpha_i = 0$ means the inequality is automatically met; in this latter case, $\eta_i = 0$. Therefore, only the corresponding training data (xi, yi) with αi > 0 contribute to the solution, and thus such xi form the support vectors (hence the name support vector machine). All the other data with αi = 0 become irrelevant.
It has been shown that the solution for αi can be found by solving the following
quadratic programming [24,26]
maximize $\sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j)$,  (26)
subject to $\sum_{i=1}^{n} \alpha_i y_i = 0, \quad 0 \le \alpha_i \le \lambda, \quad (i = 1, 2, ..., n)$.  (27)
From the coefficients αi, we can write the final classification or decision function
as
$f(x) = \mathrm{sgn}\left(\sum_{i=1}^{n} \alpha_i y_i (x \cdot x_i) + b\right)$,  (28)
where sgn is the classic sign function.
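As an illustration, the dual problem (26)-(27) can be solved numerically for a toy two-dimensional data set with an off-the-shelf solver; SciPy's SLSQP routine stands in for a dedicated QP solver here, and the data, the λ value, and the tolerances are assumptions for this sketch, not from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (an assumption for the sketch)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
lam = 10.0                                   # box constraint 0 <= alpha_i <= lambda
n = len(y)
G = (y[:, None] * y[None, :]) * (X @ X.T)    # G_ij = y_i y_j (x_i . x_j)

# Maximizing Eq. (26) == minimizing its negative, subject to Eq. (27)
res = minimize(lambda a: 0.5 * a @ G @ a - a.sum(),
               x0=np.zeros(n), method="SLSQP",
               bounds=[(0.0, lam)] * n,
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x

w = (alpha * y) @ X                          # Eq. (25)
sv = np.argmax((alpha > 1e-6) & (alpha < lam - 1e-6))   # a free support vector
b = y[sv] - w @ X[sv]                        # from y_s (w . x_s + b) = 1
pred = np.sign(X @ w + b)                    # Eq. (28)
print(pred)                                  # matches y
```

Only the two points closest to the separating line end up with nonzero αi, illustrating the support-vector property discussed above.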
As most problems in business applications are nonlinear, the above linear
SVM cannot be used directly. Ideally, we should find some nonlinear transformation
φ so that the data can be mapped onto a high-dimensional space where the
classification becomes linear. The transformation should be chosen in a certain
way so that their dot product leads to a kernel-style function K(x, xi) = φ(x) ·
φ(xi). In fact, we do not need to know such transformations, we can directly use
the kernel functions K(x, xi) to complete this task. This is the so-called kernel
function trick. Now the main task is to choose a suitable kernel function for a
given, specific problem.
For most problems in nonlinear support vector machines, we can use
$K(x, x_i) = (x \cdot x_i)^d$ for polynomial classifiers, $K(x, x_i) = \tanh[k(x \cdot x_i) + \Theta]$ for
neural networks, and by far the most widely used kernel is the Gaussian radial
neural networks, and by far the most widely used kernel is the Gaussian radial
basis function (RBF)
$K(x, x_i) = \exp\left(-\frac{\|x - x_i\|^2}{2\sigma^2}\right) = \exp\left(-\gamma \|x - x_i\|^2\right)$,  (29)
for the nonlinear classifiers. This kernel can easily be extended to any high dimensions. Here $\sigma^2$ is the variance and $\gamma = 1/(2\sigma^2)$ is a constant. In general, a simple bound of 0 < γ ≤ C is used, where C is a constant.
Following a similar procedure as discussed earlier for the linear SVM, we can
obtain the coefficients αi by solving the following optimization problem
maximize $\sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$.  (30)
It is worth pointing out that under Mercer's conditions for the kernel function, the matrix $A = y_i y_j K(x_i, x_j)$ is a symmetric positive definite matrix [26], which implies that the above maximization is a quadratic programming problem, and can thus be solved efficiently by standard QP techniques [21].
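The Mercer property can be checked numerically: for the RBF kernel (29), the matrix A with entries y_i y_j K(x_i, x_j) is symmetric positive semidefinite for any data, since A = D K D with D = diag(y) and K itself positive semidefinite. The random data and the value of γ below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))              # arbitrary sample points
y = rng.choice([-1.0, 1.0], size=8)      # arbitrary labels
gamma = 0.5                              # gamma = 1/(2 sigma^2)

# Gaussian RBF Gram matrix, Eq. (29): K_ij = exp(-gamma ||x_i - x_j||^2)
sq_dist = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
K = np.exp(-gamma * sq_dist)

A = np.outer(y, y) * K                   # A_ij = y_i y_j K(x_i, x_j)
eigs = np.linalg.eigvalsh(A)
print(np.allclose(A, A.T), eigs.min() >= -1e-10)   # → True True
```

A nonnegative spectrum of A is exactly what makes the maximization (30) a well-posed (concave) quadratic program.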
4 Metaheuristic Support Vector Machine with APSO
4.1 Metaheuristics
There are many metaheuristic algorithms for optimization, and most of these algorithms are inspired by nature [27]. Metaheuristic algorithms such as genetic
algorithms and simulated annealing are widely used, almost routinely, in many
applications, while relatively new algorithms such as particle swarm optimization [7], firefly algorithm and cuckoo search are becoming more and more popular [27,32]. Hybridization of these algorithms with existing algorithms is also emerging.
The advantage of such a combination is a balanced tradeoff between global search, which is often slow, and fast local search. Such a balance is important, as highlighted by the analysis by Blum and Roli [1]. Another advantage
of this method is that we can use any algorithms we like at different stages of the search, or even at different stages of the iterations. This makes it easy to combine the advantages of various algorithms so as to produce better results.
Others have attempted to carry out parameter optimization associated with
neural networks and SVM. For example, Liu et al. have used SVM optimized
by PSO for tax forecasting [13]. Lu et al. proposed a model for finding optimal
parameters in SVM by PSO optimization [14]. However, here we intend to pro-
pose a generic framework for combining efficient APSO with SVM, which can
be extended to other algorithms such as firefly algorithm [28,31].
4.2 APSO-SVM
Support vector machine has a major advantage, that is, it is less likely to overfit,
compared with other methods such as regression and neural networks. In addi-
tion, efficient quadratic programming can be used for training support vector
machines. However, when there is noise in the data, such algorithms are not
quite suitable. In this case, the learning or training to estimate the parameters
in the SVM becomes difficult or inefficient.
Another issue is the choice of the values of the kernel parameters C and σ2 in the kernel functions; there is no agreed guideline on how to choose them, though their values should make the SVM perform as efficiently as possible. This itself is essentially an optimization problem.
Taking this idea further, we first use an educated guess set of values and use
the metaheuristic algorithms such as accelerated PSO or cuckoo search to find
the best kernel parameters such as C and σ2
[27,29]. Then, we use these parameters to construct the support vector machines, which are then used for solving
the problem of interest. During the iterations and optimization, we can also mod-
ify kernel parameters and evolve the SVM accordingly. This framework can be
called a metaheuristic support vector machine. Schematically, this Accelerated
PSO-SVM can be represented as shown in Fig. 1.
begin
Define the objective;
Choose kernel functions;
Initialize various parameters;
while (criterion)
Find optimal kernel parameters by APSO;
Construct the support vector machine;
Search for the optimal solution by APSO-SVM;
Increase the iteration counter;
end
Post-processing the results;
end
Fig. 1. Metaheuristic APSO-SVM.
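The loop in Fig. 1 can be sketched as follows. To keep the sketch self-contained and runnable, a smooth surrogate function stands in for the cross-validated SVM error as a function of (log10 C, log10 γ); in a real run this function would train an SVM with the candidate kernel parameters and return its validation error, as done in the chapter. The surrogate, its optimum at (2, -1.5), and the search box are all assumptions for illustration.

```python
import numpy as np

def cv_error(params):
    """Stand-in for cross-validated SVM error over (log10 C, log10 gamma).
    A real implementation would train an SVM with these kernel parameters
    and return the validation error; this surrogate's minimum is at (2, -1.5)."""
    c, g = params
    return (c - 2.0)**2 + 0.5 * (g + 1.5)**2

def apso_tune(f, t_max=80, n=15, beta=0.5, L=3.0, seed=2):
    """APSO search (Eq. (5) with alpha = L * 0.7^t) for the best kernel parameters."""
    rng = np.random.default_rng(seed)
    x = rng.uniform([-1.0, -3.0], [3.0, 0.0], size=(n, 2))   # search box (assumed)
    gbest = min(x, key=f).copy()
    for t in range(t_max):
        alpha = L * 0.7**t
        x = (1 - beta)*x + beta*gbest + alpha*rng.standard_normal(x.shape)
        best = min(x, key=f)
        if f(best) < f(gbest):
            gbest = best.copy()
    return gbest

log_C, log_gamma = apso_tune(cv_error)
C, gamma = 10**log_C, 10**log_gamma      # parameters handed to the SVM kernel
print(round(log_C, 2), round(log_gamma, 2))   # near 2.0 and -1.5
```

Searching in log-space is a common choice because C and γ typically span several orders of magnitude.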
9. PSO and SVM for Business Optimization 61
For the optimization of parameters and business applications discussed below,
APSO is used for both local and global search [27,32].
5 Business Optimization Benchmarks
Using the framework discussed earlier, we can easily implement it in any programming language, though we have implemented it in Matlab. We have validated our implementation using the standard test functions, which confirms the
correctness of the implementation. Now we apply it to carry out case studies with
known analytical solution or the known optimal solutions. The Cobb-Douglas
production optimization has an analytical solution which can be used for com-
parison, while the second case is a standard benchmark in resource-constrained
project scheduling [11].
5.1 Production Optimization
Let us first use the proposed approach to study the classical Cobb-Douglas pro-
duction optimization. For a production of a series of products and the labour
costs, the utility function can be written
$q = \prod_{j=1}^{n} u_j^{\alpha_j} = u_1^{\alpha_1} u_2^{\alpha_2} \cdots u_n^{\alpha_n}$,  (31)
where all exponents αj are non-negative, satisfying
$\sum_{j=1}^{n} \alpha_j = 1$.  (32)
The optimization is the minimization of the utility
minimize q (33)
subject to $\sum_{j=1}^{n} w_j u_j = K$,  (34)
where wj(j = 1, 2, ..., n) are known weights.
This problem can be solved using the Lagrange multiplier method as an un-
constrained problem
$\psi = \prod_{j=1}^{n} u_j^{\alpha_j} + \lambda\left(\sum_{j=1}^{n} w_j u_j - K\right)$,  (35)
whose optimality conditions are
$\frac{\partial \psi}{\partial u_j} = \alpha_j u_j^{-1} \prod_{k=1}^{n} u_k^{\alpha_k} + \lambda w_j = 0, \quad (j = 1, 2, ..., n)$,  (36)
$\frac{\partial \psi}{\partial \lambda} = \sum_{j=1}^{n} w_j u_j - K = 0$.  (37)
The solutions are
$u_1 = \frac{K}{w_1\left[1 + \frac{1}{\alpha_1}\sum_{j=2}^{n} \alpha_j\right]}, \quad u_j = \frac{w_1 \alpha_j}{w_j \alpha_1} u_1, \quad (j = 2, 3, ..., n)$.  (38)
For example, in a special case of n = 2, α1 = 2/3, α2 = 1/3, w1 = 5, w2 = 2 and K = 300, we have
$u_1 = \frac{K}{w_1(1 + \alpha_2/\alpha_1)} = 40, \quad u_2 = \frac{K \alpha_2}{w_2 \alpha_1 (1 + \alpha_2/\alpha_1)} = 50$.
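The closed-form solution (38) can be checked numerically for this special case; the snippet below simply evaluates the formulas and verifies the budget constraint (34).

```python
import numpy as np

alpha = np.array([2/3, 1/3])    # exponents, summing to 1 (Eq. (32))
w = np.array([5.0, 2.0])        # weights w_j
K = 300.0                       # budget in Eq. (34)

# Eq. (38): u1 from the closed form, then u_j proportional to u1
u1 = K / (w[0] * (1.0 + alpha[1:].sum() / alpha[0]))
u = np.array([u1] + [w[0]*alpha[j]/(w[j]*alpha[0]) * u1
                     for j in range(1, len(alpha))])

print(u)                        # ≈ [40. 50.]
print(w @ u)                    # ≈ 300.0, so constraint (34) holds
```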
As most real-world problems have some uncertainty, we can now add some noise
to the above problem. For simplicity, we just modify the constraint as
$\sum_{j=1}^{n} w_j u_j = K(1 + \beta \epsilon)$,  (39)
where $\epsilon$ is a random number drawn from a Gaussian distribution with zero mean and unit variance, and $0 \le \beta \ll 1$ is a small positive number.
We now solve this problem as an optimization problem by the proposed APSO-SVM. In the case of β = 0.01, the results are summarized in Table 1, where the values are provided for different problem sizes n and different numbers of iterations. We can see that the results converge to the optimal solution very quickly.
Table 1. Mean deviations from the optimal solutions
size n Iterations deviations
10 1000 0.014
20 5000 0.037
50 5000 0.040
50 15000 0.009
6 Income Prediction
Studies to improve the accuracy of classifications are extensive. For example,
Kohavi proposed a decision-tree hybrid in 1996 [10]. Furthermore, an efficient
training algorithm for support vector machines was proposed by Platt in 1998
[17,18], and it has some significant impact on machine learning, regression and
data mining.
A well-known benchmark for classification and regression is income prediction using data sets of 14 selected attributes of a household from a census form [10,17]. We use the same data sets at ftp://ftp.ics.uci.edu/pub/machine-learning-databases/adult for this case study. There are 32561 samples in the training set, with 16281 for testing. The aim is to predict whether an individual's income is above or below 50K.
Among the 14 attributes, a subset can be selected; a subset such as age, education level, occupation, gender, and working hours is commonly used. Using the proposed APSO-SVM and choosing the limit value of C as 1.25, a best error of 17.23% is obtained (see Table 2), which is comparable with the most accurate predictions reported in [10,17].
Table 2. Income prediction using APSO-SVM
Train set (size) Prediction set Errors (%)
512 256 24.9
1024 256 20.4
16400 8200 17.23
6.1 Project Scheduling
Scheduling is an important class of discrete optimization with a wide range of applications in business intelligence. For resource-constrained project scheduling problems, there exists a standard benchmark library by Kolisch and Sprecher [11,12]. The basic model consists of J activities/tasks, and an activity cannot start before all its predecessors h are completed. In addition, each activity j = 1, 2, ..., J can be carried out, without interruption, in one of Mj modes, and performing activity j in a chosen mode m takes djm periods, which is supported by a set of renewable resources R and non-renewable resources N.
The project's makespan or upper bound is T, and the overall capacity of non-renewable resource r is $K^\nu_r$, where r ∈ N. An activity j scheduled in mode m uses $k^\rho_{jmr}$ units of renewable resources and $k^\nu_{jmr}$ units of non-renewable resources in period t = 1, 2, ..., T.
For activity j, the shortest duration is fit into the time windows [EFj, LFj]
where EFj is the earliest finish times, and LFj is the latest finish times. Math-
ematically, this model can be written as [11]
Minimize $\Psi(x) = \sum_{m=1}^{M_J} \sum_{t=EF_J}^{LF_J} t \cdot x_{Jmt}$,  (40)
subject to
$\sum_{m=1}^{M_h} \sum_{t=EF_h}^{LF_h} t\, x_{hmt} \le \sum_{m=1}^{M_j} \sum_{t=EF_j}^{LF_j} (t - d_{jm})\, x_{jmt}, \quad (j = 2, ..., J)$,
$\sum_{j=1}^{J} \sum_{m=1}^{M_j} k^\rho_{jmr} \sum_{q=\max\{t,\, EF_j\}}^{\min\{t + d_{jm} - 1,\, LF_j\}} x_{jmq} \le K^\rho_r, \quad (r \in R)$,
$\sum_{j=1}^{J} \sum_{m=1}^{M_j} k^\nu_{jmr} \sum_{t=EF_j}^{LF_j} x_{jmt} \le K^\nu_r, \quad (r \in N)$,  (41)
and
$\sum_{m=1}^{M_j} \sum_{t=EF_j}^{LF_j} x_{jmt} = 1, \quad (j = 1, 2, ..., J)$,  (42)
where $x_{jmt} \in \{0, 1\}$ and t = 1, ..., T. As xjmt only takes the two values 0 or 1, this problem can be considered as a classification problem, and the metaheuristic support vector machine can be applied naturally.
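The binary structure of x_{jmt} is easy to check directly: the assignment constraint (42) says every activity is given exactly one mode and one finish time, and the objective (40) is the weighted finish time of the last activity. The toy tensor below (3 activities, 2 modes, 5 periods, with full time windows) is an assumption for illustration only.

```python
import numpy as np

J, M, T = 3, 2, 5                       # activities, modes, time periods (toy sizes)
x = np.zeros((J, M, T), dtype=int)      # x[j, m, t] = 1 if activity j finishes in mode m at time t
x[0, 0, 1] = 1
x[1, 1, 3] = 1
x[2, 0, 4] = 1

# Eq. (42): each activity has exactly one (mode, finish-time) pair selected
assignment_ok = bool(np.all(x.sum(axis=(1, 2)) == 1))

# Eq. (40): the objective is the weighted finish time of the last activity J
makespan = int(np.sum(np.arange(T) * x[J - 1].sum(axis=0)))
print(assignment_ok, makespan)   # → True 4
```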
Table 3. Kernel parameters used in SVM
Number of iterations SVM kernel parameters
1000  C = 149.2, σ2 = 67.9
5000  C = 127.9, σ2 = 64.0
Using the online benchmark library [12], we have solved this type of problem
with J = 30 activities (the standard test set j30). The run time on a modern
desktop computer is about 2.2 seconds for N = 1000 iterations to 15.4 seconds
for N = 5000 iterations. We have run the simulations 50 times so as to obtain
meaningful statistics.
The optimal kernel parameters found for the support vector machines are
listed in Table 3, while the deviations from the known best solution are given in
Table 4 where the results by other methods are also compared.
Table 4. Mean deviations from the optimal solution (J=30)
Algorithm Authors N = 1000 5000
PSO [22] Kemmoe et al. (2007) 0.26 0.21
hybrid GA [23] Valls et al. (2007) 0.27 0.06
Tabu search [15] Nonobe & Ibaraki (2002) 0.46 0.16
Adapting GA [4] Hartmann (2002) 0.38 0.22
Meta APSO-SVM this paper 0.19 0.025
From these tables, we can see that the proposed metaheuristic support vector machine starts very well, and its results are comparable with those of other methods such as the hybrid genetic algorithm. In addition, it converges more quickly as the number of iterations increases. With the same number of function evaluations, much better results are obtained, which implies that APSO is very efficient, and consequently the APSO-SVM is also efficient in this context. This also suggests that the proposed framework is appropriate for automatically choosing the right parameters for the SVM and solving nonlinear optimization problems.
7 Conclusions
Both PSO and support vector machines are now widely used as optimization
techniques in business intelligence. They can also be used for data mining to
extract useful information efficiently. SVM can also be considered as an opti-
mization technique in many applications including business optimization. When
there is noise in data, some averaging or reformulation may lead to better per-
formance. In addition, metaheuristic algorithms can be used to find the optimal
kernel parameters for a support vector machine, and also to search for the optimal solutions. We have used three very different case studies to demonstrate that such a metaheuristic SVM framework works.
Automatic parameter tuning and efficiency improvement will be an important
topic for further research. It can be expected that this framework can be used
for other applications. Furthermore, APSO can also be combined with other algorithms such as neural networks to produce more efficient algorithms [13,14]. More studies in this area are highly needed.
References
1. Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: Overview and
conceptual comparison. ACM Comput. Surv. 35, 268–308 (2003)
2. Chatterjee, A., Siarry, P.: Nonlinear inertia variation for dynamic adaptation in
particle swarm optimization. Comp. Oper. Research 33, 859–871 (2006)
3. Clerc, M., Kennedy, J.: The particle swarm - explosion, stability, and convergence
in a multidimensional complex space. IEEE Trans. Evolutionary Computation 6,
58–73 (2002)
4. Hartmann, S.: A self-adapting genetic algorithm for project scheduling under
resource constraints. Naval Res. Log. 49, 433–448 (2002)
5. Howley, T., Madden, M.G.: The genetic kernel support vector machine: descrip-
tion and evaluation. Artificial Intelligence Review 24, 379–395 (2005)
6. Goldberg, D.E.: Genetic Algorithms in Search, Optimisation and Machine Learn-
ing. Addison Wesley, Reading (1989)
7. Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: Proc. of IEEE
International Conference on Neural Networks, Piscataway, NJ, pp. 1942–1948
(1995)
8. Kennedy, J., Eberhart, R.C.: Swarm intelligence. Academic Press, London (2001)
9. Kim, K.: Financial forecasting using support vector machines. Neurocomput-
ing 55, 307–319 (2003)
10. Kohavi, R.: Scaling up the accuracy of naive-Bayes classifiers: a
decision-tree hybrid. In: Proc. 2nd Int. Conf. on Knowledge Discov-
ery and Data Mining, pp. 202–207. AAAI Press, Menlo Park (1996),
ftp://ftp.ics.uci.edu/pub/machine-learning-databases/adult
11. Kolisch, R., Sprecher, A.: PSPLIB - a project scheduling problem library, OR
Software-ORSEP (operations research software exchange program) by H. W.
Hamacher. Euro. J. Oper. Res. 96, 205–216 (1996)
12. Kolisch, R., Sprecher, A.: The Library PSPLIB,
http://129.187.106.231/psplib/
13. Liu, L.-X., Zhuang, Y., Liu, X.Y.: Tax forecasting theory and model based on
SVM optimized by PSO. Expert Systems with Applications 38, 116–120 (2011)
14. Lu, N., Zhou, J.Z., He, Y.Y., Liu, Y.: Particle Swarm Optimization for Parameter
Optimization of Support Vector Machine Model. In: 2009 Second International
Conference on Intelligent Computation Technology and Automation, pp. 283–284.
IEEE publications, Los Alamitos (2009)
15. Nonobe, K., Ibaraki, T.: Formulation and tabu search algorithm for the resource
constrained project scheduling problem (RCPSP). In: Ribeiro, C.C., Hansen, P.
(eds.) Essays and Surveys in Metaheuristics, pp. 557–588 (2002)
16. Pai, P.F., Hong, W.C.: Forecasting regional electricity load based on recurrent
support vector machines with genetic algorithms. Electric Power Sys. Res. 74,
417–425 (2005)
17. Platt, J.C.: Sequential minimal optimization: a fast algorithm for training support
vector machines. Technical report MSR-TR-98-14, Microsoft Research (1998)
18. Platt, J.C.: Fast training of support vector machines using sequential minimal
optimization. In: Scholkopf, B., Burges, C.J., Smola, A.J. (eds.) Advances in
Kernel Methods – Support Vector Learning, pp. 185–208. MIT Press, Cambridge
(1999)
19. Shi, G.R.: The use of support vector machine for oil and gas identification in
low-porosity and low-permeability reservoirs. Int. J. Mathematical Modelling and
Numerical Optimisation 1, 75–87 (2009)
20. Shi, G.R., Yang, X.-S.: Optimization and data mining for fracture prediction in
geosciences. Procedia Computer Science 1, 1353–1360 (2010)
21. Smola, A.J., Schölkopf, B.: A tutorial on support vector regression (1998),
http://www.svms.org/regression/
22. Tchomté, S.K., Gourgand, M., Quilliot, A.: Solving resource-constrained project
scheduling problem with particle swarm optimization. In: Proceedings of the 3rd
Multidisciplinary Int. Scheduling Conference (MISTA 2007), Paris, August 28-31,
pp. 251–258 (2007)
23. Valls, V., Ballestin, F., Quintanilla, S.: A hybrid genetic algorithm for the
resource-constrained project scheduling problem. Euro. J. Oper. Res (2007),
doi:10.1016/j.ejor.2006.12.033
24. Vapnik, V.: Estimation of Dependences Based on Empirical Data. Springer, New
York (1982) (in Russian)
25. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, New York (1995)
26. Scholkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V.:
Comparing support vector machine with Gaussian kernels to radial basis function
classifiers. IEEE Trans. Signal Processing 45, 2758–2765 (1997)
27. Yang, X.S.: Nature-Inspired Metaheuristic Algorithms. Luniver Press (2008)
28. Yang, X.-S.: Firefly algorithms for multimodal optimization. In: Watanabe, O.,
Zeugmann, T. (eds.) SAGA 2009. LNCS, vol. 5792, pp. 169–178. Springer, Hei-
delberg (2009)
29. Yang, X.-S., Deb, S.: Cuckoo search via Lévy flights. In: Proceedings of the World
Congress on Nature & Biologically Inspired Computing, NaBIC 2009, pp. 210–
214. IEEE Publications, USA (2009)
30. Yang, X.S., Deb, S.: Engineering optimization by cuckoo search. Int. J. Mathe-
matical Modelling and Numerical Optimisation 1, 330–343 (2010)
31. Yang, X.S.: Firefly algorithm, stochastic test functions and design optimisation.
Int. J. Bio-inspired Computation 2, 78–84 (2010)
32. Yang, X.S.: Engineering Optimization: An Introduction with Metaheuristic Ap-
plications. John Wiley & Sons, Chichester (2010)