The branch-and-bound algorithm underlies most solution methods in mixed-integer linear programming and has proven its efficiency in many fields. It incrementally builds a tree of nodes by applying two strategies: a variable selection strategy and a node selection strategy. In our previous work, we investigated a methodology for learning branch-and-bound strategies by applying regression-based support vector machines twice. That methodology, first, exploited information from previous executions of the branch-and-bound algorithm on other instances; second, it created an information channel between the node selection strategy and the variable branching strategy; and third, it delivered good results in terms of running time compared to the standard branch-and-bound algorithm. In this work, we focus on increasing SVM performance by using cross-validation coupled with model selection.
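As an illustrative sketch of what cross-validation coupled with model selection involves (the toy data, the k-NN stand-in model, and every function name below are our own assumptions, not the paper's actual SVM pipeline), the following pure-Python example picks a hyperparameter by minimizing the mean validation error over k folds:

```python
import random

def k_fold_indices(n, k):
    """Yield (train, validation) index lists for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

def knn_predict(x, train_x, train_y, n_neighbors):
    """Average the targets of the n_neighbors nearest training points (1-D)."""
    ranked = sorted(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    nearest = ranked[:n_neighbors]
    return sum(train_y[j] for j in nearest) / len(nearest)

def select_by_cv(xs, ys, candidates, folds=5):
    """Return the candidate hyperparameter with the lowest mean validation error."""
    best_k, best_err = None, float("inf")
    for k in candidates:
        errs = []
        for tr, va in k_fold_indices(len(xs), folds):
            tx, ty = [xs[j] for j in tr], [ys[j] for j in tr]
            errs.extend((knn_predict(xs[j], tx, ty, k) - ys[j]) ** 2 for j in va)
        mean_err = sum(errs) / len(errs)
        if mean_err < best_err:
            best_k, best_err = k, mean_err
    return best_k, best_err

# Noisy samples of y = 2x on [0, 1]
rng = random.Random(1)
xs = [i / 60 for i in range(60)]
ys = [2 * x + rng.gauss(0, 0.05) for x in xs]
best_k, best_err = select_by_cv(xs, ys, candidates=[1, 3, 5, 9, 15])
```

The same loop structure applies when the model is an SVM and the candidates are, say, kernel widths or regularization constants.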
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE... (ijaia)
A support vector machine (SVM) learns a decision surface from input points belonging to two different classes; in several applications, some of the input points are misclassified. In this paper, a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method to solve the bi-objective quadratic programming problem. An important contribution of the proposed model is that different efficient support vectors are obtained by changing the weighting values. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution among the generated efficient solutions.
The Application Of Bayes Ying-Yang Harmony Based Gmms In On-Line Signature Ve... (ijaia)
In this contribution, a Bayes Ying-Yang (BYY) harmony based approach for on-line signature verification is presented. In the proposed method, a simple but effective Gaussian mixture model (GMM) is used to represent each user's signature model based on the prior information collected. Unlike earlier works, in this paper we use the Bayes Ying-Yang machine combined with the harmony function to achieve automatic model selection (AMS) during parameter learning for the GMMs, so that a better approximation of the user model is assured. Experiments on a database from the First International Signature Verification Competition (SVC 2004) confirm that this combined algorithm yields quite satisfactory results.
Data Science - Part IX - Support Vector Machine (Derek Kane)
This lecture provides an overview of Support Vector Machines in a more relatable and accessible manner. We will go through some methods of calibration and diagnostics of SVM and then apply the technique to accurately detect breast cancer within a dataset.
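To make the SVM idea concrete, here is a minimal linear SVM trained by batch sub-gradient descent on the regularized hinge loss. It is a didactic sketch on a tiny 2-D dataset of our own invention, not the kernelized, calibrated SVMs the lecture covers:

```python
def train_linear_svm(points, labels, lam=0.01, lr=0.1, epochs=200):
    """Batch sub-gradient descent on hinge loss + (lam/2)*||w||^2."""
    w, b, n = [0.0, 0.0], 0.0, len(points)
    for _ in range(epochs):
        gw = [lam * wj for wj in w]          # gradient of the regularizer
        gb = 0.0
        for x, y in zip(points, labels):
            if y * (w[0] * x[0] + w[1] * x[1] + b) < 1:  # margin violated
                gw[0] -= y * x[0] / n
                gw[1] -= y * x[1] / n
                gb -= y / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def svm_predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Two linearly separable 2-D clusters
pts = [(1.7, 2.1), (2.2, 1.8), (2.0, 2.3), (2.3, 2.0), (1.9, 1.7),
       (-2.3, -1.9), (-1.8, -2.2), (-2.0, -1.7), (-1.7, -2.3), (-2.2, -2.0)]
lbls = [1] * 5 + [-1] * 5
w, b = train_linear_svm(pts, lbls)
accuracy = sum(svm_predict(w, b, x) == y for x, y in zip(pts, lbls)) / len(pts)
```

Production SVMs solve the same objective with far better optimizers and kernels; the sub-gradient loop above only shows the margin-maximizing mechanics.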
A Comparison between FPPSO and B&B Algorithm for Solving Integer Programming ... (Editor IJCATR)
The branch-and-bound technique (B&B) is commonly used for intelligent search when finding a set of integer solutions within a space of interest. The corresponding binary tree structure provides natural parallelism, allowing concurrent evaluation of sub-problems using parallel computing technology. The flower pollination algorithm is a recently developed method in the field of computational intelligence. This paper presents an improved version of the flower pollination meta-heuristic algorithm, FPPSO, for solving integer programming problems. The proposed algorithm combines the standard flower pollination algorithm (FP) with the particle swarm optimization (PSO) algorithm to improve search accuracy. Numerical results show that FPPSO obtains optimal results comparable to traditional methods (branch and bound) and other harmony search algorithms. Moreover, the benefit of the proposed algorithm is its ability to obtain the optimal solution with less computation, saving time in comparison with the branch-and-bound algorithm. Keywords: branch and bound; flower pollination algorithm; meta-heuristics; optimization; particle swarm optimization; integer programming.
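The branch-and-bound mechanics are easy to see on a small integer program. The sketch below, an illustration of the generic technique rather than anything from the paper, runs best-first B&B on a textbook 0/1 knapsack instance, pruning with the fractional (LP relaxation) bound:

```python
import heapq

def knapsack_branch_and_bound(values, weights, capacity):
    """Best-first 0/1 knapsack B&B; the bound is the fractional (LP) relaxation."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def bound(level, value, room):
        # Greedy fractional fill of the remaining items: optimistic upper bound.
        for i in order[level:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    # Max-heap via negated bounds: (-bound, level, value, room)
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg, level, value, room = heapq.heappop(heap)
        if -neg <= best or level == len(order):
            continue                        # prune: cannot beat the incumbent
        i = order[level]
        if weights[i] <= room:              # branch: take item i
            taken, left = value + values[i], room - weights[i]
            best = max(best, taken)
            heapq.heappush(heap, (-bound(level + 1, taken, left),
                                  level + 1, taken, left))
        # branch: skip item i
        heapq.heappush(heap, (-bound(level + 1, value, room),
                              level + 1, value, room))
    return best

best_value = knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50)
```

The same take/skip branching with an optimistic relaxation bound is the skeleton of B&B for general integer programs; solvers differ mainly in the relaxation and the node/variable selection rules.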
PREDICTIVE EVALUATION OF THE STOCK PORTFOLIO PERFORMANCE USING FUZZY CMEANS A... (ijfls)
The aim of this paper is to investigate the trend of the return of a portfolio formed randomly or by any specific technique. The approach uses two fuzzy techniques: the fuzzy c-means (FCM) algorithm and the fuzzy transform, where the rules used in the fuzzy transform arise from the application of the FCM algorithm. The results show that the proposed methodology is able to predict the trend of the return of a stock portfolio, as well as the tendency of the market index. Real financial market data from 2004 to 2007 are used.
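A minimal 1-D fuzzy c-means, alternating the standard membership and center updates, illustrates the FCM building block of such a pipeline; the initialization, toy data, and names here are our own illustrative assumptions:

```python
def fuzzy_c_means(xs, c, m=2.0, iters=100):
    """1-D fuzzy c-means: alternate membership and center updates."""
    lo, hi = min(xs), max(xs)
    # Spread the initial centers across the data range.
    centers = [lo + (hi - lo) * (j + 1) / (c + 1) for j in range(c)]
    for _ in range(iters):
        # Membership of point i in cluster j (inverse-distance weighting).
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for j in range(c)])
        # Each center becomes the membership-weighted mean of the data.
        centers = [sum(u[i][j] ** m * xs[i] for i in range(len(xs)))
                   / sum(u[i][j] ** m for i in range(len(xs)))
                   for j in range(c)]
    return centers

# Two 1-D blobs, around 0 and around 10
xs = [i * 0.1 for i in range(-3, 4)] + [10 + i * 0.1 for i in range(-3, 4)]
centers = sorted(fuzzy_c_means(xs, c=2))
```

Unlike hard k-means, every point keeps a graded membership in every cluster, which is what makes the resulting rules "fuzzy".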
Clustering and Classification Algorithms (Ankita Dubey)
Clustering is a process of partitioning a set of data (or objects) into a set of meaningful sub-classes, called clusters. It helps users understand the natural grouping or structure in a data set, and it is used either as a stand-alone tool to gain insight into the data distribution or as a preprocessing step for other algorithms.
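As a concrete example of partitioning data into clusters, here is plain Lloyd's k-means on 2-D points (a minimal sketch; the toy data and function names are ours):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 2-D points: assign, then recompute means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:   # keep the old center if a cluster empties out
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers

# Two well-separated 2-D blobs
points = [(i * 0.1, i * 0.1) for i in range(10)] + \
         [(10 + i * 0.1, 10 + i * 0.1) for i in range(10)]
centers = kmeans(points, 2)
```

Each iteration assigns every point to its nearest center and moves each center to the mean of its assigned points, which monotonically reduces the within-cluster squared error.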
A New Method Based on MDA to Enhance the Face Recognition Performance (CSCJournals)
A novel tensor-based method is proposed to solve the supervised dimensionality reduction problem. In this paper, multilinear principal component analysis (MPCA) is utilized to reduce the tensor object dimension, and then multilinear discriminant analysis (MDA) is applied to find the best subspaces. Because the number of possible subspace dimensions for any kind of tensor object is extremely high, testing all of them to find the best one is not feasible, so this paper also presents a method to solve that problem. The main criterion of the algorithm differs from sequential mode truncation (SMT); full projection is used to initialize the iterative solution and find the best dimension for MDA. This saves the extra time that would otherwise be spent finding the best dimension, so the execution time decreases considerably. It should be noted that both algorithms work with tensor objects of the same order, so the structure of the objects is never broken, and the performance of the method therefore improves. The advantage of these algorithms is avoiding the curse of dimensionality and achieving better performance in cases with small sample sizes. Finally, experiments on the ORL and CMU-PIE databases are provided.
This paper presents a review and comparative evaluation of a few well-known machine learning algorithms in terms of their suitability and code performance on a given data set of any size. We describe our Machine Learning ToolBox, built using the Python programming language. The algorithms in the toolbox consist of supervised classification algorithms such as Naïve Bayes, decision trees, SVM, k-nearest neighbors, and neural networks (backpropagation). The algorithms are tested on the iris and diabetes datasets and are compared on the basis of their accuracy under different conditions; using our tool, one can apply any of the implemented ML algorithms to any dataset of any size. The main goal of building the toolbox is to provide users with a platform to test their datasets on different machine learning algorithms and use the accuracy results to determine which algorithm fits the data best. The toolbox allows the user to choose a dataset of his or her choice, in either structured or unstructured form, and then choose the features he or she wants to use for training. We give concluding remarks on the performance of the implemented algorithms based on experimental analysis.
Understanding the Applicability of Linear & Non-Linear Models Using a Case-Ba... (ijaia)
This paper uses a case-based study, product sales estimation on real-time data, to help us understand the applicability of linear and non-linear models in machine learning and data mining. A systematic approach is used to address the given problem statement of sales estimation for a particular set of products in multiple categories by applying both linear and non-linear machine learning techniques on a data set of features selected from the original data set. Feature selection is a process that reduces the dimensionality of the data set by excluding those features which contribute minimally to the prediction of the dependent variable. The next step is training the model, which is done using multiple techniques from the linear and non-linear domains, among the best in their respective areas. Data remodeling is then done to extract new features from the data set by changing its structure, and the performance of the models is checked again. Data remodeling often plays a crucial role in boosting classifier accuracy by changing the properties of the given dataset. We then explore and analyze the various reasons why one model performs better than the other, and hence develop an understanding of the applicability of linear and non-linear machine learning models. With the above as our primary goal, we also aim to find the classifier with the best possible accuracy for product sales estimation in the given scenario.
Extended pso algorithm for improvement problems k means clustering algorithm (IJMIT JOURNAL)
Clustering is an unsupervised process and one of the most common data mining techniques. The purpose of clustering is to group similar data together, so that instances within a cluster are most similar to each other and most different from instances in other clusters. In this paper we focus on partitional k-means clustering, which, due to its ease of implementation and high-speed performance on large data sets, is still very popular among clustering algorithms after 30 years. To address the problem of the k-means algorithm becoming trapped in local optima, we propose an extended PSO algorithm named ECPSO. Our new algorithm is able to escape local optima and, with high probability, produce the problem's optimal answer. The experimental results show that the proposed algorithm performs better than other clustering algorithms, especially on two indices: clustering accuracy and clustering quality.
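The PSO component can be sketched in a few lines. The canonical global-best PSO below minimizes a simple test function; it illustrates plain PSO only, not the ECPSO variant proposed in the paper, and all names and parameter values are our own:

```python
import random

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Canonical global-best PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and the two attraction coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def sphere(x):
    return sum(v * v for v in x)

best, best_val = pso(sphere, dim=3)
```

Hybrids like the one described above typically replace `sphere` with a clustering objective (e.g., the within-cluster squared error) and encode candidate cluster centers as particle positions.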
Performance Comparison of Machine Learning Algorithms (Dinusha Dilanka)
In this paper we compare the performance of two classification algorithms. It is useful to differentiate algorithms based on computational performance rather than classification accuracy alone: although classification accuracy between the algorithms is similar, computational performance can differ significantly and can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, namely k-nearest neighbor classification and logistic regression. We consider a large dataset of 7981 data points and 112 features and examine the performance of the above-mentioned machine learning algorithms. The processing time and accuracy of the different machine learning techniques are estimated on the collected data set, with 60% used for training and the remaining 40% for testing. The paper is organized as follows. Section I includes the introduction and background analysis of the research; Section II, the problem statement. Section III briefly describes our application, the data analysis process, the testing environment, and the methodology of our analysis. Section IV comprises the results of the two algorithms. Finally, the paper concludes with a discussion of future directions for research that would eliminate the problems existing in the current research methodology.
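The 60%/40% train/test protocol described above can be sketched as follows; the toy data and the 1-NN classifier (standing in for the paper's k-NN and logistic regression models) are our own illustrative assumptions:

```python
import random

def train_test_split(data, labels, train_frac=0.6, seed=0):
    """Shuffle indices, then cut them into train and test portions."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(train_frac * len(idx))
    tr, te = idx[:cut], idx[cut:]
    return ([data[i] for i in tr], [labels[i] for i in tr],
            [data[i] for i in te], [labels[i] for i in te])

def nn_accuracy(train_x, train_y, test_x, test_y):
    """1-nearest-neighbour accuracy with squared Euclidean distance."""
    correct = 0
    for x, y in zip(test_x, test_y):
        j = min(range(len(train_x)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(train_x[i], x)))
        correct += (train_y[j] == y)
    return correct / len(test_x)

# Two well-separated 2-D classes with bounded noise
rng = random.Random(2)
data = ([(rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)) for _ in range(20)]
        + [(5 + rng.uniform(-0.5, 0.5), 5 + rng.uniform(-0.5, 0.5))
           for _ in range(20)])
labels = [0] * 20 + [1] * 20
tr_x, tr_y, te_x, te_y = train_test_split(data, labels)
accuracy = nn_accuracy(tr_x, tr_y, te_x, te_y)
```

Timing each classifier's training and prediction on the same split is then a matter of wrapping these calls with a clock, which is exactly the kind of comparison the paper performs at scale.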
Proposing an Appropriate Pattern for Car Detection by Using Intelligent Algor... (Editor IJCATR)
Nowadays, the automotive industry has attracted the attention of consumers, and product quality is considered an essential element in current competitive markets. Security and comfort are the main criteria and parameters for selecting a car. Therefore, the standard CAR dataset, involving six features and 1728 instances, has been used. In this paper, we try to select a car with the best characteristics by using intelligent algorithms (Random Forest, J48, SVM, Naïve Bayes) and combining these algorithms with ensemble classifiers such as Bagging and AdaBoostM1. In this study, the speed and accuracy of the intelligent algorithms in identifying the best car are taken into account.
Optimization is considered one of the pillars of statistical learning and also plays a major role in the design and development of intelligent systems such as search engines, recommender systems, and speech and image recognition software. Machine learning is the study that gives computers the ability to learn, and to act, without being explicitly programmed. A computer is said to learn from experience with respect to a specified task and its performance on that task. Machine learning algorithms are applied to problems to reduce effort; they are used to manipulate data and predict the output for new data with high precision and low uncertainty. Optimization algorithms are used to make rational decisions in an environment of uncertainty and imprecision. In this paper, a methodology is presented for using an efficient optimization algorithm as an alternative to gradient descent as the optimization step in machine learning.
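For reference, the gradient descent baseline that such work proposes to replace can be sketched on a one-variable linear regression; the data, learning rate, and names below are illustrative assumptions:

```python
def gradient_descent_fit(xs, ys, lr=0.05, epochs=2000):
    """Fit y = a*x + b by gradient descent on the mean squared error."""
    a, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        # Gradients of (1/n) * sum((a*x + b - y)^2) w.r.t. a and b
        ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * ga
        b -= lr * gb
    return a, b

xs = [i / 10 for i in range(20)]   # 0.0 .. 1.9
ys = [3 * x + 1 for x in xs]       # points on the exact line y = 3x + 1
a, b = gradient_descent_fit(xs, ys)
```

Any alternative optimizer slots into the same role: it receives the loss (here the MSE) and returns parameters that minimize it, with the trade-offs lying in convergence speed and robustness to the loss landscape.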
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT... (IAEME Publication)
Close-range photogrammetry network design refers to the process of placing a set of cameras in order to achieve photogrammetric tasks. The main objective of this paper is to find the best locations for two or three camera stations. Genetic algorithm optimization and particle swarm optimization are developed to determine the optimal camera stations for computing three-dimensional coordinates. In this research, a mathematical model representing genetic algorithm optimization and particle swarm optimization for the close-range photogrammetry network is developed. This paper also gives the sequence of field operations and computational steps for this task. A test field is included to reinforce the theoretical aspects.
A HYBRID CLUSTERING ALGORITHM FOR DATA MINING (cscpconf)
Data clustering is a process of arranging similar data into groups. A clustering algorithm partitions a data set into several groups such that the similarity within a group is higher than that among groups. In this paper, a hybrid clustering algorithm based on k-means and k-harmonic means (KHM) is described. The proposed algorithm is tested on five different datasets; the research is focused on fast and accurate clustering. Its performance is compared with the traditional k-means and KHM algorithms, and the results obtained from the proposed hybrid algorithm are much better than those of the traditional k-means and KHM algorithms.
https://github.com/telecombcn-dl/dlmm-2017-dcu
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Similar to Adapted Branch-and-Bound Algorithm Using SVM With Model Selection (20)
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption has increased quickly, contributing to climate change
that is evident in unusual flooding, droughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they have played a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years was carried out to
examine the role of women in addressing climate change. The findings are
discussed in relation to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results consider contributions made
by women in various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. The study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency, and offer recommendations on
how to increase the role of women in addressing climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
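The droop control method mentioned above can be sketched as two proportional laws: frequency droops with active power, and voltage magnitude droops with reactive power. The gains and setpoints below are illustrative values, not taken from the paper:

```python
def droop(P, Q, f0=50.0, V0=1.0, kp=0.01, kq=0.05):
    """Conventional P-f / Q-V droop: frequency and voltage sag as load rises.

    P, Q: active and reactive power drawn (per-unit-ish toy scale).
    f0, V0: no-load frequency (Hz) and voltage (p.u.); kp, kq: droop gains.
    """
    return f0 - kp * P, V0 - kq * Q

f_light, V_light = droop(P=10.0, Q=2.0)   # light load
f_heavy, V_heavy = droop(P=40.0, Q=8.0)   # heavy load
assert f_heavy < f_light and V_heavy < V_light  # both sag with load
```

Each DG sharing the MG load runs this law locally, so power is shared in inverse proportion to the droop gains without any communication link.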
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
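The NARX structure predicts the next output from lagged outputs and inputs. The sketch below fits only a linear ARX core by least squares (the paper's model adds nonlinear regressors on top of the same identification idea); the coefficients and input signal are made up:

```python
# Identify y[k] = a*y[k-1] + b*u[k-1] from noise-free data.
true_a, true_b = 0.8, 0.5
u = [((7 * k) % 5) - 2 for k in range(50)]   # deterministic "exciting" input
y = [0.0]
for k in range(1, 50):
    y.append(true_a * y[k - 1] + true_b * u[k - 1])

# Normal equations for the 2-parameter least-squares fit
s11 = sum(y[k - 1] ** 2 for k in range(1, 50))
s12 = sum(y[k - 1] * u[k - 1] for k in range(1, 50))
s22 = sum(u[k - 1] ** 2 for k in range(1, 50))
t1 = sum(y[k] * y[k - 1] for k in range(1, 50))
t2 = sum(y[k] * u[k - 1] for k in range(1, 50))
det = s11 * s22 - s12 * s12
a_hat = (t1 * s22 - t2 * s12) / det
b_hat = (s11 * t2 - s12 * t1) / det
assert abs(a_hat - true_a) < 1e-6 and abs(b_hat - true_b) < 1e-6
```

With noise-free data the fit recovers the true parameters exactly; a battery identification would replace u and y with measured current and voltage and add the nonlinear NARX terms.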
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decade's key innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and have
attracted significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis, comparing all criteria
together, gives overall weights of 24.7% for passive methods, 7.8% for active
methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal
processing-based methods, and 20.8% for computational intelligence-based
methods. Thus, the total weights indicate that hybrid approaches are the least
suitable choice, while signal processing-based methods are the most appropriate
islanding detection method to select and implement in a power system with
respect to the aforementioned factors. The proposed hierarchy model is studied
and examined using Expert Choice software.
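The AHP weighting referenced above reduces a pairwise comparison matrix to a priority vector and checks its consistency. A minimal sketch using the geometric-mean approximation on an illustrative 3x3 matrix (not the paper's actual judgments):

```python
import math

# Pairwise comparison matrix on the Saaty 1-9 scale; A[i][j] = importance of
# criterion i over criterion j (illustrative values).
A = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   3.0],
     [1/5.0, 1/3.0, 1.0]]
n = len(A)

# Geometric-mean approximation of the principal eigenvector
g = [math.prod(row) ** (1.0 / n) for row in A]
w = [gi / sum(g) for gi in g]           # normalized priority weights

# Consistency ratio: lambda_max from A*w; random index RI = 0.58 for n = 3
lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
CR = (lam - n) / (n - 1) / 0.58
assert abs(sum(w) - 1.0) < 1e-9
assert CR < 0.10   # Saaty's usual acceptability threshold
```

A CR below 0.10 means the pairwise judgments are consistent enough to trust the resulting weights, which is the same check Expert Choice performs.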
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the current injected
into the grid by approximately 46% and 38%, respectively, compared to
conventional methods. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
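THD, the figure of merit quoted above, is the ratio of the RMS value of the harmonic content to that of the fundamental. A quick check on a synthetic current waveform with known 5th and 7th harmonics:

```python
import math

# Synthesize one period: fundamental plus 20% 5th and 10% 7th harmonics
N = 256
x = [math.sin(2 * math.pi * k / N)
     + 0.2 * math.sin(2 * math.pi * 5 * k / N)
     + 0.1 * math.sin(2 * math.pi * 7 * k / N) for k in range(N)]

def mag(h):
    """DFT amplitude of harmonic h (2/N scaling recovers sine amplitude)."""
    re = sum(x[k] * math.cos(2 * math.pi * h * k / N) for k in range(N))
    im = sum(x[k] * math.sin(2 * math.pi * h * k / N) for k in range(N))
    return math.hypot(re, im) * 2 / N

thd = math.sqrt(sum(mag(h) ** 2 for h in range(2, 20))) / mag(1)
assert abs(thd - math.hypot(0.2, 0.1)) < 1e-6  # sqrt(0.2^2 + 0.1^2), ~22.4%
```

The grid-compatibility comparison in the abstract amounts to checking such a THD figure against the limits in the relevant IEEE standard.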
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
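The classic P&O loop referenced above perturbs the operating voltage and keeps the perturbation direction while power increases, reversing it otherwise. A minimal sketch on a toy single-peak PV curve (not the paper's modified variant):

```python
def pv_power(v):
    """Toy PV power curve with a single maximum at v = 30 (illustrative)."""
    return -(v - 30.0) ** 2 + 900.0

def perturb_and_observe(v=20.0, step=0.5, iters=100):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step        # perturb the operating voltage
        p = pv_power(v)              # observe the resulting power
        if p < p_prev:               # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
assert abs(v_mpp - 30.0) <= 1.0  # settles within one step of the true MPP
```

The steady-state oscillation around the peak is exactly the drawback the paper's modified step size and the FLC variant aim to reduce.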
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the joint positions track the desired trajectory, synchronizes
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed; the unknown components of the model are estimated online by a neural network, and the parameters of the switching elements are selected by fuzzy
logic. The proposed algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the algorithm is demonstrated through simulation and experimental results, which show
that the proposed controller achieves small synchronous tracking errors and
significantly reduces the chattering phenomenon.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples during experiments by adaptively compressing the signal prior to transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring solar cell system status parameters (solar irradiance, temperature, and humidity) is a critical issue in enhancing their efficiency. Hence, in the present article an improved smart internet of things (IoT) prototype based on an embedded system with a NodeMCU ESP8266 (ESP-12E) was implemented experimentally. Three different regions in Egypt (Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profiles, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots over the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1,000, 245-958, and 187-692 W/m², respectively, during the solar day. The accuracy and rapidity of the monitoring results make the proposed IoT system a strong candidate for monitoring solar cell systems. Moreover, the solar power radiation results for the three regions strongly recommend Luxor and Cairo, rather than El-Beheira, as suitable places to build a solar cell power station.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies and malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. IoT security systems must therefore monitor and restrict unwanted events in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled training data; such datasets are very hard to source in a large network like the IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research develops an incubator system that integrates the internet of things (IoT) and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. The data is then sent in real time to the Eclipse Mosquitto IoT broker using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory (LSTM) network method and displayed in a web application via an application programming interface (API) service. The experiments produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and the other attributes ranges from 0.23 to 0.48. Several experiments were conducted to evaluate the model's predictions on the test data. The best results are obtained using a two-layer LSTM configuration, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, a root mean square error (RMSE) of 0.015, and a mean absolute error (MAE) of 0.008. In addition, the R² value was evaluated for each attribute used as input, with values between 0.590 and 0.845.
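The reported R², RMSE, and MAE follow their standard definitions; a quick sketch with made-up incubator temperature readings:

```python
import math

def metrics(y_true, y_pred):
    """Return (RMSE, MAE, R^2) for paired observation/prediction lists."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2

# Hypothetical temperature readings (°C) vs. model predictions
rmse, mae, r2 = metrics([36.0, 36.5, 37.0, 37.5], [36.1, 36.4, 37.2, 37.4])
assert rmse < 0.2 and mae < 0.2 and r2 > 0.9
```

R² close to 1 with small RMSE/MAE, as reported in the abstract, indicates the LSTM's predictions explain almost all of the variance in the monitored signal.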
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees produce small amounts of honey with a unique flavor profile. Many stingless bee colonies collapse due to weather, temperature, and environmental conditions, so it is critical to understand the relationship between stingless bee honey production and environmental conditions in order to improve yields. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous studies were analyzed and compared to identify research gaps, and a framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the top three efficiencies are CNN at 98.67%, densely connected convolutional networks with YOLO v3 at 97.7%, and DenseNet201 convolutional networks at 99.81%. This study is significant in assisting researchers in developing a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society, including interactions, financial activity, and global security domains such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. The system makes use of information exchange, which must be secured to prevent access by unauthorized users. The performance and design of the suggested framework are validated using the compact Contiki/Cooja simulator. The simulation findings are evaluated based on the detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJ at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real-world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers offer a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in a fuzzy number can differ from the non-membership value, in which case we can model the data using intuitionistic fuzzy numbers, as they provide flexibility by defining both a membership and a non-membership function. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which generalize the polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can solve any general intuitionistic fuzzy linear programming problem after approximating it with an intuitionistic polygonal fuzzy number with n edges. The method is given in a simple tableau formulation and then applied to numerical examples for clarity.
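A triangular intuitionistic fuzzy number (the simplest polygonal case) pairs a membership function with a non-membership function such that their sum never exceeds one. A minimal sketch with illustrative breakpoints:

```python
def tri_membership(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c], 0 outside."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def intuitionistic(x, mu_abc, nu_abc):
    """Membership mu from a narrow triangle; non-membership nu as one minus
    a wider triangle with the same peak, which keeps mu + nu <= 1."""
    mu = tri_membership(x, *mu_abc)
    nu = 1.0 - tri_membership(x, *nu_abc)
    return mu, nu

# Illustrative intuitionistic triangular number peaked at 5
mu, nu = intuitionistic(4.0, (2.0, 5.0, 8.0), (1.0, 5.0, 9.0))
assert 0.0 <= mu + nu <= 1.0  # validity condition for intuitionistic numbers
```

The hesitation margin 1 - mu - nu is what a plain fuzzy number cannot express; the polygonal generalization in the paper replaces each triangle with an n-edge piecewise-linear shape.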
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings for epileptic behavior; this procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The potential of these characteristics is explored by examining how well time-domain signals work in the detection of epileptic signals using the intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT), and Temple University Hospital (TUH) EEG databases. To test the viability of this approach, two types of experiments were carried out: binary classification (preictal, ictal, or postictal, each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using the CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7%, and the highest accuracy of 94.02% was obtained for classification in the TUH EEG database when comparing the interictal stage to the preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but increasing acceleration and maneuverability lead some drivers to a dangerous driving style. Under these conditions, developing a method to track driver behavior is relevant. The article provides an overview of existing methods and models for assessing vehicle operation and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. A set of input data was formed comprising 20 descriptive features about the environment, the driver's behavior, and the operating characteristics of the car, collected via OBD-II. The generated dataset is fed to a Kohonen network, where clustering is performed according to driving style and degree of danger. Assigning driving characteristics to a particular cluster makes it possible to move to the individual indicators of a specific driver and account for individual driving characteristics. Applying the method makes it possible to identify potentially dangerous driving styles and thereby prevent accidents.
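The Kohonen clustering step can be sketched as: find the best-matching prototype for each sample and pull it toward the sample. The tiny (speed, acceleration) feature pairs below are made up, and the neighborhood radius is reduced to zero for brevity:

```python
import math

# Three prototype vectors over (speed, acceleration) features (illustrative)
protos = [[10.0, 0.5], [60.0, 1.0], [110.0, 3.0]]

def train(samples, lr=0.3, epochs=20):
    for _ in range(epochs):
        for s in samples:
            # Best-matching unit (BMU) by Euclidean distance
            bmu = min(range(len(protos)),
                      key=lambda i: math.dist(protos[i], s))
            # Move only the winner toward the sample (zero neighborhood)
            protos[bmu] = [p + lr * (x - p) for p, x in zip(protos[bmu], s)]

calm, aggressive = [12.0, 0.4], [115.0, 3.5]
train([calm, aggressive])
assert math.dist(protos[0], calm) < math.dist(protos[2], calm)
assert math.dist(protos[2], aggressive) < math.dist(protos[0], aggressive)
```

A full self-organizing map also updates the BMU's grid neighbors with a decaying radius, which is what lets cluster topology emerge; the 20-feature OBD-II data in the paper replaces these 2-D toy vectors.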
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution over large areas, hyperspectral imaging (HSI) has found widespread application in object classification. HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with matching spectral fingerprints. As a result, HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, spectral and spatial feature fusion is performed. The existing classification strategy leads to increased misclassification, and existing feature fusion methods are unable to preserve the inherent semantic features of objects. This study addresses these difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining intrinsic object qualities. Finally, a soft-margin kernel is proposed for a multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
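The paper's volumization algorithm is not detailed in this abstract; one plausible reading, stacking binary threshold slices of a 2-D image along a new depth axis to feed a 3-D convolution, can be sketched as follows (the slicing rule here is our assumption, not the authors' definition):

```python
# Hypothetical volumization: slice a 2-D grayscale image (0-255) into D
# binary threshold maps, yielding a D x H x W volume for 3-D convolution.
def volumize(img, depth=4):
    levels = [255 * (d + 1) / (depth + 1) for d in range(depth)]
    return [[[1 if px >= t else 0 for px in row] for row in img]
            for t in levels]

img = [[0, 64], [128, 255]]       # 2x2 toy image
vol = volumize(img, depth=4)
assert len(vol) == 4 and len(vol[0]) == 2 and len(vol[0][0]) == 2
# Lower thresholds keep more pixels than higher ones
assert sum(map(sum, vol[0])) >= sum(map(sum, vol[3]))
```

The intuition is that a localized patch attack perturbs intensities, but its effect is spread across slices of the volume, which is one way such a representation could blunt localized universal perturbations.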
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 9, No. 4, August 2019: 2481-2490
Next, we review research related to our work. To do so, we divide the contributions into five categories according to the strategy concerned and the learning technique used. Firstly, regarding the variable branching strategy, the authors of [1] learned a function that achieves approximately the same precision as strong branching while reducing the processing time.
In the same category, we cite [2], in which the authors infer consistent data by applying a clause-detection algorithm. A clause is a combination of binary values assigned to a set of indexed variables such that, if it occurs, the whole problem becomes infeasible. In addition, that algorithm generates minimal clauses and restarts with active clauses (“those that can be used in fathoming child nodes”). In parallel, this information is used to choose the branching variable with the best effect. Likewise, [3] and [4] use backtracking to improve branching decisions.
Besides, there are classic variable branching rules, such as strong branching, pseudo-cost branching, hybrid branching, reliability branching, and inference-based branching [5]. Note that reliability branching is known to be the best branching rule with reliability η_rel = 4 or 8 [6]; for our experiments we used η_rel = 8. The reliability parameter η_rel stops the computation of pseudo-cost values once a certain level of the branch-and-bound tree is reached, because a pseudo-cost value remains approximately constant after being computed several times for a given variable.
Secondly, from the node-selection literature, in addition to classic node selection strategies such as the depth-first rule, the breadth-first rule, and the best-estimate rule [5], the authors of [1] extracted information from MILP benchmark libraries using a specific algorithm called an oracle. Thirdly, concerning learning algorithms, they are used in various engineering fields, for purposes including classification, regression, and clustering [7], [8]. Some algorithms tend to perform better in practice than others [9], [10]. In the same context of applying learning to branch-and-bound, ExtraTrees is applied in [11].
Fourthly, regarding model selection and algorithm performance, several techniques have been used to tune parameters, such as a fuzzy logic controller for the epsilon parameter of the Ant Colony System (ACS) [12]. Also, [13] and [14] used the Hidden Markov Model (HMM) algorithm to tune the population size and acceleration factors of Particle Swarm Optimization, while the authors of [15] used an HMM to tune the inertia weight of the Particle Swarm Optimization algorithm. Moreover, [16] used a fuzzy controller to control the Simulated Annealing cooling law, [17] and [18] used HMMs to tune the ACS evaporation parameter and local pheromone decay parameter respectively, and [19] and [20] used HMMs to adapt the Simulated Annealing cooling law. Furthermore, [14] used the SVM algorithm to predict the performance of optimization algorithms, and the authors of [20] used the Expectation-Maximization algorithm to learn the HMM parameters. Finally, this paper is a continuation of our previous papers dealing with learning branch-and-bound strategies, namely the variable branching strategy and the node selection strategy [21], [22]; the learning algorithm used was the Support Vector Machine (SVM).
The rest of this paper is organized as follows. Section 2 recalls basics of the branch-and-bound algorithm and of the SVM algorithm with parameter tuning. In Section 3, we present our methodology for inferring efficient branch-and-bound strategies and the experimental configuration. Section 4 is dedicated to results. Finally, we conclude and propose future work.
2. BRANCH-AND-BOUND AND SVM
In this section, we first give an overview and a formal description of branch-and-bound strategies and present the features used in the algorithm. Secondly, we review the most important advantages of SVM, with a reminder of learning theory.
2.1. Branch-and-bound algorithm
The branch-and-bound algorithm is outlined in this section. We first define useful notation and then proceed with the explanation of the algorithm's steps. Let us define a general MILP problem P as follows:

z = min{c^T x | Ax = b, x ≥ 0, x a non-negative vector of dimension n with at least one integer component}

where c ∈ R^n, b ∈ R^m, and A is an m × n matrix. We also define:
- P_rel, a relaxed version of P: z_rel = min{c^T x | Ax = b, x ≥ 0}
- P_k, the problem at the k-th iteration, which corresponds to a node of the branch-and-bound tree
- P_{k,rel}, a relaxed version of P_k
- z_k, the objective value of the k-th node
- x*, the incumbent point at iteration k, i.e., the vector that leads to the best z_k so far
- z*, the objective function value at x*
Adapted branch-and-bound algorithm using SVM with model selection (Kabbaj Mohamed Mustapha)
Briefly, the branch-and-bound algorithm, in the case of minimization, proceeds as shown in Algorithm 1. It is an iterative algorithm, and each iteration k comprises at least three steps.
First, the node selection step retrieves from the node list the node that maximizes some criterion; this criterion is specific to the node selection strategy. Second, once a node P_k is picked, we solve its relaxation P_{k,rel} with an algorithm from the linear programming framework such as the simplex or an interior-point method. Depending on the result, we distinguish three cases. In the first, the problem P_{k,rel} is infeasible or the resulting objective value z_k is greater than z*; the current iteration then terminates. In the second, the solution is integral and z_k < z*; we then update the incumbent point and its objective value z* and move to the next iteration. In the third case, when neither of the previous conditions holds, we perform variable branching: we select a variable from the set of non-integer variables according to a criterion defined by the variable branching strategy.
Algorithm 1. Branch-and-bound Algorithm
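The listing of Algorithm 1 did not survive extraction. As an illustration only, the three steps above can be sketched in Python on a toy 0/1 knapsack, whose LP relaxation has a closed-form greedy solution. The paper's setting is minimization with a general LP solver inside SCIP, so everything below (the `knapsack_bb` and `relax` names, the maximization variant) is a hypothetical stand-in, not the authors' code.

```python
import heapq
from itertools import count

def knapsack_bb(values, weights, capacity):
    # Toy best-bound branch-and-bound for max sum(v*x) s.t. sum(w*x) <= W, x binary.
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def relax(fixed):
        # Step 2: solve the LP relaxation of this node (greedy fractional fill).
        cap = capacity - sum(weights[i] for i in fixed if fixed[i] == 1)
        if cap < 0:
            return None  # infeasible node
        z = sum(values[i] for i in fixed if fixed[i] == 1)
        frac_var = None
        for i in order:
            if i in fixed:
                continue
            if weights[i] <= cap:
                cap -= weights[i]
                z += values[i]
            else:
                if cap > 0:
                    z += values[i] * cap / weights[i]
                    frac_var = i  # fractional variable, branching candidate
                break
        return z, frac_var

    best_z = 0.0
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = []       # step 1: node selection by best bound (max-heap via negation)
    root = relax({})
    if root:
        heapq.heappush(heap, (-root[0], next(tick), {}, root[1]))
    while heap:
        neg_bound, _, fixed, frac_var = heapq.heappop(heap)
        if -neg_bound <= best_z:
            continue  # case 1: fathomed, bound cannot beat the incumbent
        if frac_var is None:
            best_z = -neg_bound  # case 2: integral relaxation, new incumbent
            continue
        for val in (0, 1):  # case 3 (step 3): branch on the fractional variable
            child = dict(fixed)
            child[frac_var] = val
            r = relax(child)
            if r and r[0] > best_z:
                heapq.heappush(heap, (-r[0], next(tick), child, r[1]))
    return best_z
```

The three cases in the loop body mirror the fathoming, incumbent-update, and branching cases described above.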
2.2. Support Vector Machine
SVM is in top ten machine learning algorithms [9], it is used for both classification and regression.
It aims to find the hyperplane with the best margin. The best is demonstrated to be the large one differentiating
between the hyperplane and nearest data points called support vectors.
2.2.1. Case of a linear hypothesis set for SVM
For regression, and in particular for the SVM variant called ε-SVM, we present next the case of a linear hypothesis set. As input we have N training points (x_n, y_n), 1 ≤ n ≤ N. The output of the algorithm is a linear function f(x) = w^T x + b, with w a coefficient vector, x the input vector, and b a constant.
The distance between a hyperplane of equation w^T x + b = 0 and the support vectors is 1/||w||. Consequently, maximizing the margin is equivalent to the following optimization problem:

P(w): min (1/2) w^T w
      s.t. |y_n − (w^T x_n + b)| ≤ ε, ∀n

where ε is the error tolerance between y_n and f(x_n).
The last problem might be infeasible. To give it a better chance of being feasible, we add slack variables to the problem in the following way:

P(w): min (1/2) w^T w + C Σ_{n=1}^{N} (ξ_n + ξ_n*)
      s.t. y_n − (w^T x_n + b) ≤ ε + ξ_n, ∀n
           (w^T x_n + b) − y_n ≤ ε + ξ_n*, ∀n
           ξ_n, ξ_n* ≥ 0, ∀n

where ξ_n and ξ_n* are the slack variables and C is the cost parameter used to penalize data points outside the ε-margin.
Using the Lagrangian function and quadratic optimization, or other resolution methods, one can prove that the solution has the form:

f(x) = Σ_{i=1}^{N} (α_i* − α_i) x_i^T x + b, with 0 ≤ α_i, α_i* ≤ C, ∀ 1 ≤ i ≤ N.
2.2.2. Case of a non-linear hypothesis set
When no hyperplane contains all training instances, one may transform the space of the training data into another space in which they can be fitted by a single hyperplane. To do this transformation, one can use the well-known kernel methods; the literature offers different kernels for SVM, such as the RBF and polynomial kernels. In what follows, we present the distance calculation method for the RBF kernel.
Instead of the standard L2-norm ||·||, we use the norm associated with the RBF kernel, described as follows:

RBF(x_i, x_j) = exp(−γ ||x_j − x_i||²)

where γ is the gamma parameter. Geometrically, the larger the gamma parameter, the more the hyperplane associated with the solution tends to contain, as far as possible, all training data. The resulting target function has the form:

f(x) = Σ_{i=1}^{N} (α_i* − α_i) RBF(x_i, x) + b

In this paper, we use the ε-SVM regression algorithm with the RBF kernel twice, to learn the node selection strategy and the variable branching strategy respectively.
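As a minimal sketch of the two formulas above (the RBF kernel and the dual-form predictor), the following Python assumes the dual coefficients (α_i* − α_i) and the offset b are already known; solving the dual quadratic program that produces them is omitted, and all values in the usage note are illustrative, not from the paper.

```python
import math

def rbf(xi, xj, gamma):
    # RBF(x_i, x_j) = exp(-gamma * ||x_j - x_i||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(xi, xj)))

def svr_predict(x, support_vectors, dual_coefs, b, gamma):
    # f(x) = sum_i (alpha_i* - alpha_i) * RBF(x_i, x) + b
    return sum(c * rbf(sv, x, gamma)
               for sv, c in zip(support_vectors, dual_coefs)) + b
```

A larger gamma makes the kernel value, and hence a support vector's influence, decay faster with distance, which matches the geometric interpretation given above.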
2.3. Learning of variable branching strategy and node selection strategy
Concerning the variable branching strategy, we aim in this paper to imitate the behavior of the reliability branching rule. This rule is based on strong branching, which is time consuming. By and large, reliability branching uses an unreliability criterion for the variables' pseudo-cost values. For this reason, reliability depends on numerous problem features, which can be classified into node-based features and variable-based features.
2.3.1. Node-based features
We use in this category the features below:
- Reduced objective value gain: Δ_{k,red} = |z_k − z_{k−1}| / |z_{k−1}| (note that the features should be independent of the problem scale)
- Depth d_k in the branch-and-bound tree, starting from zero
- Node estimate
- LP objective value
2.3.2. Variable-based features
In the same line of thought, we use:
- Pseudo-cost value
- The positive reduced cost and the negative reduced cost, i.e.

max(|c_{j,k}| / Σ_{j: c_{j,k} ≤ 0} |c_{j,k}|, 0) and max(|c_{j,k}| / Σ_{j: c_{j,k} ≥ 0} |c_{j,k}|, 0)

where c_{j,k} is the j-th component of the cost vector at iteration k. These features aim to capture, in both minimization and maximization problems, how we approach the optimal solution.
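Read literally, the two reduced-cost features can be computed as follows; this is only our reading of the formula, and `reduced_cost_features` is a hypothetical helper name, not from the paper.

```python
def reduced_cost_features(c):
    # c: reduced-cost vector (c_{1,k}, ..., c_{n,k}) at iteration k.
    # For each j, return the pair
    #   |c_{j,k}| / sum_{j': c_{j',k} <= 0} |c_{j',k}|  and
    #   |c_{j,k}| / sum_{j': c_{j',k} >= 0} |c_{j',k}|,
    # falling back to 0 when a sum is empty, so both features are
    # independent of the problem scale.
    neg = sum(abs(cj) for cj in c if cj <= 0)
    pos = sum(abs(cj) for cj in c if cj >= 0)
    return [(abs(cj) / neg if neg else 0.0,
             abs(cj) / pos if pos else 0.0) for cj in c]
```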
The other specificity of our work, beyond changes to the features presented in [11], is that we add the value of the learned node-selection function to the feature set. This last point is justified in the following sub-section. For learning the node selection strategy, we imitate the node-estimate strategy, which is the default one used in the SCIP solver.
2.4. Interaction of node selection strategy and variable selection strategy
Intuitively, the choice of a node by the node selection strategy influences the choice of the next branching variable. For this reason, we formally describe the variable branching strategy function VB as a combination of NS (the node selection strategy function) and the other features described below:

VB(feature set) = a · NS(d_k, negative reduced cost, positive reduced cost) + Σ_i a_i · feature_i

where a and the a_i are real numbers. Note that we use the NS features twice to add more precision.
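The combination above is just a weighted sum in which the learned NS score enters as one more input; a one-line sketch (the `vb_score` name and all coefficient values are hypothetical):

```python
def vb_score(ns_value, features, a, coeffs):
    # VB = a * NS(d_k, neg. reduced cost, pos. reduced cost) + sum_i a_i * feature_i;
    # ns_value is the learned node-selection score, reused as an extra input,
    # which is the information channel between the two strategies.
    return a * ns_value + sum(ai * fi for ai, fi in zip(coeffs, features))
```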
2.5. Overfitting and parameter tuning
In this sub-section, we will define overfitting, which is a very common problem in learning
techniques that affects the final performance.
2.5.1. Overfitting
A learning model is, by definition, a pair consisting of a learning algorithm and a hypothesis set. A learning algorithm is an iterative algorithm that searches for the hypothesis best fitting the training data; this hypothesis belongs to the hypothesis set chosen initially. A very common problem encountered in learning is overfitting. This phenomenon occurs when the learned hypothesis does not generalize well beyond the training data. Its causes are the number of data points, noise, and target complexity [7].
The choice of learning algorithm can affect the noise by affecting either the bias or the variance. In the case of SVM, a careful choice of the SVM parameters is required to prevent overfitting. The RBF-kernel SVR algorithm used in this work has two parameters, cost and gamma. Cost defines how heavily misclassified examples are penalized, and gamma defines how far the influence of a single training example reaches. A small cost and a small gamma give higher bias and lower variance, while a large cost and a large gamma give lower bias and higher variance. Consequently, we should tune the cost and gamma parameters until we find trade-off values that minimize the generalization error. One way to tune gamma and cost is to use cross validation.
2.5.2. Cross validation with model selection
Before defining cross validation, let us first look at validation. To do so, we define some useful notation:
- D, the data set
- D_train, the training set
- D_val, the validation set
The goal of validation is to estimate the generalization error. First, D, made of N data points, is divided into D_train of size N − K and D_val of size K; then the target function is learned on D_train. Finally, we compute the error of the target function on D_val. This error is proven to be an estimate of the generalization error. Figure 1 represents the process described above.
Figure 1. Validation method
The validation error E_val is a good estimate of the generalization error, but it is not very precise. To improve the precision, other techniques build on validation, such as cross validation. Without loss of generality, we present next the 10-fold cross-validation process.
Let us partition D into D_1, D_2, ..., D_10. We run the validation process ten times with D_val,i = D_i, where 1 ≤ i ≤ 10, and D_train,i = D \ D_i. As output, we obtain 10 errors, E_val,1 to E_val,10. We then compute the cross-validation error, denoted E_CV, which is the mean of the validation errors. The cross-validation error is more precise than the validation error. We summarize this process in Figure 2.
Figure 2. Cross validation for a specific learning algorithm
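The 10-fold procedure above can be sketched generically; `train_fn` and `error_fn` stand in for any learner and error measure, and slicing the data into interleaved folds is a simplification of the partitioning step.

```python
def cross_validation_error(data, train_fn, error_fn, k=10):
    # Partition D into D_1..D_k; for each i, train on D \ D_i and
    # evaluate on D_val,i = D_i; E_CV is the mean of the k errors.
    folds = [data[i::k] for i in range(k)]
    errors = []
    for i in range(k):
        hypothesis = train_fn([p for j in range(k) if j != i for p in folds[j]])
        errors.append(error_fn(hypothesis, folds[i]))
    return sum(errors) / k
```

For example, plugging in a mean predictor with squared error yields an estimate of how well the mean generalizes.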
Now that we have presented cross validation, let us turn to model selection, which is used in this paper to tune the gamma and cost parameters. Consider k × l different combinations of cost and gamma, each denoted (C_i, Gamma_j) with 1 ≤ i ≤ k and 1 ≤ j ≤ l. As shown in Figure 3, cross validation is executed multiple times with different parameter configurations. As a result, we obtain the errors E_CV,1,1 to E_CV,k,l. In the end, we keep the configuration that has the lowest error.
Figure 3. Cross validation for model selection
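The grid sweep of Figure 3 reduces to a nested loop that keeps the configuration with the lowest cross-validation error; here `cv_error` stands in for a full cross-validation run at the given (cost, gamma) pair.

```python
def model_select(cost_grid, gamma_grid, cv_error):
    # Evaluate E_CV for every pair (C_i, Gamma_j) and keep the
    # configuration with the lowest error.
    best = None
    for cost in cost_grid:
        for gamma in gamma_grid:
            err = cv_error(cost, gamma)
            if best is None or err < best[0]:
                best = (err, cost, gamma)
    return best[1], best[2]
```

With the grids used later in the paper, cost_grid = [10**p for p in range(-4, 6)] and gamma_grid = [2.0**p for p in range(-8, 2)], this means 10 × 10 = 100 cross-validation runs.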
3. RESEARCH METHOD
In this section, we outline, step by step, the methodology for learning the node selection strategy NS and the variable branching strategy VB using parameter tuning. Then, we present the experimental configuration.
3.1. Collecting Datasets
We use the MIPLIB2010 library as the set of instances to which we apply the branch-and-bound algorithm governed by the reliability branching rule and the best-estimate selection rule. Then we extract the values of the features described before. Note that the best-estimate selection rule is the default one in various optimization tools such as SCIP. The pseudo-code of the data collection step is shown in Algorithm 2.
For each instance I in MIPLIB2010
    Solve I by branch-and-bound governed by reliability branching and the node-estimate selection rule, and collect the feature values
Return the list of feature values
Algorithm 2. Data collection
3.2. Learning NS and VB
First, we divide the collected data into two sets, one used for training and validation and the other for testing. By applying twice the cross-validation-based model selection described above, we first learn the score function of every node in the node queue, so that the node with the best score NS(node) is chosen in branch-and-bound. Second, we learn the score function VB for variable branching selection.
3.3. Experiments
In this sub-section, we present the test setup used to evaluate our methodology and give the pseudo-codes used for the tests.
3.3.1. Experimentation configuration
We used SCIP 3.2.1 because it is the best among open-source and free tools [23]. Moreover, for the SVM algorithm and model selection, we used the package e1071 [25] of the language R 3.2.5, known to be among the most efficient implementations of SVM algorithms.
The cost range used is {10^-4, 10^-3, ..., 10^5} and the gamma range is {2^-8, 2^-7, ..., 2^1}. These ranges cover both very small and very high values of cost and gamma. The machine used runs Debian 7 (32 bits) with 8 GB of RAM and a 2.40 GHz Intel processor. We use the benchmark set of MIPLIB2010 [24] as MILP instances to collect data, validate it, and test the resulting models. For the training and validation sets, we took the instances shown in Figure 4.
30n20b8.mps acc-tight5.mps aflow40b.mps air04.mps app1-2.mps ash608gpia-3col.mps bab5.mps beasleyC3.mps biella1.mps bienst2.mps binkar10_1.mps bley_xl1.mps bnatt350.mps core2536-691.mps cov1075.mps csched010.mps danoint.mps dfn-gwin-UUM.mps eil33-2.mps eilB101.mps enlight13.mps enlight14.mps ex9.mps glass4.mps gmu-35-40.mps iis-bupa-cov.mps iis-pima-cov.mps lectsched-4-obj.mps m100n500k4r1.mps macrophage.mps mcsched.mps mik-250-1-100-1.mps mine-166-5.mps mine-90-10.mps n3div36.mps n4-3.mps neos-1109824.mps neos-1337307.mps neos-1396125.mps neos13.mps neos-1601936.mps neos18.mps neos-686190.mps neos-849702.mps neos-916792.mps neos-934278.mps net12.mps
Figure 4. Training and validation sets
These instances have up to tens of thousands of rows and columns; the full description is available in [24]. The validation set contains approximately a fifth of the instances mentioned above [7]. Finally, the node limit is fixed at five hundred nodes and the running-time limit at six hundred seconds.
3.3.2. Pseudo-codes
During the solving process, the algorithm as implemented in SCIP executes numerous event codes tied to specific events. Next, we describe these events and give the pseudo-code relative to each one. The main modified events are, respectively:
3.3.3. Node selection event
This event occurs when the algorithm is selecting the next node to solve. The selection criterion is determined by the implemented strategy. Note that this event code is also executed for the root node selection. In this event, we implement the already-established node selection score function NS: it calculates the score of each node in the node list and returns the node with the maximum score. The pseudo-code is Algorithm 3.
For each leaf n
    Calculate NS(n)
Return the leaf with maximum NS(n)
Algorithm 3. Node selection event pseudo-code
3.3.4. Variable branching event
This event occurs when the algorithm is selecting a branching variable for a node that has already been solved and yielded a non-integral value. In this event, we implement the variable selection score function VB. The input of this function is the values of the node-based features and the variable-based features. The node-based features relate to a fixed node and are computed for the current node as a first step. We also compute the value of NS on the current node. Then we compute the values of the variable-based features and, consequently, the value of VB for each variable. As output, we return the variable with the maximum score VB. Finally, we branch on the selected variable and create the two related child nodes. The pseudo-code is Algorithm 4.
Calculate the values of the node-based features
Calculate the value of NS relative to the present node
For each branching-variable candidate
    Calculate the values of the variable-based features
    Calculate VB in terms of the calculated features
Return the maximum of VB and the corresponding variable
Create two child nodes based on the chosen variable
Calculate the possible node-based feature values of the two children
Algorithm 4. Variable branching event pseudo-code
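Algorithms 3 and 4 share the same argmax shape; a compact sketch of the branching step follows, with all names hypothetical and the learned NS value and VB function passed in as ready-made inputs.

```python
def select_branching_variable(node_feats, ns_score, candidates, var_feats_fn, vb):
    # Node-based features and the NS value are computed once per node;
    # variable-based features are computed per candidate; we branch on
    # the candidate with the maximum VB score.
    best_var, best_score = None, float("-inf")
    for var in candidates:
        score = vb(node_feats, ns_score, var_feats_fn(var))
        if score > best_score:
            best_var, best_score = var, score
    return best_var
```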
3.3.5. Node Solved event
This event occurs when the algorithm is leaving a node that has just been solved. We use this event to compute the LP objective value of the current node and the reduced objective value gain. The pseudo-code is Algorithm 5.
Get the LP objective value of the present node
Calculate the reduced objective value gain
Algorithm 5. Node solved event pseudo-code
4. RESULTS
We present in this section a comparison between the algorithms resulting from our approaches and the standard branch-and-bound algorithm. The comparison is done in terms of running time, dual bound, and number of solved nodes, the dual bound being a quantity that converges to the optimal value if it exists; the greater the dual bound, the better.
For this comparison, we ran three different solving configurations on the test set. The first is the standard branch-and-bound (SBB) algorithm governed by the reliability pseudo-cost branching rule and the best-estimate node selection rule. For the second and the third, we used our algorithms with SVR, without model selection (ABB) and with model selection (ABB+MS) respectively. The results are detailed in Table 1.
Table 1. Results of experimentation
In this test, we used ten instances from MIPLIB2010, of three different types. The first is the mixed integer program (MIP), which combines integer and continuous variables. The second is the mixed binary program (MBP), which includes both continuous and binary variables. The final one is the binary program (BP), which contains exclusively binary variables.
These results show that our approaches give an equivalent if not better dual bound compared to standard branch-and-bound in 80% of cases, the exceptions being the opm2-z7-s2 and ran16x16 instances. Another important result is that our model-selection approach gives an equivalent or better running time than our previous approach in 80% of cases. Also, compared to the standard branch-and-bound algorithm governed by reliability branching and the best-estimate node rule, our approach gives better or equivalent results on about half of the instances.
We noticed an empirical relation between the dual-bound performance and the number of constraints of the problem on the one hand, and between the running-time performance and the number of variables on the other hand. To make these last points concrete, we plot them in Figure 5.
Figure 5. Increase or decrease of dual bound and running time respectively
The left-hand plot shows that, for instances with fewer than 5000 constraints, our approaches gave a better dual bound than standard branch-and-bound. As a matter of fact, the opm2-z7-s2 instance, represented by the isolated point in the bottom-right corner, has approximately 31000 variables. The right-hand plot shows that, for instances with more than 2500 variables, our approaches improved the running time, especially for the ns1208400, ns1688347, and rail507 instances.
5. CONCLUSION
In this paper, we added parameter tuning to infer a better configuration of the SVM. That is, we used the ε-SVM regression learning algorithm, known for its high accuracy, to learn the branch-and-bound node selection and variable branching strategies. These choices led to better results compared to the reliability pseudo-cost rule and the best-estimate selection rule, which are known to be among the best in the literature. As perspectives, we will work on eliminating noise in the data and on comparing with other learning algorithms available in the literature.
REFERENCES
[1] He H., et al., “Learning to search in branch and bound algorithms,” Advances in neural information processing
systems, pp. 3293-3301, 2014.
[2] Moll R., et al., “Learning instance-independent value functions to enhance local search,” Advances in Neural
Information Processing Systems, pp. 1017-1023, 1999.
[3] Karzan F. K., et al., “Information-based branching schemes for binary linear mixed integer problems,”
Mathematical Programming Computation, vol/issue: 1(4), pp. 249-93, 2009.
[4] Davey B., et al., “Efficient intelligent backtracking using linear programming,” INFORMS Journal on Computing,
vol/issue: 14(4), pp. 373-86, 2002.
[5] Chen D. S., et al., “Applied integer programming: modeling and solution,” John Wiley & Sons, 2011.
[6] Achterberg T., et al., “Branching rules revisited,” Operations Research Letters, vol/issue: 33(1), pp. 42-54, 2005.
[7] Abu-Mostafa Y. S., et al., “Learning from data,” New York, NY, USA, AMLBook, 2012.
[8] C. Rudin, “Prediction: Machine Learning and Statistics,” 2012.
[9] X. Wu, et al., “Top 10 algorithms in data mining,” Knowledge and information systems, vol/issue: 14(1), pp. 1-37,
2008.
[10] B. Lantz, “Machine learning with R,” Packt Publishing Ltd, 2015.
[11] Alvarez A. M., et al., “A machine learning-based approximation of strong branching,” INFORMS Journal on
Computing, vol/issue: 29(1), pp. 185-95, 2017.
[12] Aoun O., et al., “Investigation of hidden markov model for the tuning of metaheuristics in airline scheduling
problems,” IFAC-PapersOnLine, vol/issue: 49(3), pp. 347-52, 2016.
[13] El Afia A., et al., “Hidden Markov model control of inertia weight adaptation for particle swarm optimization,” IFAC-PapersOnLine, vol/issue: 50(1), pp. 9997-10002, 2017.
[14] El Afia A. and Sarhani M., “Performance prediction using support vector machine for the configuration of optimization algorithms,” Cloud Computing Technologies and Applications (CloudTech), 2017 3rd International Conference, pp. 1-7, 2017.
[15] El Afia A., et al., “Fuzzy logic controller for an adaptive Huang cooling of simulated annealing,” Proceedings of
the 2nd International Conference on Big Data, Cloud and Applications, pp. 64, 2017.
[16] Bouzbita S., et al., “A novel based Hidden Markov Model approach for controlling the ACS-TSP evaporation
parameter,” Multimedia Computing and Systems (ICMCS), 2016 5th International Conference, pp. 633-638, 2016.
[17] Lalaoui M., et al., “Hidden Markov Model for a self-learning of Simulated Annealing cooling law,” Multimedia
Computing and Systems (ICMCS), 2016 5th International Conference, pp. 558-563, 2016.
[18] Lalaoui M., et al., “A self-adaptive very fast simulated annealing based on Hidden Markov model,” Cloud
Computing Technologies and Applications (CloudTech), 2017 3rd International Conference, pp. 1-8, 2017.
[19] El Afia A., et al., “The Effect of Updating the Local Pheromone on ACS Performance using Fuzzy Logic,”
International Journal of Electrical and Computer Engineering (IJECE), vol/issue: 7(4), pp. 2161-8, 2017.
[20] Bouzbita S., et al., “Dynamic adaptation of the ACS-TSP local pheromone decay parameter based on the Hidden
Markov Model,” Cloud Computing Technologies and Applications (CloudTech), 2016 2nd International
Conference, pp. 344-349, 2016.
[21] Kabbaj M. M. and El Afia A., “Towards learning integral strategy of branch and bound,” Multimedia Computing
and Systems (ICMCS), 2016 5th International Conference, pp. 621-626, 2016.
[22] El Afia A. and Kabbaj M. M., “Supervised learning in Branch-and-cut strategies,” Proceedings of the 2nd
International Conference on Big Data, Cloud and Applications, pp. 114, 2017.
[23] SCIP: Solving Constraint Integer Programs, http://scip.zib.de.
[24] MIPLIB: Mixed Integer Programming Library, http://miplib.zib.de.
[25] Meyer D., “Support Vector Machines: The Interface to libsvm in Package e1071,” R package vignette,
David.Meyer@R-Project.org, 2017.
BIOGRAPHIES OF AUTHORS
Mohamed Mustapha Kabbaj is a Ph.D. student at the National School of Computer Science and
Systems Analysis (ENSIAS), Rabat, Morocco. He obtained his M.Eng. degree in Computer Science
in 2013 from the National School of Computer Science and Systems Analysis. His research areas of
interest are Machine Learning, Combinatorial Optimization, Stochastic Programming and Feature Selection.
Abdellatif El Afia is an Associate Professor at the National School of Computer Science and Systems
Analysis (ENSIAS), Rabat, Morocco. He received his M.Sc. degree in Applied Mathematics from the
University of Sherbrooke, Canada, and obtained his Ph.D. in Operations Research from the University
of Sherbrooke in 1999. His research areas of interest are Mathematical Programming (stochastic and
deterministic), Metaheuristics, Recommendation Systems and Machine Learning.