This document summarizes a research paper that studies the Minmax Regret Path Problem with interval data. The paper presents a new exact branch-and-cut algorithm for this problem and proposes new heuristics, including a local search heuristic and a simulated annealing metaheuristic that uses a novel neighborhood structure. Computational experiments on benchmark instances analyze the performance of the different algorithms. The results show the superiority of the simulated annealing approach for finding good solutions to large problem instances.
Min-based qualitative possibilistic networks are one of the effective tools for a compact representation of decision problems under uncertainty. Exact approaches for computing decisions based on possibilistic networks are limited by the size of the possibility distributions. Generally, these approaches are based on possibilistic propagation algorithms. An important step in the computation of the decision is the transformation of the DAG into a secondary structure known as the junction tree. This transformation is known to be costly and represents a difficult problem. We propose in this paper a new approximate approach for the computation of decisions under uncertainty within possibilistic networks. Computing the optimal optimistic decision no longer goes through the junction tree construction step. Instead, it is performed by calculating the degree of normalization in the moral graph resulting from merging the possibilistic network encoding the agent's knowledge with the one encoding its preferences.
IEEE PROJECTS 2015
1 Crore Projects is a leading provider of guidance for IEEE projects and real-time project work.
It has provided guidance to thousands of students and helped them benefit across all of its technology training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on data mining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on data mining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGIES USED AND TRAINED IN
1. DOT NET
2. C#
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB.NET
11. EMBEDDED
12. MATLAB
13. LABVIEW
14. MULTISIM
CONTACT US
1 CRORE PROJECTS
Door No: 214/215, 2nd Floor,
No. 172, Raahat Plaza (Shopping Mall), Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email: 1croreprojects@gmail.com
Website: 1croreprojects.com
Phone: +91 97518 00789 / +91 72999 51536
Sequence Similarity between Genetic Codes using Improved Longest Common Subse... (rahulmonikasharma)
Finding the sequence similarity between two genetic codes is an important problem in computational biology. In this paper, we develop an efficient algorithm to find the sequence similarity between genetic codes using the longest common subsequence (LCS) algorithm. The algorithm improves on the edit-distance algorithm in performance. The proposed algorithm is tested on randomly generated DNA sequences for exact DNA sequence comparison. DNA genetic code sequence comparison can be used to discover information such as evolutionary divergence and ways to apply genetic codes from one DNA sequence to another.
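As an illustrative sketch (not the paper's improved algorithm), the LCS length between two sequences can be computed with the classic dynamic programming recurrence, and a normalized similarity score derived from it:

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence, via dynamic programming.

    Keeps only two rows of the DP table, so memory is O(len(b))."""
    m, n = len(a), len(b)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # matching characters extend the best subsequence by one
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]


def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] (a hypothetical scoring choice)."""
    return lcs_length(a, b) / max(len(a), len(b))


print(lcs_length("AGCAT", "GAC"))  # -> 2
```

Running LCS directly avoids the substitutions bookkeeping of edit distance, which is the performance angle the abstract alludes to.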
Soft Computing Techniques Based Image Classification using Support Vector Mac... (ijtsrd)
In this paper we compare different kernels developed for support-vector-machine-based time series classification. Despite the strong performance of the Support Vector Machine (SVM) on many concrete classification problems, the algorithm is not directly applicable to multi-dimensional trajectories of differing lengths. Training SVMs with indefinite kernels has recently attracted attention in the machine learning community, partly because many similarity functions that arise in practice are not symmetric positive semidefinite. In this paper, we extend the Gaussian RBF kernel to the Gaussian elastic metric kernel. The extension comes in two variants: time-warp distance and edit distance with real penalty. Experimental results on 17 time series datasets show that, in terms of classification accuracy, the SVM with the Gaussian elastic metric kernel is much superior to other kernels and to state-of-the-art similarity measure methods. We use the indefinite similarity function or distance directly, without any conversion, so both training and test examples are always treated consistently. Finally, the Gaussian elastic metric kernel achieves the highest accuracy among all methods that train SVMs with kernels, both positive semidefinite (PSD) and non-PSD, with statistically significant evidence, while also retaining sparsity of the support vector set. Tarun Jaiswal | Dr. S. Jaiswal | Dr. Ragini Shukla, "Soft Computing Techniques Based Image Classification using Support Vector Machine Performance", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23437.pdf
Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/23437/soft-computing-techniques-based-image-classification-using-support-vector-machine-performance/tarun-jaiswal
Critical Paths Identification on Fuzzy Network Project (iosrjce)
In this paper, a new approach for identifying the fuzzy critical path is presented, based on converting the fuzzy network project into a deterministic network project by transforming the parameter set of the fuzzy activities into the time probability density function (PDF) of each fuzzy time activity. A case study is considered as a numerical test problem to demonstrate our approach.
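For illustration only: once the fuzzy activity times have been reduced to deterministic durations, the critical path follows from the classical CPM forward and backward passes. A minimal sketch with hypothetical task data:

```python
def critical_path(tasks):
    """Classical (deterministic) CPM on a project DAG.

    tasks maps name -> (duration, list of predecessor names).
    Returns the makespan and the tasks with zero slack (the critical path)."""
    ef = {}  # earliest finish time per task

    def earliest(t):
        if t not in ef:
            d, preds = tasks[t]
            ef[t] = d + max((earliest(p) for p in preds), default=0)
        return ef[t]

    makespan = max(earliest(t) for t in tasks)

    # build successor lists for the backward pass
    succs = {t: [] for t in tasks}
    for t, (_, preds) in tasks.items():
        for p in preds:
            succs[p].append(t)

    lf = {}  # latest finish time per task

    def latest(t):
        if t not in lf:
            lf[t] = min((latest(s) - tasks[s][0] for s in succs[t]),
                        default=makespan)
        return lf[t]

    critical = [t for t in tasks if earliest(t) == latest(t)]
    return makespan, critical


# hypothetical project: A -> C -> D is the longest chain (3 + 4 + 1 = 8)
tasks = {"A": (3, []), "B": (2, []), "C": (4, ["A"]), "D": (1, ["B", "C"])}
makespan, critical = critical_path(tasks)
```

The paper's contribution is the fuzzy-to-deterministic transformation; this sketch only shows the deterministic step that follows it.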
A Novel Approach to Mathematical Concepts in Data Mining (ijdmtaiir)
This paper describes three fundamental mathematical programming approaches that are relevant to data mining: feature selection, clustering and robust representation. It covers two clustering algorithms, the k-means algorithm and the k-median algorithm. Clustering is illustrated by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for Knowledge Discovery in Databases (KDD). The results of the k-median algorithm are used to identify blood cancer patients in a medical database. K-means clustering is a data mining/machine learning algorithm used to cluster observations into groups of related observations without any prior knowledge of those relationships. The k-means algorithm is one of the simplest clustering techniques and is commonly used in medical imaging, biometrics and related fields.
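As a generic illustration of the k-means procedure described above (not the paper's implementation), Lloyd's algorithm alternates nearest-centroid assignment with centroid recomputation:

```python
import numpy as np


def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment
    and centroid recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each observation to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its cluster
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels


# two well-separated groups of observations (made-up data)
X = np.array([[0., 0.], [0., 1.], [1., 0.],
              [10., 10.], [10., 11.], [11., 10.]])
centers, labels = kmeans(X, 2)
```

K-median differs only in the update step (component-wise median instead of mean), which is what makes it more robust to outliers.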
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION (ijscai)
The generalization error of a classifier can be reduced by a larger margin of the separating hyperplane. The proposed classification algorithm introduces a margin into the classical perceptron algorithm, reducing generalization error by maximizing the margin of the separating hyperplane. The algorithm uses the same update rule as the perceptron and converges in a finite number of updates to solutions possessing any desirable fraction of the maximum margin. This solution is then further optimized to obtain the maximum possible margin. The algorithm can handle linear, non-linear and multi-class problems. Experimental results place the proposed classifier on par with the support vector machine, and even better in some cases. Some preliminary experimental results are briefly discussed.
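A minimal sketch of the idea (not the paper's exact algorithm): keep the perceptron update rule, but also update whenever a correctly classified point fails to clear a target margin gamma, so the final hyperplane separates with margin rather than merely classifying correctly:

```python
import numpy as np


def margin_perceptron(X, y, gamma=0.5, lr=1.0, epochs=100):
    """Perceptron variant: update not only on mistakes but whenever
    the functional margin y * (w.x + b) fails to exceed gamma."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= gamma:
                # same additive update rule as the classical perceptron
                w += lr * yi * xi
                b += lr * yi
                updated = True
        if not updated:  # every point now clears the margin
            break
    return w, b


# linearly separable toy data
X = np.array([[2., 0.], [3., 1.], [-2., 0.], [-3., -1.]])
y = np.array([1, 1, -1, -1])
w, b = margin_perceptron(X, y)
```

Non-linear and multi-class extensions would go through kernels and one-vs-rest schemes, respectively; those are omitted here.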
EFFICIENT KNOWLEDGE BASE MANAGEMENT IN DCSP (ijasuc)
DCSP (Distributed Constraint Satisfaction Problem) has been a very important research area in AI (Artificial Intelligence). Many application problems in distributed AI can be formalized as DCSPs. With the increasing complexity and problem size of these applications, the storage required for search and the average search time increase as well. Using limited storage efficiently when solving a DCSP has therefore become an important problem, and doing so can also help reduce search time. This paper provides an efficient knowledge base management approach based on the general usage of the hyper-resolution rule in a consistency algorithm. The approach minimizes the growth of the knowledge base by eliminating sufficient constraints and false nogoods. These eliminations do not change the completeness of the enlarged knowledge base; proofs are given as well. An example shows that this approach greatly decreases both the number of new nogoods generated and the size of the knowledge base, thereby reducing the required storage and simplifying the search process.
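A hypothetical sketch of the knowledge-base idea (the paper's elimination rules are more involved): a new nogood, represented here as a set of variable assignments, is discarded if an existing nogood subsumes it, and stored nogoods subsumed by a tighter new one are removed:

```python
def subsumed(new, store):
    """True if some stored nogood is a subset of the new one,
    i.e. the new nogood adds no pruning power."""
    return any(g <= new for g in store)


def add_nogood(store, new):
    """Insert a nogood while keeping the store free of subsumed entries."""
    if subsumed(new, store):
        return store
    # drop stored nogoods that the new, tighter nogood subsumes
    store = [g for g in store if not new <= g]
    store.append(new)
    return store


store = []
store = add_nogood(store, frozenset({("x", 1), ("y", 2)}))
store = add_nogood(store, frozenset({("x", 1), ("y", 2), ("z", 3)}))  # subsumed, ignored
store = add_nogood(store, frozenset({("x", 1)}))  # subsumes and replaces the first entry
```

Because a subset nogood forbids strictly more partial assignments, these deletions preserve completeness while shrinking the store, which is the effect the abstract describes.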
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING: FINDING ALL THE POTENTIAL MI... (IJDKP)
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, which is accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is σ, a hyper-parameter which can be manually defined and tuned to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an outstanding task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of size σ, there is only one minimum. This bound is useful not only for the number of solutions to look for by numerical means; it also allows us to propose a new "per block" numerical approach. This technique decreases the number of particles by approximating some groups of particles with weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics and other applications.
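As a 1-D illustration of the QC setup (the paper's analysis is two-dimensional): each point contributes a Gaussian to the Parzen wavefunction ψ(x) = Σᵢ exp(-(x - xᵢ)²/(2σ²)), and cluster centers appear as minima of the associated quantum potential, located here numerically on a grid:

```python
import numpy as np


def quantum_potential(grid, data, sigma):
    """Quantum potential (up to an additive constant) of the Parzen
    wavefunction psi(x) = sum_i exp(-(x - x_i)^2 / (2 sigma^2)).

    Follows from (-sigma^2/2 d^2/dx^2 + V) psi = E psi in 1-D."""
    d2 = (grid[:, None] - data[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (d2 * w).sum(axis=1) / (2 * sigma ** 2 * w.sum(axis=1))


def local_minima(v):
    """Indices of interior grid points that are strict local minima."""
    return [i for i in range(1, len(v) - 1)
            if v[i] < v[i - 1] and v[i] < v[i + 1]]


# two synthetic clusters of points near 0 and near 5
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 0.3, 30), rng.normal(5.0, 0.3, 30)])
grid = np.linspace(-2.0, 7.0, 901)
V = quantum_potential(grid, data, sigma=0.5)
mins = [grid[i] for i in local_minima(V)]  # approximate cluster centers
```

The grid scan stands in for the root-finding the paper studies analytically; the "one minimum per σ-sized square" bound tells a numerical solver how many such minima to expect per block.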
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL... (ijnlc)
The tremendous increase in the number of available research documents impels researchers to propose topic models to extract the latent semantic themes of a document collection. However, extracting the hidden topics of a document collection has become a crucial task for many topic model applications. Moreover, conventional topic modeling approaches suffer from a scalability problem as the size of the document collection increases. In this paper, the Correlated Topic Model (CTM) with a variational Expectation-Maximization algorithm is implemented in the MapReduce framework to solve the scalability problem. The proposed approach utilizes a dataset crawled from a public digital library. In addition, the full texts of the crawled documents are analysed to enhance the accuracy of MapReduce CTM. Experiments are conducted to demonstrate the performance of the proposed algorithm. From the evaluation, the proposed approach has comparable performance in terms of topic coherence with LDA implemented in the MapReduce framework.
Second or fourth-order finite difference operators, which one is most effective? (Premier Publishers)
This paper presents higher-order finite difference (FD) formulas for the spatial approximation of time-dependent reaction-diffusion problems, with a clear justification through examples of why the fourth-order FD formula is preferred to its second-order counterpart, which has been widely used in the literature. Methods for the solution of initial and boundary value PDEs, such as the method of lines (MOL), are of broad interest in science and engineering. This procedure begins by discretizing the spatial derivatives in the PDE with algebraic approximations. The key idea of MOL is to replace the spatial derivatives in the PDE with these algebraic approximations. Once this is done, the spatial derivatives are no longer stated explicitly in terms of the spatial independent variables: only one independent variable remains, and the resulting semi-discrete problem has become a system of coupled ordinary differential equations (ODEs) in time. Thus, we can apply any integration algorithm for initial value ODEs to compute an approximate numerical solution to the PDE. The basic properties of these schemes, such as order of accuracy, convergence, consistency, stability and symmetry, are examined.
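A quick numerical illustration of the paper's point, comparing the standard three-point O(h²) stencil and the five-point O(h⁴) stencil for u'' on u = sin(x):

```python
import numpy as np


def d2_second_order(u, h):
    """Three-point central stencil: O(h^2) approximation of u''."""
    return (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2


def d2_fourth_order(u, h):
    """Five-point central stencil: O(h^4) approximation of u''."""
    return (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
            + 16 * u[3:-1] - u[4:]) / (12 * h**2)


x = np.linspace(0, np.pi, 201)
h = x[1] - x[0]
u = np.sin(x)
exact = -np.sin(x)  # since (sin x)'' = -sin x

err2 = np.abs(d2_second_order(u, h) - exact[1:-1]).max()
err4 = np.abs(d2_fourth_order(u, h) - exact[2:-2]).max()
print(err2, err4)  # the fourth-order error is far smaller on the same grid
```

In a MOL code, either stencil turns the PDE into the ODE system du/dt = D₂u + f(u); the fourth-order operator buys much more accuracy per grid point, which is the trade-off the paper quantifies.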
Non-life claims reserves using Dirichlet random environment (IJERA Editor)
The purpose of this paper is to propose a stochastic extension of the Chain-Ladder model in a Dirichlet random environment to calculate the provisions for disaster payments. We study Dirichlet processes centered around the distribution of continuous-time stochastic processes such as a Brownian motion or a continuous-time Markov chain. We then consider the problem of parameter estimation for a Markov-switched geometric Brownian motion (GBM) model. We assume that the prior distribution of the unobserved Markov chain driving the drift and volatility parameters of the GBM is a Dirichlet process. We propose an estimation method based on Gibbs sampling.
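For illustration only (the paper's estimation step, Gibbs sampling under a Dirichlet-process prior, is beyond a short sketch): the forward model is a GBM whose drift and volatility are selected by a hidden Markov chain, which can be simulated as follows with made-up parameters:

```python
import numpy as np


def switching_gbm(mu, sigma, P, s0, n, dt=1 / 252, seed=0):
    """Simulate a geometric Brownian motion whose drift and volatility
    are chosen by a hidden Markov chain with transition matrix P."""
    rng = np.random.default_rng(seed)
    state = 0
    s = [s0]
    for _ in range(n):
        state = rng.choice(len(mu), p=P[state])  # regime switch
        z = rng.standard_normal()
        # exact GBM step under the current regime's parameters
        s.append(s[-1] * np.exp((mu[state] - 0.5 * sigma[state] ** 2) * dt
                                + sigma[state] * np.sqrt(dt) * z))
    return np.array(s)


# hypothetical two-regime parameters: calm vs. turbulent
mu = [0.05, -0.02]
sigma = [0.10, 0.40]
P = [[0.95, 0.05], [0.10, 0.90]]
path = switching_gbm(mu, sigma, P, s0=100.0, n=252)
```

A Gibbs sampler for this model would alternate between sampling the hidden regime path given (mu, sigma) and sampling (mu, sigma) given the path.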
A SECURE DIGITAL SIGNATURE SCHEME WITH FAULT TOLERANCE BASED ON THE IMPROVED ... (csandit)
Fault tolerance and data security are two important issues in modern communication systems. In this paper, we propose a secure and efficient digital signature scheme with fault tolerance based on an improved RSA system. The proposed scheme for the RSA cryptosystem uses three prime numbers and overcomes several attacks possible on RSA. By using the Chinese Remainder Theorem (CRT), the proposed scheme achieves a speed improvement on the RSA decryption side while also providing high security.
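A toy sketch of the CRT decryption speedup with three primes (illustrative only; real keys use large primes and proper padding, and the paper's full signature scheme adds fault tolerance on top): each exponentiation is done modulo one small prime factor, then the results are recombined with the CRT:

```python
from math import prod


def crt_decrypt(c, d, primes):
    """Decrypt RSA ciphertext c via the CRT over the prime factors of n.

    Each exponentiation works modulo a single prime with a reduced
    exponent (Fermat's little theorem), which is the source of the
    speedup over a single pow(c, d, n)."""
    n = prod(primes)
    residues = [pow(c % p, d % (p - 1), p) for p in primes]
    # recombine the per-prime results with the Chinese Remainder Theorem
    m = 0
    for p, r in zip(primes, residues):
        np_ = n // p
        m = (m + r * np_ * pow(np_, -1, p)) % n
    return m


# toy three-prime key (hypothetical, far too small for real use)
p, q, r = 61, 53, 71
n = p * q * r
phi = (p - 1) * (q - 1) * (r - 1)
e = 17
d = pow(e, -1, phi)

msg = 42
c = pow(msg, e, n)
assert crt_decrypt(c, d, [p, q, r]) == msg == pow(c, d, n)
```

With k-bit modular exponentiation costing roughly O(k³), splitting the work across three factors of a third the size is where the decryption-side speedup comes from.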
Inventory Model with Price-Dependent Demand Rate and No Shortages: An Interva... (orajjournal)
In this paper, an interval-valued inventory optimization model is proposed. The model involves price-dependent demand and no shortages. The input data for this model are not fixed but vary in real bounded intervals. The aim is to determine the optimal order quantity, maximizing the total profit and minimizing the holding cost subject to three constraints: a budget constraint, a space constraint, and a budgetary constraint on the ordering cost of each item. We apply a linear fractional programming approach based on interval numbers. To apply this approach, a linear fractional programming problem is modeled with interval-type uncertainty. This problem is then converted to an optimization problem with an interval-valued objective function whose bounds are linear fractional functions. Two numerical examples, in the crisp case and the interval-valued case, are solved to illustrate the proposed approach.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IMPROVING SCHEDULING OF DATA TRANSMISSION IN TDMA SYSTEMS (csandit)
In an era where communication plays a central role in modern societies, designing efficient algorithms for data transmission is of the utmost importance. TDMA is a technology used in many communication systems such as satellites and cell phones. To transmit data in such systems, we need to cluster it into packets. To achieve faster transmission, we are allowed to preempt the transmission of any packet and resume it at a later time. Such preemptions, however, incur a delay needed to set up for the next transmission. In this paper we propose an algorithm, which we call MGA, that yields improved transmission scheduling. We have proven an approximation ratio for MGA and run experiments establishing that it works even better in practice. To conclude that MGA is a very helpful tool for constructing an improved schedule for packet routing using preemption with a setup cost, we compare its results to those of two other efficient algorithms designed by researchers in the past.
Dimensionality Reduction Techniques for Document Clustering - A Survey (IJTET Journal)
Abstract— Dimensionality reduction techniques are applied to get rid of inessential terms, such as redundant and noisy terms, in documents. In this paper a systematic study is conducted of seven dimensionality reduction methods: Latent Semantic Indexing (LSI), Random Projection (RP), Principal Component Analysis (PCA), CUR decomposition, Latent Dirichlet Allocation (LDA), Singular Value Decomposition (SVD) and Linear Discriminant Analysis (LDA).
Accelerating materials property predictions using machine learning (Ghanshyam Pilania)
The materials discovery process can be significantly expedited and simplified if we can learn effectively from available knowledge and data. In the present contribution, we show that efficient and accurate prediction of a diverse set of properties of material systems is possible by employing machine (or statistical) learning
methods trained on quantum mechanical computations in combination with the notions of chemical similarity. Using a family of one-dimensional chain systems, we present a general formalism that allows us to discover decision rules that establish a mapping between easily accessible attributes of a system and its properties. It is shown that fingerprints based on either chemo-structural (compositional and configurational information) or the electronic charge density distribution can be used to make ultra-fast, yet accurate, property predictions. Harnessing such learning paradigms extends recent efforts to systematically explore and mine vast chemical spaces, and can significantly accelerate the discovery of new application-specific materials.
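A generic, hypothetical sketch of the fingerprint-to-property learning idea (not the paper's actual features, systems or model): kernel ridge regression with a Gaussian similarity kernel, trained on made-up fingerprints against a stand-in "property":

```python
import numpy as np


def krr_fit(X, y, gamma=20.0, lam=1e-10):
    """Kernel ridge regression with a Gaussian kernel over fingerprints:
    solve (K + lam*I) alpha = y, where K_ij measures chemical similarity."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)


def krr_predict(Xtrain, alpha, Xnew, gamma=20.0):
    """Predict a property as a similarity-weighted sum over training systems."""
    K = np.exp(-gamma * ((Xnew[:, None, :] - Xtrain[None, :, :]) ** 2).sum(axis=2))
    return K @ alpha


# hypothetical 3-component fingerprints and a synthetic target property
rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
alpha = krr_fit(X, y)
fit_err = np.abs(krr_predict(X, alpha, X) - y).max()
```

The appeal of such surrogate models is speed: once alpha is fitted against a set of quantum mechanical reference calculations, each new prediction is a cheap kernel evaluation rather than a new electronic-structure computation.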
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION ijscai
Generalization error of classifier can be reduced by larger margin of separating hyperplane. The proposed classification algorithm implements margin in classical perceptron algorithm, to reduce generalized errors by maximizing margin of separating hyperplane. Algorithm uses the same updation rule with the perceptron, to converge in a finite number of updates to solutions, possessing any desirable fraction of the margin. This solution is again optimized to get maximum possible margin. The algorithm can process linear, non-linear and multi class problems. Experimental results place the proposed classifier equivalent to the support vector machine and even better in some cases. Some preliminary experimental results are briefly discussed.
EFFICIENT KNOWLEDGE BASE MANAGEMENT IN DCSP ijasuc
DCSP (Distributed Constraint Satisfaction Problem) has been a very important research area in AI
(Artificial Intelligence). There are many application problems in distributed AI that can be formalized as
DSCPs. With the increasing complexity and problem size of the application problems in AI, the required
storage place in searching and the average searching time are increasing too. Thus, to use a limited
storage place efficiently in solving DCSP becomes a very important problem, and it can help to reduce
searching time as well. This paper provides an efficient knowledge base management approach based on
general usage of hyper-resolution-rule in consistence algorithm. The approach minimizes the increasing of
the knowledge base by eliminate sufficient constraint and false nogood. These eliminations do not change
the completeness of the original knowledge base increased. The proofs are given as well. The example
shows that this approach decrease both the new nogoods generated and the knowledge base greatly. Thus
it decreases the required storage place and simplify the searching process.
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...IJDKP
Quantum clustering (QC), is a data clustering algorithm based on quantum mechanics which is
accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a
σ value, a hyper-parameter which can be manually defined and manipulated to suit the application.
Numerical methods are used to find all the minima of the quantum potential as they correspond to cluster
centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the
exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an
outstanding task because normally such expressions are impossible to solve analytically. However, we
prove that if the points are all included in a square region of size σ, there is only one minimum. This bound
is not only useful in the number of solutions to look for, by numerical means, it allows to to propose a new
numerical approach “per block”. This technique decreases the number of particles by approximating some
groups of particles to weighted particles. These findings are not only useful to the quantum clustering
problem but also for the exponential polynomials encountered in quantum chemistry, Solid-state Physics
and other applications.
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...ijnlc
The tremendous increase in the amount of available research documents impels researchers to propose topic models to extract the latent semantic themes of a documents collection. However, how to extract the hidden topics of the documents collection has become a crucial task for many topic model applications. Moreover, conventional topic modeling approaches suffer from the scalability problem when the size of documents collection increases. In this paper, the Correlated Topic Model with variational ExpectationMaximization algorithm is implemented in MapReduce framework to solve the scalability problem. The proposed approach utilizes the dataset crawled from the public digital library. In addition, the full-texts of the crawled documents are analysed to enhance the accuracy of MapReduce CTM. The experiments are conducted to demonstrate the performance of the proposed algorithm. From the evaluation, the proposed approach has a comparable performance in terms of topic coherences with LDA implemented in MapReduce framework.
Second or fourth-order finite difference operators, which one is most effective?Premier Publishers
This paper presents higher-order finite difference (FD) formulas for the spatial approximation of the time-dependent reaction-diffusion problems with a clear justification through examples, “why fourth-order FD formula is preferred to its second-order counterpart” that has been widely used in literature. As a consequence, methods for the solution of initial and boundary value PDEs, such as the method of lines (MOL), is of broad interest in science and engineering. This procedure begins with discretizing the spatial derivatives in the PDE with algebraic approximations. The key idea of MOL is to replace the spatial derivatives in the PDE with the algebraic approximations. Once this procedure is done, the spatial derivatives are no longer stated explicitly in terms of the spatial independent variables. In other words, only one independent variable is remaining, the resulting semi-discrete problem has now become a system of coupled ordinary differential equations (ODEs) in time. Thus, we can apply any integration algorithm for the initial value ODEs to compute an approximate numerical solution to the PDE. Analysis of the basic properties of these schemes such as the order of accuracy, convergence, consistency, stability and symmetry are well examined.
Non-life claims reserves using a Dirichlet random environment (IJERA Editor)
The purpose of this paper is to propose a stochastic extension of the Chain-Ladder model in a Dirichlet random environment to calculate the provisions for disaster payments. We study Dirichlet processes centered around the distribution of continuous-time stochastic processes such as a Brownian motion or a continuous-time Markov chain. We then consider the problem of parameter estimation for a Markov-switched geometric Brownian motion (GBM) model. We assume that the prior distribution of the unobserved Markov chain driving the drift and volatility parameters of the GBM is a Dirichlet process. We propose an estimation method based on Gibbs sampling.
A SECURE DIGITAL SIGNATURE SCHEME WITH FAULT TOLERANCE BASED ON THE IMPROVED ... (csandit)
Fault tolerance and data security are two important issues in modern communication systems. In this paper, we propose a secure and efficient digital signature scheme with fault tolerance based on an improved RSA system. The proposed scheme uses an RSA cryptosystem with three prime numbers and overcomes several known attacks on RSA. By using the Chinese Remainder Theorem (CRT), the proposed scheme speeds up RSA decryption while also providing high security.
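The abstract does not spell out the authors' construction, so the following is only a sketch of the standard CRT speedup it builds on: with a three-prime modulus n = p·q·r, decryption is performed modulo each prime with a reduced exponent and the residues are recombined. All parameters below are toy values; real keys use large primes.

```python
from math import prod

def crt(residues, moduli):
    # Chinese Remainder Theorem: combine per-prime residues into one value mod N
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)  # pow(Ni, -1, m): modular inverse (Python 3.8+)
    return x % N

def rsa_decrypt_crt(c, d, primes):
    # One small exponentiation per prime instead of one big exponentiation mod n
    residues = [pow(c % p, d % (p - 1), p) for p in primes]
    return crt(residues, primes)

# Toy three-prime example
p, q, r = 5, 11, 17
n = p * q * r                        # 935
phi = (p - 1) * (q - 1) * (r - 1)    # 640
e = 3
d = pow(e, -1, phi)                  # private exponent
m = 42
c = pow(m, e, n)                     # encrypt
assert rsa_decrypt_crt(c, d, (p, q, r)) == m
```

The speedup comes from the exponents d mod (p-1) and the moduli being much smaller than d and n; modular exponentiation cost grows quickly with operand size, so working per prime is substantially cheaper.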
Inventory Model with Price-Dependent Demand Rate and No Shortages: An Interva... (orajjournal)
In this paper, an interval-valued inventory optimization model is proposed. The model involves price-dependent demand and no shortages. The input data for this model are not fixed but vary in real bounded intervals. The aim is to determine the optimal order quantity, maximizing the total profit and minimizing the holding cost subject to three constraints: a budget constraint, a space constraint, and a budgetary constraint on the ordering cost of each item. We apply a linear fractional programming approach based on interval numbers: a linear fractional programming problem is modeled with interval-type uncertainty and then converted to an optimization problem with an interval-valued objective function whose bounds are linear fractional functions. Two numerical examples, a crisp case and an interval-valued case, are solved to illustrate the proposed approach.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IMPROVING SCHEDULING OF DATA TRANSMISSION IN TDMA SYSTEMS (csandit)
In an era where communication has a central role in modern societies, designing efficient algorithms for data transmission is of the utmost importance. TDMA is a technology used in many communication systems such as satellites and cell phones. To transmit data in such systems, we need to cluster the data into packets. To achieve faster transmission, we are allowed to preempt the transmission of any packet and resume it at a later time. Such preemptions, however, come with a delay needed to set up the next transmission. In this paper we propose an algorithm, which we call MGA, that yields improved transmission scheduling. We have proven an approximation ratio for MGA and run experiments establishing that it works even better in practice. To conclude that MGA is a helpful tool for constructing an improved schedule for packet routing using preemption with a setup cost, we compare its results to two other efficient algorithms designed by researchers in the past.
Dimensionality Reduction Techniques for Document Clustering - A Survey (IJTET Journal)
Abstract— Dimensionality reduction techniques are applied to get rid of inessential terms, such as redundant and noisy terms, in documents. In this paper a systematic study is conducted of seven dimensionality reduction methods: Latent Semantic Indexing (LSI), Random Projection (RP), Principal Component Analysis (PCA), CUR decomposition, Latent Dirichlet Allocation (LDA), Singular Value Decomposition (SVD), and Linear Discriminant Analysis (LDA).
Accelerating materials property predictions using machine learning (Ghanshyam Pilania)
The materials discovery process can be significantly expedited and simplified if we can learn effectively from available knowledge and data. In the present contribution, we show that efficient and accurate prediction of a diverse set of properties of material systems is possible by employing machine (or statistical) learning
methods trained on quantum mechanical computations in combination with the notions of chemical similarity. Using a family of one-dimensional chain systems, we present a general formalism that allows us to discover decision rules that establish a mapping between easily accessible attributes of a system and its properties. It is shown that fingerprints based on either chemo-structural (compositional and configurational information) or the electronic charge density distribution can be used to make ultra-fast, yet accurate, property predictions. Harnessing such learning paradigms extends recent efforts to systematically explore and mine vast chemical spaces, and can significantly accelerate the discovery of new application-specific materials.
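The learning setup described above (map a fingerprint vector of easily accessible attributes to a property value) can be sketched with a deliberately tiny nearest-neighbour model. The fingerprints and property values below are invented for illustration; the paper itself trains on quantum mechanical computations.

```python
def knn_predict(X, y, query, k=3):
    # Predict a property as the mean over the k nearest fingerprints,
    # using squared Euclidean distance in fingerprint space.
    neighbours = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, query)), target)
        for row, target in zip(X, y)
    )
    return sum(t for _, t in neighbours[:k]) / k

# Invented chemo-structural fingerprints (2-D) and property values
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2], [0.5, 0.5]]
y = [1.0, 1.1, 3.0, 2.9, 2.0]
print(knn_predict(X, y, [0.15, 0.85], k=2))  # averages the two closest targets
```

The "chemical similarity" notion in the abstract corresponds here to the distance metric: fingerprints that are close are assumed to have similar properties, which is what makes such ultra-fast surrogate predictions possible once a training set exists.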
Urban strategies to promote resilient cities: the case of enhancing Historic C... (inventionjournals)
This research tackles disaster-prevention problems in dense urban areas, concentrating on the urban fire challenge in the Historic Cairo district, Egypt, through a disaster risk management approach. The study area has suffered several urban fire outbreaks that disfigured historic monuments and destroyed unregulated traditional markets. The study therefore investigates the significance of hazard management and how urban strategies can improve city resilience by reducing the impact of natural and man-made threats. The main findings are the determination of the vulnerability factors in the Historic Cairo district, regarding both management deficiencies and issues related to the existing urban form. The absence of the mitigation and preparedness phases is found to be the main problem in the risk management cycle in the case study. Additionally, the coping initiatives adopted by local authorities to address risks are ad hoc and insufficient. The study concludes with recommendations for incorporating the hazard management stages (pre-disaster, during-disaster and post-disaster) into the development planning process. Finally, solutions are offered to mitigate, prepare for, respond to and recover from fire disasters in the case study, including urban policies, land-use planning, urban design outlines, safety regulation, and public awareness and training.
Adapted Branch-and-Bound Algorithm Using SVM With Model Selection (IJECEIAES)
The Branch-and-Bound algorithm is the basis for the majority of solution methods in mixed integer linear programming, and it has proven its efficiency in many fields. It gradually builds a tree of nodes by adopting two strategies: a variable selection strategy and a node selection strategy. In our previous work, we experimented with a methodology for learning branch-and-bound strategies using regression-based support vector machines twice over. That methodology, firstly, exploits information from previous executions of the Branch-and-Bound algorithm on other instances; secondly, it creates an information channel between the node selection strategy and the variable branching strategy; and thirdly, it gave good results in terms of running time compared to the standard Branch-and-Bound algorithm. In this work, we focus on increasing SVM performance by using cross-validation coupled with model selection.
SOLVING OPTIMAL COMPONENTS ASSIGNMENT PROBLEM FOR A MULTISTATE NETWORK USING ... (ijmnct)
The optimal components assignment problem subject to system reliability, total lead-time, and total cost constraints is studied in this paper. The problem is formulated as a fuzzy linear problem using fuzzy membership functions, and an approach based on a genetic algorithm with fuzzy optimization is proposed to solve it. The optimal solution found by the proposed approach is characterized by maximum reliability, minimum total cost and minimum total lead-time. The approach is tested on examples taken from the literature to illustrate its efficiency in comparison with previous methods.
A NEW STUDY OF TRAPEZOIDAL, SIMPSON'S 1/3 AND SIMPSON'S 3/8 RULES OF NUMERICAL... (mathsjournal)
The main goal of this research is to give a complete conception of numerical integration, including the Newton-Cotes formulas, and to compare the accuracy of the Trapezoidal, Simpson's 1/3, and Simpson's 3/8 rules. To verify the accuracy, we compare the rules by demonstrating which yields the smallest error values. The software package MATLAB R2013a is used to determine the best method, and the results are compared graphically. It is then emphasized that, among the methods considered, Simpson's 1/3 is the most effective and accurate for solving a definite integral, with the condition that the number of subdivisions must be even.
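The three composite rules compared in the abstract can be sketched directly (in Python rather than the paper's MATLAB; the test integral, the integral of sin x from 0 to pi, which equals 2, is an illustrative choice):

```python
import math

def trapezoidal(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson13(f, a, b, n):
    # Composite Simpson's 1/3 rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def simpson38(f, a, b, n):
    # Composite Simpson's 3/8 rule; n must be a multiple of 3
    h = (b - a) / n
    s = f(a) + f(b)
    s += 3 * sum(f(a + i * h) for i in range(1, n) if i % 3)
    s += 2 * sum(f(a + i * h) for i in range(3, n, 3))
    return s * 3 * h / 8

exact = 2.0  # integral of sin over [0, pi]
for n in (6, 12, 24):
    print(n,
          abs(trapezoidal(math.sin, 0, math.pi, n) - exact),
          abs(simpson13(math.sin, 0, math.pi, n) - exact),
          abs(simpson38(math.sin, 0, math.pi, n) - exact))
```

Both Simpson rules are fourth-order accurate versus second order for the trapezoidal rule, so their errors shrink much faster as n grows, which matches the abstract's conclusion favouring Simpson's 1/3 when the subdivision count is even.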
The International Journal of Engineering and Science (The IJES)theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Efficient approximate analytical methods for nonlinear fuzzy boundary value ... (IJECEIAES)
This paper aims to solve the nonlinear two-point fuzzy boundary value problem (TPFBVP) using approximate analytical methods. Most fuzzy boundary value problems cannot be solved exactly or analytically, and even when analytical solutions exist they may be challenging to evaluate, so approximate analytical methods may be necessary. Hence, there is a need to formulate new, efficient, more accurate techniques. This is the focus of this study: two approximate analytical methods, the homotopy perturbation method (HPM) and the variational iteration method (VIM), are proposed. Fuzzy set theory properties are used to extend these methods from the crisp domain to the fuzzy domain to find approximate solutions of the nonlinear TPFBVP. The presented algorithms can express the solution as a convergent series. A numerical comparison of the mean errors is made between HPM and VIM. The results show that both methods are reliable and robust; however, the comparison reveals that VIM converges more quickly and therefore offers the more efficient approach for nonlinear TPFBVPs.
A stochastic algorithm for solving the posterior inference problem in topic m... (TELKOMNIKA JOURNAL)
Latent Dirichlet allocation (LDA) is an important probabilistic generative model widely used in domains such as text mining, information retrieval, and natural language processing. Posterior inference largely decides the quality of an LDA model, but it is usually non-deterministic polynomial (NP)-hard and often intractable, especially in the worst case. For individual texts, proposed methods such as variational Bayesian (VB), collapsed variational Bayesian (CVB), collapsed Gibbs sampling (CGS), and online maximum a posteriori estimation (OPE) avoid solving this problem directly, but apart from the variants of OPE they usually come with no guarantee on convergence rate or on the quality of the learned models. Building on OPE and combining it with the Bernoulli distribution, we design an algorithm, general online maximum a posteriori estimation using two stochastic bounds (GOPE2), for solving the posterior inference problem in the LDA model, which is an NP-hard non-convex optimization problem. Through theoretical analysis and experimental results on large datasets, we find that GOPE2 yields an efficient method for learning topic models from big text collections, especially massive/streaming texts, and is more efficient than previous methods.
A New Approach to Linear Estimation Problem in Multiuser Massive MIMO Systems (Radita Apriana)
A novel approach for solving the linear estimation problem in multi-user massive MIMO systems is proposed. In this approach, the difficulty of matrix inversion is attributed to the incomplete definition of the dot product: the general definition of the dot product implies that the columns of the channel matrix are always orthogonal whereas, in practice, they may not be. If the latter information can be incorporated into the dot product, then the unknowns can be computed directly from projections without inverting the channel matrix. By doing so, the proposed method achieves an exact solution with a 25% reduction in computational complexity compared to the QR method. The proposed method is stable, offers the extra flexibility of computing any single unknown, and can be implemented in just twelve lines of code.
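The authors' twelve-line method is not reproduced in this abstract, but the QR baseline it benchmarks against can be sketched in pure Python: orthonormalize the channel-matrix columns with modified Gram-Schmidt, then back-substitute, so the least-squares solution is obtained from projections onto the orthonormalized columns. The matrix below is a toy stand-in for a channel matrix.

```python
def qr_solve(A, b):
    # Least-squares solve of A x = b via modified Gram-Schmidt QR.
    # A is a list of rows (m x n, m >= n, full column rank).
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(j):
            # Project out the components along already-orthonormalized columns
            R[k][j] = sum(Q[k][i] * v[i] for i in range(m))
            v = [v[i] - R[k][j] * Q[k][i] for i in range(m)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        Q.append([x / R[j][j] for x in v])
    # Back substitution on R x = Q^T b
    qtb = [sum(Q[k][i] * b[i] for i in range(m)) for k in range(n)]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (qtb[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

x = qr_solve([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 2.0, 3.0])
print(x)  # least-squares solution, approximately [1.0, 2.0]
```

Note how the only place column non-orthogonality enters is the projection step that fills R; the abstract's idea, as stated, is to fold that information into the dot product itself so the back-substitution stage (and the explicit inversion it replaces) can be avoided.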
This work considers the multi-objective optimization problem constrained by a system of bipolar fuzzy relational equations with max-product composition. An integer optimization based technique for order of preference by similarity to the ideal solution is proposed for solving such a problem. Some critical features associated with the feasible domain and optimal solutions of the bipolar max-Tp equation constrained optimization problem are studied. An illustrative example verifying the idea of this paper is included. This is the first attempt to study the bipolar max-T equation constrained multi-objective optimization problems from an integer programming viewpoint.
Applied Mathematics and Sciences: An International Journal (MathSJ) (mathsjournal)
Student information management system project report ii.pdf (Kamal Acharya)
Our project is about student management. It covers the various actions related to student details, making it easy to add, edit and delete student records, and it provides a less time-consuming process for viewing, adding, editing and deleting the students' marks.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist's survey of the valley before construction, through all of the disciplines involved (fluid dynamics, structural engineering, generation and mains frequency regulation), to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx (R&R Consult)
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Algorithms for the Minmax Regret Path Problem with Interval Data (2018)
Accepted Manuscript
Algorithms for the Minmax Regret Path Problem with Interval Data
Francisco Pérez-Galarce, Alfredo Candia-Véjar, César Astudillo,
Matthew Bardeen
PII: S0020-0255(18)30456-0
DOI: 10.1016/j.ins.2018.06.016
Reference: INS 13709
To appear in: Information Sciences
Received date: 12 September 2017
Revised date: 5 June 2018
Accepted date: 7 June 2018
Please cite this article as: Francisco Pérez-Galarce, Alfredo Candia-Véjar, César Astudillo, Matthew Bardeen, Algorithms for the Minmax Regret Path Problem with Interval Data, Information Sciences (2018), doi: 10.1016/j.ins.2018.06.016
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.
Algorithms for the Minmax Regret Path Problem with Interval Data
Francisco Pérez-Galarce¹, Alfredo Candia-Véjar²*, César Astudillo³, Matthew Bardeen³
¹ Computer Science Department, Pontificia Universidad Católica de Chile, Santiago, Chile
² Departamento de Ingeniería Industrial, Universidad de Talca, Camino Los Niches km. 1, Curicó, Chile
³ Departamento de Ciencias de la Computación, Universidad de Talca, Camino Los Niches km. 1, Curicó, Chile
Abstract
The Shortest Path problem in networks is an important problem in Combinatorial Optimization and has many applications in areas like Telecommunications and Transportation. This problem is easy to solve in its classic deterministic version, but it is known to be NP-Hard for several generalizations. The Shortest Path Problem consists of finding a simple path connecting a source node and a terminal node in an arc-weighted directed network. In some real-world situations the weights are not completely known, and the problem then becomes one of optimization under uncertainty. It is assumed that an interval estimate is given for each arc length and that no further information about the statistical distribution of the weights is known. Uncertainty has been modeled in different ways in Optimization. Our aim in this paper is to study the Minmax Regret Path with Interval Data problem by presenting a new exact branch and cut algorithm and, additionally, new heuristics. A set of difficult, large instances is defined, and computational experiments are conducted to analyze the different approaches designed to solve the problem. The main contribution of our paper is to provide an assessment of the performance of the proposed algorithms and empirical evidence of the superiority of a simulated annealing approach, based on a new neighborhood, over the other heuristics proposed.
Keywords: Minmax Regret Model with Interval Data; Simulated Annealing; Shortest Path Problem; Branch and Cut; Neighbourhoods for path problems
1 Introduction
We study a variant of the well-known Shortest Path (SP) problem called the Minmax Regret Path (MMR-P) Problem. In the classic SP problem, we are given a digraph G = (V, A), where V is the set of nodes and A is the set of arcs, with non-negative lengths associated to each arc, and two special nodes s and t belonging to V. The SP problem consists of finding a path between s and t (an s-t-path) with minimum total length. Efficient algorithms for the original SP problem have been known since [14], in which the authors proposed a polynomial time algorithm; since that study, multiple approaches have been proposed. Some SP variants, algorithms and applications are discussed in [2].
In this research the focus is on SP problems where there is uncertainty in the objective function parameters (the length function). In this SP variant, for each arc we have a closed interval that defines the possibilities for the arc length. The uncertainty model used here is the minmax regret approach (MMR), sometimes named robust deviation. In this approach the aim is to make decisions that will have a good objective value under any likely input data scenario included in the decision model. Three criteria are known for selecting among robust decisions: absolute, MMR and relative MMR [27]. We use MMR, where the regret associated with each combination of decision and input data scenario is defined as the difference between the resulting cost to the decision maker and the cost of the decision that would have been taken had it been known, prior to the time of the decision, which input data scenario would occur. In the context of Optimization with Uncertainty an important alternative model is the Fuzzy model, under which several papers have studied the SP problem; see [20, 36, 17].
The MMR Model has been increasingly studied in combinatorial optimization; see the books by [27] and [23], as well as the reviews by [4] and [8]. Most research on Minmax Regret Combinatorial Optimization (MMR-CO) has been focused on mono-objective problems; recently, a paper has proposed robust multiobjective CO problems [15] and, in the last years, several papers have extended
* Corresponding author.
E-mail addresses: fjperez10@uc.cl (Francisco Pérez-Galarce), acandia@utalca.cl (Alfredo Candia-Véjar), castudillo@utalca.cl (César Astudillo), mbardeen@utalca.cl (Matthew Bardeen)
the concepts of robustness to Multiobjective CO problems [45, 9]. Moreover, SP has been studied in the context of multi-objective uncertain problems [44].
It is known that MMR-CO problems with interval data are usually NP-hard, even when the underlying classic problem is easy to solve; this is the case for the minimum spanning tree problem, the SP problem, the assignment problem and others; see [4] and [23] for a detailed analysis. Several efforts have been made to obtain exact solutions using a broad set of exact methods, frequently formulating an MMR problem as a Mixed Integer Linear Programming (MILP) problem and then using a commercial code, or applying branch and bound, branch and cut or Benders decomposition approaches in a dedicated scheme. Some problems that have been studied are: MMR Spanning Trees [30, 42], MMR Paths [22, 23, 31, 32], MMR Assignment [39], MMR Set Covering [40], and MMR Traveling Salesman [34].
In particular, for MMR-P, [47] proved that the problem is NP-Hard even when the graph is restricted to be directed, acyclic, planar and regular of degree three, and [46] proved that the problem is NP-Hard even for a restricted class of Layered networks. Additional results about the complexity of MMR-P for some classes of networks are given in [23] and [5]. Exact algorithms for MMR-P have been proposed by [23, 31, 32], which show the application of several algorithmic approaches. However, most of these papers ran computational experiments on small instances or on instances with a special structure, like real road networks. In fact, [32] compared several exact algorithms and concluded that no algorithm clearly outperforms the others; moreover, they established some recommendations depending on the type of instances to be solved. [16] presented results on classes of MMR-P networks for which polynomial or pseudopolynomial approaches exist. The authors of [38] addressed the MMR-P on a finite multi-scenario model and proposed three new algorithmic approaches. Numerical experiments using randomly generated instances showed that some of the proposed algorithms were able to obtain solutions in reasonable times for network instances of up to 750 nodes. Very recently, [18] proposed a new procedure to obtain a lower bound on the optimal value of instances of MMR-P. This value is part of a branch and bound algorithm that outperforms existing exact algorithms in the literature when applied to some classes of MMR-P instances.
With respect to heuristic approaches, only a few methods are available. A basic heuristic based on the definition of a particular scenario (the midpoint of the intervals) was designed as an approximation algorithm for general MMR-CO problems [24, 23]. Another basic heuristic, HMU, solves an MMR-CO problem for two scenarios, the midpoint scenario and the scenario in which all the weights are set to their upper bounds, and returns the better of these two solutions; HMU achieves good performance for several MMR-CO problems [24, 23]. [21] proposed a heuristic for MMR-P, but only small instances were tested in comparison with other approaches. A new lower bound on the optimal value of MMR-CO problems was proposed in [10]. In particular, for MMR-P, [23] showed that for networks with fewer than 1,000 nodes, HMU obtained solutions with gaps under 6% (relative deviation from the reported optimum) for several classes of directed and undirected networks.
A problem related to MMR-P, the minmax relative regret robust shortest path problem (MMRR-P), was studied in [11]. The authors proposed a mixed integer linear programming formulation and also developed several heuristics, based on the pilot method and on random-key genetic algorithms, with emphasis on providing efficient and scalable methods for solving large MMR-P instances. The CPLEX branch-and-bound algorithm based on this formulation found optimal solutions for most of the small Layered and Grid instances with up to 200 nodes; however, gaps of 10% or higher were found for some instances. The Grid instances proposed in that paper were much harder to solve than the Layered instances found in the literature. Other heuristic approaches for MMR-CO problems are the Simulated Annealing approach for the MMR-Spanning Tree by [35], the heuristic based on a bounding process for MMR-Spanning Arborescences by [12], the metaheuristic approach for the MMR-Assignment problem [39], and the Tabu Search for the MMR-Spanning Tree by [25].
Our main contributions in this paper are: i) an efficient Branch and Cut algorithm that finds exact solutions for some classes of large instances and outperforms other exact algorithms on several of them; ii) a local search heuristic and a simulated annealing metaheuristic that use a novel neighborhood to find good solutions for large instances that the exact algorithms could not solve; and iii) an extensive experimental analysis over several classes of network instances, showing the performance of the different algorithms and highlighting the particular conditions under which each could be used.
In Section 2 the problem is formally defined and known results about its computational complexity are presented; in Section 3 a new Branch & Cut exact algorithm for MMR-P is introduced; in Section 4 various heuristics are analyzed, including well-known basic heuristics, and then local search and simulated annealing approaches based on a new neighborhood for the problem are presented; in Section 5 benchmark instances are presented and an implementation description is given. In Section 6 experiments are conducted with the exact approaches, determining the performance of the algorithms when applied to several types of instances. The computational results of the heuristics and their analysis for hard instances are presented in Section 7; finally, in Section 8 some conclusions are discussed.
2 Definition of MMR-P and Computational Complexity

First of all, in 2.1 basic notation and the formal definition of MMR-P are presented. Then, in 2.2 important known results about the computational complexity of the problem are presented.

2.1 Notation for MMR-P
We use a standard notation for MMR-CO problems; in particular, we follow the notation used in [39]. We consider a digraph G = (V, A), where V is the set of nodes, with |V| = n, and A is the set of arcs, with |A| = m. For each arc (i, j) ∈ A, two non-negative numbers c−_ij and c+_ij are given, with c−_ij ≤ c+_ij. The length of the arc can take any real value from its uncertainty interval [c−_ij, c+_ij], regardless of the values taken by the costs of the other arcs. The Cartesian product of the uncertainty intervals [c−_ij, c+_ij], (i, j) ∈ A, is denoted by S and any element s of S is called a scenario; S is the set of all possible realizations of the costs of the arcs. c^s_ij, (i, j) ∈ A, denotes the cost of arc (i, j) corresponding to scenario s.

Let Φ be the set of all s-t paths in G. For each X ∈ Φ and s ∈ S, let F(s, X) be the cost of the s-t path X in scenario s:

F(s, X) = Σ_{(i,j)∈X} c^s_ij    (CP)
The classical s-t SP problem for a fixed scenario s ∈ S is:

min {F(s, X) : X ∈ Φ}    (CSP)

Let F*(s) be the optimum objective value for problem (CSP). For any X ∈ Φ and s ∈ S, the value R(s, X) = F(s, X) − F*(s) is called the regret for X under scenario s. For any X ∈ Φ, the value Z(X) is called the maximum (or worst-case) regret for X:

Z(X) = max_{s∈S} R(s, X)    (MR-Path)

The MMR version of Problem (CSP) is:

min {Z(X) : X ∈ Φ} = min_{X∈Φ} max_{s∈S} R(s, X)    (MMR-Path)
Let Z* denote the optimum objective value for Problem MMR-P. A scenario attaining the maximum in Z(X) is called a worst-case scenario for X. For any X ∈ Φ, the scenario induced by X, s(X), is defined for each (i, j) ∈ A by

c^{s(X)}_ij = c+_ij if (i, j) ∈ X, and c^{s(X)}_ij = c−_ij otherwise.    (1)
Property 1: For each s-t path X in Φ it holds that

Z(X) = F(s(X), X) − F*(s(X))    (P1)

It is clear from the above definitions that the worst-case regret can be computed by solving just two classic SP problems.
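As a concrete illustration (our own sketch, not code from the paper), the computation behind Property P1 can be written with a standard Dijkstra routine: build the scenario induced by X, evaluate X under it, and solve one classic SP. The toy interval data below is a made-up assumption.

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest-path cost from src to dst; adj maps node -> list of (node, cost)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def worst_case_regret(arcs, path_arcs, s, t):
    """Property P1: Z(X) = F(s(X), X) - F*(s(X)).
    arcs maps (i, j) -> (c_minus, c_plus); path_arcs is the arc set of X."""
    # Scenario induced by X: upper costs on X, lower costs elsewhere.
    scenario = {e: (hi if e in path_arcs else lo) for e, (lo, hi) in arcs.items()}
    adj = {}
    for (i, j), c in scenario.items():
        adj.setdefault(i, []).append((j, c))
    cost_X = sum(scenario[e] for e in path_arcs)   # F(s(X), X)
    best = dijkstra(adj, s, t)                     # F*(s(X))
    return cost_X - best

# Toy digraph (hypothetical data): a direct s-t arc versus a two-arc route.
arcs = {("s", "a"): (1, 4), ("a", "t"): (1, 4), ("s", "t"): (3, 5)}
X = {("s", "t")}                                   # the direct arc as path X
print(worst_case_regret(arcs, X, "s", "t"))        # → 3.0
```

Under s(X) the direct arc costs 5 while the detour costs 1 + 1 = 2, so the regret of X is 3.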
2.2 Computational Complexity of MMR-P

Several works analyzing the computational complexity of MMR-P have shown that the problem is NP-hard even for several classes of special networks. In the following, two classes of directed graphs (digraphs) are defined. More details about these classes of digraphs and the computational complexity results can be found in [23].

Layered digraphs: In a layered digraph G = (V, A), the set V can be partitioned into disjoint subsets V1, V2, ..., Vk, called layers, and arcs exist only between nodes from Vi and Vi+1 for i = 1, ..., k − 1. The maximal value of |Vi| for i = 1, ..., k is called the width of G. In every layered digraph all paths between two specified nodes s and t have the same number of arcs.
Edge series-parallel multidigraphs: An edge series-parallel multidigraph (ESP) is recursively defined as follows. A digraph consisting of two nodes joined by a single arc is ESP. If G1 and G2 are ESP, so are the multidigraphs constructed by each of the following operations:

• Parallel composition p(G1, G2): identify the source of G1 with the source of G2 and the sink of G1 with the sink of G2.
• Series composition s(G1, G2): identify the sink of G1 with the source of G2.

In the following some computational complexity results are summarized:

- MMR-P is strongly NP-hard for acyclic directed layered graphs, even if the bounds of the weight intervals are 0 or 1.
- MMR-P is strongly NP-hard for undirected graphs, even if the bounds of the weight intervals are 0 or 1.
- MMR-P is NP-hard for edge series-parallel digraphs with maximal node degree at most 3.
- MMR-P is NP-hard for layered digraphs of width 3 and for layered multidigraphs of width 2.
- MMR-P for ESP admits an FPTAS, that is, an algorithm that for a given ESP computes a path P such that Z_G(P) ≤ (1 + ε)·OPT in time O(|A|³/ε²).
The above results show that MMR-P remains a very difficult problem even for some special classes of graphs. From the algorithmic point of view this represents a challenge when the objective is to develop efficient algorithms for its resolution.
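To make the combinatorial structure tangible, a naive exact solver can enumerate every s-t path and minimize the worst-case regret via the induced scenario. This brute-force sketch is ours (not an algorithm from the paper), is exponential in general, and uses made-up interval data on a tiny digraph.

```python
def all_st_paths(arcs, s, t, visited=None):
    """Yield every simple s-t path as a tuple of arcs (exponential in general)."""
    visited = visited or {s}
    if s == t:
        yield ()
        return
    for (i, j) in arcs:
        if i == s and j not in visited:
            for rest in all_st_paths(arcs, j, t, visited | {j}):
                yield ((i, j),) + rest

def path_cost(path, costs):
    return sum(costs[e] for e in path)

def minmax_regret_path(arcs, s, t):
    """Brute-force MMR-P: minimize Z(X) over all paths X, using Property P1."""
    paths = list(all_st_paths(arcs, s, t))
    best_path, best_regret = None, float("inf")
    for X in paths:
        X_set = set(X)
        # Worst-case scenario s(X): upper costs on X, lower costs elsewhere.
        sc = {e: (hi if e in X_set else lo) for e, (lo, hi) in arcs.items()}
        regret = path_cost(X, sc) - min(path_cost(Y, sc) for Y in paths)
        if regret < best_regret:
            best_path, best_regret = X, regret
    return best_path, best_regret

# Hypothetical interval data on a 4-node digraph with two s-t routes.
arcs = {("s", "a"): (1, 3), ("s", "b"): (2, 2),
        ("a", "t"): (1, 3), ("b", "t"): (2, 2)}
path, z = minmax_regret_path(arcs, "s", "t")
print(path, z)
```

Here both routes attain a worst-case regret of 2, which is the optimal value Z* for this toy instance; the enumeration makes clear why such an approach cannot scale.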
3 Exact Algorithms for the MMR-P Problem

In this section the proposed branch and cut (B&C) algorithm and a known MILP formulation for MMR-P are presented.

3.1 A MILP Formulation for the MMR-P Problem

We consider a digraph G = (V, A) with two distinguished nodes s and t where, according to the previous section, each arc (i, j) ∈ A has an associated interval length [c−_ij, c+_ij]. We use Kasperski's MILP formulation of the MMR-P problem [23]; this formulation is obtained using duality properties. The problem MMR-P is formulated from the general formulation defined in the previous section by introducing Property P1 and the particular definitions of (CSP) and (CP) for SP. In this formulation each arc (i, j) ∈ A has an associated binary variable xij expressing whether the arc (i, j) is part of the solution X ∈ Φ. The constraints yij ∈ {0, 1} have been replaced by yij ≥ 0 because the matrix associated with the typical constraints of s-t paths is totally unimodular, and yij ≤ 1 in every optimal solution of the relaxed formulation below.
min Σ_{(i,j)∈A} (c+_ij·xij + c−_ij·(1 − xij))·yij    (2)

s.t. Σ_{i:(j,i)∈A} yji − Σ_{k:(k,j)∈A} ykj = 1 if j = s; 0 if j ∈ V \ {s, t}; −1 if j = t    (3)

yij ≥ 0, ∀(i, j) ∈ A    (4)
The dual of this problem (2)-(4) is presented in (5)-(6):

max λs − λt    (5)

λi ≤ λj + c+_ij·xij + c−_ij·(1 − xij), (i, j) ∈ A    (6)
Then we can use these results to tackle the MMR-P problem with the integer programming formulation shown in (7)-(10). This formulation can be solved numerically by software such as CPLEX.

min Σ_{(i,j)∈A} c+_ij·xij − λs + λt    (7)

s.t. λi ≤ λj + c+_ij·xij + c−_ij·(1 − xij), (i, j) ∈ A    (8)

Σ_{i:(j,i)∈A} xji − Σ_{k:(k,j)∈A} xkj = 1 if j = s; 0 if j ∈ V \ {s, t}; −1 if j = t    (9)

xij ∈ {0, 1}, ∀(i, j) ∈ A    (10)
It is worth noting that we use this formulation to evaluate the performance of both the B&C algorithm described next and the heuristics proposed in Section 4.
3.2 Branch and Cut Approach

We implemented a B&C algorithm within the CPLEX framework using the formulation presented in equations (11), (12) and (13), where the constraints are separated into robust constraints in Equation (12) and topology constraints in Equation (13). This formulation has an exponential number of robust constraints (one per s-t path in Φ) and is based on [42].

The topology constraints follow the flow formulation of the shortest path problem (3); they are represented for X ∈ Φ in Equation (13) and are added at the beginning of the algorithm. The robust constraints are the cuts in our B&C; they are added whenever a new feasible solution is found at a node of the branching process.
Z*_MMR = min Σ_{e∈E(X)} c+_e − θ    (11)

s.t. θ ≤ Σ_{e∈E(Y)} c−_e + Σ_{e∈E(Y)∩E(X)} (c+_e − c−_e), ∀Y ∈ Φ    (12)

θ ∈ ℝ≥0 and X ∈ Φ.    (13)
Additionally, if a fractional solution x̃ is found, we derive a valid cut by rounding this fractional solution to a feasible one; to do so, we find a near-integer vector X̃ by solving the SP on G with edge costs defined by Equation (14). Using the obtained vector X̃, an induced solution Ỹ is calculated, and the corresponding cut is added to the model if it is violated.

c̃_e = (c−_e + c+_e)·min{1 − x̃_ij, 1 − x̃_ji}, ∀e = {i, j} ∈ E    (14)

Moreover, using X̃ (feasible or not), we apply a local search in order to find further violated robust constraints and add them to the model. We have also embedded into the B&C a primal heuristic which attempts to provide better upper bounds using the information of the fractional solution x̃; a feasible vector X̃ is calculated by solving the SP on G with edge costs defined by (14).
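Separating the robust constraints (12) amounts to evaluating, for an incumbent pair (X, θ), the right-hand side induced by a candidate path Y and checking whether θ exceeds it. A minimal sketch follows; the function name and interval data are our own assumptions, not the paper's implementation.

```python
def robust_cut_rhs(Y, X, intervals):
    """Right-hand side of cut (12) for path Y against incumbent arc set X:
    sum of c- over Y plus (c+ - c-) on the arcs Y shares with X."""
    rhs = sum(intervals[e][0] for e in Y)
    rhs += sum(intervals[e][1] - intervals[e][0] for e in Y if e in X)
    return rhs

# Hypothetical data: X is the incumbent arc set, Y a candidate s-t path.
intervals = {("s", "a"): (1, 4), ("a", "t"): (1, 4), ("s", "t"): (3, 5)}
X = {("s", "t")}
Y = [("s", "a"), ("a", "t")]
theta = 3.0
rhs = robust_cut_rhs(Y, X, intervals)
if theta > rhs:                      # cut (12) is violated by (X, theta)
    print("add cut: theta <=", rhs)
```

Here Y shares no arc with X, so the right-hand side is 1 + 1 = 2 and the cut θ ≤ 2 is violated by θ = 3 and would be added to the model.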
4 Heuristics for MMR-P

In this section we present the proposed heuristic approaches for solving MMR-P: (i) two simple, known heuristics based on the definition of specific scenarios, (ii) a Simulated Annealing and a Local Search approach based on a novel definition of a neighborhood of feasible s-t paths, and (iii) a Simulated Annealing approach based on a traditional k-opt type neighborhood for combinatorial optimization problems.
4.1 Basic Heuristics for MMR-P

Two basic heuristics for MMR-P are known; in fact, these heuristics are applicable to any MMR-CO problem. They are based on the idea of specifying a particular scenario and then solving the classic problem under this scenario. The output of these heuristics is a feasible solution for the MMR-CO problem; for more details see [8, 12, 23], [34] and [40].

First we mention the midpoint scenario, sM, defined for each arc e ∈ A as sM_e = (c+_e + c−_e)/2. We refer to the heuristic based on the midpoint scenario as HM. The other heuristic, based on the upper limit scenario, will be denoted by HU. Computing the output solution of each of these heuristics requires solving the corresponding classic problem only twice: the first solve computes the solution Y under the specific scenario, sM for HM or sU for HU, and the second computes Z(Y). These heuristics have been integrated into the new heuristic HMU, which sequentially computes the solutions given by HM and HU and keeps the best one. In the evaluation of heuristics for MMR problems, several experiments have shown that using these heuristics to build the initial solution improves the performance of more sophisticated heuristics. For an in-depth discussion, please refer to [34, 39, 40] and [8].
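A sketch of HM, HU and HMU under assumed interval data might look as follows: fix a scenario, solve one classic SP, and evaluate the resulting path via Property P1. The helper names and the toy digraph are ours, not the paper's code.

```python
import heapq

def shortest_path(arcs, costs, s, t):
    """Dijkstra returning the arc list of a shortest s-t path."""
    adj = {}
    for (i, j) in arcs:
        adj.setdefault(i, []).append(j)
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj.get(u, []):
            nd = d + costs[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, v = [], t
    while v != s:                       # walk predecessors back to s
        path.append((prev[v], v))
        v = prev[v]
    return list(reversed(path))

def regret(arcs, path, s, t):
    """Property P1: cost of `path` under its induced scenario minus F*(s(X))."""
    X = set(path)
    sc = {e: (hi if e in X else lo) for e, (lo, hi) in arcs.items()}
    best = shortest_path(arcs, sc, s, t)
    return sum(sc[e] for e in path) - sum(sc[e] for e in best)

def hmu(arcs, s, t):
    """HMU: run HM (midpoint scenario) and HU (upper-limit scenario), keep the best."""
    mid = {e: (lo + hi) / 2 for e, (lo, hi) in arcs.items()}   # scenario sM
    upper = {e: hi for e, (lo, hi) in arcs.items()}            # scenario sU
    cands = [shortest_path(arcs, sc, s, t) for sc in (mid, upper)]
    return min(cands, key=lambda p: regret(arcs, p, s, t))

arcs = {("s", "a"): (1, 10), ("a", "t"): (1, 10), ("s", "t"): (8, 9)}
Y = hmu(arcs, "s", "t")
print(Y, regret(arcs, Y, "s", "t"))
```

On this made-up instance both scenarios select the direct arc, whose worst-case regret is 7, which happens to be optimal here.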
4.2 Local Search for MMR-P

Local Search (LS), described in Algorithm 1, is a traditional search method for a CO problem P with feasible space S. The method starts from an initial solution and iteratively improves it by replacing the current solution with a new candidate which is only marginally different. During the initialization phase, the method selects an initial solution s from the search space S. This selection may be done at random or may take advantage of some a priori knowledge about the problem.

An essential step in the algorithm is the acceptance criterion: a neighbor is accepted as the new solution if its cost is strictly less than that of the current solution. This cost function is assumed to be known and depends on the particular problem. The algorithm terminates when no
improvements are possible, which happens when all the neighbors have a higher (or equal) cost compared to the current solution. The method outputs the current solution as the best candidate. Observe that, at every iteration, the current solution is the best solution found so far. LS is a sub-optimal mechanism, and it is not unusual for the output to be far from the optimum. The literature reports many algorithms that attempt to overcome the hurdles encountered in the original LS strategy.
Algorithm 1 Local Search
Input: Search space (S), cost function (f(·)), neighborhood function (N(·)).
Output: best solution found Y, cost f(Y).
Y ← s // s ∈ S
while Termination Criterion = TRUE do
  Y' ← N(S, Y)
  if f(Y') ≤ f(Y) then
    Y ← Y'
  end if
end while
4.3 A Simulated Annealing Approach for the MMR-P Problem

Simulated Annealing (SA) is a well-known probabilistic metaheuristic proposed by Kirkpatrick et al. in the 80's for solving hard combinatorial optimization problems [26, 6]. SA seeks to avoid becoming trapped in local optima, as would normally occur in algorithms using local search methods. A key characteristic of SA is the possible acceptance of solutions worse than the current one during the exploration of the local neighborhood. In accordance with the physical analogy between SA and metallurgy, several parameters must be tuned in order to find good solutions. Typical parameters are associated with concepts like the neighborhood, the cooling schedule, the size of the internal loop and the termination criterion. These parameters are usually adjusted through experimentation and testing (see Algorithm 2).
Algorithm 2 Simulated Annealing (SA)
Input: Search space (S), cost function (f(·)), neighborhood function (N(·)), initial and final temperature (ti, tf), number of internal loops (K), cooling schedule (β), acceptance function (g(·)).
Output: best solution found Y*, cost f(Y*).
t ← ti
Y ← s // s ∈ S
Y* ← Y
while t ≥ tf do
  k ← 0
  while k ≤ K do
    Y' ← N(S, Y)
    if f(Y') ≤ f(Y) then
      Y ← Y'
      if f(Y) ≤ f(Y*) then
        Y* ← Y
      end if
    else
      if g(Y, Y') = TRUE then
        Y ← Y'
      end if
    end if
    k ← k + 1
  end while
  t ← βt
end while
Within the context of the MMR-P problem, we now describe the main concepts and parameters generally used in SA.
Search Space: A subgraph S of the original graph G is defined such that this subgraph contains an s-t path. In S a classical s-t shortest path subproblem is solved, where the arc lengths are set to the upper limit arc costs. Then, the optimal solution of this subproblem is evaluated for acceptance. The next subsection details this part.
Initial Solution: The initial solution s is obtained by applying the heuristic HMU to the original network.
Cooling Schedule: A geometric descent of the temperature is used, governed by the parameter β.
Internal Loop: The next subsection describes this parameter in detail.
Neighborhood Search Moves: The next subsection describes in detail the structure of the neighborhood used.
Acceptance Criterion: A standard probabilistic function is used for managing the acceptance of new solutions.
Termination Criterion: A fixed temperature value (the final temperature tf) is used as the termination criterion.
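A generic rendering of Algorithm 2, assuming the standard Metropolis acceptance function and geometric cooling, can be sketched as follows; the toy one-dimensional cost function stands in for the regret objective and all parameter values are our own illustrative choices.

```python
import math
import random

def simulated_annealing(init, neighbor, cost, t_i=10.0, t_f=0.01, K=50, beta=0.9, seed=0):
    """Generic SA loop: geometric cooling t <- beta*t, Metropolis acceptance g(.)."""
    rng = random.Random(seed)
    y = init
    best, best_cost = y, cost(y)
    t = t_i
    while t >= t_f:
        for _ in range(K):                              # internal loop of size K
            y_new = neighbor(y, rng)
            delta = cost(y_new) - cost(y)
            if delta <= 0:                              # improving (or equal) move
                y = y_new
                if cost(y) < best_cost:
                    best, best_cost = y, cost(y)
            elif rng.random() < math.exp(-delta / t):   # acceptance function g
                y = y_new                               # accept a worse solution
        t *= beta                                       # geometric cooling schedule
    return best, best_cost

# Toy 1-D problem standing in for the regret objective: minimize (x - 3)^2.
sol, val = simulated_annealing(
    init=0.0,
    neighbor=lambda x, rng: x + rng.uniform(-1, 1),
    cost=lambda x: (x - 3.0) ** 2,
)
print(round(sol, 2), round(val, 4))
```

The `elif` branch is exactly the possible acceptance of worse solutions that distinguishes SA from plain local search.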
4.4 Neighborhood Structure for the MMR-P Problem

Two fundamental concepts in LS are the search space and the neighborhood structure. The search space, denoted by S, is defined as the set of all feasible solutions of the problem. At each iteration of LS, a slight modification of the current solution leads to a neighbor; on closer inspection, this modification can be seen as a function which applies a local transformation to the current solution. This function induces a set of possible neighbors of the current solution, a concept known as the neighborhood set, denoted by N(Y); in particular, N(Y) ⊆ S. Many different neighborhood structures can be defined for the same problem, yielding the challenge of selecting the most suitable one. It is important to note that, depending on the context, small modifications of the neighborhood structure may lead to strongly different costs for the best solution found by the algorithm.
In the classic SP problem the design of a neighborhood is more complex than in other problems, such as the TSP [28]. In [37] a LS heuristic for the multicriteria SP problem is presented. The mechanism to obtain a new path p' from an existing path p is described as follows: first, a subpath starting from node s is obtained by cutting the path p at node i. Next, an arc emanating from node i and connecting it to a node j is attached to the new solution. Finally, the algorithm searches for a path from j to the terminal node t. This entire process is repeated for every node in the original path, and for every node j adjacent to node i, which, from our perspective, is prohibitive for many applications of the SP.
A traditional family of neighborhoods used in designing heuristics for CO problems is the k-opt family. The idea of this scheme is to eliminate k arcs (in the context of network problems) and add new arcs to complete a feasible solution. Typically, in problems where the number of arcs in a solution is fixed (like the TSP or the minimum spanning tree problem), the k eliminated arcs are replaced by k new arcs. In path optimization problems, if k arcs are eliminated from a feasible solution, a different number of added arcs may be needed to obtain a feasible solution. Some papers [19, 43, 29] have considered this strategy. For our problem, the k-opt strategy is used with the values k = 2 and k = 3.
Given the importance of the new neighborhood structure in our proposed method, we dedicate this section to explaining it in detail. We start by defining the LS mechanism. Subsequently we detail the concepts of neighborhood structure and search space. After that, we explicitly describe an architectural model for obtaining a new candidate solution by restricting the original search space.

Typically, in LS, several types of neighborhood structures are analogous to the k-opt method explained above, in the sense that a candidate solution is obtained by applying a slight modification to the previous candidate; see [3] for an analysis of several types of large neighborhoods for combinatorial optimization problems. A fundamentally different philosophy is that of using subspaces to induce candidate solutions. In this model, the new candidate is not obtained directly from a previous solution. Rather, the candidate is generated by an indirect step, which consists in perturbing a subspace in a LS fashion so as to obtain a new subspace which is marginally different from the former. Finally, the new subspace is employed to derive the new candidate solution. This concept adds an extra layer to the architectural model for defining the neighborhood structure. The method is detailed in Algorithm 3, which generalizes the method presented in [35] for solving the minmax regret spanning tree problem. In [35], the first step applies local transformations to a connected graph (subspace) to obtain a new graph which is also connected (new subspace). In the second step, the difference in regret between the original and the modified candidate solutions is evaluated.
Algorithm 3 Neighbor induction (R)
Input: R, a subspace of the original search space S.
Output: Y', the new candidate solution.
1: R' ← subspace-perturbation(R)
2: Y' ← generate-candidate(R')
Our proposed implementation of the MMR-P neighborhood retains the idea of using bitmap strings to represent (and restrict) the search space. We start by defining a bitmap string π of cardinality |A|, such that π(j) = 1 if arc aj belongs to the current subset, and π(j) = 0 otherwise; π(j) denotes bit j of the bitmap vector. The full process for creating a new search space is detailed in Algorithm 4.

At each iteration, a predetermined fraction of the arcs of the original subspace are modified, i.e., they are set to 1 (added) if they were not present in π, or set to 0 (deleted) otherwise. This fraction is controlled by the parameter γ, and directly relates to the concepts of exploration and exploitation
as detailed in the following. Small values of γ lead to slight perturbations of the current subspace, i.e., the resulting subspace will be only marginally different from the subspace currently being examined. This configuration favors the exploitation of the current solution. In contrast, large values of γ produce strong perturbations of the subspace, producing subspaces which are expected to be very different from the subspace currently being perturbed, which favors the exploration of unvisited regions of the original search space. Exploratory tests on a variety of datasets have shown evidence that a suitable value for γ depends on the dataset being tested, and particularly on its size.

Once the subspace is determined, the algorithm checks that there exists a path between s and t. If so, π is accepted; otherwise we reject it and randomly generate a new version of π following the same scheme. The overall algorithm starts with the entire search space by setting all the bits of the vector π to 1.
Observe that, in our definition of the neighborhood, a subspace is not restricted to connected graphs, i.e., a subspace may (or may not) possess disconnected components. For this reason, we must check at every iteration that it contains at least one s-t path. Note that disconnected components may become connected depending on the stochastic properties of the environment. Once the auxiliary graph is determined, we obtain a new candidate solution from it. When the node t is reachable from the node s, the new candidate solution is computed using Algorithm 5. In our proposal, the new candidate solution, i.e., a new s-t path, is obtained by a heuristic criterion.

We decided to apply the HMU method mentioned earlier. We then calculate the regret of this path with a classical SP algorithm over the original graph, and use it to determine whether or not to accept the new subspace.
With this method, we are able to tailor the percentage of arcs we flip when generating a neighbor candidate, enabling us to find the right balance between exploration and exploitation. The consequence, however, is that we can no longer use the delta between the regrets as our acceptance criterion. Instead we have to calculate the regret via a heuristic method. For MMR-P this compromise is acceptable, as linear time algorithms are known for calculating the two SPs required by the HU and HM heuristics.
Algorithm 4 MMR-P subspace perturbation (π, γ)
Input:
- π, a bitmap string of cardinality |A|, such that π(j) = 1 if arc ej belongs to the current subset, and π(j) = 0 otherwise.
- γ, the fraction of arcs of the original subspace which are to be flipped (Γ = γ·n, where n is the number of arcs).
Output:
- π', a bitmap string of cardinality |A|, such that π'(j) = 1 if arc ej belongs to the new subset, and π'(j) = 0 otherwise.
π' ← π
for k = 0 → Γ do
  j ← RANDOM(0, |π'|)
  if π'(j) = 0 then
    π'(j) ← 1
  else
    π'(j) ← 0
  end if
end for
Algorithm 5 MMR-P generate candidate
Input:
- π, a bitmap string of cardinality |A|, such that π(j) = 1 if arc ej belongs to the current subset, and π(j) = 0 otherwise.
- f(·), a cost function.
Output:
- Y', a new candidate solution.
1: YHU ← HU(π)
2: YHM ← HM(π)
3: if f(YHU) < f(YHM) then
4:   Y' ← YHU
5: else
6:   Y' ← YHM
7: end if
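The subspace perturbation of Algorithm 4 together with the s-t reachability check can be sketched as follows; the small digraph, seeds and helper names are illustrative assumptions rather than the authors' implementation.

```python
import random
from collections import deque

def perturb(pi, gamma, rng):
    """Algorithm 4 sketch: flip floor(gamma * |A|) randomly chosen bits of pi."""
    pi2 = list(pi)
    for _ in range(int(gamma * len(pi2))):
        j = rng.randrange(len(pi2))
        pi2[j] = 1 - pi2[j]
    return pi2

def reachable(arcs, pi, s, t):
    """BFS over the arcs whose bit is set: does the subspace contain an s-t path?"""
    adj = {}
    for bit, (i, j) in zip(pi, arcs):
        if bit:
            adj.setdefault(i, []).append(j)
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

def next_subspace(arcs, pi, gamma, s, t, rng):
    """Retry perturbations until the perturbed subspace still connects s to t."""
    while True:
        pi2 = perturb(pi, gamma, rng)
        if reachable(arcs, pi2, s, t):
            return pi2

arcs = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")]
rng = random.Random(1)
pi = [1] * len(arcs)                  # start from the entire search space
pi = next_subspace(arcs, pi, gamma=0.4, s="s", t="t", rng=rng)
print(pi, reachable(arcs, pi, "s", "t"))
```

The accepted bitmap would then be handed to the candidate-generation step (Algorithm 5), which runs HMU on the restricted subgraph.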
5 Benchmark Instances

In the literature, several classes of instances have been considered in computational experiments for evaluating the performance of algorithms proposed for MMR-P. Among them we find the
following: random networks [33, 31, 32] and [41], road networks located in some European cities [33, 31, 32], and layered networks [33, 31, 32, 41]. Extensive experiments on random networks [41] showed that instances from 1 000 up to 20 000 nodes were solved, in short times, by an implementation in CPLEX; thus this class of instances was not considered in the present research. Road networks from European cities are not available, and therefore only Layered networks, from this traditional group of instances, are considered here. A new particular class of networks, Grid instances (which can be interpreted as a type of road network), was defined in [11] in the study of the relative robust version of MMR-P. In the present paper this class of instances is considered in the experiments and defined below.
Layered networks were introduced in [46] in the study of the computational complexity of the MMR-P problem. In [32] it is mentioned that layered networks simulate some classes of telecommunication networks. Layered networks are named K-n-c-d-w, where n is the number of nodes, each cost interval has the form [c−_ij, c+_ij], where a random number cij ∈ [1, c] is generated and c−_ij ∈ [(1 − d)cij, (1 + d)cij], c+_ij ∈ [c−_ij + 1, (1 + d)cij] (0 < d < 1), and w is the number of layers [31]. In Figure 1 an example of a Layered instance (K-12-c-d-3) is presented. Two groups of Layered instances were created. The group L1 contains eight subgroups of instances where, for each subgroup, only the width of the uncertainty interval varies. The number of nodes is 1 000 for the first subgroup and 10 000 for the last. The number of layers in each subgroup is fixed at 10% of n. The second group of Layered instances, L2, contains four subgroups of instances where, for each subgroup, the width of the uncertainty interval and the number of layers are varied. The number of nodes is 250 for the first subgroup and 2 000 for the last. Both groups of instances are described in detail in Tables 1 and 4, for L1 and L2, respectively.
A Grid network is related to a matrix with n rows and m columns. Each matrix cell corresponds to a node, and two arcs with opposite directions connect each pair of nodes whose respective matrix cells are adjacent. Therefore, the resulting directed graph has n·m nodes and 2(2mn − n − m) arcs. The node s is assumed to be located at position (1, 1) of the matrix and the node t at position (m, n); an example is given in Figure 2 with n = 3 and m = 4. The interval costs were generated in the same way as for the Layered instances. The instances are named G-n-m-c-d, where G identifies the instance type, n is the number of rows and m is the number of columns. We consider c = 200 and d = 0.5 for all instances in this group. For the grid group, G, instances of different sizes were considered: 2×{20, 40, 80, 160, 320} with {40, 80, 160, 320, 640} nodes and {116, 236, 476, 956, 1916} arcs respectively, 4×40 with 160 nodes and 552 arcs, 8×80 with 640 nodes and 2 384 arcs, 16×160 with 2 560 nodes and 9 888 arcs, and 32×320 with 10 240 nodes and 40 256 arcs.
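The grid construction can be sketched with a small generator (ours, not the authors' code) that also confirms the arc-count formula 2(2mn − n − m):

```python
def grid_digraph(n, m):
    """Build the n x m grid digraph: one node per matrix cell, and two
    opposite arcs between every pair of horizontally or vertically
    adjacent cells."""
    arcs = []
    for r in range(n):
        for c in range(m):
            for dr, dc in ((0, 1), (1, 0)):          # right and down neighbors
                r2, c2 = r + dr, c + dc
                if r2 < n and c2 < m:
                    arcs.append(((r, c), (r2, c2)))  # forward arc
                    arcs.append(((r2, c2), (r, c)))  # backward arc
    return arcs

# Check the formula against several of the sizes mentioned in the text.
for n, m in [(2, 20), (3, 4), (8, 80), (32, 320)]:
    assert len(grid_digraph(n, m)) == 2 * (2 * m * n - n - m)
print(len(grid_digraph(2, 20)))                       # → 116
```

For the 2×20 grid this reproduces the 116 arcs listed above.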
Figure 1: Example of a Layered instance K-12-c-d-3
Figure 2: Example of a Grid instance G-3-4-c-d
Implementation of Algorithms: The exact approaches were implemented using CPLEX 12.5 and Concert Technology. The heuristic approaches were implemented in C++. All CPLEX parameters were set to their default values, except in the B&C approach, where the following parameters were set: (i) CPLEX cuts were turned off, (ii) CPLEX heuristics were turned off, (iii) the time limit was set to 900 seconds. All the experiments were performed on an Intel Core i7-3610QM machine with 16 GB RAM, where each execution was run on a single processor.
Instances and best known solutions can be found at https://github.com/frperezga/MinmaxRegretPath
6 Exact Results and Analysis

We know of four papers that propose exact algorithms and conduct experiments for MMR-P. The approach in [32], according to the authors, outperformed previous approaches by the same group of researchers [33, 31]; therefore we focus on that paper. Other experimental research appears in a chapter of the book [23]. A general drawback of the experiments conducted using these approaches is the size of the instances tested: only instances of small size were tested, which made it very difficult to draw conclusions. Even so, in [32] the performance of the algorithms was analyzed when applied to random instances, Layered instances and three instances from real road networks, and the authors concluded that the Benders approach performed better than a branch and bound algorithm and a MILP formulation given in [22] and implemented in CPLEX. Very recently, [18] proposed a B&C procedure which uses an improved lower bound for the problem. They consider several classes of graph instances, including two real large-size instances.
Group L1. Our effort in this paper is to gain more information about the performance of the algorithms when applied to instances of both greater size and different structure. For the group L1 of Layered instances, Table 2 shows the results of MILP with a time limit of 900 seconds. It is clear that from 4 000 nodes upwards the algorithm's performance degrades dramatically, so that for 5 000 nodes no optimal solution was achieved and, worse yet, no feasible solutions were found. For the same group of instances, the B&C algorithm was always able to find optimal solutions in no more than 250 seconds on average over ten runs, except for n = 10 000, where the algorithm begins to be affected by the combinatorial explosion.
Group L2. Tables 3 and 4 illustrate the performance of the MILP and B&C algorithms on the second group of instances, L2. These instances contain 250, 500, 1 000 and 2 000 nodes, with two, four and six layers each. Table 3 shows that MILP is able to obtain optimal solutions for all node counts when the number of layers is equal to six. However, its performance clearly diminishes when the number of nodes increases and the number of layers is two or four. For example, for 2 000 nodes and two layers, MILP reached an 8% gap on average. Table 4 shows that the performance of B&C is clearly inferior to MILP, yielding large gaps (about 30%) already for 250 nodes and two layers. Clearly MILP outperforms B&C for this class of instances.
In conclusion, after the experiments with the exact algorithms MILP and B&C on Layered instances, the group L1 of large instances can be rapidly solved by B&C. With respect to group L2, the performance of MILP is better than that of B&C, but it loses efficiency from 1 000 nodes and two layers onwards. It is clear that heuristic approaches are necessary for solving the large-size L2 instances.
Group G. MILP provides better solutions than B&C. However, as the size of the instances increases, the gaps also increase (see Table 1). For two combinations of the parameters m and n, both exact algorithms produce high gaps. It is also noted that the time limit was exhausted for these instances. Considering that the size of these instances is relatively small, it is clear that heuristics are necessary for solving large instances with this structure.
Table 1: Running times and gaps for B&C and MILP on G instances. n and m represent the number of rows and columns of the grid.
class gap (%) time (sec.)
n m min av max min av max
B&C
2 20 0 0 0 0.02 0.03 0.05
2 40 0 0 0 0.02 0.03 0.05
2 80 0 0 0 0.19 0.52 0.77
2 160 0 5.32 13.91 412.00 818.47 900.16
2 320 26.32 32.13 36.89 900.05 900.12 900.20
4 40 0 0 0 0.062 0.089 0.141
8 80 0 0 0 1.16 2.33 4.25
16 160 0 0 0 10.16 33.69 65.36
32 320 3.80 7.00 14.50 900.20 900.90 900.90
MILP
2 20 0 0 0 0.03 0.04 0.06
2 40 0 0 0 0.03 0.05 0.08
2 80 0 0 0 0.16 2.31 5.00
2 160 0 0 0 3.10 7.82 15.20
2 320 5.49 9.19 13.04 900.14 900.15 900.16
4 40 0 0 0 0.11 0.14 0.19
8 80 0 0 0 1.13 2.28 5.66
16 160 0 0 0 13.94 105.48 240.83
32 320 1.60 3.10 5.10 900.10 900.60 900.90
Table 3: Running times and gaps for MILP on L2 instances. n is the number of nodes, nk is the number of nodes in each layer, d manages the interval length and #optimum is the number of instances for which the optimal solution was achieved.
gap (%) time (sec.) #optimum
n nk d min av max min av max
250 2 0.15 0.01 0.01 0.01 1.95 11.95 34.48 10
0.50 0.01 0.01 0.01 2.83 15.67 58.66 10
0.85 0.00 0.01 0.01 2.28 30.65 126.11 10
4 0.15 0.00 0.00 0.01 0.42 1.33 3.13 10
0.50 0.00 0.00 0.01 0.45 1.56 3.20 10
0.85 0.00 0.00 0.01 0.38 1.38 2.81 10
6 0.15 0.00 0.00 0.00 0.27 0.43 0.81 10
0.50 0.00 0.00 0.00 0.28 0.43 0.64 10
0.85 0.00 0.00 0.00 0.30 0.43 0.64 10
500 2 0.15 0.62 2.03 3.39 900.08 900.10 900.11 0
0.50 0.56 2.38 3.26 900.08 900.10 900.25 0
0.85 0.90 2.69 3.80 900.06 900.09 900.11 0
4 0.15 0.00 0.00 0.01 4.91 6.52 9.58 10
0.50 0.00 0.01 0.01 4.77 7.27 14.64 10
0.85 0.00 0.01 0.01 5.27 6.88 12.56 10
6 0.15 0.00 0.00 0.00 1.06 3.27 6.36 10
0.50 0.00 0.00 0.00 1.14 3.25 6.44 10
0.85 0.00 0.00 0.00 1.02 3.10 6.78 10
1 000 2 0.15 4.30 5.26 6.40 900.23 900.25 900.30 0
0.50 4.66 5.80 6.68 900.23 900.25 900.27 0
0.85 5.23 6.05 7.38 900.23 900.25 900.27 0
4 0.15 0.01 0.06 0.51 46.44 284.59 900.28 9
0.50 0.01 0.03 0.26 40.03 372.91 900.28 9
0.85 0.01 0.12 0.62 59.02 397.96 900.30 8
6 0.15 0.00 0.00 0.00 13.64 18.85 23.81 10
0.50 0.00 0.00 0.00 13.58 19.61 23.97 10
0.85 0.00 0.00 0.00 17.19 19.73 24.73 10
2 000 2 0.15 6.43 7.45 7.96 900.86 901.02 901.50 0
0.50 7.24 7.98 8.85 900.83 900.88 900.98 0
0.85 7.49 8.31 9.31 900.86 900.97 900.38 0
4 0.15 0.62 1.55 2.18 900.86 901.19 902.61 0
0.50 0.95 1.65 2.14 900.88 900.91 900.99 0
0.85 0.90 1.56 1.96 900.88 900.97 901.33 0
6 0.15 0.00 0.00 0.00 58.81 183.86 303.00 10
0.50 0.00 0.00 0.00 56.20 357.61 901.00 7
0.85 0.00 0.00 0.00 69.38 517.14 901.09 5
Table 4: Running times and gaps for B&C in L2 instances. n is the number of nodes, nk is the number of nodes
in each layer, d manages the interval length and #optimum is the number of instances that achieve the optimal
solution.
gap (%) time (sec.) #optimum
n nk d min av max min av max
250 2 0.15 24.36 27.66 31.49 900.02 900.07 900.16 0
0.50 24.74 27.59 30.74 900.03 900.13 900.63 0
0.85 24.55 27.83 31.70 900.03 900.06 900.11 0
4 0.15 0.00 0.01 0.01 3.67 206.71 717.99 10
0.50 0.00 0.15 1.40 5.17 275.76 900.06 10
0.85 0.00 0.19 1.86 4.27 270.76 900.06 10
6 0.15 0.00 0.00 0.01 0.36 1.34 4.16 10
0.50 0.00 0.00 0.01 0.64 1.33 3.19 10
0.85 0.00 0.00 0.00 0.55 1.34 2.45 10
500 2 0.15 33.82 36.10 38.24 900.05 900.10 900.14 0
0.50 33.97 35.60 37.25 900.03 900.07 900.14 0
0.85 33.89 35.72 37.23 900.03 900.12 900.30 0
4 0.15 7.06 9.88 12.57 900.03 900.07 900.14 0
0.50 6.48 10.08 13.08 900.03 900.07 900.23 0
0.85 7.05 10.71 14.13 900.05 900.11 900.33 0
6 0.15 0.00 0.01 0.01 10.34 156.85 524.13 10
0.50 0.01 0.01 0.01 8.36 151.48 522.14 10
0.85 0.01 0.01 0.01 9.52 183.92 744.63 10
1 000 2 0.15 35.50 36.85 37.77 900.06 900.10 900.16 0
0.50 35.63 37.39 38.56 900.06 900.08 900.13 0
0.85 35.03 37.18 37.12 900.00 900.00 900.00 0
4 0.15 17.43 19.69 22.96 900.05 900.08 900.17 0
0.50 24.36 27.66 31.49 900.02 900.07 900.16 0
0.85 18.56 20.37 24.94 900.00 900.00 900.00 0
6 0.15 3.81 5.57 7.37 900.05 900.08 900.16 0
0.50 4.74 5.82 7.51 900.06 900.10 900.16 0
0.85 4.06 6.84 8.73 900.00 900.00 900.00 0
2 000 2 0.15 36.55 37.61 43.15 900.00 900.00 900.00 0
0.50 36.46 38.87 42.94 900.00 900.00 900.00 0
0.85 36.15 39.03 43.06 900.00 900.00 900.00 0
4 0.15 22.21 24.72 28.30 900.00 900.00 900.00 0
0.50 22.89 25.82 28.78 900.00 900.00 900.00 0
0.85 22.22 25.32 28.38 900.00 900.00 900.00 0
6 0.15 8.27 12.37 15.59 900.00 900.00 900.00 0
0.50 9.27 11.59 13.42 900.00 900.00 900.00 0
0.85 9.25 12.35 13.81 900.00 900.00 900.00 0
7 Performance of the Heuristic Approaches406
Taking into account the conclusion related to hard instances in both topologies (Layered and407
Grid), we have considered appropriate to apply heuristics only to hard instances. Specifically, we408
consider six groups of L2 instances and two groups of G instances (shown in bold in tables 1, 3 and409
4). Our heuristic approaches are based on the neighborhood (Nγ) defined in Subsection 4.4, Nγ410
is embedded in two SA settings and in a local search setting, both metaheuristic frameworks were411
explained in Section 4. Additionally, as pointed out in Subsection 4.4, a SA approach using the412
neighborhood Nk-opt based on the traditional heuristic k-opt was implemented here using k = 2 and413
k = 3.414
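For reference, the objective evaluated inside all of these frameworks is the maximum regret of a candidate path: under interval costs, the worst-case scenario for a path sets its own arcs to their upper bounds and every other arc to its lower bound. The following is a minimal sketch of that evaluation only; the graph encoding and the Dijkstra helper are our assumptions, not the paper's implementation.

```python
import heapq

def dijkstra(adj, src, dst):
    """Cost of a shortest src-dst path; adj[v] = [(w, cost), ...]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            return d
        if d > dist.get(v, float("inf")):
            continue
        for w, c in adj.get(v, []):
            nd = d + c
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return float("inf")

def max_regret(path, intervals, src, dst):
    """Maximum regret of `path`: its cost under the worst-case scenario
    (arcs on the path at upper bounds, all others at lower bounds)
    minus the shortest-path cost in that same scenario."""
    on_path = set(zip(path, path[1:]))
    adj = {}
    for (a, b), (lo, up) in intervals.items():
        adj.setdefault(a, []).append((b, up if (a, b) in on_path else lo))
    path_cost = sum(up for (a, b), (lo, up) in intervals.items()
                    if (a, b) in on_path)
    return path_cost - dijkstra(adj, src, dst)
```

Each heuristic iteration then compares the regrets of the current and neighboring paths, so one evaluation costs a single shortest-path computation.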
7.1 Algorithm parameters and measure of performance
An important drawback of metaheuristic approaches is the step related to the selection of the
best set of parameters. This task can be time-consuming, and it is always necessary to deal with the
trade-off between time and solution quality. Good discussions can be found in [13, 1] and [7].
The selected parameters were obtained through a mixed process based on a brute-force search
over a grid and a trial-and-error procedure. The grid search allows a good exploration
of the parameter space, and trial-and-error was used to intensify the search near good
solutions. After the experiments, we defined the settings shown in Table 5. Note that we chose one
configuration for Nk-opt and three configurations for Nγ in order to represent the trade-off between
time consumption and solution quality in our neighborhood. In the case of SA using Nk-opt, more
demanding parameters were tested, but the results showed only a very marginal improvement.
Table 5: Parameters selected for heuristic algorithms. ti is the initial temperature, tf is the final temperature
and N is the neighborhood structure for each metaheuristic.
Algorithm id ti tf cooling factor loops N
Simulated Annealing SA0 50 0.1 0.9 800 Nk-opt
Simulated Annealing SA1 5 0.01 0.9 800 Nγ
Simulated Annealing SA2 5 0.1 0.88 500 Nγ
Local Search LS - - - 20 000 Nγ
The parameter γ must be tuned depending on the density, size and topology of the graph.
The selection must consider the trade-off between exploration and the probability of obtaining a
disconnected graph. We have estimated γ according to γ ≈ k/|A|, where |A| is the total number of
arcs in G and k ∈ [2, 10] is the number of modified edges in each iteration. Table 6 shows the final
value of γ for each group of instances.
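The rule above is direct to state in code; the sampling function is our illustrative reading of γ as a per-arc perturbation probability in Nγ, not the paper's code, and the arc counts below are made up.

```python
import random

def estimate_gamma(num_arcs, k):
    """gamma ≈ k / |A|: chosen so that about k arcs (the text takes
    k in [2, 10]) are modified in each iteration."""
    return k / num_arcs

def perturbed_arcs(arcs, gamma, seed=0):
    """One iteration's modified arc set: each arc is selected
    independently with probability gamma, so the expected number of
    modified arcs is gamma * |A| ≈ k."""
    rng = random.Random(seed)
    return [a for a in arcs if rng.random() < gamma]
```

For example, estimate_gamma(2500, 10) returns 0.004, one of the values appearing in Table 6 (the arc count 2 500 is only illustrative).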
Table 6: Selected values for the parameter γ, considering different groups of instances.
Group γ Group γ
L2 - 1 000 0.004 G - 2 - 320 0.004
L2 - 2 000 0.001 G - 32 - 320 0.0001
To measure performance, we use basic statistics (minimum, average and maximum) for the gaps
and execution times over 50 runs for each instance. The gaps presented are relative
to the best solution found by the best exact algorithm on each instance, (S − Sbest)/Sbest.
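These statistics are straightforward to compute; a sketch with made-up objective values, Sbest being the best exact value for the instance:

```python
def gap_stats(run_values, s_best):
    """Relative gaps 100 * (S - S_best) / S_best over the runs of one
    instance, summarized as (min, average, max) percentages."""
    gaps = [100.0 * (s - s_best) / s_best for s in run_values]
    return min(gaps), sum(gaps) / len(gaps), max(gaps)
```

A negative gap means the heuristic improved on the best exact solution, which is why some entries in Tables 7 and 8 are below zero.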
7.2 Performance comparison of the algorithms
As mentioned above, few papers have tackled the MMR-P problem using heuristics; therefore,
ad-hoc neighborhood structures that exploit the nested structure in the MMR-Path
problem formulation defined in Subsection 2.1 do not exist. As a natural strategy, we use the neighborhood (Nk-opt)
mentioned in Subsection 4.4 in an SA scheme (the SA0 algorithm). This implementation performed
better than another approach, based on an Ant Colony Optimization (ACO) algorithm, that we
designed for the problem. ACO was therefore discarded, and SA0 was compared with the heuristic HMU,
since the literature has shown that HMU obtains moderate gaps for several classes of MMR-P instances
and is a fast algorithm that only needs to solve four classic problems [23, 41].
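For intuition, a k-opt move on a path removes part of the current path and reconnects its endpoints through new arcs. The enumeration below is a deliberately simplified 2-exchange on a directed path; the adjacency encoding and the single-intermediate-node reconnection are illustrative assumptions, and Subsection 4.4 defines the actual Nk-opt.

```python
import itertools

def two_opt_path_moves(path, adj):
    """Enumerate simplified 2-opt-style moves on a directed path: for
    each pair of positions i < j, reconnect path[i] to path[j] through
    a single intermediate node not already on the path."""
    on_path = set(path)
    for i, j in itertools.combinations(range(len(path)), 2):
        a, b = path[i], path[j]
        for v in adj.get(a, []):
            if v not in on_path and b in adj.get(v, []):
                yield path[:i + 1] + [v] + path[j:]
```

In dense layered graphs many such reconnections exist, while in narrow grids few do, which is consistent with the path-rebuilding difficulty on G-32-320 instances noted in the run-time discussion.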
As detailed in Table 7, HMU achieved gaps between 2.37% and 4.33% for most L2 instances.
However, on G instances its performance is irregular. On the G-2-320 instances, the gaps are 11.57%
on average, while on the other group, G-32-320, they do not exceed 1.53%. To the best of
our knowledge, the performance of HMU on the G-2-320 instances is its worst performance over
all classes of instances reported in the literature. SA0+2-opt and SA0+3-opt outperform HMU on
the majority of L2 instances, and SA0+3-opt outperforms SA0+2-opt on most of the L2 instances
(except the last), although it achieves worse gaps on G instances. Note that for instances with a smaller
interval (d = 0.15) the performance of SA0+2-opt is worse. For detailed results, see Tables 9, 10,
11 and 12 in Appendix 10.
In summary, the k-opt neighborhood in the SA framework obtained interesting results; it is able to
improve on the solutions reached by the HMU heuristic in the majority of instances.
Regarding run times, in Table 7 we highlight the difference observed between the two classes
of G instances. Both variants of SA0 took much more run time on the G-32-320 instances than on
the G-2-320 instances. This is due to the difficulty of rebuilding a path in the G-32-320 class using the k-opt
framework.
From the previous analysis it is clear that SA0 (in both variants) outperforms HMU, but on
most instances it does not reach the best known solutions, BKS (they can be accessed via the link in
the footnote on page 9). Therefore, the task of the SA approach using the new neighborhood Nγ is
to compete with the BKS values. In this context, the performance of LS and of SA with different
parameter sets (SA1 and SA2) is analyzed. The objective of including the performance of LS
with the proposed neighborhood is to analyze to what extent the SA mechanism for escaping
the local optima found by LS is effective.
Table 8 shows the results of the LS and SA approaches using Nγ. LS clearly achieved better gaps
than HMU and SA0 for all instances, running in times similar to those of SA0. From the same table it is
clear that, with respect to the L2 instances, SA1 and SA2 outperform LS, noting that SA2 is able to obtain
better results than LS in less time. Additionally, it can also be noted that the performance of SA1 is
slightly better than that of SA2, as expected, since the parameters used by SA1 are computationally
more expensive than those used by SA2. These results are detailed in Tables 16, 17 and 18. For
example, on L2 instances with 2 000 nodes, the gap statistics (minimum, average and
maximum) are 0.76, 1.06, 1.39 for LS and 0.71, 0.93, 1.22 for SA2. At the same time, when the
variant SA1 is applied, more run time is necessary, but the results are better than those obtained by
Table 7: Gaps (%) and running times obtained by SA0 and HMU for each class of instances. Each class contains
10 instances, and 50 runs were performed on each one with the SA0 approach.
SA0+2opt SA0+3opt HMU
Class min av max min av max min av max
gap (%)
L2 - 1 000 - 0.15 1.45 2.97 3.99 0.41 1.37 2.28 2.97 3.51 4.03
L2 - 1 000 - 0.50 0.54 1.52 2.80 0.35 1.17 2.13 2.70 3.19 3.83
L2 - 1 000 - 0.85 0.34 1.11 2.13 0.24 1.16 2.17 2.74 3.24 3.88
L2 - 2 000 - 0.15 3.06 3.53 4.33 0.86 1.65 2.32 3.06 3.54 4.33
L2 - 2 000 - 0.50 1.22 2.08 3.19 0.73 1.46 2.10 2.78 3.28 4.19
L2 - 2 000 - 0.85 0.50 1.30 2.16 0.66 1.29 1.93 2.37 3.19 4.00
G - 2 - 320 1.92 8.68 15.15 6.27 11.47 15.04 6.70 11.57 15.20
G - 32 - 320 0.00 0.53 1.53 -0.18 0.39 1.53 0.00 0.53 1.53
time (seconds)
L2 - 1 000 - 0.15 36.63 37.81 40.08 36.61 37.65 39.62
L2 - 1 000 - 0.50 36.47 37.14 38.42 36.63 37.48 39.24
L2 - 1 000 - 0.85 36.36 36.82 37.56 36.64 39.69 39.58
L2 - 2 000 - 0.15 73.42 74.56 77.97 73.38 74.18 75.69
L2 - 2 000 - 0.50 73.58 75.60 78.89 73.30 74.75 77.50
L2 - 2 000 - 0.85 70.44 71.52 88.33 73.58 75.41 78.74
G - 2 - 320 30.31 32.04 34.47 30.22 31.28 33.08
G - 32 - 320 776.77 894.10 932.64 710.54 889.89 938.68
SA2. These results confirm the effectiveness of SA using Nγ when a group of difficult instances is
investigated.
The performance of the heuristics applied to G instances differs greatly depending on the type of
instance used, G-2-320 or G-32-320. LS, SA1 and SA2 are not able to improve the quality of
the solutions provided by the exact algorithms, nor the quality of the solutions provided by HMU, for
the (32, 320) instances. Considering that the best gap is 1.53% from MILP, these instances can be
considered well solved for the corresponding size.
The situation for the G-2-320 instances is different. The heuristics are able to largely improve on the
gaps of HMU and SA0 and are almost able to equal the best known values of the exact algorithms.
In particular, SA1 is able, in one instance, to improve on the solution given by the exact approaches. It is
clear that HMU finds solutions with large gaps, over 15% in some instances. Considering that the
best gap from MILP is 5%, these instances tend to be difficult to solve as the size of the instances
increases.
As previously mentioned, two versions of the SA algorithm with different parameters were tested
with our novel neighborhood. The degradation in the quality of the obtained solutions when more
relaxed parameters were considered was small but significant. This allows prioritizing either
time or solution quality. However, even the more relaxed version of the Simulated Annealing
algorithm found better solutions than the implemented Local Search. For detailed results, see
Tables 13, 14, 15, 16, 17 and 18 in Appendix 10.
Table 8: Gaps (%) and running times obtained by LS, SA1 and SA2 for each class of instances. Each class
contains 10 instances and considers 50 runs.
LS SA1 SA2
class min av max min av max min av max
gap (%)
L2 - 1 000 - 0.15 0.07 1.70 3.56 0.00 1.05 1.53 0.00 1.35 3.56
L2 - 1 000 - 0.50 0.02 1.38 3.3 -0.04 0.93 3.31 -0.04 1.11 3.31
L2 - 1 000 - 0.85 0.00 1.44 3.54 0.00 1.11 3.54 0.00 1.24 3.54
L2 - 2 000 - 0.15 0.00 0.97 3.39 -0.07 0.64 3.39 0.01 0.83 3.39
L2 - 2 000 - 0.50 0.04 1.15 3.10 -0.11 0.88 3.10 -0.05 1.01 3.10
L2 - 2 000 - 0.85 -0.43 1.07 3.08 -0.45 0.83 3.08 -0.44 0.96 3.14
G - 2 - 320 -0.05 2.15 7.72 -0.12 1.62 6.97 0.00 2.23 8.18
G - 32 - 320 0.00 0.53 1.53 0.00 0.53 1.53 0.00 0.53 1.53
time (seconds)
L2 - 1 000 - 0.15 33.05 36.05 41.56 81.34 85.82 97.66 26.39 27.43 29.33
L2 - 1 000 - 0.50 34.03 35.94 39.63 80.74 85.00 89.70 26.44 27.61 30.28
L2 - 1 000 - 0.85 34.11 35.59 38.20 81.03 84.20 89.39 26.53 27.59 29.20
L2 - 2 000 - 0.15 69.97 72.96 76.57 167.48 175.00 190.40 54.99 56.97 61.22
L2 - 2 000 - 0.50 71.33 72.82 76.08 162.89 173.68 182.99 54.88 56.55 58.34
L2 - 2 000 - 0.85 70.42 72.65 79.83 157.30 163.90 178.73 55.33 57.20 60.12
G - 2 - 320 19.97 36.93 39.52 85.47 89.24 93.52 27.17 19.11 30.83
G - 32 - 320 425.49 537.62 690.44 418.83 439.19 579.79 203.92 227.89 273.21
8 Conclusions and final comments
Both exact and heuristic algorithms were proposed for solving the MMR-P problem, an NP-hard
combinatorial optimization problem with uncertainty. The problem has been used as an effective way
to formulate a version of the well-known shortest path problem in a network when the arc weights
are not completely known.
A B&C exact algorithm has been proposed here for solving MMR-P. A broad set of instances
from telecommunication networks, the Layered instances, whose sizes range from 100 to 10 000 nodes,
was analyzed. The algorithm has proven to outperform another traditional exact approach, based
on a MILP formulation and implemented with the CPLEX solver, when applied to the set of Layered
instances. Additionally, a class of Layered networks with a special structure was investigated because
exact algorithms have great difficulty finding their exact solutions. For these instances, MILP outperformed
the B&C approach. However, the MILP approach loses efficiency as the instance size
grows.
Another class of test instances, the Grid instances, which resemble road networks, was introduced
for the problem in our research. For these networks, the MILP approach outperformed the B&C approach
but was unable to solve instances with more than 5 000 nodes.
A new and sophisticated neighborhood was designed for MMR-P, and Local Search and Simulated
Annealing algorithms based on this neighborhood were proposed. These heuristics were able to
outperform a traditional basic heuristic, HMU, an ACO metaheuristic, and another SA approach
using the k-opt neighborhood, when tested on the sets of instances considered. More
importantly, the Simulated Annealing algorithm was able to obtain feasible solutions of a quality similar
to that of the solutions found by the two developed exact algorithms for the Grid instances. For
larger Grid instances, both exact algorithms generate larger gaps or are unable to obtain feasible
solutions in reasonable times. In this context, Simulated Annealing was able to find good feasible
solutions in relatively short times. Since the SP problem and its variants have many important
applications in several fields, the study of new efficient heuristics for large instances is necessary.
Future research should consider exploiting the novel neighborhood by applying it to different MMR
problems.
9 Acknowledgements
Alfredo Candia-Véjar was supported by CONICYT, FONDECYT project N° 1121095.
References
[1] B. Adenso-Diaz and M. Laguna. Fine-tuning of algorithms using fractional experimental designs and local search. Operations Research, 54(1):99–114, 2006.
[2] R. Ahuja, T. Magnanti, and J. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, Upper Saddle River, NJ, 1993.
[3] R. K. Ahuja, Ö. Ergun, J. Orlin, and A. Punnen. A survey of very large-scale neighborhood search techniques. Discrete Applied Mathematics, 123(1):75–102, 2002.
[4] H. Aissi, C. Bazgan, and D. Vanderpooten. Min-max and min-max regret versions of combinatorial optimization problems: A survey. European Journal of Operational Research, 197(2):427–438, Sept. 2009.
[5] I. Averbakh and V. Lebedev. Interval data minmax regret network optimization problems. Discrete Applied Mathematics, 138(3):289–301, 2004.
[6] D. Bertsimas and J. Tsitsiklis. Simulated annealing. Statistical Science, 8(1):10–15, 1993.
[7] M. Birattari and J. Kacprzyk. Tuning Metaheuristics: A Machine Learning Perspective, volume 197. Springer, 2009.
[8] A. Candia-Véjar, E. Álvarez-Miranda, and N. Maculan. Minmax regret combinatorial optimization problems: an algorithmic perspective. RAIRO Operations Research, 45(2):101–129, 2011.
[9] N. Chao and Y. Fengqi. Adaptive robust optimization with minimax regret criterion: Multiobjective optimization framework and computational algorithm for planning and scheduling under uncertainty. Computers and Chemical Engineering, 108. doi: https://doi.org/10.1016/j.compchemeng.2017.09.026.
[10] A. Chassein and M. Goerigk. A new bound for the midpoint solution in minmax regret optimization with an application to the robust shortest path problem. European Journal of Operational Research, 244(3):739–747, 2015.
[11] A. Coco, J. Júnior, T. Noronha, and A. Santos. An integer linear programming formulation and heuristics for the minmax relative regret robust shortest path problem. Journal of Global Optimization, 60(2):265–287, 2014.
[12] E. Conde and A. Candia. Minimax regret spanning arborescences under uncertain costs. European Journal of Operational Research, 182(2):561–577, Oct. 2007.
[13] S. Coy, B. Golden, G. Runger, and E. Wasil. Using experimental design to find effective parameter settings for heuristics. Journal of Heuristics, 7(1):77–97, 2001.
[14] E. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1):269–271, Dec. 1959.
[15] M. Ehrgott, J. Ide, and A. Schöbel. Minmax robustness for multi-objective optimization problems. European Journal of Operational Research, 239(1):17–31, 2014.
[16] B. Escoffier, J. Monnot, and O. Spanjaard. Some tractable instances of interval data minmax regret problems: bounded distance from triviality (short version). In 34th International Conference on Current Trends in Theory and Practice of Computer Science, volume 4910 of Lecture Notes in Computer Science, pages 280–291, Nový Smokovec, Slovakia, Jan. 2008. Springer-Verlag.
[17] Y. Gao. Shortest path problem with uncertain arc lengths. Computers and Mathematics with Applications, 62(6):2591–2600, 2011.
[18] H. Gilbert and O. Spanjaard. A double oracle approach to minmax regret optimization problems with interval data. European Journal of Operational Research, 262:929–943, 2017.
[19] W. Guerrero, N. Velasco, C. Prodhon, and C. Amaya. On the generalized elementary shortest path problem: A heuristic approach. Electronic Notes in Discrete Mathematics, 41:503–510, 2013.
[20] T. Hasuike. Robust shortest path problem based on a confidence interval in fuzzy bicriteria decision making. Information Sciences, 221:520–533, 2013.
[21] J. Kang. The minmax regret shortest path problem with interval arc lengths. International Journal of Control and Automation, 6(5):171–180, 2013.
[22] O. Karasan, M. Pinar, and H. Yaman. The robust shortest path problem with interval data. Technical report, Bilkent University, 2001.
[23] A. Kasperski. Discrete Optimization with Interval Data, volume 228 of Studies in Fuzziness and Soft Computing. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.
[24] A. Kasperski and P. Zieliński. An approximation algorithm for interval data minmax regret combinatorial optimization problems. Information Processing Letters, 97(5):177–180, 2006.
[25] A. Kasperski, M. Makuchowski, and P. Zieliński. A tabu search algorithm for the minmax regret minimum spanning tree problem with interval data. Journal of Heuristics, 18(4):593–625, 2012.
[26] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
[27] P. Kouvelis and G. Yu. Robust Discrete Optimization and its Applications. Kluwer Academic Publishers, 1997.
[28] L. Lin and M. Gen. Priority-based genetic algorithm for shortest path routing problem in OSPF. In M. Gen, D. Green, O. Katai, B. McKay, A. Namatame, R. Sarker, and B.-T. Zhang, editors, Intelligent and Evolutionary Systems, volume 187 of Studies in Computational Intelligence, pages 91–103. Springer Berlin Heidelberg, 2009.
[29] Y. Marinakis, A. Migdalas, and A. Sifaleras. A hybrid particle swarm optimization–variable neighborhood search algorithm for constrained shortest path problems. European Journal of Operational Research, 261(3):819–834, 2017.
[30] R. Montemanni. A Benders decomposition approach for the robust spanning tree problem with interval data. European Journal of Operational Research, 174(3):1479–1490, Nov. 2006.
[31] R. Montemanni and L. Gambardella. An exact algorithm for the robust shortest path problem with interval data. Computers & Operations Research, 31(10):1667–1680, Sept. 2004.
[32] R. Montemanni and L. Gambardella. The robust shortest path problem with interval data via Benders decomposition. 4OR, 3(4):315–328, Dec. 2005.
[33] R. Montemanni, L. Gambardella, and A. Donati. A branch and bound algorithm for the robust shortest path problem with interval data. Operations Research Letters, 32(3):225–232, 2004.
[34] R. Montemanni, J. Barta, M. Mastrolilli, and L. Gambardella. The Robust Traveling Salesman Problem with Interval Data. Transportation Science, 41(3):366–381, Aug. 2007.
[35] Y. Nikulin. Simulated annealing algorithm for the robust spanning tree problem. Journal of Heuristics, 14(4):391–402, 2008.
[36] S. Okada and M. Gen. Fuzzy shortest path problem. Computers and Industrial Engineering, 27(1-4):465–468, 1994.
[37] L. Paquete, J. Santos, and D. Vaz. Efficient paths by local search. In Proceedings of the 7th ALIO/EURO Workshop (ALIO-EURO 2011), Porto, page 243, 2011.
[38] M. Pascoal and M. Resende. The minmax regret robust shortest path problem in a finite multi-scenario model. Applied Mathematics and Computation, 241:88–111, 2014.
[39] J. Pereira and I. Averbakh. Exact and heuristic algorithms for the interval data robust assignment problem. Computers & Operations Research, 38(8):1153–1163, Aug. 2011.
[40] J. Pereira and I. Averbakh. The robust set covering problem with interval data. Annals of Operations Research, 207(1):217–235, 2013.
[41] F. Pérez, C. Astudillo, M. Bardeen, and A. Candia-Véjar. A simulated annealing approach for the minmax regret path problem. In Proceedings of the Congresso Latino Americano de Investigación Operativa (CLAIO)–Simpósio Brasileiro de Pesquisa Operacional (SBPO), 2012.
[42] F. Perez-Galarce, E. Álvarez-Miranda, A. Candia-Véjar, and P. Toth. On exact solutions for the minmax regret spanning tree problem. Computers & Operations Research, 47:114–122, 2014.
[43] T. Pinto, C. Alves, and J. de Carvalho. Variable neighborhood search for the elementary shortest path problem with loading constraints. In International Conference on Computational Science and Its Applications, pages 474–489. Springer, 2015.
[44] A. Raith, M. Schmidt, A. Schöbel, and L. Thom. Extensions of labeling algorithms for multi-objective uncertain shortest path problems. Networks, (in press). doi: 10.1002/net.21815.
[45] A. Raith, M. Schmidt, A. Schöbel, and L. Thom. Multi-objective minmax robust combinatorial optimization with cardinality-constrained uncertainty. European Journal of Operational Research, 267(2):628–642, 2018.
[46] G. Yu and J. Yang. On the robust shortest path problem. Computers & Operations Research, 25(6):457–468, 1998.
[47] P. Zieliński. The computational complexity of the relative robust shortest path problem with interval data. European Journal of Operational Research, 158(3):570–576, 2004.