This document proposes a Tabu search heuristic for solving the generalized assignment problem (GAP), which is an NP-hard combinatorial optimization problem. The algorithm uses a relaxed formulation of the GAP that allows for infeasible solutions by including a penalty term for capacity violations. It employs simple neighborhood structures and dynamically adjusts the penalty weighting over time using both recent and medium-term memory. Computational experiments show that the algorithm provides good quality solutions efficiently compared to other heuristic methods for the GAP.
Feature selection in high-dimensional datasets is considered a complex and time-consuming problem. Parallel Evolutionary Algorithms (PEAs) can be used to enhance classification accuracy and reduce execution time. In this paper, we review the most recent works that apply PEAs to feature selection in large datasets. We classify the algorithms in these papers into four main classes: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Scatter Search (SS), and Ant Colony Optimization (ACO). Accuracy is adopted as the measure for comparing the efficiency of these PEAs. Parallel Genetic Algorithms (PGAs) prove to be the most suitable for feature selection in large datasets, since they achieve the highest accuracy. On the other hand, we found that parallel ACO is time-consuming and less accurate compared with the other PEAs.
The document proposes a hybrid algorithm combining a genetic algorithm and cuckoo search optimization to solve job shop scheduling problems. It aims to minimize makespan (the completion time of all jobs) by scheduling jobs on machines. The genetic algorithm is used to explore the search space but can get trapped in local optima. Cuckoo search optimization performs local search faster than the genetic algorithm and helps avoid local optima. Experimental results on benchmark problems show the hybrid algorithm yields better solutions in terms of makespan and runtime than genetic algorithm and ant colony optimization algorithms.
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization (eArtius, Inc.)
The Hybrid Multi-Gradient Explorer (HMGE) algorithm for global multi-objective optimization of objective functions over a multi-dimensional domain is presented. The proposed hybrid algorithm relies on genetic variation operators for creating new solutions, but in addition to a standard random mutation operator, HMGE uses a gradient mutation operator, which improves convergence. Thus, random mutation helps find the global Pareto frontier, and gradient mutation improves convergence to that frontier. In this way the HMGE algorithm combines the advantages of both gradient-based and GA-based optimization techniques: it is as fast as the pure gradient-based MGE algorithm, yet able to find the global Pareto frontier, similar to genetic algorithms (GA). HMGE employs the Dynamically Dimensioned Response Surface Method (DDRSM) for calculating gradients. DDRSM dynamically recognizes the most significant design variables and builds local approximations based only on those variables. This allows gradients to be estimated at the cost of 4-5 model evaluations without significant loss of accuracy. As a result, HMGE efficiently optimizes highly non-linear models with dozens or hundreds of design variables and with multiple Pareto fronts. HMGE is 2-10 times more efficient than the most advanced commercial GAs.
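The idea of cheaply estimating gradients along only the most significant variables can be sketched as a finite-difference scheme restricted to a chosen index set. This is an illustrative stand-in, not eArtius's actual DDRSM implementation; the function names and the significance-selection step are assumptions:

```python
def sparse_gradient(f, x, significant, h=1e-6):
    """Estimate partial derivatives of f at x by forward differences,
    but only along the indices in `significant`; all other gradient
    components are treated as zero.
    Costs len(significant) + 1 evaluations of f."""
    base = f(x)
    grad = [0.0] * len(x)
    for i in significant:
        xp = list(x)
        xp[i] += h
        grad[i] = (f(xp) - base) / h
    return grad

# Example: f depends strongly on x0 and x1, negligibly on the rest.
f = lambda x: x[0] ** 2 + 3 * x[1] + 1e-9 * sum(x[2:])
g = sparse_gradient(f, [1.0, 2.0, 0.5, 0.5], significant=[0, 1])
# g[0] ≈ 2.0 (derivative of x0**2 at 1), g[1] ≈ 3.0, g[2] = g[3] = 0.0
```

With two significant variables this takes 3 model evaluations regardless of the total dimension, which is the point of the 4-5-evaluation claim above.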
A MULTI-POPULATION BASED FROG-MEMETIC ALGORITHM FOR JOB SHOP SCHEDULING PROBLEM (acijjournal)
The Job Shop Scheduling Problem (JSSP) is a well-known practical planning problem in the manufacturing sector. We consider the JSSP with the objective of minimizing makespan. In this paper, we develop a three-stage hybrid approach called JSFMA to solve the JSSP. In JSFMA, using a method similar to the Shuffled Frog Leaping algorithm, we divide the population into several sub-populations and then solve the problem using a Memetic algorithm. The proposed approach has been compared with other algorithms for Job Shop Scheduling and evaluated, with satisfactory results, on a set of JSSP instances derived from classical Job Shop Scheduling benchmarks. We solved 20 benchmark problems from Lawrence's datasets and compared the results obtained with those of algorithms established in the literature. The experimental results show that JSFMA attains the best-known makespan in 17 out of 20 problems.
The Potential Role of AI in the Minimisation and Mitigation of Project Delay (Pieter Rautenbach)
Artificial intelligence (AI) can have wide-reaching applications within the construction industry; however, this set of technologies is currently under-exploited in practice. This paper considers the role that the application of AI can take in optimising the efficiency of project execution, and how this can potentially reduce project duration and minimise and mitigate delay on projects.
Performance Comparison of Machine Learning Algorithms (Dinusha Dilanka)
This paper compares the performance of two classification algorithms. It is useful to differentiate algorithms based on computational performance rather than classification accuracy alone: although classification accuracy between the algorithms may be similar, computational performance can differ significantly, and this can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, namely K-Nearest Neighbor classification and Logistic Regression. A large dataset of 7981 data points and 112 features is considered, and the performance of the above-mentioned algorithms is examined. The processing time and accuracy of the different machine learning techniques are estimated on the collected dataset, using 60% of the data for training and the remaining 40% for testing. The paper is organized as follows. Section I contains the introduction and background analysis of the research, and Section II the problem statement. Section III briefly describes our application, the data analysis process, the testing environment, and the methodology of our analysis. Section IV comprises the results of the two algorithms. Finally, the paper concludes with a discussion of future research directions that would eliminate the problems in the current research methodology.
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM (ijscai)
This article presents a promising new gradient-based backpropagation algorithm for multi-layer feedforward networks. The method requires no manual selection of global hyperparameters and is capable of dynamic local adaptations using only first-order information at a low computational cost. Its semi-stochastic nature makes it fit for mini-batch training and robust to different architecture choices and data distributions. Experimental evidence shows that the proposed algorithm improves training in terms of both convergence rate and speed compared with other well-known techniques.
A review of automatic differentiation and its efficient implementations (ssuserfa7e73)
Automatic differentiation is a powerful tool for automatically calculating derivatives of mathematical functions and algorithms. It works by expressing the target function as a sequence of elementary operations and then applying the chain rule to differentiate each operation. This can be done using either forward or reverse mode. Forward mode calculates how changes in inputs propagate through the function to influence the outputs, while reverse mode calculates how changes in outputs backpropagate to influence the inputs. Both modes require performing the computation twice: once for the forward pass and once for the derivative pass. Careful implementation is required to make automatic differentiation efficient in terms of speed and memory usage.
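Forward mode, as described above, can be illustrated with dual numbers: each value carries its derivative through every elementary operation, so the chain rule is applied mechanically. This is a minimal sketch (only `+` and `*` are implemented), not code from the review itself:

```python
class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u * v)' = u' * v + u * v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def derivative(f, x):
    """Derivative of f at x via one forward pass with a seeded dual number."""
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + x * 3, 2.0))  # → 7.0
```

Seeding `dot = 1.0` on the input corresponds to asking how a unit change in that input propagates to the output, exactly the forward-mode picture above.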
Natural convection in a differentially heated cavity plays a major role in understanding the flow physics and heat transfer aspects of various applications. Parameters such as Rayleigh number, Prandtl number, aspect ratio, inclination angle and surface emissivity are considered to have either individual or grouped effects on natural convection in an enclosed cavity. In spite of this, simultaneous study of these parameters over a wide range is rare. Developing correlations that capture the effect of a large number of parameters over a wide range is challenging: the number of simulations required to generate correlations for even a small number of parameters is extremely large, and to date there is no streamlined procedure to optimize the number of simulations required for correlation development. Therefore, the present study aims to optimize the number of simulations using the Taguchi technique and then generate correlations by employing multiple-variable regression analysis. It is observed that, for a wide range of parameters, the proposed CFD-Taguchi-Regression approach drastically reduces the total number of simulations needed for correlation generation.
A Genetic Algorithm on Optimization Test Functions (IJMER Journal)
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on applications of GAs rather than their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks for testing the quality of an optimization procedure in reaching a global optimum. We show that the method converges quickly to the global optima, and that the optimal values obtained for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080 respectively.
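The benchmark functions named above have standard closed forms; two of them, with their well-known minima, can be sketched in a few lines (independent of the paper's GA implementation):

```python
import math

def rastrigin(x):
    """Rastrigin function: highly multi-modal; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    """Rosenbrock function: a narrow curved valley; global minimum 0 at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

print(rastrigin([0.0, 0.0]))   # → 0.0
print(rosenbrock([1.0, 1.0]))  # → 0.0
```

An optimizer is judged by how closely its best solution approaches these known optima, which is how the abstract's reported values are to be read.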
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT... (IAEME Publication)
Close-range photogrammetry network design refers to the process of placing a set of cameras in order to achieve photogrammetric tasks. The main objective of this paper is to find the best locations for two or three camera stations. Genetic algorithm optimization and Particle Swarm Optimization are developed to determine the optimal camera stations for computing the three-dimensional coordinates. In this research, a mathematical model representing genetic algorithm optimization and Particle Swarm Optimization for the close-range photogrammetry network is developed. This paper also gives the sequence of field operations and computational steps for this task. A test field is included to reinforce the theoretical aspects.
Comparison between the genetic algorithms optimization and particle swarm opt... (IAEME Publication)
The document compares the genetic algorithms optimization and particle swarm optimization methods for designing close range photogrammetry networks. It presents the genetic algorithm and particle swarm optimization as two popular meta-heuristic algorithms inspired by natural evolution and collective animal behavior, respectively. The document develops mathematical models representing the genetic algorithm and particle swarm optimization for close range photogrammetry network design and evaluates them in a test field to reinforce the theoretical aspects.
A Genetic Algorithm Based Approach for Solving Optimal Power Flow Problem (Shubhashis Shil)
This document describes a study that uses a genetic algorithm to solve the optimal power flow problem. The optimal power flow problem aims to minimize operating costs in a power system by optimizing generator outputs while meeting demand and constraints. The study develops a genetic algorithm approach and compares its results and computation time to traditional derivative-based methods on some example power flow cases. It finds that the genetic algorithm approach produces nearly equivalent results to traditional methods, but requires significantly less computation time to solve the optimal power flow problem, especially as more constraints are added.
This document summarizes the application of computational intelligence techniques like genetic algorithms and particle swarm optimization for solving economic load dispatch problems. It first applies a real-coded genetic algorithm to minimize generation costs for a 6-generator test system with continuous fuel cost equations, showing superiority over quadratic programming. It then uses particle swarm optimization to minimize costs for a 10-generator system with each generator having discontinuous fuel options, showing better results than other published methods. The document provides background on economic load dispatch problems and optimization techniques like quadratic programming, genetic algorithms, and particle swarm optimization.
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA (ijejournal)
A well-constructed classification model depends heavily on the input feature subsets of a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be even worse with medical datasets. The main aim of feature selection, as a pre-processing task, is to eliminate these features and select the most effective ones. In the literature, metaheuristic algorithms perform well at finding optimal feature subsets. In this paper, two binary metaheuristic algorithms, named the S-shaped binary Sine Cosine Algorithm (SBSCA) and the V-shaped binary Sine Cosine Algorithm (VBSCA), are proposed for feature selection from medical data. In these algorithms the search space remains continuous, while a binary position vector is generated for each solution by one of two transfer functions, S-shaped or V-shaped. The proposed algorithms are compared with four recent binary optimization algorithms on five medical datasets from the UCI repository. The experimental results confirm that both bSCA variants enhance classification accuracy on these medical datasets compared to the four other algorithms.
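The S-shaped and V-shaped transfer functions mentioned above are commonly taken to be a sigmoid and |tanh| respectively; a sketch of how a continuous position is turned into a binary feature mask follows. The exact functions and thresholding rule in the paper may differ, so treat this as one common convention:

```python
import math
import random

def s_shaped(x):
    """S-shaped transfer: sigmoid mapping a continuous component to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """V-shaped transfer: |tanh(x)|, symmetric around zero."""
    return abs(math.tanh(x))

def binarize(position, transfer, rng=random.random):
    """Binary position vector: bit j is 1 when a uniform draw falls
    below the transfer value of component j."""
    return [1 if rng() < transfer(xj) else 0 for xj in position]

# At x = 0 the S-shaped value is 0.5 (a coin flip), the V-shaped value is 0.
print(s_shaped(0.0), v_shaped(0.0))  # → 0.5 0.0
```

Each bit of the resulting vector selects or discards one feature; classification accuracy on the selected subset then serves as the fitness.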
This document proposes and evaluates a new metaheuristic optimization algorithm called Current Search (CS) and applies it to optimize PID controller parameters for DC motor speed control. The CS is inspired by electric current flow and aims to balance exploration and exploitation. It outperforms genetic algorithm, particle swarm optimization, and adaptive tabu search on benchmark optimization problems, finding better solutions faster. When applied to optimize a PID controller for DC motor speed control, the CS successfully controlled motor speed.
A Non-Revisiting Genetic Algorithm for Optimizing Numeric Multi-Dimensional F... (ijcsa)
The document describes a non-revisiting genetic algorithm with adaptive mutation for optimizing multi-dimensional numeric functions. The proposed algorithm avoids revisiting previously evaluated solutions by replacing any duplicates with a mutated version of either the best or random individual. The mutation rate adapts over generations, starting with more exploration and ending with more exploitation. Experimental results on 9 benchmark functions in 2 and 4 dimensions show the proposed algorithm achieves better best and average fitness than a standard genetic algorithm, with improvements ranging from 10-98% depending on the function.
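The non-revisiting mechanism described above can be sketched as a visited-set check that replaces any duplicate with a mutated copy. This is illustrative only; the paper's adaptive mutation schedule and the choice between mutating the best or a random individual are not reproduced here:

```python
import random

def ensure_unseen(candidate, visited, mutate, max_tries=100):
    """Return `candidate` if it has not been evaluated before; otherwise
    keep mutating it until an unseen solution appears (or give up).
    Solutions are tracked as tuples so they can live in a set."""
    x = tuple(candidate)
    tries = 0
    while x in visited and tries < max_tries:
        x = tuple(mutate(list(x)))
        tries += 1
    visited.add(x)
    return list(x)

random.seed(0)
mutate = lambda x: [xi + random.uniform(-0.1, 0.1) for xi in x]
visited = {(1.0, 2.0)}                     # already-evaluated solution
child = ensure_unseen([1.0, 2.0], visited, mutate)
# The duplicate (1.0, 2.0) was replaced by a nearby mutated point.
```

Because no point is ever evaluated twice, every fitness call explores new ground, which is where the reported improvement over a standard GA comes from.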
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti... (Xin-She Yang)
This document discusses applying an eagle strategy inspired by nature to engineering optimization problems. The eagle strategy uses a two-stage approach combining global exploration with local exploitation. Global exploration uses Lévy flights for random walks to diversify solutions. Promising solutions are then locally optimized using an efficient local search algorithm such as particle swarm optimization. The document analyzes random walk models like Lévy flights and how they can maintain diversity in swarm intelligence algorithms. It applies the eagle strategy to four engineering design problems, finding that Lévy flights can effectively reduce computational effort.
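Lévy flights are heavy-tailed random steps; one common way to draw them is Mantegna's algorithm. The sketch below assumes the frequently used exponent beta = 1.5 and is not tied to the paper's parameter choices:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma^2) and v ~ N(0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(1)
steps = [levy_step() for _ in range(1000)]
# Most steps are small, but the heavy tail occasionally produces a very
# large jump -- the mix of local moves and long hops that diversifies search.
```

The occasional long jumps are what let the exploration stage escape regions a Gaussian walk would stay trapped in.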
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ... (ijcseit)
Many studies have been done in the area of Wireless Sensor Networks (WSNs) in recent years. In these networks, some of the key objectives that need to be satisfied are area coverage, the number of active sensors, and the energy consumed by nodes. In this paper, we propose an NSGA-II based multi-objective algorithm for optimizing all of these objectives simultaneously. The efficiency of our algorithm is demonstrated in the simulation results: it finds the optimal balance point among the maximum coverage rate, the least energy consumption, and the minimum number of active nodes while maintaining the connectivity of the network.
MIXED 0−1 GOAL PROGRAMMING APPROACH TO INTERVAL-VALUED BILEVEL PROGRAMMING PR... (cscpconf)
This document presents a mixed 0-1 goal programming approach to solve interval-valued fractional bilevel programming problems using a bio-inspired computational algorithm. It formulates the problem using goal programming to minimize regret intervals for target intervals of achieving goals. A genetic algorithm is used to determine target intervals and optimal decisions by distributing decision powers hierarchically. It presents the problem formulation, design of the genetic algorithm using fitter codon selection and two-point crossover, and formulation of interval-valued goals by determining best and worst solutions for objectives of decision makers at different levels using the genetic algorithm.
This document discusses using genetic algorithms to solve a multi-objective traveling salesman problem (MOTSP) that considers both cost and CO2 emissions. It provides background on genetic algorithms and the traveling salesman problem. The study aims to identify how tuning genetic operators and parameters can improve the efficiency of genetic algorithms in solving the MOTSP with CO2 emissions. Empirical results show that performance improves with some combinations of parameters and operators.
Advanced Computational Intelligence: An International Journal (ACII) (aciijournal)
Hybrid Genetic Algorithm for Bi-Criteria Multiprocessor Task Scheduling with ... (aciijournal)
The present work considers minimization of a bi-criteria function, the weighted sum of makespan and total completion time, for a multiprocessor task scheduling problem. The genetic algorithm is the most appealing choice for various NP-hard problems, including multiprocessor task scheduling. Its performance depends on the quality of the initial solution, since a good initial solution provides better results. Hybrid genetic algorithms (HGAs) based on different list scheduling heuristics have been proposed and developed for the problem. A computational analysis using a defined performance index has been conducted on standard task scheduling problems to evaluate the performance of the proposed HGAs. The analysis shows that ETF-GA is quite efficient and the best among the heuristic-based hybrid genetic algorithms in terms of solution quality, especially for large and complex problems.
A Real Coded Genetic Algorithm For Solving Integer And Mixed Integer Optimiza... (Jim Jimenez)
This document describes a real coded genetic algorithm called MI-LXPM for solving integer and mixed integer constrained optimization problems. MI-LXPM modifies and extends an existing real coded genetic algorithm (LXPM) to handle integer restrictions on decision variables. It incorporates a truncation procedure to satisfy integer restrictions and a penalty approach for handling constraints. The performance of MI-LXPM is tested on 20 problems and compared to other algorithms, showing it outperforms them in most cases.
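A truncation step of the kind described above is commonly implemented by keeping an already-integral gene and otherwise choosing floor(x) or floor(x)+1 with equal probability. This sketch follows that common rule, which may differ in detail from MI-LXPM's exact procedure:

```python
import math
import random

def truncate_to_int(x, rng=random.random):
    """Map a real-valued gene to an integer: keep it if already integral,
    otherwise pick floor(x) or floor(x) + 1 with equal probability."""
    if x == int(x):
        return int(x)
    return math.floor(x) + (1 if rng() < 0.5 else 0)

random.seed(42)
vals = [truncate_to_int(3.7) for _ in range(5)]
# Every result is either 3 or 4; 5.0 stays exactly 5.
```

The randomness keeps the truncation unbiased across the population instead of systematically rounding down, so integer-restricted genes still explore both neighbouring values.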
Essay On Teachers Day (2023) In English Short, Simple BestSandra Long
The document provides instructions for submitting a request to an online writing service called HelpWriting.net. It outlines a 5-step process: 1) Create an account with a password and email. 2) Complete an order form with instructions, sources, and deadline. 3) Review bids from writers and select one. 4) Review the completed paper and authorize payment. 5) Request revisions until satisfied with the work. The service promises original, high-quality content and refunds for plagiarized work.
10 Best Printable Handwriting Paper Template PDF For Free At PrintableeSandra Long
The document discusses differences between schools in America and India. It notes that American schools have earlier start times, later end times, and are divided into elementary, middle, and high school sections. American schools also place more emphasis on extracurricular activities and sports compared to Indian schools. Key differences include dress codes, lunch times, qualifications for teachers, and approaches to learning that are more hands-on in America versus memorization-focused in India.
Similar to A Tabu Search Heuristic For The Generalized Assignment Problem
A review of automatic differentiationand its efficient implementationssuserfa7e73
Automatic differentiation is a powerful tool for automatically calculating derivatives of mathematical functions and algorithms. It works by expressing the target function as a sequence of elementary operations and then applying the chain rule to differentiate each operation. This can be done using either forward or reverse mode. Forward mode calculates how changes in inputs propagate through the function to influence the outputs, while reverse mode calculates how changes in outputs backpropagate to influence the inputs. Both modes require performing the computation twice - once for the forward pass and once for the derivative pass. Careful implementation is required to make automatic differentiation efficient in terms of speed and memory usage.
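The forward-mode idea summarized above can be sketched with dual numbers: each value carries its derivative alongside it, and every elementary operation applies the chain rule as it executes. This is an illustrative sketch, not code from the summarized review.

```python
# Forward-mode automatic differentiation via dual numbers (illustrative).
# Each Dual holds a primal value and its derivative w.r.t. a chosen input;
# the arithmetic operators apply the chain rule operation by operation.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val  # primal value
        self.dot = dot  # derivative w.r.t. the seeded input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u v)' = u' v + u v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x  # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

x = Dual(3.0, 1.0)  # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)  # 33.0 29.0
```

Seeding `dot = 1.0` on the input propagates df/dx forward through the computation, so the function value and its derivative come out of a single pass.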
Natural convection in a differentially heated cavity plays a major role in understanding the flow physics and heat transfer aspects of various applications. Parameters such as Rayleigh number, Prandtl number, aspect ratio, inclination angle, and surface emissivity have either individual or grouped effects on natural convection in an enclosed cavity. Despite this, simultaneous studies of these parameters over a wide range are rare. Developing correlations that capture the effect of a large number of parameters over wide ranges is challenging, because the number of simulations required to generate correlations for even a small number of parameters is extremely large, and to date there is no streamlined procedure to optimize the number of simulations required for correlation development. The present study therefore aims to optimize the number of simulations by using the Taguchi technique and then generate correlations by employing multiple-variable regression analysis. It is observed that, for a wide range of parameters, the proposed CFD-Taguchi-Regression approach drastically reduces the total number of simulations needed for correlation generation.
A Genetic Algorithm on Optimization Test FunctionsIJMERJOURNAL
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted to be good performers among metaheuristic algorithms, most works have concentrated on the application of GAs rather than their theoretical justification. In this paper, we examine and justify the suitability of Genetic Algorithms for solving complex, multi-variable and multi-modal optimization problems. To achieve this, a simple Genetic Algorithm was used to solve four standard complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin and Shubert functions. These functions are benchmarks to test the quality of an optimization procedure towards a global optimum. We show that the method has a quick convergence to the global optima and that the optimal values for the Rosenbrock, Rastrigin, Schwefel and Shubert functions are zero (0), zero (0), -418.9829 and -14.5080, respectively.
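The kind of benchmark experiment summarized above can be illustrated with a minimal real-coded GA minimizing the 2-D Rastrigin function, whose global minimum is 0 at the origin. This is a generic sketch, not the algorithm from the summarized paper; population size, tournament size, and mutation scale are arbitrary choices.

```python
# Minimal real-coded GA on the 2-D Rastrigin benchmark (illustrative sketch).
import math
import random

random.seed(0)

def rastrigin(x):
    # global minimum 0 at x = (0, ..., 0)
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def ga(dim=2, pop_size=40, gens=200, bound=5.12):
    pop = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents (tournament size 3)
            p1 = min(random.sample(pop, 3), key=rastrigin)
            p2 = min(random.sample(pop, 3), key=rastrigin)
            # arithmetic crossover plus small Gaussian mutation
            w = random.random()
            child = [w * a + (1 - w) * b + random.gauss(0, 0.1)
                     for a, b in zip(p1, p2)]
            child = [min(max(c, -bound), bound) for c in child]
            nxt.append(child)
        # elitism: carry the best individual into the next generation
        nxt[0] = min(pop, key=rastrigin)
        pop = nxt
    return min(pop, key=rastrigin)

best = ga()
print(best, rastrigin(best))
```

With elitism the best fitness is non-increasing over generations, so the final value should be close to the global optimum on this small instance.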
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT...IAEME Publication
Close range photogrammetry network design is referred to the process of placing a set of
cameras in order to achieve photogrammetric tasks. The main objective of this paper is tried to find
the best location of two/three camera stations. The genetic algorithm optimization and Particle
Swarm Optimization are developed to determine the optimal camera stations for computing the three
dimensional coordinates. In this research, a mathematical model representing the genetic algorithm
optimization and Particle Swarm Optimization for the close range photogrammetry network is
developed. This paper gives also the sequence of the field operations and computational steps for this
task. A test field is included to reinforce the theoretical aspects.
Comparison between the genetic algorithms optimization and particle swarm opt...IAEME Publication
The document compares the genetic algorithms optimization and particle swarm optimization methods for designing close range photogrammetry networks. It presents the genetic algorithm and particle swarm optimization as two popular meta-heuristic algorithms inspired by natural evolution and collective animal behavior, respectively. The document develops mathematical models representing the genetic algorithm and particle swarm optimization for close range photogrammetry network design and evaluates them in a test field to reinforce the theoretical aspects.
A Genetic Algorithm Based Approach for Solving Optimal Power Flow ProblemShubhashis Shil
This document describes a study that uses a genetic algorithm to solve the optimal power flow problem. The optimal power flow problem aims to minimize operating costs in a power system by optimizing generator outputs while meeting demand and constraints. The study develops a genetic algorithm approach and compares its results and computation time to traditional derivative-based methods on some example power flow cases. It finds that the genetic algorithm approach produces nearly equivalent results to traditional methods, but requires significantly less computation time to solve the optimal power flow problem, especially as more constraints are added.
This document summarizes the application of computational intelligence techniques like genetic algorithms and particle swarm optimization for solving economic load dispatch problems. It first applies a real-coded genetic algorithm to minimize generation costs for a 6-generator test system with continuous fuel cost equations, showing superiority over quadratic programming. It then uses particle swarm optimization to minimize costs for a 10-generator system with each generator having discontinuous fuel options, showing better results than other published methods. The document provides background on economic load dispatch problems and optimization techniques like quadratic programming, genetic algorithms, and particle swarm optimization.
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATAijejournal
A well-constructed classification model highly depends on input feature subsets from a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse while dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate these features and select the most effective ones. In the literature, metaheuristic algorithms show a successful performance to find optimal feature subsets. In this paper, two binary metaheuristic algorithms named S-shaped binary Sine Cosine Algorithm (SBSCA) and V-shaped binary Sine Cosine Algorithm (VBSCA) are proposed for feature selection from the medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated by two transfer functions S-shaped and V-shaped for each solution. The proposed algorithms are compared with four latest binary optimization algorithms over five medical datasets from the UCI repository. The experimental results confirm that using both bSCA variants enhance the accuracy of classification on these medical datasets compared to four other algorithms.
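The two transfer functions mentioned above map a continuous position value to a probability used to decide whether a feature is selected. The exact shapes used in SBSCA/VBSCA may differ; the sketch below uses the common S-shaped (sigmoid) and V-shaped (|tanh|) forms as an assumption.

```python
# Illustrative S-shaped and V-shaped transfer functions for binarizing
# a continuous search position into a feature-selection bit vector.
import math
import random

def s_shaped(x):
    # sigmoid, values in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    # |tanh|, values in [0, 1)
    return abs(math.tanh(x))

def binarize(position, transfer, rng=random.random):
    # keep feature i with probability transfer(position[i]);
    # some V-shaped variants flip the current bit instead of setting it.
    return [1 if rng() < transfer(x) else 0 for x in position]

random.seed(1)
pos = [-2.0, 0.0, 3.5]
print(binarize(pos, s_shaped))
print(binarize(pos, v_shaped))
```

The search itself stays continuous; only the evaluation step passes each solution through a transfer function to obtain the binary feature subset.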
This document proposes and evaluates a new metaheuristic optimization algorithm called Current Search (CS) and applies it to optimize PID controller parameters for DC motor speed control. The CS is inspired by electric current flow and aims to balance exploration and exploitation. It outperforms genetic algorithm, particle swarm optimization, and adaptive tabu search on benchmark optimization problems, finding better solutions faster. When applied to optimize a PID controller for DC motor speed control, the CS successfully controlled motor speed.
A Non-Revisiting Genetic Algorithm for Optimizing Numeric Multi-Dimensional F...ijcsa
The document describes a non-revisiting genetic algorithm with adaptive mutation for optimizing multi-dimensional numeric functions. The proposed algorithm avoids revisiting previously evaluated solutions by replacing any duplicates with a mutated version of either the best or random individual. The mutation rate adapts over generations, starting with more exploration and ending with more exploitation. Experimental results on 9 benchmark functions in 2 and 4 dimensions show the proposed algorithm achieves better best and average fitness than a standard genetic algorithm, with improvements ranging from 10-98% depending on the function.
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...Xin-She Yang
This document discusses applying an eagle strategy inspired by nature to engineering optimization problems. The eagle strategy uses a two-stage approach combining global exploration with local exploitation. Global exploration uses Lévy flights for random walks to diversify solutions. Promising solutions are then locally optimized using an efficient local search algorithm like particle swarm optimization. The document analyzes random walk models like Lévy flights and how they can maintain diversity in swarm intelligence algorithms. It applies the eagle strategy to four engineering design problems, finding Lévy flights can effectively reduce computational efforts.
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...ijcseit
Many studies have been done in the area of Wireless Sensor Networks (WSNs) in recent years. In this kind of network, some of the key objectives that need to be satisfied are area coverage, the number of active sensors, and the energy consumed by nodes. In this paper, we propose an NSGA-II based multi-objective algorithm for optimizing all of these objectives simultaneously. The efficiency of our algorithm is demonstrated in the simulation results, shown as finding the optimal balance point among the maximum coverage rate, the least energy consumption, and the minimum number of active nodes while maintaining the connectivity of the network.
MIXED 0−1 GOAL PROGRAMMING APPROACH TO INTERVAL-VALUED BILEVEL PROGRAMMING PR...cscpconf
This document presents a mixed 0-1 goal programming approach to solve interval-valued fractional bilevel programming problems using a bio-inspired computational algorithm. It formulates the problem using goal programming to minimize regret intervals for target intervals of achieving goals. A genetic algorithm is used to determine target intervals and optimal decisions by distributing decision powers hierarchically. It presents the problem formulation, design of the genetic algorithm using fitter codon selection and two-point crossover, and formulation of interval-valued goals by determining best and worst solutions for objectives of decision makers at different levels using the genetic algorithm.
This document discusses using genetic algorithms to solve a multi-objective traveling salesman problem (MOTSP) that considers both cost and CO2 emissions. It provides background on genetic algorithms and the traveling salesman problem. The study aims to identify how tuning genetic operators and parameters can improve the efficiency of genetic algorithms in solving the MOTSP with CO2 emissions. Empirical results show that performance improves with some combinations of parameters and operators.
Theory and Methodology

A Tabu search heuristic for the generalized assignment problem
Juan A. Díaz ¹, Elena Fernández *
Dpt. d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya, Pau Gargallo 5, 08028 Barcelona, Spain
Received 18 August 1998; accepted 20 March 2000
Abstract
This paper considers the generalized assignment problem (GAP). It is a well-known NP-hard combinatorial optimization problem that is interesting in itself and also appears as a subproblem in other problems of practical importance. A Tabu search heuristic for the GAP is proposed. The algorithm uses recent and medium-term memory to dynamically adjust the weight of the penalty incurred for violating feasibility. The most distinctive features of the proposed algorithm are its simplicity and its flexibility. These two characteristics result in an algorithm that, compared to other well-known heuristic procedures, provides good quality solutions in competitive computational times. Computational experiments have been performed in order to evaluate the behavior of the proposed algorithm. © 2001 Elsevier Science B.V. All rights reserved.
Keywords: Generalized assignment problem; Metaheuristics; Tabu search; Local search
1. Introduction
The generalized assignment problem (GAP) is a well-known NP-hard combinatorial optimization problem. It considers a situation in which n jobs have to be processed by m agents. The agents have a capacity expressed in terms of a resource which is consumed by job processing. The decision maker seeks the minimum cost assignment of jobs to agents such that each job is assigned to exactly one agent subject to the agents' available capacity.

Assignment of jobs to computers in a computer network, storage space allocation, design of communication networks with node capacity constraints, scheduling variable-length television commercials into time slots, etc., are examples of practical applications for the GAP. It also appears as a subproblem in many real-life problems such as vehicle routing, plant location, flexible manufacturing systems and resource scheduling.
In this paper, we propose a Tabu search algorithm for the GAP. Two distinctive features of this algorithm have to be remarked: its simplicity and flexibility. These two characteristics result in an algorithm that provides good quality solutions that are competitive with other well-known heuristic procedures.

European Journal of Operational Research 132 (2001) 22–38
www.elsevier.com/locate/dsw

* Corresponding author. Tel.: +34-93-4016948; fax: +34-93-4015855.
E-mail addresses: jadiaz@eio.upc.es (J.A. Díaz), elena@eio.upc.es (E. Fernández).
¹ Partially supported by grant 110598/110666 from Consejo Nacional de Ciencia y Tecnología (CONACYT), México.
0377-2217/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0377-2217(00)00108-9
The flexibility is obtained from two sources: a strategic oscillation scheme that results from allowing the search to explore the infeasible solutions' space, and an adaptively modified objective function. During the search, the solutions' space is enlarged to permit considering infeasible solutions. This gives the search a higher level of flexibility, and some feasible solutions, that otherwise would not have been considered, can be reached by successively crossing the feasible/infeasible boundary. This strategy has already been used for the GAP by other authors (see [16,25–27]). As explained later on, the difference is that in our approach no solution is qualitatively preferred to others in terms of its structure, so a priori all considered solutions are equally desired. This further increases the degree of flexibility of the procedure.
Our algorithm considers a modified objective function that includes a penalty term associated with infeasible solutions. The weight given to such penalty is dynamically adapted according to the history of the search. A similar idea has been proposed independently by Yagiura et al. [25], where the search parameter is also adjusted automatically according to the result of the previous iteration. However, in our approach the modified objective function plays a central role since it is the element that really takes care of guiding the search. This is achieved by using a combination of recency-based and medium-term memory for adjusting the penalty weight. The combination of these two memory components results in an objective function that is continuously adapting itself to the stage of the search. This is an issue that we believe is interesting since, traditionally, memory has been widely used to explore complex neighborhood structures, whereas it has been used in a very limited fashion to obtain automated mechanisms to modify the objective function that guides the search process.
The adaptability of the proposed algorithm permits not having to resort to complex neighborhood structures. This, in its turn, has an important effect on the overall speed of the procedure. This is particularly true with the neighborhood structures that are explored (the classical shift and swap neighborhoods) and with the long-term memory component considered, which is the same (the assignments' frequency) both for the intensification and diversification phases.

The result is an algorithm where the search strategy is based on (a) exploring large areas of the infeasible solutions' space and (b) dynamically adapting the objective function to the stage of the search.
The proposed algorithm has been tested on different sets of problems publicly available in Beasley OR-library (http://mscmga.ms.ic.ac.uk/jeb/orlib/gapinfo.html) and in Yagiura library (http://www-or.amp.i.kyoto-u.ac.jp/yagiura/gap). It has also been compared to other well-known existing methods proposed by other authors in [7,16,21,25,27].

The results prove that the performance of the algorithm is remarkable since it produces high-quality solutions in small computational times.
This paper is organized as follows: in Section 2 a model for the GAP is given and a relaxation RGAP that will be used throughout the paper is presented. Section 3 reviews solution methods for the GAP found in the literature. The Tabu search heuristic for the GAP is explained in detail in Section 4: neighborhood structures, the neighbor generation mechanism, and the memory functions used are detailed in that section. Section 4 also presents the mechanism for dynamically adjusting the penalty parameter. In Section 5, computational results are presented and our approach is compared to other existing algorithms. Finally, Section 6 includes some conclusions and remarks.
2. The model
Let $I = \{1,\ldots,m\}$ be the set of agents and $J = \{1,\ldots,n\}$ the set of jobs. A standard integer programming formulation for the GAP is the following:

$$(\mathrm{GAP}) \quad \min\ z = \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij}, \qquad (1)$$

s.t.

$$\sum_{i \in I} x_{ij} = 1 \quad \forall j \in J, \qquad (2)$$

$$\sum_{j \in J} a_{ij} x_{ij} \le b_i \quad \forall i \in I, \qquad (3)$$

$$x_{ij} \in \{0,1\} \quad \forall i \in I,\ \forall j \in J, \qquad (4)$$

where $c_{ij}$ is the assignment cost of job $j$ to agent $i$, $a_{ij}$ the resource required for processing job $j$ by agent $i$, and $b_i$ is the available capacity of agent $i$. Decision variables $x_{ij}$ are set to 1 if job $j$ is assigned to agent $i$, 0 otherwise. Constraints (2), together with the integrality conditions on the variables, state that each job is assigned to exactly one agent. Constraints (3) ensure that the resource availabilities of agents are not exceeded.
In our approach, the assignment costs have been expressed in terms of their best assignment:

$$\Delta_{ij} = c_{ij} - c^{\min}_{j},$$

where $c^{\min}_{j} = \min\{c_{ij} : i \in I\}$ is the cost of the best assignment for job $j$. $\Delta_{ij}$ can be interpreted as the relative assignment cost of job $j$ to agent $i$ with respect to its best assignment cost. The use of the $\Delta_{ij}$'s allows fair comparisons for deciding which jobs have to be assigned to a given agent, particularly when the assignment costs have different ranges for the various jobs. Then, the objective function can be rewritten as

$$\min\ z = K + \min \left\{ \sum_{i \in I} \sum_{j \in J} \Delta_{ij} x_{ij} \right\},$$

where $K = \sum_{j \in J} c^{\min}_{j}$ represents a fixed assignment cost which has to be incurred. Since $K$ is constant we can remove it from (1) and consider the following equivalent objective function:

$$\min\ z' = \sum_{i \in I} \sum_{j \in J} \Delta_{ij} x_{ij}. \qquad (1')$$
Throughout the paper, we consider a relaxed formulation for the GAP in which the capacity constraints (3) are dropped and a penalty term is added to the objective function:

$$(\mathrm{RGAP}) \quad \min\ z'' = \sum_{i \in I} \sum_{j \in J} \Delta_{ij} x_{ij} + P, \qquad (1'')$$

$$\sum_{i \in I} x_{ij} = 1 \quad \forall j \in J, \qquad (2)$$

$$x_{ij} \in \{0,1\} \quad \forall j \in J,\ \forall i \in I. \qquad (4)$$

The term $P$ in the objective function evaluates the infeasibility of the solutions to RGAP with respect to constraints (3) of GAP. It takes the form

$$P = q \sum_{i \in I} s_i,$$

where $s_i = \max\{\sum_{j \in J} a_{ij} x_{ij} - b_i,\ 0\}$ measures the amount by which the capacity of agent $i$ is violated by a given solution. The parameter $q$ is a penalty factor.
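To make the relaxed evaluation concrete, the following short Python sketch (our own illustration, not the authors' code; the toy instance and the value q = 2 are assumptions chosen for the example) computes the relative costs $\Delta_{ij}$ and the relaxed objective $z''$:

```python
# Sketch of the relaxed objective z'' of RGAP (our illustration, not the
# authors' code). A solution is a vector pi where pi[j] = agent of job j.

def delta_costs(c):
    """Relative assignment costs: Delta[i][j] = c[i][j] - min_i c[i][j]."""
    m, n = len(c), len(c[0])
    cmin = [min(c[i][j] for i in range(m)) for j in range(n)]
    return [[c[i][j] - cmin[j] for j in range(n)] for i in range(m)]

def relaxed_objective(pi, c, a, b, q):
    """z'' = Delta cost of the assignment + q * total capacity excess."""
    m, n = len(c), len(c[0])
    d = delta_costs(c)
    load = [0] * m
    z = 0.0
    for j, i in enumerate(pi):
        z += d[i][j]
        load[i] += a[i][j]
    penalty = q * sum(max(load[i] - b[i], 0) for i in range(m))
    return z + penalty

# Toy instance (illustrative): 2 agents, 3 jobs.
c = [[4, 2, 5], [3, 6, 4]]   # assignment costs c[i][j]
a = [[3, 3, 3], [3, 3, 3]]   # resource usage a[i][j]
b = [5, 5]                   # agent capacities
pi = [1, 0, 1]               # every job on its cheapest agent, agent 1 overloaded
print(relaxed_objective(pi, c, a, b, q=2.0))  # -> 2.0 (pure penalty term)
```

Here the assignment itself is cost-optimal (all $\Delta$ terms are 0), so the whole value of $z''$ comes from the capacity violation of one unit on agent 1, weighted by q.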
The reasons for considering RGAP are explained next. In the formulation of the GAP, constraints (3) are `difficult' and, usually, they are not easy to fulfill. For this reason these constraints may confine feasible solutions to a fairly narrow region. When exploring the neighbors of a given solution in a heuristic based method, restricting the search to feasible solutions using simple moves tends to generate very few trial solutions, forcing the process to terminate very quickly, often with low quality solutions. On the other hand, allowing some violation of feasibility has two advantages: first, it permits the execution of moves that are less complex than might otherwise be required. Second, it gives the search a higher level of flexibility, and much larger areas of the solutions' space can be explored. The deviation from feasibility can be measured and controlled through a penalty term in the objective function. Violation of feasibility is allowed in several recent works for the GAP, like the Tabu search approach considered in [16], the VDS proposed by Yagiura et al. [27], the VDS with branching search considered in [26] and the ejection chain approach proposed in [25]. However, these references, each one in its own context, follow the same a priori criterion that, in some sense, guides the search that is performed. Namely: among feasible solutions to RGAP, feasible solutions to GAP (those that fulfill constraints (3)) are always preferred. This is an important difference with respect to our approach. We consider all feasible solutions to RGAP equally attractive in terms of their structure. This is further explained in Section 4.2.
3. Related work
In this section, we present a brief review of solution methods found in the literature for the GAP. A comprehensive treatment of these methods can be found in [6] and in [21].

Exact methods for the GAP differ from each other in the way the lower bounds are computed. These bounds are derived from relaxations of the assignment constraints (constraints (2)), the knapsack constraints (constraints (3)) or the integrality conditions on the variables (constraints (4)). See [3,18,20,22–24].

Large-sized instances of the GAP have been tackled by means of approximate methods, since exact methods have only proved to be effective for small-sized instances where the capacity constraints are loose. Different Lagrangean relaxations have been proposed in [2,9,13–15,17]. Other types of heuristic methods focused on providing good quality feasible solutions can be found in the literature for the GAP.
Martello and Toth [19] propose a two-phase heuristic method, MTH. In the first phase an initial solution is constructed using a greedy function to measure the `desirability' of assigning job j to agent i. In the second phase the initial solution is improved using a simple local exchange procedure.

Cattrysse [4] considers a heuristic that incorporates a variable reduction procedure in order to reduce the size of the problem. This reduction procedure is based on the addition of valid inequalities (facets) of the knapsack polytopes associated to the capacity constraints. It starts by solving the LP relaxation of the GAP. Then, violated valid inequalities for each knapsack constraint are identified and added to the formulation. This process continues until no further valid inequalities can be generated. Once the reduction step finishes, all variables with value 1 are fixed and the capacity of each agent adjusted. In a second step, Simulated Annealing is applied to solve the reduced problem.
Amini and Racer [1] developed a Variable Depth Search Heuristic (VDSH) for the GAP. Two phases are considered. In the first phase, an initial solution and the LP lower bound are generated. A doubly-nested iterative refinement procedure is performed in the second phase. Feasible shift moves of a job from one agent to another, and feasible swap moves between jobs assigned to different agents, are considered during the refinement procedure.

Trick [24] considers a heuristic approach based on the linear programming relaxation of the GAP. It is an iterative method which consists of three steps. In the first step all `useless' variables are removed (a variable is considered useless if $a_{ij} > b_i$). In the second step the LP-relaxation is solved. During the third step all variables with value 1 are fixed. When the LP based heuristic finishes, an improvement phase that consists in swapping pairs of jobs is applied.
Cattrysse et al. [5] reformulate the GAP as a set partitioning problem. A heuristic procedure based on this reformulation is proposed. Each column represents a feasible assignment pattern of jobs to an agent. For each agent, columns are generated by solving a knapsack problem in which the coefficients are obtained from the vector of dual variables of the LP-relaxation. Since this problem is degenerate, a dual ascent procedure is used to obtain the dual variables. Then, a subgradient optimization procedure is applied to improve the lower bound. Heuristic solutions may be found by coincidence during the subgradient optimization or by search among the columns generated.
Osman [21] proposes a hybrid heuristic, SA/TS, which combines Simulated Annealing and Tabu search. It uses a k-generation mechanism which describes how a solution can be altered to generate another neighbor solution. The SA/TS algorithm is called hybrid because it combines the non-monotonic oscillation strategy of Tabu search with the simulated annealing philosophy. This non-monotonic cooling scheme is used within the framework of simulated annealing to induce a strategic oscillation behavior in the temperature values. Also, a Tabu search algorithm is proposed for the GAP. Both the hybrid SA/TS and the Tabu search algorithms use a frequency-based memory that records information used for diversification purposes.
A genetic algorithm (GA) for the GAP is proposed by Beasley and Chu in [7]. Instead of using a binary representation for the solutions, they use an n-dimensional vector of integer alphabets in the set I. These integer alphabets identify the agent to which each job is assigned. Fitness–unfitness pair evaluations are used to deal with both feasible and infeasible solutions. Apart from using mutation and crossover operators, a two-phase heuristic improvement operator is also used: in the first phase this operator tries to recover feasibility by reducing the unfitness score. The second phase tries to improve the cost (fitness) of the solution without further violating the capacity constraints.
Laguna et al. [16] introduce a Tabu search algorithm based on ejection chains for the multilevel generalized assignment problem, MGAP. MGAP differs from GAP in that agents can perform tasks at different efficiency levels. An ejection chain is an embedded neighborhood construction that considers the neighborhoods of simple moves to create more complex and powerful moves. Multiple dynamic Tabu lists and a long-term memory component are used within the framework of Tabu search. Also, a strategic oscillation scheme that allows searching paths to cross the capacity-feasibility boundary is used.
Yagiura et al. [27] have considered a Variable Depth Search algorithm, VDS, for the GAP. They incorporate an adaptive use of modified shift and swap neighborhoods where some moves are forbidden in order to avoid cycling. The method also allows the search to visit infeasible solutions, modifying the objective function to penalize the violated capacity of the agents. In Yagiura et al. [26], branching search processes to construct the neighborhoods are incorporated in order to improve the performance of the VDS algorithm [27]. Simultaneously to our work, Yagiura et al. [25] have proposed a Tabu search algorithm where ejection chains are used to create more complex and powerful moves. The algorithm maintains a balance between visits to feasible and infeasible regions using an automatic mechanism for adjusting the parameter that weights the penalty term in the objective function.
4. Tabu search
In this section, we present the components
of our Tabu search algorithm for the GAP.
First, the solution representation is described.
Second, the neighborhood structures and their
associated moves are presented. Then, a strategic
oscillation scheme that permits the search to
alternate between feasible and infeasible solu-
tions is explained. Finally, the characteristics
of the short-term memory and the frequency-
based memory components of the algorithm are
detailed.
Tabu search, introduced in Glover [11] and [12], exploits a collection of principles for intelligent problem solving. It uses: (a) flexible memory structures; (b) an associated mechanism of control, to be used along with the memory structures, to define Tabu restrictions and aspiration criteria; and (c) records of historical information for different time spans that permit the use of strategies for intensification and diversification of the search process. Its power is based on the use of adaptive memory, which is used for guiding the search process to overcome local optimality and to obtain near-optimal solutions. The use of short-term memory constitutes a form of aggressive search that tries to make good moves subject to the constraints implied by Tabu restrictions. Tabu restrictions are used to prevent cycling, and aspiration criteria define rules to ignore Tabu restrictions under certain conditions. Intensification and diversification strategies are attained by means of long-term memory. Intensification strategies exploit features historically found desirable, while diversification strategies encourage the search to explore unvisited regions.
4.1. Solutions
A solution $\pi$ for both GAP and RGAP is represented with a vector of $n$ elements, where the $k$th component is the agent to which job $k$ is assigned, i.e.

$$\pi = (\pi_1,\ldots,\pi_n), \quad \text{where } \pi_j \in I,\ \pi_j = i \iff x_{ij} = 1.$$
4.2. Neighborhood structures and generation mechanism

Given the set $S$ of feasible solutions for an optimization problem, a neighborhood structure defines, for each solution $\pi \in S$, a set $S_\pi \subseteq S$ of solutions that in some sense are `close' to $\pi$. We have considered two simple classical neighborhood structures: shift and swap. Given a solution $\pi$, the shift neighborhood, $N_{\mathrm{shift}}$, is defined to be the set of solutions that can be obtained by reassigning one job from one agent to another (see Fig. 1).

$$N_{\mathrm{shift}}(\pi) = \{(\pi'_1,\ldots,\pi'_n) \mid \exists\, j^* \in J \ \text{s.t.}\ \pi'_{j^*} \ne \pi_{j^*},\ \pi'_j = \pi_j\ \forall j \ne j^*\}.$$

The swap neighborhood, $N_{\mathrm{swap}}$, is the set of solutions that can be obtained by interchanging the assignments of two jobs, originally assigned to different agents (see Fig. 2).

$$N_{\mathrm{swap}}(\pi) = \{(\pi'_1,\ldots,\pi'_n) \mid \exists\, j_1, j_2 \in J,\ \pi_{j_1} \ne \pi_{j_2},\ \text{s.t.}\ \pi'_{j_1} = \pi_{j_2},\ \pi'_{j_2} = \pi_{j_1},\ \pi'_j = \pi_j\ \forall j \ne j_1, j_2\}.$$
It is important to note here that when only swap moves are used the solutions have a special structure. In particular, solutions have a fixed number of jobs assigned to each agent. On the other hand, shift moves are more restrictive in terms of feasibility. For these reasons we have used $N = N_{\mathrm{shift}} \cup N_{\mathrm{swap}}$ as neighborhood structure, which is a more flexible option.
A neighbor generation mechanism is a set of rules for selecting a solution from a given neighborhood. The following rules define the generation mechanism used in our Tabu search implementation:
· Moves in $N_{\mathrm{shift}} \cup N_{\mathrm{swap}}$ that are not Tabu-active are considered admissible.
· For the current assignment $\pi = (\pi_1,\ldots,\pi_n)$, jobs are processed by decreasing values of $\Delta_{\pi_j j}$.
· For a given job j, shift and swap moves are evaluated and the best admissible move with respect to objective function (1'') is associated to the job.
· The candidate generation mechanism halts if the best admissible move for the job being processed improves the objective function value (1''). At each iteration, the candidate moves are regenerated.
· Otherwise, if all the jobs have been processed but none of the associated moves improve objective function (1''), the admissible move with the lowest increment is selected and performed.

Fig. 1. Shift neighborhood.
Fig. 2. Swap neighborhood.
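The selection rules above can be sketched in Python as follows. This is our own illustration, not the authors' implementation: Tabu restrictions and the aspiration criterion are omitted for brevity, the relaxed evaluation is folded into a helper, and the toy data are assumptions.

```python
# First-improving neighbor selection over shift and swap moves (our sketch;
# Tabu restrictions and aspiration are omitted for brevity).

def evaluate(pi, d, a, b, q):
    """Relaxed objective z'': Delta cost plus q-weighted capacity excess."""
    m = len(d)
    load = [0] * m
    z = 0.0
    for j, i in enumerate(pi):
        z += d[i][j]
        load[i] += a[i][j]
    return z + q * sum(max(load[i] - b[i], 0) for i in range(m))

def best_neighbor(pi, d, a, b, q):
    m, n = len(d), len(pi)
    z0 = evaluate(pi, d, a, b, q)
    # Jobs are processed by decreasing Delta of their current assignment.
    order = sorted(range(n), key=lambda j: d[pi[j]][j], reverse=True)
    best = None  # (z, new_pi): the admissible move with lowest increment
    for j in order:
        cand = []
        for i in range(m):                 # shift moves for job j
            if i != pi[j]:
                p = pi[:]; p[j] = i
                cand.append(p)
        for k in range(n):                 # swap moves involving job j
            if pi[k] != pi[j]:
                p = pi[:]; p[j], p[k] = pi[k], pi[j]
                cand.append(p)
        for p in cand:
            z = evaluate(p, d, a, b, q)
            if best is None or z < best[0]:
                best = (z, p)
        if best is not None and best[0] < z0:
            return best[1]                 # halt at the first improving job
    return best[1] if best else pi         # otherwise: lowest-increment move

# Toy data (illustrative): relative costs Delta, usages, capacities.
d = [[1, 0, 1], [0, 4, 0]]
a = [[3, 3, 3], [3, 3, 3]]
b = [5, 5]
print(best_neighbor([0, 0, 0], d, a, b, q=2.0))  # -> [1, 0, 0]
```

Starting from the heavily overloaded assignment [0, 0, 0], the first job examined already admits an improving shift, so the exploration stops there, illustrating the partial-exploration behavior noted in the remarks below.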
4.2.1. Remarks

(1) Note that both the considered neighborhood and the selection rules are extremely simple. It is interesting to note that since the search is first-improving, in most cases only a partial exploration of candidate moves is performed at each iteration. It is highly probable to find an improving move with respect to objective function (1'') before all jobs have been processed.

(2) As already mentioned, using the above strategy, the search is guided by the objective function (1'') and no priority is given to solutions that are feasible for the GAP. This does not happen in Laguna et al. [16], where the strategy of the search is guided towards feasibility through a qualitative criterion which prefers feasible solutions in most cases. Neither does this happen in Yagiura et al. [27], where the search strategy changes each time that feasibility is attained. In Yagiura et al. [25] the ejection chain moves do not consider solutions that increase infeasibility. Thus, their strategy also guides the search towards feasibility.
(3) It can also be observed that with the above selection rules, no priority is given to shift moves with respect to swap moves and vice versa. Once more, the element that takes care of guiding the search is the modified objective function (1''). This allows the solutions obtained at the various iterations to change their structure and to have different numbers of jobs assigned to each agent. To a great extent this does not happen in [27], which uses a more complex neighborhood structure also based on $N_{\mathrm{shift}} \cup N_{\mathrm{swap}}$. In that work, at some given iterations a shift move is forced and then only swap moves are permitted until feasibility is recovered.
4.3. Strategic oscillation
As mentioned previously, the objective function used in the relaxed formulation RGAP includes a penalty to evaluate the violation of the agents' capacity constraints for the different solutions to RGAP. This penalty term is of the form

$$P = q \sum_{i \in I} s_i,$$

where $q$ is a penalty parameter. Since the value of $P$ is null for feasible solutions to the GAP, objective function (1'') in RGAP provides the original evaluation (1') for such solutions and a modified evaluator for infeasible solutions. This modified evaluator, together with the (previously described) rules of movement, guides the search towards the domain of feasibility when dealing with solutions far away from such domain. Therefore, considering objective function (1'') enables the algorithm to have a strategic oscillation behavior that permits alternating between feasible and infeasible solutions.

In this context, the weight assigned to the feasibility violation penalty, $q$, plays a central role to furnish the process with an equilibrium that permits alternating between feasibility and flexibility. In our algorithm this is done dynamically according to the expression

$$q := q \cdot a^{\,\mathrm{infeas}/(\mathrm{niter}-1)\, -\, 1},$$

where infeas is the number of infeasible solutions to GAP obtained during the last niter iterations and $a \ge 1$ is a parameter that varies as explained later in this section.
The above expression can be interpreted in terms of incorporating a recency-based memory (infeas) and a medium-term memory (a) to adapt the parameter $q$ to the current stage of the search. The rationale behind the expression that we use is the following: the search strategy of our algorithm relies on the usefulness of a wider exploration of the infeasible solutions' space. For this reason, we consider that variations on the value of $q$, although necessary, should be performed very smoothly. This permits the search to operate according to a uniform criterion for a series of iterations. In other words, it allows the paths of successive solutions to be generated with respect to homogeneous rules. For a given $a$, the value of $q$ ranges in $[a^{-1} q,\ a^{1/(\mathrm{niter}-1)} q]$. Note that, since the values of $a$ will always be greater than or equal to one, $q$ will only increase when all the solutions found in the last niter iterations are infeasible. However, even in that case, the upper limit on the interval makes the increase in $q$ very small. In the other cases, $q$ is reduced (or maintained), increasing (or not decreasing) the importance assigned to the quality of the solutions versus their closeness to feasibility.
If, in the above expression, $a$ were fixed to 1, the value of $q$ would also be fixed during the whole procedure and the history of the search would not be taken into account. When $a = 2$, the expression obtained is very similar to the one used in [10] for Routing Problems with stochastic demands and customers. Observe, however, that when the value of $a$ is fixed, only recency-based memory is used.

The value assigned to $a$ considerably influences the progress of the search. The bigger the value of $a$, the larger the range of variation for $q$. Therefore, when only infeasible solutions are visited in the last iterations, the rate of increase of $q$ is faster for higher values of $a$. Since most of the time the considered solutions will be infeasible, small values of $a$ tend to give smaller $q$'s and vice versa. Consequently, a medium-term memory component has been incorporated to see if good feasible solutions have been found with a similar rate of increase/decrease for $q$. When this is so, the value of $a$ is maintained to enhance the search under the same conditions in a given area. Otherwise, the value of $a$ is increased, guiding the search towards the feasible subregion of the area being explored. Again, the variation of $a$ is extremely moderate to avoid sudden changes in the search. In particular, when the best feasible solution known so far has not been improved for the last 100 iterations, the value of $a$ is increased by 0.005 every 10 iterations until a new better solution is found. Each time a new best feasible solution is found, the value of $a$ is reset to the value 2.
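The adjustment rules above can be condensed into a short sketch (our own illustration; the function names are ours, niter > 1 is assumed, and the reset of a to 2 is left to the caller):

```python
# Sketch of the dynamic adjustment of the penalty weight q (our illustration).
# infeas = number of infeasible solutions among the last niter iterations;
# a >= 1 is the medium-term memory parameter; niter > 1 is assumed.

def update_q(q, a, infeas, niter):
    """q := q * a^(infeas/(niter-1) - 1). q grows only when all niter
    recent solutions were infeasible, and even then only slightly;
    otherwise q shrinks (or stays put), favoring solution quality."""
    return q * a ** (infeas / (niter - 1) - 1)

def update_a(a, iters_since_best, iteration):
    """Medium-term memory: after 100 iterations without improving the best
    feasible solution, a grows by 0.005 every 10 iterations. The reset of
    a to 2 on a new best feasible solution is handled by the caller."""
    if iters_since_best > 100 and iteration % 10 == 0:
        return a + 0.005
    return a

print(update_q(10.0, a=2.0, infeas=0, niter=11))   # all feasible -> 5.0
print(update_q(10.0, a=2.0, infeas=11, niter=11))  # all infeasible -> ~10.72
```

With a = 2 and niter = 11, a window of all-feasible solutions halves q, while a window of all-infeasible solutions raises it only by the factor $2^{1/10} \approx 1.07$, matching the asymmetry described above.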
4.4. Short-term memory phase
Fig. 3 outlines the short-term memory phase of the algorithm. The short-term memory component of Tabu search incorporates flexible memory functions to forbid moves in order to avoid previously visited solutions. This memory is often managed via solution attributes that are forbidden during some period of time.

Fig. 3. Short-term memory phase.
The attributes we have considered are the assignments of jobs to agents. They are represented by ordered pairs $(i, j)$, meaning that job $j$ is assigned to agent $i$. In our implementation of Tabu search for the GAP, some of these attributes are forbidden for the solutions to be generated in the next iterations. In other words, these attributes will be Tabu-active during a given number of iterations. More specifically, a matrix $T$ is used to record the forbidden attributes for the future solutions. If during a given iteration $k$, a job $j$ assigned to agent $i_1$ is reassigned, then any solution having the pair $(i_1, j)$ as attribute will be forbidden for the next $t$ iterations. That is, $T_{i_1 j} = k + t$ records the last iteration number for which the attribute $(i_1, j)$ will remain Tabu-active. In the case of swap moves, only one pair is recorded as Tabu-active in the matrix $T$. From the two assignments being considered, it is the one with the higher $\Delta_{\pi_j j}$.

Recency-based memory is managed using dynamic Tabu tenures. The parameter $t$ is randomly selected within a range $[t_{\min}, t_{\max}]$.

A standard aspiration criterion is used to bypass a Tabu restriction for a move. If, after performing the move, a feasible solution better than the best solution known so far would be obtained, the move is selected and performed.

As usual, the short-term memory phase is terminated after a fixed number of iterations without improvement.
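A minimal sketch of this attribute-based memory, in our own Python (the class name and the tenure bounds tmin = 5, tmax = 15 are illustrative assumptions, not values from the paper):

```python
import random

# Recency-based memory over assignment attributes (i, j) -- our sketch.
# Tenure bounds tmin, tmax are arbitrary illustrative choices.

class TabuMemory:
    def __init__(self, m, n, tmin=5, tmax=15):
        self.T = [[0] * n for _ in range(m)]   # T[i][j]: last Tabu-active iter
        self.tmin, self.tmax = tmin, tmax

    def forbid(self, i, j, iteration):
        """Job j was moved away from agent i at this iteration: attribute
        (i, j) becomes Tabu-active for t iterations, t drawn from [tmin, tmax]."""
        t = random.randint(self.tmin, self.tmax)
        self.T[i][j] = iteration + t

    def is_tabu(self, i, j, iteration):
        return iteration <= self.T[i][j]

    def admissible(self, i, j, iteration, move_value, best_feasible_value):
        """Aspiration: a Tabu move is still accepted when it would yield a
        feasible solution better than the best one known so far (here
        move_value is assumed to be that feasible solution's value)."""
        if not self.is_tabu(i, j, iteration):
            return True
        return move_value < best_feasible_value

mem = TabuMemory(m=2, n=3)
mem.forbid(0, 1, iteration=4)
print(mem.is_tabu(0, 1, iteration=5))  # -> True (tenure of at least tmin = 5)
```

For a swap move, the caller would record only one of the two broken assignments, the one with the higher $\Delta_{\pi_j j}$, as described in the text.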
4.5. Long-term memory phase
Long-term memory functions are used within the framework of Tabu search to furnish the search with a higher level of flexibility in order to achieve regional intensification and/or global diversification. The role of long-term memory is to record features of a selected number of trial solutions. Our TS algorithm uses a frequency-based memory scheme, both for intensification and diversification purposes. The frequency matrix fr records the number of times that job j has been assigned to agent i in the total number of solutions visited up to that point.

The intensification phase recovers the best solution found so far and fixes the current assignments of jobs to agents with a frequency of at least 85%. These assignments can be considered `good' given that: first, they are also present in the best solution found so far. Second, if they appear in most of the solutions generated so far, either they have been selected many times or, after being selected, they have not been changed for a long period of time. Fixing some assignments is equivalent to reducing the size of the problem, given that some decision variables are set to the value 1. Once this fixing process finishes, the short-term memory phase restarts for the resulting reduced GAP with the unfixed jobs and adjusted agents' capacities.
In the diversification phase, the frequency-based memory is also used, but in a different way. The cost matrix D is replaced by the matrix fr + D. Now the elements of the matrix fr can be viewed as penalties for selecting the most frequent assignments of jobs to agents. This encourages the assignments that have seldom been selected and discourages those that have been frequently used. The matrix D is added to the frequency matrix for tie-breaking purposes. During this phase, a number of high-influence moves are performed in order to produce solutions with structures different from the ones generated up to this point. After performing such moves, the original cost matrix D is recovered and the short-term memory phase reinitiated. Our experience has shown that this diversification scheme is effective, since high quality solutions different from the ones generated to that point are usually obtained after a reduced number of iterations in the diversification phase.
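Assuming the straightforward element-wise reading of fr + D, the penalized cost matrix used during diversification could be built like this (a sketch, with names of our own choosing):

```python
def diversification_costs(D, fr):
    """Build the penalized cost matrix fr + D used during diversification.

    The frequency counts dominate, so rarely used assignments become
    attractive; the original costs D only break ties among assignments
    with equal frequency.
    """
    return [[fr[i][j] + D[i][j] for j in range(len(D[0]))]
            for i in range(len(D))]

D  = [[10, 20], [15, 5]]
fr = [[30, 2], [1, 40]]
print(diversification_costs(D, fr))  # [[40, 22], [16, 45]]
```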
Fig. 4 depicts the proposed Tabu search algorithm. It starts with one application of the short-term memory phase to improve the initial solution obtained using the MTH heuristic proposed by Martello and Toth [19]. Then, a cycle that alternates between intensification and diversification phases is performed during l iterations.
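The overall flow described above can be sketched as a skeleton, with the four phases as placeholder callables (this is our own schematic reading of Fig. 4, not the paper's code):

```python
def tabu_search_gap(initial_solution, short_term_phase,
                    intensification, diversification, l=6):
    """Skeleton of the overall algorithm: one short-term phase on the
    MTH starting solution, then l alternating intensification /
    diversification cycles. All four callables are placeholders."""
    best = short_term_phase(initial_solution)
    for _ in range(l):
        best = intensification(best)   # fix frequent assignments, restart
        best = diversification(best)   # search with penalized costs fr + D
    return best
```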
5. Computational results
J.A. Díaz, E. Fernández / European Journal of Operational Research 132 (2001) 22–38

Three different sets of problems have been used to test the proposed Tabu search algorithm. The first two sets of problems are publicly available in the Beasley OR-library (http://mscmga.ms.ic.ac.uk/jeb/orlib/gapinfo.html). The third set of problems is publicly available in the Yagiura library (http://www-or.amp.i.kyoto-u.ac.jp/members/yagiura/gap/).
The first set, S1, contains 60 'small-sized' maximization problems. These problems were used to test the set partitioning heuristic of Cattrysse et al. [5], the Simulated Annealing heuristic FSA of Cattrysse [4], as well as the hybrid Simulated Annealing/Tabu search heuristic SA/TS and the Tabu search heuristic TS proposed in Osman [21]. The test problems of this set have the following characteristics:
1. The number of agents m is set to 5, 8 and 10.
2. The ratio r = n/m is set to 3, 4, 5 and 6 to determine the number of jobs n.
3. The a_ij values are integers generated from a uniform distribution U(5, 25).
4. The c_ij values are integers generated from a uniform distribution U(15, 25).
5. The b_i values are set to (0.8/m) sum_{j in J} a_ij.
The test problems are divided into 12 groups (each containing five problems), gap1 to gap12, according to the size of the test problems, as depicted in Table 1.
The second set of problems, S2, contains 24 'large-sized' minimization problems used to test the GA proposed in [7], the Variable Depth Search algorithm proposed in [27], the VDS with branching search considered in Yagiura et al. [26], and the ejection chain approach proposed in Yagiura et al. [25]. The third set of problems, S3, contains 3 Type C problems with n = 400, 3 Type D problems with n = 400 and 9 Type E problems used in [25]. The dimensions of the S2 and S3 problems can be seen in Table 5. The problems in S2 and S3 are divided into five classes according to the way they were generated:
1. Type A. The a_ij are integers generated from a uniform distribution U(5, 25), the c_ij are integers
Table 1
Dimensions for S1 problems
Problem group m n
gap1 5 15
gap2 5 20
gap3 5 25
gap4 5 30
gap5 8 24
gap6 8 32
gap7 8 40
gap8 8 48
gap9 10 30
gap10 10 40
gap11 10 50
gap12 10 60
Fig. 4. Tabu search algorithm for the GAP.
generated from a uniform distribution U(10, 50), and b_i = 0.6 (n/m) 15 + 0.4 R, where

R = max_{i in I} sum_{j in J : I_j = i} a_ij

and

I_j = min{ i : c_ij <= c_kj for all k in I }.
2. Type B. The a_ij and c_ij are generated as in Type A problems and b_i is set to 70% of the value given for Type A.
3. Type C. The a_ij and c_ij are generated as in Type A problems and b_i = 0.8 sum_{j in J} a_ij / m.
4. Type D. In this case the a_ij are integers generated from a uniform distribution U(1, 100), c_ij = 111 - a_ij + e, where the e are integers generated from a uniform distribution U(-10, 10), and b_i = 0.8 sum_{j in J} a_ij / m.
5. Type E. a_ij = 1 - 10 ln(Uniform(0, 1]), c_ij = 1000/a_ij - 10 Uniform[0, 1], and b_i = max( (0.8/m) sum_{j=1}^{n} a_ij, max{ a_ij : j in J } ).
Type E problems were generated using the procedure detailed in Laguna et al. [16].
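As an illustration, Types C and D instances could be generated along the lines below (a sketch under our assumptions: the capacity formula b_i = 0.8 sum_j a_ij / m is read row-wise per agent, and the function names are ours):

```python
import random

def generate_type_c(m, n, seed=0):
    """Type C GAP instance: a_ij and c_ij as in Type A
    (a_ij ~ U(5, 25), c_ij ~ U(10, 50)), b_i = 0.8 * sum_j a_ij / m."""
    rng = random.Random(seed)
    a = [[rng.randint(5, 25) for _ in range(n)] for _ in range(m)]
    c = [[rng.randint(10, 50) for _ in range(n)] for _ in range(m)]
    b = [0.8 * sum(row) / m for row in a]
    return a, c, b

def generate_type_d(m, n, seed=0):
    """Type D GAP instance: a_ij ~ U(1, 100), c_ij = 111 - a_ij + e with
    e ~ U(-10, 10), so cost and resource are inversely correlated."""
    rng = random.Random(seed)
    a = [[rng.randint(1, 100) for _ in range(n)] for _ in range(m)]
    c = [[111 - a[i][j] + rng.randint(-10, 10) for j in range(n)]
         for i in range(m)]
    b = [0.8 * sum(row) / m for row in a]
    return a, c, b
```

The inverse cost/resource correlation of Type D is what makes those instances hard: cheap assignments consume much capacity.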
A fairly simplistic assumption with respect to the cost/resource relationship is made for Types A, B and C problems. However, Types B and C problems are more difficult than Type A since the capacity constraints are tighter. Types D and E problems are known to be more difficult to solve. Results for Type A problems are not given since the MTH heuristic produces the optimal values in all cases.
The experiments have been performed on a Sun Sparc Station 10/30 with 4 HyperSparc processors at 100 MHz, using only one processor. The algorithm is coded in the C language. For each of the test problems, 30 runs were performed. After some tuning, the following parameter values were used: the number of intensification and diversification iterations is l = 6; the Tabu tenure values range from tmin = 2 to tmax = 6. The stopping criterion for the short-term memory phase has been adapted to the size of the test problems. In particular, for problems in S1, this parameter has been set to 350 iterations without improvement. For the large-sized problems in S2 and S3, the stopping criterion has been fixed to 1500 iterations without improvement.
The value of a is initially set to 1. Starting with a smaller value of a has the effect of producing good quality feasible solutions at an earlier stage of the search. Once a feasible solution is found, the value of a is reset to the value 2 and it is then permitted to take values between 2 and 3. Its value is reset to 2 every time a better solution is found. Arbitrarily, the value of q is initialized to 1.
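The schedule for the penalty weight a can be sketched as follows (the upward step size within [2, 3] is a hypothetical choice of ours; the text only specifies the bounds and the resets):

```python
def adjust_penalty(a, feasible_found, improved, step=0.1):
    """Update the penalty weight a after an iteration.

    a starts at 1 to reach good feasible solutions early; once a
    feasible solution is found it is reset to 2 and kept in [2, 3],
    and it is reset to 2 every time a better solution is found.
    The step size within [2, 3] is an assumption, not the paper's rule.
    """
    if improved or (feasible_found and a < 2):
        return 2.0
    if feasible_found:
        return min(a + step, 3.0)   # drift upward within [2, 3]
    return a

print(adjust_penalty(1.0, feasible_found=True, improved=False))  # 2.0
print(adjust_penalty(2.5, feasible_found=True, improved=True))   # 2.0
```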
For each of the 60 problems of S1 the following values were computed:
· Average percentage deviation from the optimal value, r_opt_p = (1/30) sum_{r=1}^{30} 100 (O_p - S_pr) / O_p.
· Average best-solution time, t_best_p = (1/30) sum_{r=1}^{30} t_best_pr.
· Average execution time, t_total_p = (1/30) sum_{r=1}^{30} t_total_pr.
O_p denotes the optimal solution value for problem p, and S_pr is the solution value obtained for problem p during the r-th execution. t_best_pr and t_total_pr denote the time needed to reach the best solution and the total running time for problem p during run r, respectively.
Additionally, the following values were calculated for each problem group (gap1 to gap12):
· Average percentage deviation from the optimal value, r_group = (1/5) sum_{p=1}^{5} r_opt_p.
· Average best-solution time, t_best_g = (1/5) sum_{p=1}^{5} t_best_p.
· Average execution time, t_total_g = (1/5) sum_{p=1}^{5} t_total_p.
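For concreteness, the per-problem and per-group statistics above can be computed as in this sketch (assuming deviations are normalized by the optimal value and expressed as percentages):

```python
def avg_pct_deviation(optimum, solutions):
    """r_opt_p: average percentage deviation from the optimal value
    over the runs of one maximization problem."""
    return sum(100.0 * (optimum - s) / optimum for s in solutions) / len(solutions)

def group_average(values):
    """Average of a per-problem statistic over the problems of a group."""
    return sum(values) / len(values)

# Example: optimum 200, three runs returning 200, 198 and 199.
print(round(avg_pct_deviation(200, [200, 198, 199]), 3))  # 0.5
```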
Table 2 summarizes some characteristics of the methods used for comparison in our computational experiments. We have focused on the neighborhoods that are explored, the possibility of violating feasibility, and the use of recent, medium-term and long-term memory.
Tables 3 and 4 show the results obtained with our Tabu search algorithm (denoted TSy) for problems in S1. Results are compared to the hybrid Simulated Annealing/Tabu search SA/TS and the Tabu search TS1 heuristics proposed in [21], and also to the GA proposed in Beasley and Chu [7]. For SA/TS and TS1, Table 3 shows the average percentage deviation from optimum of the best solution found for each problem, over all the problems in each group (see [21]). For the GA and TSy approaches, this table shows the average percentage deviations from optimum (r_group) taken over all the executions of the problems in each
group. Also, for each of the approaches, the following averages over all groups of problems are depicted in Table 3:
· r_all: Global average percentage deviation from optimum.
· r_best: Average percentage deviation from optimum of the best solutions found.
· Number of problems for which the optimal solution was found at least once.
Table 4 shows the average best-solution time, t_best, and the average execution time, t_total, for each compared method and for each group of problems. Times are measured in seconds.
It can be seen that our Tabu search algorithm provides the optimal values for all the problems. In this sense, it outperforms SA/TS and TS1 and is equivalent to the GA. With respect to the deviation from optimal values, it never exceeds 0.03% and therefore outperforms all the compared methods.
Although the average best-solution times and average execution times are not comparable, since the experiments were carried out on different computers, it must be noticed that for TSy the values t_best and t_total are extremely good (recall that we only use one processor).
We next show the results obtained for problems in S2 and S3. We compare our results to the best known solutions for these problems, which are shown in the last column of Table 5. For five Type C instances, one Type D instance and six Type E instances, this value corresponds to the optimal solution value obtained with the branch-and-bound algorithm proposed by Nauss [20]. These instances are marked with an asterisk. For the other instances, the best known values are not known to be optimal and have been taken from [8] in the case of Type B instances and from [25] for the remaining Types C, D and E instances.
Now, for each of the problems, the following values were computed:
Table 2
Algorithms' characteristics (methods: SA/TS, TS1, GA, BVDS, TS(MGAP), TSEC, TSy)

Moves:
  Shift move: x x x x x x x
  Swap move: x x x x x x x
  Ejection chain: x x
Infeasibility:
  Allow search to visit infeasible solutions: x x x x
  Moves towards feasibility are preferred: x x x
Objective function:
  Automatic penalty adjustment: x x
Recency memory: 1 it., 10 it.
Medium-term memory: 100 it.
Long-term memory:
  Diversification: x x x x
  Intensification: x
Table 3
Average percentage deviation from optimal for S1
Problem set SA/TS TS1 GA TSy
gap1 0.00 0.00 0.00 0.00
gap2 0.00 0.10 0.00 0.00
gap3 0.00 0.00 0.00 0.00
gap4 0.00 0.03 0.00 0.00
gap5 0.00 0.00 0.00 0.00
gap6 0.05 0.03 0.01 0.01
gap7 0.02 0.00 0.00 0.00
gap8 0.10 0.09 0.05 0.01
gap9 0.08 0.06 0.00 0.00
gap10 0.14 0.08 0.04 0.03
gap11 0.05 0.02 0.00 0.00
gap12 0.11 0.04 0.01 0.00
r_all 0.210 0.070 0.009 0.004
r_best 0.04 0.03 0.00 0.00
No. optimal solutions 39 45 60 60
· Average percentage deviation from the best known solution, r_best_p = (1/30) sum_{r=1}^{30} 100 (S_pr - B_p) / B_p.
· Average best-solution time, t_best_p.
· Average execution time, t_total_p.
B_p is the best known solution value for problem p.
Tables 5 and 6 show the results obtained. Now, TSy is compared to the GA proposed by Beasley and Chu in [7], to the Variable Depth Search with Branching algorithm BVDS proposed by Yagiura et al. [26], to the Tabu search algorithm for the MGAP, denoted TS(MGAP), proposed by Laguna et al. [16], and to the Ejection Chain algorithm TSEC proposed by Yagiura et al. [25]. Results for BVDS and TS(MGAP) were taken from [25].
For each problem, Table 5 shows the best values (best) and the average percentage deviation from the best known solution (r_best) found with the different approaches. Table 6 shows the average best-solution times, t_best, and the average execution times, t_total. Again, times are measured in seconds.
The results obtained confirm the validity of our approach. Type B instances were easily solved, and the best known value was found for all these problems. For Type C problems our results are also remarkable: we obtained the best known solution in four instances, and for the other five instances our best value is at most four units above the best known solution. The difficulty of problems of Type D is reflected in the quality of the results we obtained: for none of these problems did we attain the best known value. However, for the instances with fewer than 400 jobs our solutions are of good quality, although the performance of our algorithm decreases for the very large instances. For Type E instances our best solution is very close to the best known value, except for the very large instances with 400 jobs, where the quality of our solutions again tends to decrease. The values of the average deviations from the best known values confirm the good behavior of TSy. Comparing TSy with the other approaches, we can see that for all types of instances our algorithm outperforms BVDS, TS(MGAP) and GA in terms of the quality of the solutions and the average deviation from the best known value (for the methods and instances that report these values). If we compare TSy with the Ejection Chain algorithm, the quality of our best solutions is nearly the same as that of TSEC for Type C problems and for problems with fewer than 400 jobs of Types D and E. However, for the very large Type D and E problems with 400 jobs, TSy shows a slight decrease in performance as compared to TSEC. In terms of the average deviation from the best known value, the results of TSEC are extremely good and are
Table 4
CPU times for S1 (a)

Problem group  SA/TS (b)       TS1 (b)         GA (c)          TSy (d)
               tbest  ttotal   tbest  ttotal   tbest  ttotal   tbest  ttotal
gap1 n.a. 0.22 n.a. 0.73 0.40 72.38 0.04 1.09
gap2 n.a. 0.91 n.a. 1.44 1.68 79.96 0.04 1.48
gap3 n.a. 1.82 n.a. 2.85 2.18 85.06 0.08 2.05
gap4 n.a. 2.32 n.a. 4.86 4.16 92.36 0.13 2.54
gap5 n.a. 2.48 n.a. 3.97 5.52 100.36 0.22 2.38
gap6 n.a. 5.67 n.a. 8.85 16.36 130.02 0.41 3.53
gap7 n.a. 13.97 n.a. 12.72 10.54 130.34 0.35 4.92
gap8 n.a. 19.53 n.a. 22.99 49.90 184.42 2.38 7.64
gap9 n.a. 5.73 n.a. 12.19 13.82 129.32 0.77 3.72
gap10 n.a. 15.55 n.a. 18.62 33.70 167.66 1.07 5.90
gap11 n.a. 36.96 n.a. 34.46 32.60 193.00 1.53 8.02
gap12 n.a. 65.96 n.a. 47.07 39.08 260.34 1.83 10.67
(a) n.a. = not available.
(b) CPU times in seconds on a VAX-8600 computer.
(c) CPU times in seconds on a Silicon Graphics Indigo R4000 (100 MHz) computer.
(d) CPU times in seconds on a Sun Sparc Station 10/30 with 4 HyperSparc at 100 MHz (SPECint95 2.35).
clearly better although, in our opinion, our deviations are small.
The information in Table 6 completes the analysis of our results. First of all, it should be noticed that the times correspond to experiments performed on different computers, so a normalizing factor should be applied to obtain a fair comparison. In [25], Yagiura et al. give an estimate of 1 for the normalizing factor that should be applied to the times of BVDS, TS(MGAP) and TSEC, and of 0.1 for the normalizing factor that should be applied to the times of the GA. Taking into account the characteristics of the computer where we have performed our tests (SPECint95 of 2.35, 100 MHz
Table 5
Best values and average deviation from best values for S2, S3 (a)

Prob. m n   BVDS   TS(MGAP)   GA            TSEC          TSy           Best known value
            Best   Best       Best  rbest   Best  rbest   Best  rbest
B 5 100 n.a. n.a. 1843 0.347% n.a. n.a. 1843 0.000% 1843
B 10 100 n.a. n.a. 1407 0.071% n.a. n.a. 1407 0.000% 1407
B 20 100 n.a. n.a. 1166 0.069% n.a. n.a. 1166 0.097% 1166
B 5 200 n.a. n.a. 3553 0.301% n.a. n.a. 3552 0.005% 3552
B 10 200 n.a. n.a. 2831 0.308% n.a. n.a. 2828 0.046% 2828
B 20 200 n.a. n.a. 2340 0.060% n.a. n.a. 2340 0.113% 2340
0.193% 0.044%
C 5 100 1931 1931 1931 0.378% 1931 0.000% 1931 0.000% 1931b
C 10 100 1402 1403 1403 0.285% 1402 0.000% 1402 0.043% 1402b
C 20 100 1244 1244 1244 0.515% 1243 0.000% 1243 0.284% 1243b
C 5 200 3456 3457 3458 0.229% 3456 0.000% 3457 0.034% 3456b
C 10 200 2809 2812 2814 0.481% 2806 0.007% 2807 0.105% 2806b
C 20 200 2401 2396 2397 0.619% 2391 0.025% 2391 0.139% 2391
C 10 400 5605 5608 n.a. n.a. 5597 0.000% 5598 0.077% 5597
C 20 400 4795 4792 n.a. n.a. 4783 0.021% 4786 0.201% 4782
C 40 400 4259 4251 n.a. n.a. 4245 0.024% 4248 0.220% 4244
0.418% 0.009% 0.123%
D 5 100 6358 6386 6373 0.658% 6354 0.041% 6357 0.163% 6353b
D 10 100 6367 6398 6379 1.237% 6356 0.167% 6355 0.511% 6349
D 20 100 6275 6283 6269 1.652% 6215 0.387% 6220 0.905% 6196
D 5 200 12755 12788 12796 0.652% 12744 0.020% 12747 0.093% 12743
D 10 200 12480 12519 12601 1.536% 12445 0.076% 12457 0.277% 12436
D 20 200 12440 12416 12452 1.983% 12277 0.166% 12351 0.887% 12264
D 10 400 25032 25145 n.a. n.a. 24976 0.017% 25039 0.284% 24974
D 20 400 24780 24872 n.a. n.a. 24604 0.021% 24747 0.936% 24604
D 40 400 24724 24726 n.a. n.a. 24460 0.043% 24707 1.441% 24456
1.286% 0.104% 0.611%
E 5 100 12681 12687 n.a. n.a. 12681 0.003% 12681 0.040% 12681b
E 10 100 11585 11650 n.a. n.a. 11577 0.000% 11581 0.087% 11577b
E 20 100 8499 8555 n.a. n.a. 8439 0.055% 8460 0.703% 8436b
E 5 200 24942 25203 n.a. n.a. 24930 0.000% 24931 0.027% 24930b
E 10 200 23346 23567 n.a. n.a. 23307 0.004% 23318 0.116% 23307b
E 20 200 22475 22735 n.a. n.a. 22379 0.022% 22422 0.446% 22379b
E 10 400 45878 172185 n.a. n.a. 45746 0.003% 45781 0.077% 45746
E 20 400 45079 137153 n.a. n.a. 44887 0.017% 45007 0.648% 44882
E 40 400 44898 63669 n.a. n.a. 44596 0.065% 44921 1.266% 44579
0.019% 0.379%
(a) n.a. = not available.
(b) Known to be optimal values.
and 89 Mb) and comparing them with the characteristics of the computer of reference in [25] (SPECint95 of 12.3, 300 MHz and 1 Gb), we consider that the normalizing factor to be applied to our times should be at most 0.2. The times depicted in Table 6 are the actual running times, without applying the normalizing factor. After considering this normalizing factor, it can be seen that our times are smaller than the times required by all the compared methods and instances, except for the 9 instances with 400 jobs, where our times seem to be slightly higher than those of TSEC.
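The at-most-0.2 bound is consistent with the ratio of the two SPECint95 ratings quoted in the text; a quick check:

```python
ours = 2.35        # Sun Sparc Station 10/30, 100 MHz (our machine)
reference = 12.3   # Sun Ultra 2, 300 MHz, used as reference in [25]

factor = ours / reference
print(round(factor, 3))  # 0.191, i.e. just below the 0.2 bound
```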
In our opinion, the results of Tables 5 and 6 show the interest of our approach. For small, medium-sized and large problems with dimensions up to m = 20 and n = 200, we obtain solutions of remarkable quality that either are optimal/best-known or are very close to them, in very small computational times (taking into account the speed of the computer). For the very large instances, our approach is outperformed by TSEC.
Table 6
CPU times for S2, S3 (a)

Prob. m n   BVDS (b)   TS(MGAP) (b)   GA (c)          TSy (d)         TSEC (b)
            ttotal     ttotal         tbest  ttotal   tbest  ttotal   tbest  ttotal
B 5 100 n.a. n.a. 126.9 288.2 n.a. n.a. 17.5 95.8
B 10 100 n.a. n.a. 30.1 276.0 n.a. n.a. 10.6 97.2
B 20 100 n.a. n.a. 191.5 617.3 n.a. n.a. 47.5 160.0
B 5 200 n.a. n.a. 439.5 790.0 n.a. n.a. 70.5 339.1
B 10 200 n.a. n.a. 608.4 1027.4 n.a. n.a. 154.8 389.8
B 20 200 n.a. n.a. 518.5 1323.5 n.a. n.a. 222.7 465.4
C 5 100 150 150 139.1 302.4 0.6 150 3.0 83.6
C 10 100 150 150 170.6 394.2 3.0 150 28.7 103.3
C 20 100 150 150 279.9 669.3 21.6 150 66.9 130.7
C 5 200 300 300 531.2 810.1 3.7 300 79.5 316.1
C 10 200 300 300 628.6 1046.0 100.5 300 212.7 403.0
C 20 200 300 300 1095.9 1792.3 137.4 300 291.8 498.3
C 10 400 3000 3000 n.a. n.a. 105.8 300 1348.8 1820.0
C 20 400 3000 3000 n.a. n.a. 130.4 300 1713.3 2236.3
C 40 400 3000 3000 n.a. n.a. 157.6 300 2333.2 3442.4
D 5 100 150 150 369.9 530.3 62.6 150 43.6 106.4
D 10 100 150 150 870.2 1094.7 107.2 150 83.2 133.4
D 20 100 150 150 1746.1 2126.1 111.0 150 119.5 167.4
D 5 200 300 300 1665.9 1942.8 95.5 300 226.8 363.2
D 10 200 300 300 2768.7 3189.6 129.2 300 205.0 397.4
D 20 200 300 300 4878.4 5565.1 120.7 300 348.2 515.6
D 10 400 3000 3000 n.a. n.a. 16.1 300 1979.6 2310.0
D 20 400 3000 3000 n.a. n.a. 81.3 300 2039.12 2440.1
D 40 400 3000 3000 n.a. n.a. 165.2 300 2573.6 3129.3
E 5 100 150 20000 n.a. n.a. 39.9 150 66.5 124.0
E 10 100 150 20000 n.a. n.a. 31.6 150 88.3 148.2
E 20 100 150 20000 n.a. n.a. 90.4 150 146.3 197.2
E 5 200 300 20000 n.a. n.a. 20.0 300 204.7 351.4
E 10 200 300 20000 n.a. n.a. 34.1 300 233.0 396.4
E 20 200 300 20000 n.a. n.a. 209.3 300 425.2 585.3
E 10 400 3000 3000 n.a. n.a. 260.7 300 492.5 891.1
E 20 400 3000 3000 n.a. n.a 212.9 300 750.3 875.0
E 40 400 3000 3000 n.a. n.a. 217.7 300 1035.5 1211.1
(a) n.a. = not available.
(b) CPU times in seconds on a Sun Ultra 2 model 2300, 300 MHz (SPECint95 12.3) computer.
(c) CPU times in seconds on a Silicon Graphics Indigo R4000 (100 MHz) computer.
(d) CPU times in seconds on a Sun Sparc Station 10/30 with 4 HyperSparc at 100 MHz (SPECint95 2.35).
However, we believe that the simplicity of our approach is an added value for our algorithm that makes it competitive with TSEC even for the larger instances. In any case, the obtained results confirm the success of the distinctive feature of our proposal, which consists in using both recent and medium-term memory to automatically modify the objective function that guides the search.
6. Conclusions and remarks

In this paper the GAP is considered and a Tabu search heuristic approach is proposed. A relaxed formulation that allows the search to cross the capacity-feasibility boundary using a penalty term is considered. A strategic oscillation scheme is then used to permit alternating between feasible and infeasible solutions and to give the search a higher level of flexibility. Diversification and intensification strategies are also implemented by means of frequency-based memory. The resulting algorithm is very simple in terms of its basic components: the neighborhoods that are explored, the rules used to explore the neighborhoods, and the use of recent and medium-term memory. Its distinctive feature is the use of recent and medium-term memory to guide the search via a modified objective function which is dynamically adapted. We believe this issue is interesting since, traditionally, memory has been widely used to explore complex neighborhood structures, whereas it has been used in a very limited fashion to obtain automated mechanisms to modify the objective function that guides the search process.
The performance of the algorithm has been evaluated with three different problem sets and compared to other existing algorithms.
The results obtained with the proposed algorithm are remarkable. In general, we obtain high quality solutions that either are optimal/best-known or are very close to them. However, for the very large instances, our approach is outperformed by one of the compared methods. In all cases, the required computational times are excellent (taking into account the speed of the computers). In our opinion, the simplicity of our approach makes it competitive even for the larger instances. Moreover, we believe that these results could be significantly improved by exploring more complex neighborhoods and by considering more sophisticated rules to explore the neighborhoods. For this reason, we believe that the obtained results fully justify the interest of our proposal of using short-term and medium-term memory to dynamically adapt the objective function for guiding the search in the infeasible solution space.
Acknowledgements

The authors are grateful to the anonymous referees for their valuable comments, which have helped to improve this paper.
References

[1] M.M. Amini, M. Racer, A rigorous computational comparison of alternative solution methods for the generalized assignment problem, Management Science 40 (1994) 868-890.
[2] P. Barcia, K. Jörnsten, Improved Lagrangean decomposition: An application to the generalized assignment problem, European Journal of Operational Research 46 (1990) 84-92.
[3] J.F. Benders, J.A. van Nunen, A property of assignment type mixed linear programming problems, Operations Research Letters 2 (1983) 47-52.
[4] D.G. Cattrysse, Set partitioning approaches to combinatorial optimization problems, PhD thesis, Centrum Industrieel Beleid, Katholieke Universiteit Leuven, Belgium, 1990.
[5] D.G. Cattrysse, M. Salomon, L.N. van Wassenhove, A set partitioning heuristic for the generalized assignment problem, European Journal of Operational Research 72 (1994) 167-174.
[6] D.G. Cattrysse, L.N. van Wassenhove, A survey of algorithms for the generalized assignment problem, European Journal of Operational Research 60 (1992) 260-272.
[7] P.C. Chu, J.E. Beasley, A genetic algorithm for the generalised assignment problem, Computers & Operations Research 24 (1997) 17-23.
[8] J.A. Díaz, E. Fernández, A Tabu search heuristic for the generalized assignment problem, Technical Report DR 98/08, Departament d'Estadística i Investigació Operativa, Universitat Politècnica de Catalunya, 1998.
[9] M.L. Fisher, R. Jaikumar, L.N. van Wassenhove, A multiplier adjustment method for the generalised assignment problem, Management Science 32 (1986) 1095-1103.
[10] M. Gendreau, G. Laporte, R. Séguin, A Tabu search heuristic for the vehicle routing problem with stochastic demands and customers, Operations Research 44 (1996) 469-477.
[11] F. Glover, Tabu search: Part I, ORSA Journal on Computing 1 (1989) 190-206.
[12] F. Glover, Tabu search: Part II, ORSA Journal on Computing 2 (1990) 4-32.
[13] M. Guignard, M.B. Rosenwein, An improved dual based algorithm for the generalized assignment problem, Operations Research 37 (1989) 658-663.
[14] K.O. Jörnsten, M. Näsberg, A new Lagrangean relaxation approach to the generalized assignment problem, European Journal of Operational Research 27 (1986) 313-323.
[15] T.D. Klastorin, An effective subgradient algorithm for the generalized assignment problem, Computers & Operations Research 6 (1979) 155-164.
[16] M. Laguna, J.P. Kelly, J.L. González-Velarde, F. Glover, Tabu search for the multilevel generalized assignment problem, European Journal of Operational Research 82 (1995) 176-189.
[17] L.A.N. Lorena, M.G. Narciso, Relaxation heuristics for a generalized assignment problem, European Journal of Operational Research 91 (1996) 600-610.
[18] S. Martello, P. Toth, An algorithm for the generalized assignment problem, in: Proceedings of the Ninth IFORS Conference, Hamburg, Germany, 1981.
[19] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, Wiley, Chichester, 1990.
[20] R.M. Nauss, Solving the classical generalized assignment problem, Technical Report, 1998.
[21] I.H. Osman, Heuristics for the generalised assignment problem: Simulated annealing and Tabu search approaches, OR Spectrum 17 (1995) 211-225.
[22] G.T. Ross, R.M. Soland, A branch and bound algorithm for the generalized assignment problem, Mathematical Programming 8 (1975) 91-103.
[23] M.W.P. Savelsbergh, A branch-and-price algorithm for the generalized assignment problem, Operations Research 45 (1997) 831-841.
[24] M.A. Trick, A linear relaxation heuristic for the generalized assignment problem, Naval Research Logistics 39 (1992) 137-151.
[25] M. Yagiura, T. Ibaraki, F. Glover, An ejection chain approach for the generalized assignment problem, Technical Report 99013, Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, 1999.
[26] M. Yagiura, T. Yamaguchi, T. Ibaraki, A variable depth search algorithm with branching search for the generalized assignment problem, Optimization Methods and Software 10 (1998) 419-441.
[27] M. Yagiura, T. Yamaguchi, T. Ibaraki, A variable depth search algorithm for the generalized assignment problem, in: S. Voss, S. Martello, I.H. Osman, C. Roucairol (Eds.), Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, Kluwer Academic Publishers, Dordrecht, 1999, pp. 459-471.