The document analyzes solutions to the Traveling Salesman Problem (TSP) on a 532-city instance using five local search heuristics. It finds that lower-cost solutions tend to be closer to the optimal tour and to other good solutions, supporting the idea that TSP solution spaces have a "globally convex" or "big valley" structure. The optimal tour is located near the center of the main cluster of good solutions.
The document describes a project applying graph theory and operations research to find optimal paths. It uses Kruskal's algorithm to determine the minimum cost path for shipping products from Japan to the US. The Northwest Corner method is also applied to calculate transportation costs and verify that the path found through Kruskal's algorithm has the lowest cost. It is concluded that Kruskal's algorithm and the Northwest Corner method effectively determine the most cost-effective shipping route to minimize transportation costs.
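Kruskal's algorithm itself is simple to sketch. Below is a minimal union-find implementation on a hypothetical shipping network; the node labels and edge weights are invented for illustration and are not the project's data:

```python
# Minimal sketch of Kruskal's algorithm with union-find.
# The graph and its weights are illustrative, not the document's data.

def kruskal(n, edges):
    """Return (total_cost, chosen_edges) of a minimum spanning tree.

    n     -- number of vertices, labelled 0..n-1
    edges -- list of (weight, u, v) tuples
    """
    parent = list(range(n))

    def find(x):                      # path-compressing find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):     # consider edges cheapest-first
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it joins two trees
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen

# Hypothetical shipping network: 4 nodes, weighted links
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
cost, tree = kruskal(4, edges)
print(cost)   # 6  (edges 1-2, 2-3, 0-2)
```

The sort-then-union structure is the whole algorithm; the minimum-cost route question in the project reduces to running this on the shipping graph.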
On the approximation of the sum of lognormals by a log skew normal distribution (IJCNCJournal)
Several methods have been proposed to approximate the sum of lognormal RVs. However, the accuracy of each method depends strongly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., mean and variance. No single method provides the needed accuracy for all cases. This paper proposes a universal yet very simple approximation method for the sum of lognormals based on a log skew normal approximation. The main contribution of this work is an analytical method for log skew normal parameter estimation. The proposed method provides a highly accurate approximation to the sum of lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all cases.
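The paper's log skew normal fit is not reproduced here, but the classical moment-matching baseline such methods compete with, Fenton-Wilkinson, fits a single lognormal to the first two moments of the sum and can be sketched in a few lines. The parameter values below are illustrative assumptions:

```python
import math, random

def fenton_wilkinson(mus, sigmas):
    """Fenton-Wilkinson: match the first two moments of a sum of
    independent lognormals with a single lognormal; returns the
    (mu, sigma) of the approximating lognormal."""
    m1 = sum(math.exp(mu + s * s / 2) for mu, s in zip(mus, sigmas))
    var = sum(math.exp(2 * mu + s * s) * (math.exp(s * s) - 1)
              for mu, s in zip(mus, sigmas))   # variance of the sum
    sigma2 = math.log(1 + var / m1 ** 2)
    mu = math.log(m1) - sigma2 / 2
    return mu, math.sqrt(sigma2)

random.seed(0)
mus, sigmas = [0.0, 0.5], [0.6, 0.6]          # illustrative parameters
mu, sigma = fenton_wilkinson(mus, sigmas)

# Monte Carlo check: mean of the simulated sum vs. the approximant's mean
samples = [sum(random.lognormvariate(m, s) for m, s in zip(mus, sigmas))
           for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
fw_mean = math.exp(mu + sigma ** 2 / 2)
print(round(mc_mean, 2), round(fw_mean, 2))   # the two means agree closely
```

By construction the approximant matches the exact mean and variance; the paper's point is that tail accuracy (in dB) is where such moment-matching fits break down and a log skew normal does better.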
The document presents a novel spatial query called the K-Best Site Query (KBSQ). The KBSQ finds the K sites from a set of sites S that minimize the total distance from each object to its closest site. The document proposes two approaches for processing the KBSQ - a straightforward approach and the KBSQ algorithm. The straightforward approach directly computes distances and considers all possible site combinations, while the KBSQ algorithm leverages spatial indexes like the R*-tree and Voronoi diagram to improve efficiency. Experimental results demonstrate the KBSQ algorithm outperforms the straightforward approach for large datasets.
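The straightforward approach described above can be shown directly: enumerate every K-subset of sites and score each by the summed nearest-site distances. The coordinates below are invented for illustration:

```python
from itertools import combinations

def kbsq_bruteforce(objects, sites, k):
    """Exhaustively pick the k sites minimizing the summed distance
    from every object to its nearest chosen site (the KBSQ objective)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    best_cost, best_sites = float("inf"), None
    for chosen in combinations(sites, k):
        cost = sum(min(dist(o, s) for s in chosen) for o in objects)
        if cost < best_cost:
            best_cost, best_sites = cost, chosen
    return best_sites, best_cost

# Two clusters of objects; the best pair of sites covers one cluster each.
objects = [(0, 0), (1, 0), (10, 10), (11, 10)]
sites = [(0, 1), (5, 5), (10, 11)]
chosen, cost = kbsq_bruteforce(objects, sites, 2)
print(chosen)  # ((0, 1), (10, 11))
```

The cost of this enumeration grows as C(|S|, K) times |O|, which is exactly why the document's R*-tree and Voronoi-based algorithm is needed for large datasets.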
Particle Swarm Optimization to Solve Multiple Traveling Salesman Problem (IRJET Journal)
This document proposes a new genetic ant colony optimization algorithm for solving the multiple traveling salesman problem (mTSP). The algorithm combines properties of genetic algorithms and ant colony optimization. Each salesman's route is determined using ant colony optimization, while the routes of different salesmen are combined into a complete solution controlled by the genetic algorithm. The algorithm is tested on benchmark problem instances and shown to perform efficiently compared to other existing algorithms for mTSP. Key aspects of the algorithm include the representation of solutions, crossover operators that always generate feasible solutions, and the integration of ant colony optimization and genetic algorithms.
All of the perturbative approaches to multidimensional wave equation processing, for example, wave equation migration (see, e.g., Claerbout, 1971; French, 1975; Schneider, 1978; Stolt, 1978; Sattlegger et al., 1980) or Born approximation inversion (see, e.g., Cohen and Bleistein, 1979; Raz, 1981; Clayton and Stolt, 1981), require some input velocity information. In the Born approximation to inversion, a reference or background velocity is chosen and a perturbation about this velocity is determined. Similarly, a velocity model is a required input to all wave equation migration techniques.
In this work, we propose to apply trust region optimization to deep reinforcement
learning using a recently proposed Kronecker-factored approximation to
the curvature. We extend the framework of natural policy gradient and propose
to optimize both the actor and the critic using Kronecker-factored approximate
curvature (K-FAC) with trust region; hence we call our method Actor Critic using
Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this
is the first scalable trust region natural gradient method for actor-critic methods.
It is also a method that learns non-trivial tasks in continuous control as well as
discrete control policies directly from raw pixel inputs. We tested our approach
across discrete domains in Atari games as well as continuous domains in the MuJoCo
environment. With the proposed methods, we are able to achieve higher
rewards and a 2- to 3-fold improvement in sample efficiency on average, compared
to previous state-of-the-art on-policy actor-critic methods. Code is available at
https://github.com/openai/baselines.
The document compares the convergence rates of the bisection, Newton-Raphson, and secant methods for finding roots of functions. It finds the root of the function f(x)=x-cos(x) on the interval [0,1] using each method. The bisection method converges at the 52nd iteration, while Newton-Raphson and secant methods converge to the exact root of 0.739085 with an error of 0 at the 8th and 6th iterations respectively. Therefore, the document concludes that the secant method is the most effective of the three for this problem.
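The three root-finding methods compared above are easy to reproduce on f(x) = x - cos(x). The tolerances and starting points below are illustrative choices, so the iteration counts will differ from the document's:

```python
import math

def bisection(f, a, b, tol=1e-12):
    """Halve the bracketing interval until it is smaller than tol."""
    it = 0
    while (b - a) / 2 > tol:
        it += 1
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2, it

def newton(f, df, x, tol=1e-12):
    """Newton-Raphson: follow the tangent line to its root."""
    it = 0
    while abs(f(x)) > tol:
        it += 1
        x -= f(x) / df(x)
    return x, it

def secant(f, x0, x1, tol=1e-12):
    """Secant method: Newton with a finite-difference slope."""
    it = 0
    while abs(f(x1)) > tol:
        it += 1
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1, it

f = lambda x: x - math.cos(x)
df = lambda x: 1 + math.sin(x)

for name, (root, its) in [("bisection", bisection(f, 0, 1)),
                          ("newton",    newton(f, df, 0.5)),
                          ("secant",    secant(f, 0, 1))]:
    print(f"{name:9s} root={root:.6f} iterations={its}")
```

All three converge to the same root 0.739085; the linear convergence of bisection versus the superlinear convergence of Newton-Raphson and secant is visible in the iteration counts.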
This document analyzes Bernstein's proposed circuit-based approach for the matrix step of the number field sieve integer factorization method. It finds that Bernstein overestimated the improvement in factoring larger integers, which would be a factor of 1.17 larger rather than 3.01 as claimed. The document also proposes an improved circuit design based on a new mesh routing algorithm. It estimates that for 1024-bit RSA, the matrix step could be completed in a day using a few thousand dollars of custom hardware, but that the relation collection step still determines the practical security of RSA.
Global Optimization with Descending Region Algorithm (Loc Nguyen)
Global optimization is necessary when we want the best possible solution, or when we require a new solution that is better than the old one. However, global optimization is a hard problem. The gradient descent method is a well-known technique for finding a local optimizer, whereas the approximation-solution approach aims to simplify the global optimization problem. In order to find the global optimizer in a practical way, I propose a so-called descending region (DR) algorithm, which is a combination of the gradient descent method and the approximation-solution approach. The idea of the DR algorithm is that, given a known local minimizer, a better minimizer is searched for only in a so-called descending region below that local minimizer. A descending region begins at a so-called descending point, which is the main subject of the DR algorithm. A descending point, in turn, is a solution of an intersection equation (A). Finally, I prove and provide a simpler linear equation system (B) derived from (A). Thus (B) is the most important result of this research, because (A) is solved by solving (B) sufficiently many times. In other words, the DR algorithm is refined many times so as to produce such a (B) for finding the global optimizer. I also propose a so-called simulated Newton-Raphson (SNR) algorithm, a simulation of the Newton-Raphson method, to solve (B). The starting point is very important for the SNR algorithm to converge. Therefore, I further propose a so-called RTP algorithm, a refined and probabilistic process, to partition the solution space and generate random testing points, which aims to estimate the starting point for the SNR algorithm. In general, I combine the three algorithms DR, SNR, and RTP to solve the hard problem of global optimization. Although the approach is a divide-and-conquer methodology, in which global optimization is split into local optimization, equation solving, and partitioning, the solution is a synthesis in which DR is the backbone connecting itself with SNR and RTP.
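The DR, SNR, and RTP algorithms themselves are not reproduced here; the sketch below shows only the baseline the author builds on, gradient descent with random restarts escaping local minima on a multimodal test function. The function and all settings are invented for illustration:

```python
import random

def grad_descent(df, x, lr=0.01, steps=2000):
    """Plain gradient descent to the nearest local minimizer."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Multimodal test function: f(x) = x^4 - 3x^2 + x has two local minima,
# with the global one near x = -1.3.
f  = lambda x: x ** 4 - 3 * x ** 2 + x
df = lambda x: 4 * x ** 3 - 6 * x + 1

random.seed(1)
# Random restarts stand in for the more principled descending-region
# search: each restart may land in a different basin; keep the best.
candidates = [grad_descent(df, random.uniform(-3, 3)) for _ in range(20)]
best = min(candidates, key=f)
print(round(best, 3))   # global minimizer, near -1.3
```

Restarts sample basins blindly; the DR idea in the abstract is to restrict the continued search to a region guaranteed to lie below the current local minimum instead.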
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE... (IJCNCJournal)
Transmitter range assignment in clustered wireless networks is the bottleneck in balancing energy conservation against the connectivity needed to deliver data to the sink or gateway node. The aim of this research is to optimize energy consumption by reducing the transmission ranges of the nodes, while maintaining a high probability of end-to-end connectivity to the network's data sink. We modified the approach given in [1] to achieve more than 25% power saving by reducing the cluster head (CH) transmission range of the backbone nodes in a multihop wireless sensor network, while ensuring at least a 95% end-to-end connectivity probability.
This document is a dissertation proposal by Rishideep Roy at the University of Chicago in November 2014. The proposal is to generalize results on extreme values and entropic repulsion for two-dimensional discrete Gaussian free fields to a more general class of Gaussian fields with logarithmic correlations. Specifically, the proposal plans to find the convergence in law of the maximum of these log-correlated Gaussian fields under minimal assumptions, as well as obtain finer estimates on entropic repulsion which relates to the behavior of these fields near hard boundaries. The proposal provides background on related works and outlines the key steps to be taken, including proving expectations and tightness of maxima, invariance of maximum distributions under perturbations, approximating the fields, and
This document proposes improvements to Helsgaun's Lin-Kernighan heuristic for solving the symmetric traveling salesman problem. It introduces using approximations of backbones and double bridges, combined with implementation details, to guide the search process instead of Helsgaun's alpha-values. Computational results on some VLSI instances show the proposed approach finds competitive or improved solutions compared to other state-of-the-art heuristics.
This document presents a new method called the Real-valued Iterative Adaptive Approach (RIAA) for estimating the power spectral density of nonuniformly sampled data. It aims to improve upon the periodogram, which suffers from poor resolution and leakage. RIAA is an iteratively weighted least squares periodogram that uses an adaptive weighting matrix built from the most recent spectral estimate. It is shown to have significantly less leakage than the least squares periodogram through its use of an adaptive filter. The Bayesian Information Criterion is also discussed as a way to test the significance of peaks in the estimated spectrum.
Paolo Creminelli, "Dark Energy after GW170817" (SEENET-MTP)
GW170817 and GRB170817A were detected simultaneously, originating from the same source. This provides a tight limit on the speed of gravitational waves and photons, showing they travel at the same speed to a high degree of precision. The effective field theory of dark energy provides a framework to parametrize possible deviations from general relativity at cosmological scales. It allows gravitational waves and photons to potentially travel at different speeds depending on the coefficients in the effective field theory action. The detection of GW170817 and GRB170817A from the same source places strong constraints on these coefficients and possible dark energy models.
Time of arrival based localization in wireless sensor networks a non linear ... (sipij)
In this paper, we aim to obtain the location information of a sensor node deployed in a Wireless Sensor Network (WSN). Here, a Time of Arrival based localization technique is considered. We calculate the position of an unknown sensor node using non-linear techniques and compare their performance against the Cramer-Rao Lower Bound (CRLB). Non-linear Least Squares and Maximum Likelihood are the non-linear techniques used to estimate the position of the unknown sensor node. Each is implemented with iterative approaches, namely the Newton-Raphson, Gauss-Newton, and Steepest Descent estimates, for comparison. Based on the simulation results, the approaches have been compared; the study shows that localization based on the Maximum Likelihood approach achieves higher localization accuracy.
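As a sketch of the iterative estimators mentioned above, the following implements a Gauss-Newton update for 2-D ToA (range-based) localization. The anchor layout and noiseless ranges are illustrative assumptions, not the paper's setup:

```python
import math

def gauss_newton_toa(anchors, ranges, x, y, iters=20):
    """Estimate a 2-D position from range measurements to known anchors
    by Gauss-Newton iteration on the residuals r_i = ||p - a_i|| - d_i."""
    for _ in range(iters):
        J, r = [], []
        for (ax, ay), d in zip(anchors, ranges):
            dist = math.hypot(x - ax, y - ay)
            J.append(((x - ax) / dist, (y - ay) / dist))  # Jacobian row
            r.append(dist - d)
        # Solve the 2x2 normal equations (J^T J) delta = J^T r by hand
        a = sum(jx * jx for jx, _ in J)
        b = sum(jx * jy for jx, jy in J)
        c = sum(jy * jy for _, jy in J)
        gx = sum(jx * ri for (jx, _), ri in zip(J, r))
        gy = sum(jy * ri for (_, jy), ri in zip(J, r))
        det = a * c - b * b
        dx = (c * gx - b * gy) / det
        dy = (a * gy - b * gx) / det
        x, y = x - dx, y - dy
    return x, y

anchors = [(0, 0), (10, 0), (0, 10)]        # known anchor positions
true = (3.0, 4.0)
ranges = [math.hypot(true[0] - ax, true[1] - ay) for ax, ay in anchors]
est = gauss_newton_toa(anchors, ranges, 1.0, 1.0)
print(round(est[0], 3), round(est[1], 3))   # recovers (3.0, 4.0)
```

With noiseless ranges the iteration converges to the true position; in practice measurement noise makes the estimator's spread the quantity compared against the CRLB.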
Several methods have been proposed to approximate the sum of correlated lognormal RVs. However, the accuracy of each method depends strongly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., mean and variance. No single method provides the needed accuracy for all cases. This paper proposes a universal yet very simple approximation method for the sum of correlated lognormals based on a log skew normal approximation. The main contribution of this work is an analytical method for log skew normal parameter estimation. The proposed method provides a highly accurate approximation to the sum of correlated lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all cases.
This document summarizes a study that applies a recently developed effective theory called SCETG to model jet quenching in heavy ion collisions at the LHC. SCETG allows for the unified treatment of vacuum and medium-induced parton showers. The authors establish an analytic connection between the QCD evolution approach and traditional energy loss approach in the soft gluon emission limit. They quantify uncertainties in implementing in-medium modifications to hadron production cross sections and find the coupling between jets and the medium can be constrained to better than 10% accuracy. Numerical comparisons between the medium-modified evolution approach and energy loss formalism for modeling RAA are also presented.
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING (csandit)
This paper presents a relaxation labeling technique with newly defined compatibility measures for solving a general non-rigid point matching problem. In the literature there exists a point matching method using relaxation labeling; however, its compatibility coefficients always take a binary value, zero or one, depending on whether a point and a neighboring point have corresponding points. Our approach generalizes this relaxation labeling approach: the compatibility coefficients take n discrete values, which measure the correlation between edges. We use a log-polar diagram to compute the correlations. Through simulations, we show that this topology preserving relaxation method improves matching performance significantly compared to other state-of-the-art algorithms such as shape context, thin plate spline-robust point matching, robust point matching by preserving local neighborhood structures, and coherent point drift.
Research on Chaotic Firefly Algorithm and the Application in Optimal Reactive... (TELKOMNIKA JOURNAL)
The document proposes a chaotic firefly algorithm (CFA) to overcome the shortcomings of the original firefly algorithm getting stuck in local optima. CFA introduces chaos initialization, chaos population regeneration, and linear decreasing inertia weight to increase global search ability. CFA is tested on six benchmark functions and applied to optimize reactive power dispatch in an IEEE 30-bus system. Results show CFA performs better than the original firefly algorithm and particle swarm optimization in finding optimal solutions faster.
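A minimal sketch of a firefly algorithm with logistic-map chaos initialization (one of the three CFA modifications mentioned above) on a sphere test function follows. Every parameter value is an illustrative assumption, not the paper's setting:

```python
import math, random

def chaos_init(n, dim, lo, hi, x0=0.7):
    """Logistic-map sequence in (0, 1) mapped to the search box,
    standing in for CFA's chaos initialization."""
    pop, x = [], x0
    for _ in range(n):
        point = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)          # logistic map, r = 4
            point.append(lo + (hi - lo) * x)
        pop.append(point)
    return pop

def firefly(f, dim=2, n=15, iters=100, lo=-5.0, hi=5.0,
            beta0=1.0, gamma=0.01, alpha=0.2):
    random.seed(3)
    pop = chaos_init(n, dim, lo, hi)
    for _ in range(iters):
        bright = [f(p) for p in pop]
        for i in range(n):
            for j in range(n):
                if bright[j] < bright[i]:    # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [min(hi, max(lo,
                              a + beta * (b - a) + alpha * (random.random() - 0.5)))
                              for a, b in zip(pop[i], pop[j])]
                    bright[i] = f(pop[i])
        alpha *= 0.97                        # decay the random step
    return min(pop, key=f)

sphere = lambda p: sum(x * x for x in p)
best = firefly(sphere)
print(round(sphere(best), 4))  # fitness of the best firefly found
```

The current brightest firefly never moves, so the population's best fitness is monotonically non-increasing; the chaos regeneration and inertia-weight components of CFA are omitted here for brevity.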
This document describes using Bayesian inference to locate an opponent in a two-dimensional paintball arena based on the locations of paint spatters on the wall. It defines a joint distribution over all possible (x,y) coordinates of the opponent's location. Given observed spatter locations, it computes the posterior distribution, which provides the likelihood of each possible location. This allows extracting marginal and conditional distributions over each dimension, as well as computing credible intervals to identify likely regions where the opponent may be hiding.
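The grid-based Bayesian update described above can be sketched directly. The arena dimensions, spatter data, and the Cauchy ("lighthouse") likelihood below are illustrative assumptions:

```python
import math

def posterior(spatters, xs, ys):
    """Joint posterior over shooter positions (x, y) on a grid.
    A shooter at distance x from the wall, firing at uniformly random
    angles, produces wall hits s that are Cauchy-distributed around y."""
    post, total = {}, 0.0
    for x in xs:
        for y in ys:
            like = 1.0
            for s in spatters:
                # Cauchy density of wall hit s given shooter at (x, y)
                like *= x / (math.pi * (x * x + (s - y) ** 2))
            post[(x, y)] = like
            total += like
    return {k: v / total for k, v in post.items()}

xs = range(1, 11)                    # distance from the wall (grid units)
ys = range(0, 31)                    # position along the wall
spatters = [15, 16, 18, 21]          # observed spatter locations
post = posterior(spatters, xs, ys)

# Marginal over y: sum the joint posterior over x
marg_y = {y: sum(post[(x, y)] for x in xs) for y in ys}
print(max(marg_y, key=marg_y.get))   # most likely y position
```

Conditional distributions fall out the same way (fix x instead of summing), and a credible interval is the shortest set of grid cells whose posterior mass exceeds, say, 90%.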
1) The document describes a numerical simulation of the spherical collapse of dark matter perturbations in an expanding universe. It simulates both cold dark matter (CDM) and warm dark matter (WDM) cases.
2) For CDM, the simulation shows collapse times depend on the initial overdensity but are approximately symmetric around the turnaround time. Regions of all masses collapse as long as the initial overdensity exceeds a threshold.
3) For WDM, the simulation adds a pressure term to account for the thermal velocity of WDM in the early universe. This term slows collapse compared to CDM and can potentially prevent collapse of low-mass regions.
The document describes an algorithm for efficiently finding shortest paths between two points (the point-to-point or P2P problem) in a graph by allowing preprocessing. It improves on previous reach-based approaches by introducing bidirectional variants that use implicit lower bounds and adding shortcut arcs to reduce vertex reaches. The resulting algorithm, called RE, has similar performance to the best previous method (hh) but is simpler and combines better with A∗ search (REAL algorithm), yielding significantly faster query times, especially on road networks.
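The reach-pruning and shortcut machinery of RE is beyond a short sketch, but the bidirectional search those variants build on can be shown in miniature. The graph below is invented for illustration:

```python
import heapq

def bidirectional_dijkstra(graph, s, t):
    """Shortest s-t distance by running Dijkstra from both ends at once.
    graph: {u: [(v, w), ...]} with symmetric edges."""
    dist = [{s: 0}, {t: 0}]                 # forward / backward distances
    pq = [[(0, s)], [(0, t)]]
    best = float("inf")
    while pq[0] or pq[1]:
        # Advance whichever frontier currently has the smaller key
        side = 0 if pq[0] and (not pq[1] or pq[0][0] <= pq[1][0]) else 1
        d, u = heapq.heappop(pq[side])
        if d > dist[side].get(u, float("inf")):
            continue                         # stale queue entry
        if u in dist[1 - side]:              # frontiers met: candidate path
            best = min(best, d + dist[1 - side][u])
        if d >= best:                        # no shorter path can remain
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist[side].get(v, float("inf")):
                dist[side][v] = nd
                heapq.heappush(pq[side], (nd, v))
    return best

g = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 2), ("d", 5)],
     "c": [("a", 4), ("b", 2), ("d", 1)], "d": [("b", 5), ("c", 1)]}
print(bidirectional_dijkstra(g, "a", "d"))  # 4  (a-b-c-d)
```

RE adds reach-based pruning to each frontier (skip vertices whose reach is too small to lie on a long shortest path), and REAL further goal-directs both searches with A* lower bounds.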
Capacitated Kinetic Clustering in Mobile Networks by Optimal Transportation T... (Chien-Chun Ni)
Presented in INFOCOM 2016
http://www3.cs.stonybrook.edu/~chni/publication/optran/
--
We consider the problem of capacitated kinetic clustering, in which n mobile terminals and k base stations with respective operating capacities are given. The task is to assign the mobile terminals to the base stations such that the total squared distance from each terminal to its assigned base station is minimized and the capacity constraints are satisfied. This paper focuses on the development of distributed and computationally efficient algorithms that adapt to the motion of both terminals and base stations. Guided by optimal transportation theory, we exploit the structural property of the optimal solution, which can be represented by a power diagram on the base stations such that the total usage of nodes within each power cell equals the capacity of the corresponding base station. Using the kinetic data structure framework, we show the first analytical upper bound on the number of changes in the optimal solution, i.e., its stability. On the algorithm side, using the power diagram formulation we show that the solution can be represented in size proportional to the number of base stations and can be computed by an iterative, local algorithm. In particular, this algorithm can naturally exploit the continuity of motion, is orders of magnitude faster than existing solutions based on min-cost matching and linear programming, and is thus able to handle large-scale data under mobility.
E. Sefusatti, Tests of the Initial Conditions of the Universe after Planck (SEENET-MTP)
This document outlines Emiliano Sefusatti's presentation on testing the initial conditions of the universe using data from the Planck satellite. The presentation covers predictions from inflation like a flat, homogeneous universe with a nearly scale-invariant power spectrum. It discusses how Planck improved constraints on non-Gaussianity parameters like fNL compared to WMAP. For example, Planck reduced errors on the local fNL parameter by a factor of 2-4 depending on the shape. The implications of Planck's results are explored through the example of constraints on a DBI inflation model.
Econometric Investigation into Cryptocurrency Price Bubbles in Bitcoin and Et... (Siddharth Hitkari)
At this stage it is common knowledge that cryptocurrency prices are indeed a bubble. However, does modern-day finance have the tools to detect explosive behaviour in the absence of a fundamental value? Glad to have worked with Shane Jose to release a paper in a bid to answer this question!
Corrected asymptotics for a multi-server queue in the Halfin-Whitt regime (olimpica)
1) The document discusses methods to obtain more accurate approximations for performance measures like the probability of an empty queue in the Halfin-Whitt regime for an M/D/s queue.
2) The main idea is to view the M/D/s queue through the prism of the Gaussian random walk and obtain asymptotic series expansions involving terms including the Riemann zeta function.
3) The series expansions quantify the relationship between the limiting system and finite queue sizes, and the first few terms have the correct behavior as the number of servers grows large.
This document presents a study of the solution space of the traveling salesman problem (TSP). The authors analyzed 20 TSP instances taken from a standard database and sampled the solution space using two methods: a local optimization algorithm and a proposed method for generating uniformly distributed samples. The analysis of the results was consistent with the conjecture that the TSP solution space has a globally convex structure, which means
Genetic algorithms are optimization methods inspired by biological evolution that can be used to solve complex problems. They follow steps such as evaluating the fitness of encoded solutions, selecting the fittest to reproduce with occasional mutations, and repeating the process until convergence. They have been applied to problems in design, networks, storage, and more.
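The steps just described (evaluate, select, mutate, repeat) can be sketched as a minimal genetic algorithm on the OneMax toy problem, maximizing the number of 1 bits in a string. Every parameter value here is an illustrative assumption:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30,
                      generations=60, mut_rate=0.02, seed=7):
    """Minimal GA: evaluate fitness, keep the fittest half, recombine
    with one-point crossover and occasional bit-flip mutation, repeat."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mut_rate else b
                     for b in child]            # occasional mutation
            children.append(child)
        pop = parents + children                # elitism: parents survive
    return max(pop, key=fitness)

ones = sum                                      # fitness: count of 1 bits
best = genetic_algorithm(ones)
print(ones(best))                               # best fitness found (max 20)
```

Because the parents survive each generation, the best fitness is monotone non-decreasing, which is the property the convergence step above relies on.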
This document summarizes a study that developed algorithms to optimize road freight transportation routes in Spain while accounting for environmental costs. The algorithms, called Algorithms with Environmental Criteria (AEC), incorporate estimates of environmental costs like noise and air pollution alongside traditional routing costs like distance and delivery expenses. The researchers applied the AEC algorithms to real delivery data from a logistics company in Navarre, Spain, finding routes that minimized total costs including environmental externalities.
This document describes a modification of the Stochastic Genetic Algorithm (StGA), called StGA2, which uses a fitness-based variable per-bit mutation rate to improve the ability to escape local optima. StGA2 is applied to estimating the direction of arrival of signals in mobile communications using smart antennas. Additionally, the convergence of the algorithm and its accuracy on functions of 2 to 30 dimensions are discussed.
Global Optimization with Descending Region AlgorithmLoc Nguyen
Global optimization is necessary in some cases when we want to achieve the best solution or we require a new solution which is better the old one. However global optimization is a hazard problem. Gradient descent method is a well-known technique to find out local optimizer whereas approximation solution approach aims to simplify how to solve the global optimization problem. In order to find out the global optimizer in the most practical way, I propose a so-called descending region (DR) algorithm which is combination of gradient descent method and approximation solution approach. The ideology of DR algorithm is that given a known local minimizer, the better minimizer is searched only in a so-called descending region under such local minimizer. Descending region is begun by a so-called descending point which is the main subject of DR algorithm. Descending point, in turn, is solution of intersection equation (A). Finally, I prove and provide a simpler linear equation system (B) which is derived from (A). So (B) is the most important result of this research because (A) is solved by solving (B) many enough times. In other words, DR algorithm is refined many times so as to produce such (B) for searching for the global optimizer. I propose a so-called simulated Newton – Raphson (SNR) algorithm which is a simulation of Newton – Raphson method to solve (B). The starting point is very important for SNR algorithm to converge. Therefore, I also propose a so-called RTP algorithm, which is refined and probabilistic process, in order to partition solution space and generate random testing points, which aims to estimate the starting point of SNR algorithm. In general, I combine three algorithms such as DR, SNR, and RTP to solve the hazard problem of global optimization. 
Although the approach is division and conquest methodology in which global optimization is split into local optimization, solving equation, and partitioning, the solution is synthesis in which DR is backbone to connect itself with SNR and RTP.
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE...IJCNCJournal
Transmitter range assignment in clustered wireless networks is the bottleneck of the balance between
energy conservation and the connectivity to deliver data to the sink or gateway node. The aim of this
research is to optimize the energy consumption through reducing the transmission ranges of the nodes,
while maintaining high probability to have end-to-end connectivity to the network’s data sink. We modified
the approach given in [1] to achieve more than 25% power saving through reducing cluster head (CH)
transmission range of the backbone nodes in a multihop wireless sensor network with ensuring at least
95% end-to-end connectivity probability.
This document is a dissertation proposal by Rishideep Roy at the University of Chicago in November 2014. The proposal is to generalize results on extreme values and entropic repulsion for two-dimensional discrete Gaussian free fields to a more general class of Gaussian fields with logarithmic correlations. Specifically, the proposal plans to find the convergence in law of the maximum of these log-correlated Gaussian fields under minimal assumptions, as well as obtain finer estimates on entropic repulsion which relates to the behavior of these fields near hard boundaries. The proposal provides background on related works and outlines the key steps to be taken, including proving expectations and tightness of maxima, invariance of maximum distributions under perturbations, approximating the fields, and
This document proposes improvements to Helsgaun's Lin-Kernighan heuristic for solving the symmetric traveling salesman problem. It introduces using approximations of backbones and double bridges, combined with implementation details, to guide the search process instead of Helsgaun's alpha-values. Computational results on some VLSI instances show the proposed approach finds competitive or improved solutions compared to other state-of-the-art heuristics.
This document presents a new method called the Real-valued Iterative Adaptive Approach (RIAA) for estimating the power spectral density of nonuniformly sampled data. It aims to improve upon the periodogram, which suffers from poor resolution and leakage. RIAA is an iteratively weighted least squares periodogram that uses an adaptive weighting matrix built from the most recent spectral estimate. It is shown to have significantly less leakage than the least squares periodogram through its use of an adaptive filter. The Bayesian Information Criterion is also discussed as a way to test the significance of peaks in the estimated spectrum.
Paolo Creminelli "Dark Energy after GW170817"SEENET-MTP
GW170817 and GRB170817A were detected simultaneously, originating from the same source. This provides a tight limit on the speed of gravitational waves and photons, showing they travel at the same speed to a high degree of precision. The effective field theory of dark energy provides a framework to parametrize possible deviations from general relativity at cosmological scales. It allows gravitational waves and photons to potentially travel at different speeds depending on the coefficients in the effective field theory action. The detection of GW170817 and GRB170817A from the same source places strong constraints on these coefficients and possible dark energy models.
Time of arrival based localization in wireless sensor networks a non linear ...sipij
In this paper, we aim to obtain the location of a sensor node deployed in a Wireless Sensor Network (WSN). A Time of Arrival based localization technique is considered. We estimate the position of an unknown sensor node using non-linear techniques and compare their performance against the Cramer-Rao Lower Bound (CRLB). The non-linear techniques used are Non-linear Least Squares and Maximum Likelihood, each solved with iterative approaches, namely the Newton-Raphson, Gauss-Newton, and Steepest Descent estimates. The approaches are compared based on the simulation results; the simulation study shows that localization based on the Maximum Likelihood approach achieves higher accuracy.
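As an illustration of the iterative estimators mentioned above, a minimal Gauss-Newton solver for 2D Time-of-Arrival localization is sketched below. It assumes noiseless ranges and known anchor positions, and all names are illustrative rather than taken from the paper:

```python
import math

def gauss_newton_toa(anchors, ranges, x0, iters=20):
    """Gauss-Newton refinement of a 2D position estimate:
    minimizes sum_i (||x - a_i|| - r_i)^2 over anchors a_i and ranges r_i."""
    x, y = x0
    for _ in range(iters):
        J, r = [], []
        for (ax, ay), d in zip(anchors, ranges):
            dist = math.hypot(x - ax, y - ay)
            if dist == 0.0:
                continue  # gradient is undefined exactly at an anchor
            r.append(dist - d)
            J.append(((x - ax) / dist, (y - ay) / dist))
        # solve the 2x2 normal equations (J^T J) delta = J^T r by hand
        a = sum(jx * jx for jx, _ in J)
        b = sum(jx * jy for jx, jy in J)
        c = sum(jy * jy for _, jy in J)
        gx = sum(jx * ri for (jx, _), ri in zip(J, r))
        gy = sum(jy * ri for (_, jy), ri in zip(J, r))
        det = a * c - b * b
        if abs(det) < 1e-12:
            break  # degenerate anchor geometry
        x -= (c * gx - b * gy) / det
        y -= (a * gy - b * gx) / det
    return x, y
```

With three well-spread anchors and consistent ranges, the iteration converges quadratically from any reasonable initial guess; the CRLB comparison in the paper quantifies how close such estimators get to the best achievable variance.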
Several methods have been proposed to approximate the sum of correlated lognormal RVs.
However, the accuracy of each method depends heavily on the region of the resulting distribution
being examined and on the individual lognormal parameters, i.e., mean and variance; no single
method provides the needed accuracy in all cases. This paper proposes a
universal yet very simple approximation method for the sum of correlated lognormals based on
the log skew normal approximation. The main contribution of this work is an analytical
method for estimating the log skew normal parameters. The proposed method provides a highly
accurate approximation to the sum of correlated lognormal distributions over the whole range
of dB spreads for any correlation coefficient. Simulation results show that our method
outperforms all previously proposed methods and provides accuracy within 0.01 dB in all
cases.
This document summarizes a study that applies a recently developed effective theory called SCETG to model jet quenching in heavy ion collisions at the LHC. SCETG allows for the unified treatment of vacuum and medium-induced parton showers. The authors establish an analytic connection between the QCD evolution approach and traditional energy loss approach in the soft gluon emission limit. They quantify uncertainties in implementing in-medium modifications to hadron production cross sections and find the coupling between jets and the medium can be constrained to better than 10% accuracy. Numerical comparisons between the medium-modified evolution approach and energy loss formalism for modeling RAA are also presented.
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING csandit
This paper presents a relaxation labeling technique with newly defined compatibility measures
for solving a general non-rigid point matching problem. A point matching method using
relaxation labeling exists in the literature; however, its compatibility coefficients always take
a binary value, zero or one, depending on whether a point and a neighboring point have
corresponding points. Our approach generalizes this relaxation labeling approach: the
compatibility coefficients take n discrete values which measure the correlation between edges.
We use a log-polar diagram to compute correlations. Through simulations, we show that this
topology preserving relaxation method improves the matching performance significantly
compared to other state-of-the-art algorithms such as shape context, thin plate spline-robust
point matching, robust point matching by preserving local neighborhood structures, and
coherent point drift.
Research on Chaotic Firefly Algorithm and the Application in Optimal Reactive...TELKOMNIKA JOURNAL
The document proposes a chaotic firefly algorithm (CFA) to overcome the shortcomings of the original firefly algorithm getting stuck in local optima. CFA introduces chaos initialization, chaos population regeneration, and linear decreasing inertia weight to increase global search ability. CFA is tested on six benchmark functions and applied to optimize reactive power dispatch in an IEEE 30-bus system. Results show CFA performs better than the original firefly algorithm and particle swarm optimization in finding optimal solutions faster.
This document describes using Bayesian inference to locate an opponent in a two-dimensional paintball arena based on the locations of paint spatters on the wall. It defines a joint distribution over all possible (x,y) coordinates of the opponent's location. Given observed spatter locations, it computes the posterior distribution, which provides the likelihood of each possible location. This allows extracting marginal and conditional distributions over each dimension, as well as computing credible intervals to identify likely regions where the opponent may be hiding.
1) The document describes a numerical simulation of the spherical collapse of dark matter perturbations in an expanding universe. It simulates both cold dark matter (CDM) and warm dark matter (WDM) cases.
2) For CDM, the simulation shows collapse times depend on the initial overdensity but are approximately symmetric around the turnaround time. Regions of all masses collapse as long as the initial overdensity exceeds a threshold.
3) For WDM, the simulation adds a pressure term to account for the thermal velocity of WDM in the early universe. This term slows collapse compared to CDM and can potentially prevent collapse of low-mass regions.
The document describes an algorithm for efficiently finding shortest paths between two points (the point-to-point or P2P problem) in a graph by allowing preprocessing. It improves on previous reach-based approaches by introducing bidirectional variants that use implicit lower bounds and adding shortcut arcs to reduce vertex reaches. The resulting algorithm, called RE, has similar performance to the best previous method (hh) but is simpler and combines better with A∗ search (REAL algorithm), yielding significantly faster query times, especially on road networks.
Capacitated Kinetic Clustering in Mobile Networks by Optimal Transportation T...Chien-Chun Ni
Presented in INFOCOM 2016
http://www3.cs.stonybrook.edu/~chni/publication/optran/
--
We consider the problem of capacitated kinetic clustering, in which n mobile terminals and k base stations with respective operating capacities are given. The task is to assign the mobile terminals to the base stations such that the total squared distance from each terminal to its assigned base station is minimized and the capacity constraints are satisfied. This paper focuses on the development of distributed and computationally efficient algorithms that adapt to the motion of both terminals and base stations. Guided by optimal transportation theory, we exploit the structural property of the optimal solution, which can be represented by a power diagram on the base stations such that the total usage of nodes within each power cell equals the capacity of the corresponding base station. Using the kinetic data structure framework, we show the first analytical upper bound on the number of changes in the optimal solution, i.e., its stability. On the algorithm side, using the power diagram formulation we show that the solution can be represented in size proportional to the number of base stations and can be computed by an iterative, local algorithm. In particular, this algorithm naturally exploits the continuity of motion and runs orders of magnitude faster than existing solutions based on min-cost matching and linear programming, and is thus able to handle large-scale data under mobility.
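The capacity-constrained assignment at the heart of this problem can be made concrete with a tiny brute-force baseline. This is not the paper's power-diagram algorithm (which scales far better); here each base station is simply expanded into capacity-many slots and all slot assignments are enumerated, so it is only feasible for toy instances:

```python
from itertools import permutations

def capacitated_assignment(terminals, stations, capacities):
    """Brute-force optimal assignment of terminals to stations minimizing
    total squared distance under capacity constraints (tiny instances only).
    Requires len(terminals) <= sum(capacities)."""
    # expand each station j into capacities[j] identical slots
    slots = [j for j, c in enumerate(capacities) for _ in range(c)]

    def sq(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    best, best_cost = None, float("inf")
    for perm in set(permutations(slots, len(terminals))):
        cost = sum(sq(t, stations[j]) for t, j in zip(terminals, perm))
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best, best_cost
```

The paper's contribution is precisely that the optimal solution of this problem has power-diagram structure, which allows it to be maintained locally under motion instead of being recomputed from scratch like this baseline.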
E. Sefusatti, Tests of the Initial Conditions of the Universe after PlanckSEENET-MTP
This document outlines Emiliano Sefusatti's presentation on testing the initial conditions of the universe using data from the Planck satellite. The presentation covers predictions from inflation like a flat, homogeneous universe with a nearly scale-invariant power spectrum. It discusses how Planck improved constraints on non-Gaussianity parameters like fNL compared to WMAP. For example, Planck reduced errors on the local fNL parameter by a factor of 2-4 depending on the shape. The implications of Planck's results are explored through the example of constraints on a DBI inflation model.
Econometric Investigation into Cryptocurrency Price Bubbles in Bitcoin and Et...Siddharth Hitkari
At this stage, it is common knowledge that cryptocurrency prices are indeed a bubble. However, does modern-day finance have the tools to detect explosive behaviour in the absence of a fundamental value?
Glad to have worked with Shane Jose to release a paper in a bid to answer the aforementioned question!
Corrected asymptotics for a multi-server queue in the Halfin-Whitt regimeolimpica
1) The document discusses methods to obtain more accurate approximations for performance measures like the probability of an empty queue in the Halfin-Whitt regime for an M/D/s queue.
2) The main idea is to view the M/D/s queue through the prism of the Gaussian random walk and obtain asymptotic series expansions involving terms including the Riemann zeta function.
3) The series expansions quantify the relationship between the limiting system and finite queue sizes, and the first few terms have the correct behavior as the number of servers grows large.
This document presents a study of the solution space of the traveling salesman problem (TSP). The authors analyzed 20 TSP instances taken from a standard database and sampled the solution space using two methods: a local optimization algorithm and a proposed method for generating uniformly distributed samples. The analysis of the results was consistent with the conjecture that the TSP solution space has a globally convex structure, which means
Genetic algorithms are optimization methods inspired by biological evolution that can be used to solve complex problems. They follow steps such as evaluating the fitness of encoded solutions, selecting the fittest to reproduce with occasional mutations, and repeating the process until convergence. They have been applied to problems in design, networks, storage, and more.
This document summarizes a study that developed algorithms to optimize road freight transportation routes in Spain while accounting for environmental costs. The algorithms, called Algorithms with Environmental Criteria (AEC), incorporate estimates of environmental costs like noise and air pollution alongside traditional routing costs like distance and delivery expenses. The researchers applied the AEC algorithms to real delivery data from a logistics company in Navarre, Spain, finding routes that minimized total costs including environmental externalities.
This document describes a modification of the Stochastic Genetic Algorithm (StGA), called StGA2, which uses a fitness-based, per-bit variable mutation rate to improve the ability to escape local optima. StGA2 is applied to estimating the direction of arrival of signals in mobile communications using smart antennas. The convergence of the algorithm and its accuracy for functions of 2 to 30 dimensions are also discussed.
This document presents an introduction to the concepts of heuristics and combinatorial problems. It explains that heuristics are techniques that increase the efficiency of the search for solutions to complex problems by occasionally sacrificing optimality. It also defines combinatorial problems as those with a finite but very large number of possible solutions. Finally, it distinguishes between construction heuristics, which find a first solution, and improvement heuristics, which improve existing solutions.
Heuristics for vehicle routing problems
This document presents a summary of different heuristics for solving vehicle routing problems. It first describes the characteristics of these problems, including customers, depots, and vehicles. It then reviews classic heuristics such as the savings algorithm, insertion heuristics, and cluster-first route-second methods. It also covers metaheuristics such as ant colony algorithms, tabu search, and genetic algorithms. Finally, it analyzes extensions of the classic heuristics
A study of very large-scale neighborhood search techniques
This document describes very large-scale neighborhood search techniques for solving combinatorial optimization problems that are intractable by exact methods. Three types of large-scale neighborhood search algorithms are classified: 1) variable-depth methods that perform heuristic searches in exponentially large neighborhoods, 2) network-flow-based algorithms that use flow techniques to identify improvements in large neighborhoods, and 3) induced neighborhoods
Fernando Sandoya, exact and heuristic methods for the VRP (jornada)
This document describes exact and heuristic methods for solving the Traveling Salesman Problem (TSP) and the Vehicle Routing Problem (VRP). The TSP seeks the shortest route visiting all cities, while the VRP seeks optimal routes for a fleet of vehicles distributing goods from depots to customers. Mathematical formulations of both problems are presented, and heuristics such as nearest neighbor are described for approximating solutions, given that they are
This document summarizes an implementation of k-opt moves for the Lin-Kernighan traveling salesman problem heuristic. It describes LKH-2, which allows k-changes for any k from 2 to n. This generalizes a previous version, LKH-1, which uses 5-changes. The effectiveness of LKH-2 is demonstrated on instances with 10,000 to 10 million cities, finding high-quality solutions in polynomial time like the original Lin-Kernighan heuristic.
Parallel Guided Local Search and Some Preliminary Experimental Results for Co...csandit
This document proposes a Parallel Guided Local Search (PGLS) algorithm for continuous optimization problems. PGLS runs multiple Guided Local Search agents in parallel that periodically exchange information. The agents use local search and crossover to explore the search space. Preliminary experiments on benchmark functions show PGLS performs better than single-agent Guided Local Search by efficiently utilizing parallel computing resources and information exchange between agents.
The Traveling Salesman Problem: A Neural Network Perspectivemustafa sarac
This document provides an overview of different approaches to solving the Traveling Salesman Problem (TSP), including exact algorithms from operations research as well as neural network models inspired by artificial intelligence. It surveys three main neural network approaches - the Hopfield-Tank network, elastic net, and self-organizing map. The Hopfield-Tank network maps the TSP onto a neural network to represent solutions. It uses an update rule to iteratively explore configurations until reaching stability. While neural networks currently cannot match the solution quality of classical heuristics, they offer potential for massive parallelism and may lead to faster solving in the future.
Algorithms And Optimization Techniques For Solving TSPCarrie Romero
The document discusses three algorithms - simulated annealing, ant colony optimization, and genetic algorithm - for solving the traveling salesman problem (TSP). It analyzes each algorithm's approach, parameters used, and results of experiments on 15 and 50 randomly generated cities. Simulated annealing had average distances of 4.1341 and 20.1316 units for 15 and 50 cities respectively. Ant colony optimization yielded average distances of 3.9102 units for 15 cities, running faster than simulated annealing. Genetic algorithm was tested on 15 cities in Brazil.
The document proposes and evaluates two techniques for attention in multi-source sequence-to-sequence learning: flat attention combination and hierarchical attention combination. Both techniques achieved comparable results to existing context vector concatenation approaches on tasks of multimodal translation and automatic post-editing. Hierarchical attention combination performed best on multimodal translation and allows inspecting individual input attentions. The techniques provide a way to model importance of each input sequence.
This document describes a new algorithm for dual tree kernel conditional density estimation (KCDE) that provides fast and accurate density predictions. The algorithm extends previous work on univariate KCDE to allow for multivariate labels (Y) and conditioning variables (X). It applies Gray's dual tree approach separately to the numerator and denominator of the KCDE formula, and uses error bounds to ensure the quotient estimates have bounded relative error. This new algorithm provides the fastest known method for kernel conditional density estimation for prediction tasks.
Enhance The K Means Algorithm On Spatial DatasetAlaaZ
The document describes an enhancement to the standard k-means clustering algorithm. The enhancement aims to improve computational speed by storing additional information from each iteration, such as the closest cluster and distance for each data point. This avoids needing to recompute distances to all cluster centers in subsequent iterations if a point does not change clusters. The complexity of the enhanced algorithm is reduced from O(nkl) to O(nk) where n is points, k is clusters, and l is iterations.
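The iteration-caching idea described above can be sketched as follows. This is a simplified illustration of the enhancement, not the paper's exact pseudocode, and the helper names are mine: each point remembers its last assigned center and distance, and a full scan over all k centers is skipped when the point is clearly staying put.

```python
import math

def enhanced_kmeans(points, centers, iters=10):
    """k-means where each point caches its last center and distance;
    the full nearest-center scan is skipped when the cached center
    is still at least as close as before (the point keeps its cluster)."""
    k = len(centers)
    assign = [None] * len(points)
    best_d = [math.inf] * len(points)
    for _ in range(iters):
        for idx, p in enumerate(points):
            if assign[idx] is not None:
                d_same = math.dist(p, centers[assign[idx]])
                if d_same <= best_d[idx]:  # still as close: skip full scan
                    best_d[idx] = d_same
                    continue
            ds = [math.dist(p, c) for c in centers]  # full scan over k centers
            assign[idx] = min(range(k), key=ds.__getitem__)
            best_d[idx] = ds[assign[idx]]
        # recompute each center as the mean of its assigned points
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centers[j] = tuple(sum(x) / len(members) for x in zip(*members))
    return assign, centers
```

In late iterations most points keep their clusters, so most inner loops hit the skip branch and cost O(1) instead of O(k), which is the source of the claimed speedup.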
This document summarizes a research paper that proposes a new method to accelerate the nearest neighbor search step of the k-means clustering algorithm. The k-means algorithm is computationally expensive due to calculating distances between data points and cluster centers. The proposed method uses geometric relationships between data points and centers to reject centers that are unlikely to be the nearest neighbor, without decreasing clustering accuracy. Experimental results showed the method significantly reduced the number of distance computations required.
1) The document describes a vehicle routing project that uses a multi-commodity network flow formulation to explore sub-optimal solutions for object classification with noisy sensors on a 2D grid.
2) It formulates the problem as assigning tasks to vehicles (commodities) that must flow through the graph in 4 directions while being constrained by boundaries and returning to base.
3) The algorithm uses a look-ahead window to consider future moves and a rollout step using linear programming to approximate costs farther in time and decide optimal vehicle movements.
This summarizes a document about a filter-and-refine approach for reducing computational cost when performing correlation analysis on pairs of spatial time series datasets. It groups similar time series within each dataset into "cones" based on spatial autocorrelation. Cone-level correlation computation can then filter out many element pairs whose correlation is clearly below a threshold. The remaining pairs require individual correlation computation in the refinement phase. Experiments on Earth science datasets showed significant computational savings, especially with high correlation thresholds.
The document discusses the travelling salesman problem (TSP), which aims to find the shortest route for a salesman to visit each city in a list exactly once and return to the origin city. It is an NP-hard problem with many applications and one of the most studied problems in optimization; no polynomial-time algorithm for it is known. While computationally difficult, heuristics and algorithms have been developed that can solve instances with tens of thousands of cities exactly and approximate solutions for problems with millions of cities.
The document proposes a new method for efficiently finding the top-k shortest simple paths between two nodes in a graph. It precomputes shortest path trees, transforms the graph, and uses optimizations like k-reduction and adaptive thresholds to terminate path searches early. Experimental results on real and synthetic graphs show the method outperforms prior algorithms by Yen and Hershberger for discovering top-k shortest paths.
A Comparison of Particle Swarm Optimization and Differential Evolutionijsc
Two modern optimization methods including Particle Swarm Optimization and Differential Evolution are compared on twelve constrained nonlinear test functions. Generally, the results show that Differential Evolution is better than Particle Swarm Optimization in terms of high-quality solutions, running time and robustness.
1. The document describes a heuristic approach for solving the cluster traveling salesman problem (CTSP) using genetic algorithms.
2. The proposed algorithm divides nodes into pre-specified clusters, uses GA to find a Hamiltonian path for each cluster, then combines the optimized cluster paths to form a full tour.
3. The algorithm was tested on symmetric TSPLIB instances and shown to find high quality solutions faster than two other metaheuristic approaches for CTSP.
The International Journal of Engineering and Science (The IJES)theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Searching is a technique used in AI to solve problems by exploring possible states or solutions. The document discusses various search algorithms used in single-agent pathfinding problems like sliding tile puzzles. It describes brute force search strategies like breadth-first search and depth-first search, and informed search strategies like A* search, greedy best-first search, hill-climbing search and simulated annealing that use heuristic functions. Local search algorithms are also summarized.
Robust Fuzzy Data Clustering In An Ordinal Scale Based On A Similarity MeasureIJRES Journal
This paper is devoted to processing data given in an ordinal scale. A new objective function of a special type and a group of robust fuzzy clustering algorithms based on a similarity measure are introduced.
This document discusses using particle swarm optimization based on variable neighborhood search (PSO-VNS) to attack classical cryptography ciphers. PSO is a population-based optimization algorithm inspired by bird flocking behavior. VNS is a metaheuristic algorithm that explores neighborhoods of solutions to escape local optima. The paper proposes improving PSO with VNS to find better solutions. It evaluates PSO-VNS on substitution and transposition ciphers, finding it recovers keys better than standard PSO and other variants.
Cost Versus Distance in the Traveling Salesman Problem
Kenneth D. Boese
UCLA Computer Science Dept., Los Angeles, CA 90024-1596 USA
Abstract
This paper studies the distribution of good solutions for the traveling salesman problem (TSP) on a well-known 532-city instance that has been solved optimally by Padberg and Rinaldi [16]. For each of five local search heuristics, solutions are obtained from 2,500 different random starting points. Comparisons of these solutions show that lower-cost solutions have a strong tendency to be both closer to the optimal tour and closer to other good solutions. (Distance between two solutions is defined in terms of the number of edges they have in common.) These results support the conjecture of Boese, Kahng and Muddu [3] that the solution spaces of TSP instances have a "globally convex" or "big valley" character. This observation was used by [3] to motivate a new multi-start strategy for global optimization called Adaptive Multi-Start (AMS).
1 Introduction
Local search is probably the most successful approach to finding heuristic solutions to combinatorial global optimization problems. In global optimization, the objective is to find a solution s ∈ S in the solution space S which minimizes a cost function f(s) defined on S. Local search moves iteratively from a solution s_i to some "nearby" solution s_{i+1} in the neighborhood N(s_i) of s_i. The definition of the neighborhoods N(s) for each s ∈ S, together with the solution costs f(s), gives rise to a cost surface for the particular problem instance. Understanding this cost surface can help both to explain the success of previous heuristics (e.g., simulated annealing) and to motivate new, more effective heuristics (e.g., multi-start strategies or better annealing schedules). Our results indicate that cost surfaces for the traveling salesman problem (TSP) exhibit a "globally convex" [6] or what we call a "big valley" structure. Figure 1 gives an intuitive picture of the big valley, in which the set of local minima appears convex with one central global minimum.

Figure 1: Intuitive picture of the "big valley" solution space structure.

In this paper, we discuss experimental results obtained by running five different local search heuristics many times on a single, well-known TSP instance called "ATT532". ATT532 was compiled by AT&T Bell Laboratories and is based on locations of 532 cities in the continental United States. It has been used in a number of other studies including [12, 13] and was solved to optimality by Padberg and Rinaldi in 1987 [16]. We have chosen this instance because (i) it represents a real-world geometric TSP instance; (ii) it is large enough to prove difficult for most heuristics to solve optimally; and (iii) its optimal tour is known, allowing us to compare heuristic solutions to the optimal solution.

* This work was performed under support from the UCLA Dissertation Year Fellowship.
In [3] we presented similar results for two random geometric TSP instances with 100 and 500 cities. The plots in [3] were over local minima obtained by a randomized implementation of the 2-Opt local search heuristic. The current study augments [3] by using four additional local search heuristics for an instance with a known globally optimal tour. We also note that other authors such as Muhlenbein et al. [15] and Sourlas [18] have used similar plots to justify their heuristics. However, our results in [3] and in this report (i) involve more solutions and use better local search heuristics; (ii) compare mean distances to other solutions, in addition to distances to the optimal solution; (iii) lead to the observation that the optimal solution is more central among good solutions; and (iv) motivate a different heuristic (Adaptive Multi-Start or AMS) for global optimization.
2 Preliminaries

Suppose that t1 and t2 are TSP tours over the same set of n cities. We define the distance d(t1, t2) to be n minus the number of edges contained in both t1 and t2. This measure of distance has been used in a number of previous studies of TSP solution spaces (e.g., [9, 14, 18]). In [3], we showed that this distance approximates the number of 2-Opt operations required to transform one tour into another, to within a factor of two.¹
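The distance measure d(t1, t2) defined here is easy to state in code. A minimal sketch (the representation and function names are mine, not the paper's) treats a tour as a cyclic list of city indices and counts shared undirected edges:

```python
def tour_edges(tour):
    """Undirected edge set of a cyclic tour given as a list of city indices."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def tour_distance(t1, t2):
    """d(t1, t2) = n minus the number of edges the two tours share."""
    assert len(t1) == len(t2)
    return len(t1) - len(tour_edges(t1) & tour_edges(t2))
```

Identical tours (including a tour and its reversal, since edges are undirected) have distance 0, and two tours sharing no edges have distance n.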
Each of the heuristics used in this report is based on the k-Opt local search strategy, which iteratively transforms tours into lower-cost tours by performing a sequence of k-Opt moves. Each k-Opt move replaces k edges in a tour with k new edges to form a new tour. We believe that d(t1, t2) is closely related to the k-Opt "distance" between tours for general k, in addition to k = 2. Thus, we believe d(t1, t2) is a good measure of proximity between solutions produced by k-Opt-based heuristics. The five local search heuristics we study include:
1. Random 2-Opt. At each iteration, we test all (n choose 2) possible 2-Opt moves in random order, until an improving move is found or the current tour is shown to be a local minimum.

2. Fast 2-Opt. At each iteration, we perform the 2-Opt search proposed by Bentley [2]. This reduces the time complexity of 2-Opt from O(n^2) for Random 2-Opt to approximately O(n log n) on average.

3. Fast 3-Opt. We follow Bentley's [2] efficient implementation of the 3-Opt heuristic originally described by Lin [10].²

4. Lin-Kernighan. We have implemented, as accurately and completely as possible, Lin and Kernighan's [11] variation of k-Opt that searches a small but effective subset of all k-Opt moves for 2 ≤ k ≤ n.

5. Large-Step Markov Chains (LSMC). Finally, we use the heuristic of Martin et al. [12], [13], which iteratively applies 3-Opt to find a sequence of local minima; the starting tour for each 3-Opt descent is obtained by applying a random 4-Opt move to the most recent 3-Opt local minimum. Our implementation returns the best tour visited after a sequence of 1,000 3-Opt descents.³
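Heuristic 1 (Random 2-Opt) can be sketched in a few lines of Python. This is a hedged illustration assuming Euclidean city coordinates, not the authors' implementation:

```python
import math
import random

def tour_cost(tour, pts):
    """Total Euclidean length of a cyclic tour over point coordinates."""
    n = len(tour)
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % n]]) for i in range(n))

def random_2opt(tour, pts, rng=random):
    """Try all (n choose 2) 2-Opt segment reversals in random order,
    apply the first improving one, and repeat until no move improves
    the tour (i.e., until a 2-Opt local minimum is reached)."""
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        moves = [(i, j) for i in range(n - 1) for j in range(i + 2, n)]
        rng.shuffle(moves)
        for i, j in moves:
            # a 2-Opt move: reverse the segment between positions i+1 and j
            cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            if tour_cost(cand, pts) < tour_cost(tour, pts) - 1e-12:
                tour, improved = cand, True
                break
    return tour
```

On a unit square, the crossed tour 0-2-1-3 is uncrossed to the optimal perimeter tour of length 4. The full cost recomputation per candidate is what makes this version O(n^2) per iteration; Fast 2-Opt (heuristic 2) avoids that with neighbor lists.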
We include Random 2-Opt to provide continuity with our original paper 3]. Interestingly, Random 2-Opt
returns solutions with signi cantly higher cost than those obtained by Fast 2-Opt. Heuristics 2 through 4 have
been compared to other heuristics by Johnson 7] and Bentley 1] and appear to be among the most e ective
TSP heuristics. For example, 3-Opt and Lin-Kernighan return tours even better than simulated annealing
1 The same result was proved independently by Kececioglu and Sanko 8] in the context of computing the number of chro-
mosome inversions required to evolve one organism into another.
^2 Note that our implementations of Fast 2-Opt and Fast 3-Opt differ slightly from Bentley's in that we precompute a "nearest neighbor" matrix of the 25 closest cities to each city in the instance.
^3 Note that before applying a random 4-Opt move, LSMC sometimes returns to the previous 3-Opt local minimum if the current one has higher cost. This decision is based on a Metropolis criterion for which we have found a good temperature to be 10.0 for this instance.
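The Metropolis decision mentioned in the footnote above can be sketched in a few lines. This is a generic illustration of the criterion (the function name and interface are our own), not the authors' code; the default temperature 10.0 matches the value reported for this instance.

```python
import math
import random

def metropolis_accept(new_cost, old_cost, temperature=10.0):
    """Metropolis criterion: always accept an improvement; accept a
    cost increase of delta with probability exp(-delta / temperature)."""
    delta = new_cost - old_cost
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```

In the LSMC context, old_cost would be the previous 3-Opt local minimum and new_cost the current one; rejection means restarting the next 4-Opt kick from the previous minimum.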
when applied in a multi-start regime. Heuristic 5 is perhaps the best TSP heuristic available for returning solutions very close to optimal, although it does require more computation time than the other heuristics considered here.
3 Experimental Results
We ran each of the heuristics 2,500 times from random starting tours. We then computed the distance of each
solution to the optimal tour and to each of the other solutions found by the same heuristic. Our results are
plotted in Figures 2 through 6 and summarized in Table 1. Our experiments resulted in 2,500 unique tours for each of the heuristics except LSMC, which found 1,884 unique tours. LSMC also found an optimal tour six times, four times finding the tour published in [16] and twice finding a tour with equal cost (27,686) at distance two from the published optimum. None of the other heuristics found an optimal tour in any of its 2,500 runs. Our results show a very clear relationship between cost and distance: better heuristic tours are both closer to the optimal tour and closer to other heuristic tours. Moreover, the optimal tour occupies a more central position within the subspace of good solutions: the optimal tour is closer on average to the heuristic tours than are most of the heuristic tours themselves. This suggests a "globally convex" [6] or "big valley" structure for the TSP solution space, with the optimal solution near the center of a single valley of low-cost solutions.
                   Ave. Percent    Ave. Distance   Max. Distance   Fraction of      Ave. Mean Distance   Ave. Running
  Algorithm        Above Optimal   to Optimal      to Optimal      Solution Space   to Other Solutions   Time (seconds)
  Random 2-Opt     11.8            196             233             10^-569          232                  11.5
  Fast 2-Opt       6.7             152             194             10^-670          176                  0.28
  Fast 3-Opt       2.3             110             153             10^-779          129                  0.27
  Lin-Kernighan    1.2             96              142             10^-809          110                  6.2
  LSMC (3-Opt)     0.14            59              97              10^-935          65                   33.8

Table 1: Summary of solutions from 2,500 runs each of five different TSP heuristics on ATT532. All heuristics except LSMC found 2,500 unique tours; LSMC found 1,884 unique tours. Running times are for an HP Apollo 9000-735.
The relationship between cost and distance is most striking for Random 2-Opt and for Lin-Kernighan, in Figures 2 and 5. For Fast 2-Opt and Fast 3-Opt, the relationship is somewhat obscured by a relatively small number of local minima with high cost. (Note that high-cost local minima are ignored by our AMS heuristic in [3].) For LSMC, the relationship appears to be quite strong again, although there is perhaps a second valley at a distance between 60 and 80 from the optimal tour.

[Figure 2: 2,500 Random 2-Opt local minima for ATT532. Tour cost (vertical axis) is plotted against (a) mean distance to the 2,499 other local minima and (b) distance to the global minimum.]
Our results indicate that studies of TSP solution spaces should concentrate on a very small subspace. Define a ball b(t, k) to be the subset of tours within distance k of a tour t. From the third column in Table 1, we see that all the tours found by the five heuristics are contained in b(t_opt, 233). In Appendix B of [3], we described how to calculate the number of tours within b(t, k) for any k <= n. We used this calculation to obtain the fourth column of Table 1, which gives the fraction of the solution space contained in a ball centered at the optimal tour and containing all tours obtained by the heuristic. For instance, all solutions found by Fast 2-Opt lie within a ball containing a fraction 1/10^670 of the solution space, while all of the LSMC solutions lie in 1/10^935 of the solution space.^4
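The inter-tour distance d(t1, t2) used throughout can be computed directly as the number of edges of one tour absent from the other. The sketch below is our own illustration of this edge-based distance (function names are hypothetical); it is not the ball-counting calculation from Appendix B of [3].

```python
def tour_edges(tour):
    """Undirected edge set of a cyclic tour (frozensets ignore direction)."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def tour_distance(t1, t2):
    """d(t1, t2): number of edges of t1 not appearing in t2.  Since both
    tours on the same cities have n edges, this count is symmetric."""
    return len(tour_edges(t1) - tour_edges(t2))
```

Under this measure, identical tours are at distance 0 and tours sharing no edges are at distance n, so the 532-city heuristic tours in Table 1 all lie within a small fraction of the maximum possible distance.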
Finally, in Table 2 we analyze the relationship between cost and distance more formally. For each of the five heuristics, we compute the correlations between cost and the distance to optimal, and also the correlations between cost and the mean distance to other solutions. The table confirms that the relationship between cost and distance is strongest for the Random 2-Opt and Lin-Kernighan heuristics. The t-Statistics reported in Table 2 indicate whether each correlation is statistically significant (i.e., could not occur merely by chance):

^4 Because there are (531!)/2 ≈ 10^1218 possible tours, these balls contain approximately 10^648 and 10^283 tours, respectively.
[Figure 3: 2,500 Fast 2-Opt local minima for ATT532. Tour cost is plotted against (a) mean distance to other solutions and (b) distance to optimal.]
[Figure 4: 2,500 Fast 3-Opt local minima for ATT532. Tour cost is plotted against (a) mean distance to other solutions and (b) distance to optimal.]
a value of approximately 2.0 or greater indicates a correlation significant at the 95% confidence level, and a value of 2.6 or greater indicates significance at the 99% confidence level [17]. With t-Statistics ranging from 19 to 54, the correlations between distance and cost are highly significant statistically.
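The t-Statistics in Table 2 follow from the standard significance test for a Pearson correlation, t = r * sqrt(n - 2) / sqrt(1 - r^2). A minimal sketch (pure Python; the function names are our own):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t-statistic for testing r != 0 with n samples; for large n,
    roughly 2.0 and 2.6 mark the 95% and 99% confidence thresholds."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
```

For example, r = 0.73 over n = 2,500 runs gives t ≈ 53, consistent with the Random 2-Opt and Lin-Kernighan rows of Table 2.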
[Figure 5: 2,500 Lin-Kernighan local minima for ATT532. Tour cost is plotted against (a) mean distance to other solutions and (b) distance to optimal.]
[Figure 6: 1,884 unique solutions found by Large-Step Markov Chains (LSMC) in 2,500 runs. Tour cost is plotted against (a) mean distance to other solutions and (b) distance to optimal.]
4 Continuing Research
Our continuing research has produced similar plots for a number of other combinatorial optimization problems, including circuit/graph partitioning, satisfiability, number partitioning, and job shop scheduling. In [3] we also presented two plots for random graph partitioning instances, which again showed a strong relationship
                   Mean Dist. to Other Solutions    Distance to Optimal
  Algorithm        Correlation    T-Statistic       Correlation    T-Statistic
  Random 2-Opt     0.73           54                0.55           32
  Fast 2-Opt       0.53           31                0.47           27
  Fast 3-Opt       0.66           44                0.54           32
  Lin-Kernighan    0.73           54                0.57           34
  LSMC (3-Opt)     0.69           41                0.40           19

Table 2: Correlations between distance and cost for the five heuristics applied to ATT532. (Based on the unique minima resulting from 2,500 runs of each heuristic.)
between cost and distance. However, Hagen and Kahng [5] have shown, for circuit partitioning at least, that this relationship deteriorates for lower-cost solutions (i.e., those produced by more powerful heuristics such as Fiduccia-Mattheyses [4]). In other problem formulations we also find weaker cost-distance relationships than in the TSP, although in some of them (e.g., job shop scheduling) the relationship becomes more apparent when we use better heuristics. Finally, we are testing multi-start heuristics for the TSP that constrain edges in later descents if they are common to all of the best tours found in earlier descents. This strategy is very similar to a multi-start approach suggested by Lin and Kernighan in their 1973 paper, except that we now "freeze" edges common to the best previous solutions (cf. [5]) rather than only the edges common to all previous solutions.
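The edge-freezing idea above reduces to computing the intersection of the edge sets of the best tours found so far. The following is a rough sketch under that reading (the function name is hypothetical, and this is an illustration of the set computation, not the authors' multi-start heuristic):

```python
def common_edges(tours):
    """Undirected edges shared by every tour in a collection.  In a
    freeze-edges multi-start, these would be fixed during later descents."""
    def edges(tour):
        n = len(tour)
        return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}
    shared = edges(tours[0])
    for t in tours[1:]:
        shared &= edges(t)  # intersect with each additional tour's edges
    return shared
```

Restricting the intersection to only the best previous solutions, rather than all of them, keeps the frozen set from collapsing toward the few edges that happen to appear in every local minimum.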
References

[1] J. L. Bentley, "Experiments on Traveling Salesman Heuristics", in First Annual ACM-SIAM Symposium on Discrete Algorithms (January 1990), pp. 187-197.
[2] J. L. Bentley, "Fast Algorithms for Geometric Traveling Salesman Problems", ORSA Journal on Computing 4(4) (Fall 1992), pp. 387-410.
[3] K. D. Boese, A. B. Kahng and S. Muddu, "A New Adaptive Multi-Start Technique for Combinatorial Global Optimizations", Operations Research Letters 16(2), Sept. 1994, pp. 101-113.
[4] C. M. Fiduccia and R. M. Mattheyses, "A Linear-Time Heuristic for Improving Network Partitions", in ACM/IEEE Nineteenth Design Automation Conference, June 1982, pp. 175-181.
[5] L. Hagen and A. B. Kahng, "Combining Problem Reduction and Adaptive Multi-Start: A New Technique for Superior Iterative Partitioning", to appear in IEEE Trans. Computer-Aided Design, 1995.
[6] T. C. Hu, V. Klee and D. Larman, "Optimization of Globally Convex Functions", SIAM J. on Control and Optimization 27(5), 1989, pp. 1026-1047.
[7] D. S. Johnson, "Local Optimization and the Traveling Salesman Problem", in Proceedings of the 17th International Colloquium on Automata, Languages and Programming, July 1990, pp. 446-460.
[8] J. Kececioglu and D. Sankoff, "Exact and Approximation Algorithms for the Inversion Distance Between Two Chromosomes", in Proceedings of the 4th Annual Symposium on Combinatorial Pattern Matching, July 1993, pp. 87-105.
[9] S. Kirkpatrick and G. Toulouse, "Configuration Space Analysis of Traveling Salesman Problems", Journal de Physique 46, 1985, pp. 1277-1292.
[10] S. Lin, "Computer Solutions of the Traveling Salesman Problem", Bell System Technical Journal 44, 1965, pp. 2245-2269.
[11] S. Lin and B. W. Kernighan, "An effective heuristic algorithm for the traveling-salesman problem", Operations Research 21, 1973, pp. 498-516.
[12] O. Martin, S. W. Otto and E. W. Felten, "Large-Step Markov Chains for the Traveling Salesman Problem", Complex Systems 5(3), June 1991, pp. 299-326.
[13] O. Martin, S. W. Otto and E. W. Felten, "Large-Step Markov Chains for the TSP Incorporating Local Search Heuristics", Operations Res. Letters 11, 1992, pp. 219-224.
[14] M. Mezard and G. Parisi, "A Replica Analysis of the Travelling Salesman Problem", Journal de Physique 47, 1986, pp. 1285-1296.
[15] H. Muhlenbein, M. Georges-Schleuter and O. Kramer, "Evolution Algorithms in Combinatorial Optimization", Parallel Computing 7, 1988, pp. 65-85.
[16] M. Padberg and G. Rinaldi, "Optimization of a 532-city symmetric traveling salesman problem by branch and cut", Operations Res. Letters 6, 1987, pp. 1-7.
[17] S. M. Ross, Introduction to Probability and Statistics for Engineers and Scientists, Wiley, New York, 1987.
[18] N. Sourlas, "Statistical Mechanics and the Travelling Salesman Problem", Europhysics Letters 2(12), 1986, pp. 919-923.