Chiral symmetry breaking and confinement effects on dilepton and photon produ... (Daisuke Satow)
Slides used in presentation at:
“New perspectives on Photons and Dileptons in Ultrarelativistic Heavy-Ion Collisions at RHIC and LHC”, in November, 2015 @ ECT*, Trento, Italy
Towards Minimal Test Collections for Evaluation of Audio Music Similarity and... (Julián Urbano)
Reliable evaluation of Information Retrieval systems requires large amounts of relevance judgments. Making these annotations is quite complex and tedious for many Music Information Retrieval tasks, so performing such evaluations requires too much effort. A low-cost alternative is the application of Minimal Test Collection algorithms, which offer quite reliable results while significantly reducing the annotation effort. The idea is to incrementally select what documents to judge so that we can compute estimates of the effectiveness differences between systems with a certain degree of confidence. In this paper we show a first approach towards its application to the evaluation of the Audio Music Similarity and Retrieval task, run by the annual MIREX evaluation campaign. An analysis with the MIREX 2011 data shows that the judging effort can be reduced to about 35% to obtain results with 95% confidence.
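To make the incremental-judging idea concrete, here is a minimal Python sketch for a simplified case: deciding the sign of a precision-at-k difference between two systems. The actual Minimal Test Collection algorithms target average precision and attach probabilistic confidence; the document ids, judgments, and function name below are invented for illustration.

```python
def judge_until_decided(run_a, run_b, qrels, k):
    """Judge documents in decreasing order of their weight on Delta P@k,
    stopping as soon as the unjudged documents can no longer flip its sign.
    `qrels` stands in for the (costly) human relevance judgments."""
    a, b = set(run_a[:k]), set(run_b[:k])
    # A document's weight on Delta P@k; documents retrieved by both
    # systems weigh 0 and never need to be judged.
    weights = {d: ((d in a) - (d in b)) / k for d in a | b}
    pending = sorted((d for d in weights if weights[d] != 0),
                     key=lambda d: (-abs(weights[d]), d))
    delta = 0.0
    remaining = sum(abs(weights[d]) for d in pending)
    n_judged = 0
    for d in pending:
        remaining -= abs(weights[d])
        n_judged += 1
        if qrels[d]:                   # the expensive annotation step
            delta += weights[d]
        if abs(delta) > remaining:     # sign of Delta P@k is now decided
            break
    return delta, n_judged
```

In the toy run below, three of four informative documents suffice to decide which system wins, mirroring how MTC trims the judging effort.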
Finding Ground States of Sherrington-Kirkpatrick Spin Glasses with Hierarchic... (Martin Pelikan)
This study focuses on the problem of finding ground states of random instances of the Sherrington-Kirkpatrick (SK) spin-glass model with Gaussian couplings. While the ground states of SK spin-glass instances can be obtained with branch and bound, the computational complexity of branch and bound limits tractable instances to about 90 spins. We describe several approaches based on the hierarchical Bayesian optimization algorithm (hBOA) for reliably identifying ground states of SK instances intractable with branch and bound, and present a broad range of empirical results on such problem instances. We argue that the proposed methodology holds great promise for reliably solving large SK spin-glass instances to optimality with practical time complexity. The proposed approaches to reliably identifying global optima can also be applied to other problems, and they can be used with many other evolutionary algorithms. The performance of hBOA is compared to that of the genetic algorithm with two common crossover operators.
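As a concrete reference point, here is a minimal sketch of the SK energy function and a deterministic single-flip descent. This is only a generic local searcher, not the branch and bound or hBOA machinery the paper studies, and the instance size is illustrative.

```python
import random

def sk_energy(spins, J):
    """SK energy: E(s) = -sum_{i<j} J[i][j] * s_i * s_j, with s_i in {-1, +1}."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def random_sk_instance(n, rng):
    """Random SK instance with Gaussian couplings J_ij ~ N(0, 1)."""
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            J[i][j] = rng.gauss(0.0, 1.0)
    return J

def single_flip_descent(spins, J):
    """Flip single spins greedily until no flip lowers the energy."""
    spins = list(spins)
    e = sk_energy(spins, J)
    improved = True
    while improved:
        improved = False
        for i in range(len(spins)):
            spins[i] = -spins[i]
            e_new = sk_energy(spins, J)
            if e_new < e:
                e, improved = e_new, True
            else:
                spins[i] = -spins[i]   # revert: this flip did not help
    return spins, e

rng = random.Random(0)
J = random_sk_instance(12, rng)
start = [rng.choice((-1, 1)) for _ in range(12)]
local_min, e_min = single_flip_descent(start, J)
```

A descent like this only reaches a local minimum; the whole point of the paper is that certifying the global ground state is the hard part.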
Simplified Runtime Analysis of Estimation of Distribution Algorithms (Per Kristian Lehre)
We demonstrate how to estimate the expected optimisation time of UMDA, an estimation of distribution algorithm, using the level-based theorem. The talk was given at the GECCO 2015 conference in Madrid, Spain.
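The mechanism that the level-based runtime analysis addresses can be sketched with a toy UMDA on OneMax; population size, selection size, and probability margins below are illustrative choices, not parameters from the talk.

```python
import random

def umda_onemax(n=20, pop_size=60, mu=30, generations=100, seed=1):
    """Minimal UMDA: sample from per-bit marginals, select the best mu
    individuals, and re-estimate the marginals from them.
    Fitness is OneMax (the number of ones)."""
    rng = random.Random(seed)
    p = [0.5] * n                       # marginal probability of a 1 per bit
    lo, hi = 1.0 / n, 1.0 - 1.0 / n     # margins keep marginals off 0 and 1
    best_seen = 0
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)
        best_seen = max(best_seen, sum(pop[0]))
        selected = pop[:mu]
        p = [min(hi, max(lo, sum(x[i] for x in selected) / mu))
             for i in range(n)]
    return best_seen

best = umda_onemax()
```

The margins at 1/n and 1 - 1/n prevent premature fixation, one of the details the rigorous runtime analysis has to handle carefully.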
Intelligent Bias of Network Structures in the Hierarchical BOA (Martin Pelikan)
One of the primary advantages of estimation of distribution algorithms (EDAs) over many other stochastic optimization techniques is that they supply us with a roadmap of how they solve a problem. This roadmap consists of a sequence of probabilistic models of candidate solutions of increasing quality. The first model in this sequence would typically encode the uniform distribution over all admissible solutions whereas the last model would encode a distribution that generates at least one global optimum with high probability. It has been argued that exploiting this knowledge should improve EDA performance when solving similar problems. This paper presents an approach to bias the building of Bayesian network models in the hierarchical Bayesian optimization algorithm (hBOA) using information gathered from models generated during previous hBOA runs on similar problems. The approach is evaluated on trap-5 and 2D spin glass problems.
Effects of a Deterministic Hill Climber on hBOA (Martin Pelikan)
Hybridization of global and local search algorithms is a well-established technique for enhancing the efficiency of search algorithms. Hybridizing estimation of distribution algorithms (EDAs) has been repeatedly shown to produce better performance than either the global or local search algorithm alone. The hierarchical Bayesian optimization algorithm (hBOA) is an advanced EDA which has previously been shown to benefit from hybridization with a local searcher. This paper examines the effects of combining hBOA with a deterministic hill climber (DHC). Experiments reveal that allowing DHC to find the local optima makes model building and decision making much easier for hBOA. This reduces the minimum population size required to find the global optimum, which substantially improves overall performance.
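One common reading of a deterministic hill climber, flipping the best single bit until no flip improves, can be sketched as follows; the paper's exact DHC variant may differ in its scan order and tie-breaking.

```python
def dhc(x, fitness):
    """Deterministic hill climber: repeatedly flip the single bit giving
    the largest fitness gain, until no single-bit flip improves."""
    x = list(x)
    f = fitness(x)
    while True:
        best_gain, best_i = 0, None
        for i in range(len(x)):
            x[i] ^= 1                  # try flipping bit i
            gain = fitness(x) - f
            x[i] ^= 1                  # undo the trial flip
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:             # local optimum reached
            return x, f
        x[best_i] ^= 1
        f += best_gain
```

On a deceptive trap function this climber is pulled toward the wrong local optimum, which is exactly why pairing it with hBOA's model building, rather than using it alone, pays off.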
Fitness inheritance in the Bayesian optimization algorithm (Martin Pelikan)
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions, but also for estimating their fitness. The results indicate that fitness inheritance is a promising concept in BOA, because population-sizing requirements for building appropriate models of promising solutions lead to good fitness estimates even if only a small proportion of candidate solutions is evaluated using the actual fitness function. This can lead to a reduction of the number of actual fitness evaluations by a factor of 30 or more.
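The bookkeeping can be illustrated with the simplest flavor of fitness inheritance, averaging the two parents' fitness. Note that the paper's estimator is built from the Bayesian network model itself, so this is only a sketch of the evaluation-saving mechanism, with an invented helper name.

```python
import random

def evaluate_with_inheritance(offspring, parent_fitness, fitness,
                              p_inherit, rng):
    """For each offspring, either inherit a fitness estimate (mean of its
    two parents' fitness values) or pay for a true evaluation."""
    estimates, true_evals = [], 0
    for child, (fa, fb) in zip(offspring, parent_fitness):
        if rng.random() < p_inherit:
            estimates.append((fa + fb) / 2.0)   # inherited, no fitness call
        else:
            estimates.append(fitness(child))    # true (expensive) evaluation
            true_evals += 1
    return estimates, true_evals
```

With an inheritance proportion close to 1, almost all expensive fitness calls are skipped, which is where the reported factor-of-30 savings comes from.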
iBOA: The Incremental Bayesian Optimization Algorithm (Martin Pelikan)
This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure and the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably.
Towards billion bit optimization via parallel estimation of distribution algo... (kknsastry)
This paper presents a highly efficient, fully parallelized implementation of the compact genetic algorithm (cGA) to solve very large scale problems with millions to billions of variables. The paper presents principled results demonstrating the scalable solution of a difficult test function on instances of over a billion variables using a parallel implementation of cGA. The problem addressed is a noisy, blind problem over a vector of binary decision variables. Noise equal to up to a tenth of the variance of the deterministic objective function is added, making it difficult for simple hill climbers to find the optimal solution. The compact GA, on the other hand, is able to find the optimum in the presence of noise quickly, reliably, and accurately, and the solution scalability follows known convergence theories. These results on the noisy problem, together with other results on problems involving varying modularity, hierarchy, and overlap, foreshadow the routine solution of billion-variable problems across the landscape of search problems.
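The cGA update rule itself is compact enough to sketch on noiseless OneMax. The paper's contribution is the parallelization and scale, which this single-machine illustration does not attempt; all parameters below are illustrative.

```python
import random

def cga(n=20, virtual_pop=200, max_iters=50000, noise=0.0, seed=3):
    """Compact GA: a single probability vector replaces the population.
    The winner of each pairwise tournament pulls every differing bit's
    probability toward its own value by 1/virtual_pop."""
    rng = random.Random(seed)
    p = [0.5] * n
    def sample():
        return [1 if rng.random() < p[i] else 0 for i in range(n)]
    def fitness(x):                     # OneMax plus optional Gaussian noise
        return sum(x) + (rng.gauss(0.0, noise) if noise else 0.0)
    for _ in range(max_iters):
        a, b = sample(), sample()
        if fitness(b) > fitness(a):
            a, b = b, a                 # a is the tournament winner
        for i in range(n):
            if a[i] != b[i]:
                step = 1.0 / virtual_pop
                p[i] = min(1.0, max(0.0, p[i] + (step if a[i] else -step)))
        if all(q in (0.0, 1.0) for q in p):
            break                       # model fully converged
    return [1 if q >= 0.5 else 0 for q in p]

solution = cga()
```

Because the whole state is one probability vector of n floats, the memory footprint stays tiny, which is what makes the billion-variable parallel runs feasible.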
Using Previous Models to Bias Structural Learning in the Hierarchical BOA (Martin Pelikan)
Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum or at least its accurate approximation, besides this, any EDA provides us with a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes the first step towards the use of probabilistic models obtained by EDAs to speed up the solution of similar problems in future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that the methods should work well in other applications that require solving a large number of problems with similar structure.
Empirical Analysis of ideal recombination on random decomposable problems (kknsastry)
This paper analyzes the behavior of a selectorecombinative genetic algorithm (GA) with an ideal crossover on a class of random additively decomposable problems (rADPs). Specifically, additively decomposable problems of order k whose subsolution fitnesses are sampled from the standard uniform distribution U[0,1] are analyzed. The scalability of the selectorecombinative GA is investigated for 10,000 rADP instances. The validity of facetwise models in bounding the population size, run duration, and the number of function evaluations required to successfully solve the problems is also verified. Finally, the easiest and most difficult rADP instances are also investigated.
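A random additively decomposable problem of order k with U[0,1] subsolution fitnesses can be generated in a few lines; the block structure below (non-overlapping, contiguous blocks) is an illustrative simplification.

```python
import random

def random_adp(m, k, seed=7):
    """Random additively decomposable problem: m non-overlapping blocks
    of k bits, each with its own lookup table of 2**k subsolution
    fitnesses drawn from U[0, 1]."""
    rng = random.Random(seed)
    tables = [[rng.random() for _ in range(2 ** k)] for _ in range(m)]
    def fitness(x):
        total = 0.0
        for b, table in enumerate(tables):
            idx = int("".join(str(bit) for bit in x[b * k:(b + 1) * k]), 2)
            total += table[idx]
        return total
    return fitness, tables

f, tables = random_adp(4, 3)
```

The global optimum is simply the concatenation of each block's best subsolution, which is what makes these instances convenient for verifying facetwise models.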
Transfer Learning, Soft Distance-Based Bias, and the Hierarchical BOA (Martin Pelikan)
An automated technique has recently been proposed to transfer learning in the hierarchical Bayesian optimization algorithm (hBOA) based on distance-based statistics. The technique enables practitioners to improve hBOA efficiency by collecting statistics from probabilistic models obtained in previous hBOA runs and using the obtained statistics to bias future hBOA runs on similar problems. The purpose of this paper is threefold: (1) test the technique on several classes of NP-complete problems, including MAXSAT, spin glasses and minimum vertex cover; (2) demonstrate that the technique is effective even when previous runs were done on problems of different size; (3) provide empirical evidence that combining transfer learning with other efficiency enhancement techniques can often yield nearly multiplicative speedups.
Analyzing Probabilistic Models in Hierarchical BOA on Traps and Spin Glasses (Martin Pelikan)
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
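For reference, the concatenated trap-5 function used in such analyses, assuming the common scaling where a fully set block scores 5:

```python
def trap5(x):
    """Concatenated trap-5: each 5-bit block scores 5 if all five bits
    are 1, and otherwise 4 - u (u = number of ones in the block), so
    hill climbing is deceived toward all zeros while the global optimum
    is the all-ones string."""
    assert len(x) % 5 == 0
    total = 0
    for b in range(0, len(x), 5):
        u = sum(x[b:b + 5])
        total += 5 if u == 5 else 4 - u
    return total
```

The deception inside each block is exactly the structure that hBOA's probabilistic models must capture to mix blocks effectively.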
The Bayesian Optimization Algorithm with Substructural Local Search (Martin Pelikan)
This work studies the utility of using substructural neighborhoods for local search in the Bayesian optimization algorithm (BOA). The probabilistic model of BOA, which automatically identifies important problem substructures, is used to define the structure of the neighborhoods used in local search. Additionally, a surrogate fitness model is considered to evaluate the improvement of the local search steps. The results show that performing substructural local search in BOA significantly reduces the number of generations necessary to converge to optimal solutions and thus provides substantial speedups.
Order Or Not: Does Parallelization of Model Building in hBOA Affect Its Scala... (Martin Pelikan)
It has been shown that model building in the hierarchical Bayesian optimization algorithm (hBOA) can be efficiently parallelized by randomly generating an ancestral ordering of the nodes of the network prior to learning the network structure and allowing only dependencies consistent with the generated ordering. However, it has not been thoroughly shown that this approach to restricting probabilistic models does not affect scalability of hBOA on important classes of problems. This presentation demonstrates that although the use of a random ancestral ordering restricts the structure of considered models to allow efficient parallelization of model building, its effects on hBOA performance and scalability are negligible.
Estimation of Distribution Algorithms Tutorial (Martin Pelikan)
Probabilistic model-building genetic algorithms (PMBGAs), also called estimation of distribution algorithms (EDAs) and iterated density-estimation algorithms (IDEAs), replace the traditional variation operators of genetic and evolutionary algorithms by (1) building a probabilistic model of promising solutions and (2) sampling the built model to generate new candidate solutions.
Replacing traditional crossover and mutation operators by building and sampling a probabilistic model of promising solutions enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. There are many successful applications of PMBGAs, for example, Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
This tutorial provides a gentle introduction to PMBGAs with an overview of major research directions in this area. Strengths and weaknesses of different PMBGAs will be discussed and suggestions will be provided to help practitioners to choose the best PMBGA for their problem.
The video of this tutorial, presented at GECCO-2008, can be found at http://medal.cs.umsl.edu/blog/?p=293
Talk given at the Particle Technology Lab, Zurich, Switzerland, November 2008. (larry77)
The process of nanoparticle agglomeration as a function of the monomer-monomer interaction potential is simulated numerically by solving Langevin equations for a set of interacting monomers in three dimensions. The simulation results are used to investigate the structure of the generated clusters and the collision frequency between small clusters. Cluster restructuring is also observed and discussed. We identify a time-dependent fractal dimension whose evolution is linked to the kinetics of two cluster populations. The absence of screening in the Langevin equations is discussed and its effect on cluster translational and rotational properties is quantified.
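The integration loop behind such a simulation can be sketched with an Euler-Maruyama step for overdamped Langevin dynamics; the spring-like pair force below is a placeholder, as the paper's monomer-monomer potential and full (underdamped) Langevin equations are richer than this.

```python
import math
import random

def pair_forces(pos, eps, r0):
    """Placeholder attractive pair force of magnitude eps * (r - r0),
    pulling each monomer pair toward separation r0."""
    n = len(pos)
    F = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r = math.sqrt(sum(c * c for c in d)) or 1e-12
            mag = eps * (r - r0) / r
            for k in range(3):
                F[i][k] += mag * d[k]
                F[j][k] -= mag * d[k]
    return F

def langevin_step(pos, forces, dt, gamma, kT, rng):
    """One Euler-Maruyama step of overdamped Langevin dynamics:
    dx = (F / gamma) dt + sqrt(2 kT dt / gamma) * N(0, 1), per coordinate."""
    sigma = math.sqrt(2.0 * kT * dt / gamma)
    return [[x + f * dt / gamma + sigma * rng.gauss(0.0, 1.0)
             for x, f in zip(p, fp)]
            for p, fp in zip(pos, forces)]
```

At kT = 0 the noise term vanishes and two monomers simply relax to the rest separation r0; with kT > 0 the same loop produces the diffusive collisions and cluster growth the abstract describes.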
Computational complexity and simulation of rare events of Ising spin glasses (Martin Pelikan)
We discuss the computational complexity of random 2D Ising spin glasses, which represent an interesting class of constraint satisfaction problems for black-box optimization. Two extremal cases are considered: (1) the +/- J spin glass, and (2) the Gaussian spin glass. We also study a smooth transition between these two extremal cases. The computational complexity of all studied spin glass systems is found to be dominated by rare events of extremely hard spin glass samples. We show that the complexity of all studied spin glass systems is closely related to the Fréchet extreme value distribution. In a hybrid algorithm that combines the hierarchical Bayesian optimization algorithm (hBOA) with a deterministic bit-flip hill climber, the numbers of steps performed by both the global searcher (hBOA) and the local searcher follow Fréchet distributions. Nonetheless, unlike in methods based purely on local search, the parameters of these distributions confirm good scalability of hBOA with local search. We further argue that standard performance measures for optimization algorithms, such as the average number of evaluations until convergence, can be misleading. Finally, our results indicate that for highly multimodal constraint satisfaction problems, such as Ising spin glasses, recombination-based search can provide qualitatively better results than mutation-based search.
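For reference, the Fréchet (type II extreme value) distribution and inverse-CDF sampling from it; the shape, scale, and location parameters below are illustrative, not fitted values from the paper.

```python
import math
import random

def frechet_cdf(x, alpha, s=1.0, m=0.0):
    """Frechet CDF: F(x) = exp(-((x - m) / s) ** (-alpha)) for x > m,
    with shape alpha > 0, scale s > 0, and location m."""
    if x <= m:
        return 0.0
    return math.exp(-((x - m) / s) ** (-alpha))

def frechet_sample(alpha, s=1.0, m=0.0, rng=random):
    """Inverse-CDF sampling: x = m + s * (-log(u)) ** (-1 / alpha)."""
    u = max(rng.random(), 1e-300)       # avoid log(0)
    return m + s * (-math.log(u)) ** (-1.0 / alpha)
```

The heavy right tail (polynomial decay with exponent alpha) is what lets rare, extremely hard samples dominate average-case complexity.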
We present recent results on the numerical analysis of quasi-Monte Carlo (QMC) quadrature methods, applied to forward and inverse uncertainty quantification for elliptic and parabolic PDEs. Particular attention will be placed on higher-order QMC, the stable and efficient generation of interlaced polynomial lattice rules, and the numerical analysis of multilevel QMC finite element discretizations, with applications to computational uncertainty quantification.
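A plain rank-1 lattice rule is simple to state; the sketch below uses a 2D Fibonacci generating vector for illustration rather than the interlaced polynomial lattice rules or higher-order constructions the talk concerns.

```python
import math

def lattice_rule(f, n, z):
    """Rank-1 lattice rule: average f over the n points
    x_i = frac(i * z / n), i = 0..n-1, for integer generating vector z."""
    return sum(f([(i * zj / n) % 1.0 for zj in z]) for i in range(n)) / n

def g(x):
    """Smooth periodic test integrand with exact integral 1 over [0,1]^2."""
    return ((1.0 + 0.5 * math.sin(2.0 * math.pi * x[0]))
            * (1.0 + 0.5 * math.sin(2.0 * math.pi * x[1])))

# 2D Fibonacci lattice: n = 610, z = (1, 377), consecutive Fibonacci numbers
q = lattice_rule(g, 610, (1, 377))
```

For this particular trigonometric integrand the rule happens to be exact (its few Fourier modes miss the dual lattice), so the quadrature reproduces the integral 1 up to rounding; the quality of the generating vector is what the efficient construction algorithms in the talk are about.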
Simplified Runtime Analysis of Estimation of Distribution AlgorithmsPer Kristian Lehre
We demonstrate how to estimate the expected optimisation time of UMDA, an estimation of distribution algorithm, using the level-based theorem. The talk was given at the GECCO 2015 conference in Madrid, Spain.
Intelligent Bias of Network Structures in the Hierarchical BOAMartin Pelikan
One of the primary advantages of estimation of distribution algorithms (EDAs) over many other stochastic optimization techniques is that they supply us with a roadmap of how they solve a problem. This roadmap consists of a sequence of probabilistic models of candidate solutions of increasing quality. The first model in this sequence would typically encode the uniform distribution over all admissible solutions whereas the last model would encode a distribution that generates at least one global optimum with high probability. It has been argued that exploiting this knowledge should improve EDA performance when solving similar problems. This paper presents an approach to bias the building of Bayesian network models in the hierarchical Bayesian optimization algorithm (hBOA) using information gathered from models generated during previous hBOA runs on similar problems. The approach is evaluated on trap-5 and 2D spin glass problems.
Effects of a Deterministic Hill climber on hBOAMartin Pelikan
Hybridization of global and local search algorithms is a well-established technique for enhancing the efficiency of search algorithms. Hybridizing estimation of distribution algorithms (EDAs) has been repeatedly shown to produce better performance than either the global or local search algorithm alone. The hierarchical Bayesian optimization algorithm (hBOA) is an advanced EDA which has previously been shown to benefit from hybridization with a local searcher. This paper examines the effects of combining hBOA with a deterministic hill climber (DHC). Experiments reveal that allowing DHC to find the local optima makes model building and decision making much easier for hBOA. This reduces the minimum population size required to find the global optimum, which substantially improves overall performance.
Fitness inheritance in the Bayesian optimization algorithmMartin Pelikan
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions, but also for estimating their fitness. The results indicate that fitness inheritance is a promising concept in BOA, because population-sizing requirements for building appropriate models of promising solutions lead to good fitness estimates even if only a small proportion of candidate solutions is evaluated using the actual fitness function. This can lead to a reduction of the number of actual fitness evaluations by a factor of 30 or more.
iBOA: The Incremental Bayesian Optimization AlgorithmMartin Pelikan
This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure as well as the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably.
Towards billion bit optimization via parallel estimation of distribution algo...kknsastry
This paper presents a highly efficient, fully parallelized implementation of the compact genetic algorithm to solve very large scale problems with millions to billions of variables. The paper presents principled results demonstrating the scalable solution of a difficult test function on instances over a billion variables using a parallel implementation of compact genetic algorithm (cGA). The problem addressed is a noisy, blind problem over a vector of binary decision variables. Noise is added equaling up to a tenth of the deterministic objective function variance of the problem, thereby making it difficult for simple hillclimbers to find the optimal solution. The compact GA, on the other hand, is able to find the optimum in the presence of noise quickly, reliably, and accurately, and the solution scalability follows known convergence theories. These results on noisy problem together with other results on problems involving varying modularity, hierarchy, and overlap foreshadow routine solution of billion-variable problems across the landscape of search problems.
Using Previous Models to Bias Structural Learning in the Hierarchical BOAMartin Pelikan
Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum or at least its accurate approximation, besides this, any EDA provides us with a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes the first step towards the use of probabilistic models obtained by EDAs to speed up the solution of similar problems in future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that the methods should work well in other applications that require solving a large number of problems with similar structure.
Empirical Analysis of ideal recombination on random decomposable problemskknsastry
This paper analyzes the behavior of a selectorecombinative genetic algorithm (GA) with an ideal crossover on a class of random additively decomposable problems (rADPs). Specifically, additively decomposable problems of order k whose subsolution fitnesses are sampled from the standard uniform distribution U[0,1] are analyzed. The scalability of the selectorecombinative GA is investigated for 10,000 rADP instances. The validity of facetwise models in bounding the population size, run duration, and the number of function evaluations required to successfully solve the problems is also verified. Finally, rADP instances that are easiest and most difficult are also investigated.
Transfer Learning, Soft Distance-Based Bias, and the Hierarchical BOAMartin Pelikan
An automated technique has recently been proposed to transfer learning in the hierarchical Bayesian optimization algorithm (hBOA) based on distance-based statistics. The technique enables practitioners to improve hBOA efficiency by collecting statistics from probabilistic models obtained in previous hBOA runs and using the obtained statistics to bias future hBOA runs on similar problems. The purpose of this paper is threefold: (1) test the technique on several classes of NP-complete problems, including MAXSAT, spin glasses and minimum vertex cover; (2) demonstrate that the technique is effective even when previous runs were done on problems of different size; (3) provide empirical evidence that combining transfer learning with other efficiency enhancement techniques can often yield nearly multiplicative speedups.
Analyzing Probabilistic Models in Hierarchical BOA on Traps and Spin GlassesMartin Pelikan
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
The Bayesian Optimization Algorithm with Substructural Local SearchMartin Pelikan
This work studies the utility of using substructural neighborhoods for local search in the Bayesian optimization algorithm (BOA). The probabilistic model of BOA, which automatically identifies important problem substructures, is used to define the structure of the neighborhoods used in local search. Additionally, a surrogate fitness model is considered to evaluate the improvement of the local search steps. The results show that performing substructural local search in BOA significatively reduces the number of generations necessary to converge to optimal solutions and thus provides substantial speedups.
Order Or Not: Does Parallelization of Model Building in hBOA Affect Its Scala...Martin Pelikan
It has been shown that model building in the hierarchical Bayesian optimization algorithm (hBOA) can be efficiently parallelized by randomly generating an ancestral ordering of the nodes of the network prior to learning the network structure and allowing only dependencies consistent with the generated ordering. However, it has not been thoroughly shown that this approach to restricting probabilistic models does not affect scalability of hBOA on important classes of problems. This presentation demonstrates that although the use of a random ancestral ordering restricts the structure of considered models to allow efficient parallelization of model building, its effects on hBOA performance and scalability are negligible.
Estimation of Distribution Algorithms TutorialMartin Pelikan
Probabilistic model-building algorithms (PMBGAs), also called estimation of distribution algorithms (EDAs) and iterated density estimation algorithms (IDEAs), replace traditional variation of genetic and evolutionary algorithms by (1) building a probabilistic model of promising solutions and (2) sampling the built model to generate new candidate solutions. PMBGAs are also known as estimation of distribution algorithms (EDAs) and iterated density-estimation algorithms (IDEAs).
Replacing traditional crossover and mutation operators by building and sampling a probabilistic model of promising solutions enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. There are many successful applications of PMBGAs, for example, Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
This tutorial provides a gentle introduction to PMBGAs with an overview of major research directions in this area. Strengths and weaknesses of different PMBGAs will be discussed and suggestions will be provided to help practitioners to choose the best PMBGA for their problem.
The video of this tutorial presented at GECCO-2008 can be found at
http://medal.cs.umsl.edu/blog/?p=293
Talk given at the Particle Technology Lab, Zurich, Switzerland, November 2008. larry77
The process of nanoparticle agglomeration as a function of the monomer-monomer interaction potential is simulated numerically by solving Langevin equations for a set of interacting monomers in three dimensions. The simulation results are used to investigate the structure of the generated clusters and the collision frequency between small clusters. Cluster restructuring is also observed and discussed. We identify a time-dependent fractal dimension whose evolution is linked to the kinetics of two cluster populations. The absence of screening in the Langevin equations is discussed and its effect on cluster translational and rotational properties is quantified.
Computational complexity and simulation of rare events of Ising spin glasses Martin Pelikan
We discuss the computational complexity of random 2D Ising spin glasses, which represent an interesting class of constraint satisfaction problems for black box optimization. Two extremal cases are considered: (1) the +/- J spin glass, and (2) the Gaussian spin glass. We also study a smooth transition between these two extremal cases. The computational complexity of all studied spin glass systems is found to be dominated by rare events of extremely hard spin glass samples. We show that complexity of all studied spin glass systems is closely related to Frechet extremal value distribution. In a hybrid algorithm that combines the hierarchical Bayesian optimization algorithm (hBOA) with a deterministic bit-flip hill climber, the number of steps performed by both the global searcher (hBOA) and the local searcher follow Frechet distributions. Nonetheless, unlike in methods based purely on local search, the parameters of these distributions confirm good scalability of hBOA with local search. We further argue that standard performance measures for optimization algorithms---such as the average number of evaluations until convergence---can be misleading. Finally, our results indicate that for highly multimodal constraint satisfaction problems, such as Ising spin glasses, recombination-based search can provide qualitatively better results than mutation-based search.
We present recent results on the numerical analysis of quasi-Monte Carlo quadrature methods applied to forward and inverse uncertainty quantification for elliptic and parabolic PDEs. Particular attention will be placed on higher-order QMC, the stable and efficient generation of interlaced polynomial lattice rules, and the numerical analysis of multilevel QMC finite element discretizations with applications to computational uncertainty quantification.
Dual-hop Variable-Gain Relaying with Beamforming over κ−μ Shadowed Fading Cha...zeenta zeenta
Dual-hop relaying with beamforming is studied under κ−μ shadowed fading environments. Exact and asymptotic results for the outage probability and average capacity are derived.
Effects of shadowing on the system performance are analyzed in different scenarios.
The analytical results are general in that they include many special cases.
Population Dynamics in Conway’s Game of Life and its VariantsMartin Pelikan
Presentation for the project of high school students Yonatan Biel and David Hua, carried out in the Students and Teachers As Research Scientists (STARS) program at the Missouri Estimation of Distribution Algorithms Laboratory (MEDAL). To see the animations, please download the PowerPoint presentation.
Image segmentation using a genetic algorithm and hierarchical local searchMartin Pelikan
This paper proposes a hybrid genetic algorithm to perform image segmentation based on applying the q-state Potts spin glass model to a grayscale image. First, the image is converted to a set of weights for a q-state spin glass and then a steady-state genetic algorithm is used to evolve candidate segmented images until a suitable candidate solution is found. To speed up the convergence to an adequate solution, hierarchical local search is used on each evaluated solution. The results show that the hybrid genetic algorithm with hierarchical local search is able to efficiently perform image segmentation. The necessity of hierarchical search for these types of problems is also clearly demonstrated.
Distance-based bias in model-directed optimization of additively decomposable...Martin Pelikan
For many optimization problems it is possible to define a distance metric between problem variables that correlates with the likelihood and strength of interactions between the variables. For example, one may define a metric so that the dependencies between variables that are closer to each other with respect to the metric are expected to be stronger than the dependencies between variables that are further apart. The purpose of this paper is to describe a method that combines such a problem-specific distance metric with information mined from probabilistic models obtained in previous runs of estimation of distribution algorithms with the goal of solving future problem instances of similar type with increased speed, accuracy and reliability. While the focus of the paper is on additively decomposable problems and the hierarchical Bayesian optimization algorithm, it should be straightforward to generalize the approach to other model-directed optimization techniques and other problem classes. Compared to other techniques for learning from experience put forward in the past, the proposed technique is both more practical and more broadly applicable.
Pairwise and Problem-Specific Distance Metrics in the Linkage Tree Genetic Al...Martin Pelikan
The linkage tree genetic algorithm (LTGA) identifies linkages between problem variables using an agglomerative hierarchical clustering algorithm and linkage trees. This enables LTGA to solve many decomposable problems that are difficult with more conventional genetic algorithms. The goal of this paper is two-fold: (1) Present a thorough empirical evaluation of LTGA on a large set of problem instances of additively decomposable problems and (2) speed up the clustering algorithm used to build the linkage trees in LTGA by using a pairwise and a problem-specific metric.
http://medal.cs.umsl.edu/files/2011001.pdf
Hybrid Evolutionary Algorithms on Minimum Vertex Cover for Random GraphsMartin Pelikan
This work analyzes the hierarchical Bayesian optimization algorithm (hBOA) on minimum vertex cover for standard classes of random graphs and transformed SAT instances. The performance of hBOA is compared with that of the branch-and-bound problem solver (BB), the simple genetic algorithm (GA) and the parallel simulated annealing (PSA). The results indicate that BB is significantly outperformed by all the other tested methods, which is expected as BB is a complete search algorithm and minimum vertex cover is an NP-complete problem. The best performance is achieved by hBOA; nonetheless, the performance differences between hBOA and other evolutionary algorithms are relatively small, indicating that mutation-based search and recombination-based search lead to similar performance on the tested classes of minimum vertex cover problems.
Analysis of Evolutionary Algorithms on the One-Dimensional Spin Glass with Power-Law Interactions
1. Analysis of Evolutionary Algorithms on the One-Dimensional Spin Glass with Power-Law Interactions
Martin Pelikan and Helmut G. Katzgraber
Missouri Estimation of Distribution Algorithms Laboratory (MEDAL)
University of Missouri, St. Louis, MO
http://medal.cs.umsl.edu/
pelikan@cs.umsl.edu
Download MEDAL Report No. 2009004
http://medal.cs.umsl.edu/files/2009004.pdf
Martin Pelikan and Helmut G. Katzgraber Analysis of EAs on 1D Spin Glass with Power-Law Interactions
2. Motivation
Testing evolutionary algorithms
Adversarial problems on the boundary of design envelope.
Random instances of important classes of problems.
Real-world problems.
This study
Use one-dimensional spin glass with power-law interactions.
This allows the user to tune the effective range of interactions.
Short-range to long-range interactions.
Generate large number of instances of proposed problem class.
Solve all instances with branch and bound and hybrids.
Test evolutionary algorithms on the generated instances.
Analyze the results.
3. Outline
1. Sherrington-Kirkpatrick (SK) spin glass.
2. Power-law interactions.
3. Problem instances.
4. Experiments.
5. Conclusions and future work.
4. SK Spin Glass
SK spin glass (Sherrington & Kirkpatrick, 1978)
Contains n spins s1 , s2 , . . . , sn .
Each Ising spin can be in one of two states: +1 or −1.
All pairs of spins interact.
Interaction of spins si and sj is specified by a real-valued coupling Ji,j.
Spin glass instance is defined by set of couplings {Ji,j }.
Spin configuration is defined by the values of spins {si }.
5. Ground States of SK Spin Glasses
Energy
Energy of a spin configuration C is given by
H(C) = −∑_{i<j} Ji,j si sj
Ground states are spin configurations that minimize energy.
Finding ground states of SK instances is NP-complete.
Compare with other standard spin glass types
2D: Spin interacts with only 4 neighbors in 2D lattice.
3D: Spin interacts with only 6 neighbors in 3D lattice.
SK: Spin interacts with all other spins.
2D is polynomially solvable; 3D and SK are NP-complete.
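As a concrete illustration (not from the slides), a minimal Python sketch that evaluates the energy H(C) = −∑_{i<j} Ji,j si sj and finds a ground state of a tiny SK instance by exhaustive enumeration. This brute force stops scaling beyond very small n, which is exactly why the study resorts to branch and bound and hybrid heuristics:

```python
import itertools
import random

def sk_energy(J, spins):
    """Energy of configuration C: H(C) = -sum_{i<j} J[i][j] * s_i * s_j."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def ground_state(J):
    """Exhaustively enumerate all 2^n spin configurations (tiny n only)."""
    n = len(J)
    best_conf, best_energy = None, float("inf")
    for conf in itertools.product((-1, 1), repeat=n):
        e = sk_energy(J, conf)
        if e < best_energy:
            best_conf, best_energy = list(conf), e
    return best_conf, best_energy

# Tiny SK instance with Gaussian couplings (illustrative only).
random.seed(0)
n = 8
J = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
state, energy = ground_state(J)
```

Note that flipping every spin leaves all products si sj unchanged, so ground states always come in symmetric pairs C and −C.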
6. Random Spin Glass Instances
Generating random spin glass instances
Generate couplings {Ji,j } using a specific distribution.
Study the properties of generated spin glasses.
Example study
Find ground states and analyze their properties.
Example coupling distributions
Each coupling is generated from N (0, 1).
Each coupling is +1 or -1 with equal probability.
Each coupling is generated from a power-law distribution.
7. Power-Law Interactions
Power-law interactions
Spins arranged on a circle.
Couplings generated according to
Ji,j = c(σ) ǫi,j / ri,j^σ,
where
ǫi,j are generated according to N(0, 1),
c(σ) is a normalization constant,
σ > 0 is a parameter to control the effective range of interactions,
ri,j = n sin(π|i − j|/n)/π is the geometric distance between si and sj.
Figure 1: One-dimensional spin glass of size n = 10 arranged on a circle.
Magnitude of spin-spin couplings decreases with their distance.
Effects of distance on the magnitude of couplings increase with σ.
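The coupling construction can be sketched in Python. The normalization constant c(σ) is not specified numerically on the slide, so it is treated here as an assumed plain parameter (default 1.0):

```python
import math
import random

def power_law_couplings(n, sigma, c=1.0, rng=None):
    """Generate couplings J_{i,j} = c(sigma) * eps_{i,j} / r_{i,j}^sigma,
    with eps_{i,j} ~ N(0, 1) and geometric distance on the circle
    r_{i,j} = n * sin(pi * |i - j| / n) / pi.  The normalization c(sigma)
    is treated as a plain parameter here (an assumption for illustration)."""
    rng = rng or random.Random(0)
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = n * math.sin(math.pi * abs(i - j) / n) / math.pi
            J[i][j] = c * rng.gauss(0, 1) / r ** sigma
    return J
```

With σ = 0 every distance contributes equally (the standard SK model with N(0,1) couplings); as σ grows, long-range couplings are suppressed.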
8. Power-Law Interactions: Illustration
Example for n = 10 (normalized)
Distance on   Coupling variance
circle        σ = 0.0   σ = 0.5   σ = 2.0
1             1.00      1.00      1.00
2             1.00      0.73      0.28
3             1.00      0.62      0.15
4             1.00      0.57      0.11
5             1.00      0.56      0.10
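The tabulated values can be reproduced from the distance term alone: with the magnitude of Ji,j scaling as 1/ri,j^σ, the entry for circle distance d, normalized so that distance 1 equals 1.00, matches (r(1)/r(d))^σ. A quick sketch:

```python
import math

def r(n, d):
    """Geometric distance between spins d apart on a circle of n spins."""
    return n * math.sin(math.pi * d / n) / math.pi

n = 10
table = {sigma: [round((r(n, 1) / r(n, d)) ** sigma, 2) for d in range(1, 6)]
         for sigma in (0.0, 0.5, 2.0)}
```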
9. Problem Instances
Parameters
n = 20 to 150.
σ ∈ {0.00, 0.55, 0.75, 0.83, 1.00, 1.50, 2.00}.
σ = 0 denotes standard SK spin glass with N(0,1) couplings.
σ = 2 enforces short-range interactions.
Variety of instances
For each n and σ, generate 10,000 random instances.
Overall 610,000 unique problem instances.
Finding optima
Small instances solved using branch and bound.
For large instances, use heuristic methods to find reliable (but not guaranteed) optima.
10. Compared Algorithms
Basic algorithms
Hierarchical Bayesian optimization algorithm (hBOA).
Genetic algorithm with uniform crossover (GAU).
Genetic algorithm with twopoint crossover (G2P).
Local search
Single-bit-flip hill climbing (DHC) on each solution.
Improves performance of all methods.
Niching
Restricted tournament replacement (niching).
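The single-bit-flip hill climber can be sketched as follows (an illustrative reimplementation, not the authors' code): repeatedly flip the spin whose flip lowers the energy the most, until no single flip helps. The energy change of flipping spin k follows from the energy function: ΔH = 2 sk ∑_{j≠k} Jk,j sj.

```python
import random

def dhc(J, spins):
    """Deterministic single-bit-flip hill climbing (DHC) on the SK energy.
    Flips the best-improving spin until no single flip lowers the energy
    (i.e., until a local optimum is reached)."""
    n = len(spins)
    spins = list(spins)

    def delta(k):
        # Energy change of flipping spin k: dH = 2 * s_k * sum_{j != k} J_{kj} s_j
        # (J is stored in its upper triangle: J[i][j] with i < j).
        field = sum(J[min(k, j)][max(k, j)] * spins[j]
                    for j in range(n) if j != k)
        return 2 * spins[k] * field

    while True:
        gains = [delta(k) for k in range(n)]
        k = min(range(n), key=lambda i: gains[i])
        if gains[k] >= 0:          # no improving flip left
            return spins
        spins[k] = -spins[k]       # accept the best improving flip

# Illustrative instance (upper-triangular Gaussian couplings).
rng = random.Random(1)
n = 12
J = [[rng.gauss(0, 1) if j > i else 0.0 for j in range(n)] for i in range(n)]
local_opt = dhc(J, [1] * n)
```

In the compared algorithms, this local search is applied to each evaluated solution, which improves the performance of all methods.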
11. Experimental Setup
All algorithms
Bisection determines adequate population size for each instance.
Ensure 10 successful runs out of 10 independent runs.
In RTR, use window size w = min{N/20, n}.
GA
Probability of crossover, pc = 0.6.
Probability of bit-flip in mutation, pm = 1/n.
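The bisection procedure for population sizing can be sketched as follows (a simplified rendering under assumptions; the actual MEDAL implementation may differ): double N until all runs succeed, then binary-search between the last failing and first passing sizes. The `run(N)` callable is a hypothetical placeholder for executing one EA run and reporting whether it found the optimum.

```python
def bisection(run, n0=10, runs=10):
    """Search for a near-minimal population size N such that run(N)
    succeeds in all `runs` independent trials."""
    def ok(N):
        return all(run(N) for _ in range(runs))

    # Phase 1: double N until all runs succeed.
    lo, hi = n0, n0
    while not ok(hi):
        lo, hi = hi, hi * 2
    # Phase 2: binary search between the last failing and first passing sizes.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if ok(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

In practice `run(N)` is stochastic, so each population size is judged by the 10-out-of-10 success criterion above, and implementations often stop the binary search at a relative tolerance rather than at a gap of 1.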
12. Results: Evaluations until Optimum
Scalability of hBOA and GA with twopoint crossover better for short-range interactions.
Linkage tightens as σ grows.
Tighter linkage makes problem easier (if good recombination).
Twopoint crossover respects tight linkage.
GA with uniform crossover gets worse with shorter-range interactions.
Figure 2: Growth of the number of evaluations with problem size: (a) hBOA, (b) GA (twopoint), (c) GA (uniform).
13. Results: LS Steps until Optimum (Flips)
Scalability of hBOA and GA with twopoint crossover better for short-range interactions.
GA with uniform crossover gets worse with shorter-range interactions.
Figure 3: Growth of the number of flips with problem size: (a) hBOA, (b) GA (twopoint), (c) GA (uniform).
and how the effects of σ change depending on the algorithm under consideration; this is the topic discussed in the following few paragraphs. Based on the definition of the 1D spin glass with power-law interactions, as the value of σ grows, the range of the most significant interactions is reduced. With the reduction of the range of interactions, the problem should become easier both for selectorecombinative GAs capable of linkage learning, such as hBOA, as well as for selectorecombinative GAs which rarely break interactions between closely located bits, such as the GA with twopoint crossover. This is clearly demonstrated by the results for these two algorithms presented in figures 2 and 3. Although for many problem sizes, the absolute number of evaluations and the number of flips are in fact smaller for larger values of σ,