This document studies the effects of biasing the initial population in the Univariate Marginal Distribution Algorithm (UMDA) on the onemax and noisy onemax problems. Theoretical models are developed to predict the impact on population size, number of generations, and number of evaluations for different levels of initial bias. Experiments match the theoretical predictions, showing that a positively biased initial population improves performance while a negatively biased population harms performance. Introducing noise does not change these effects.
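The algorithm under study can be sketched in a few lines. The following is a simplified illustration of UMDA on onemax with a biased initial probability vector, assuming truncation selection and standard margin restrictions; all parameter values and names are illustrative, not the paper's:

```python
import random

def umda_onemax(n, pop_size, bias=0.5, max_gens=200, seed=1):
    """Minimal UMDA sketch for onemax. `bias` is the initial probability
    of a 1 at each position (0.5 = unbiased initial population)."""
    rng = random.Random(seed)
    p = [bias] * n  # marginal probabilities, initialized with the bias
    for gen in range(1, max_gens + 1):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)   # onemax fitness = number of ones
        if sum(pop[0]) == n:
            return gen                    # generations until the optimum
        selected = pop[:pop_size // 2]    # truncation selection
        p = [sum(ind[i] for ind in selected) / len(selected) for i in range(n)]
        p = [min(max(q, 1.0 / n), 1.0 - 1.0 / n) for q in p]  # margins
    return max_gens
```

A positively biased run (e.g. `bias=0.8`) typically reaches the optimum in fewer generations than an unbiased one, matching the effect the document describes.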
Statistical Analysis of Imaging Trials: Multivariate Methods and Prediction, Probing Cancer with MR II: From Animal Models to Clinical Assessment, 17th Annual Conference of the International Society for Magnetic Resonance in Medicine, Honolulu, Hawai'i, April 19-24
Estimation of Distribution Algorithms Tutorial (Martin Pelikan)
Probabilistic model-building genetic algorithms (PMBGAs), also called estimation of distribution algorithms (EDAs) and iterated density-estimation algorithms (IDEAs), replace the traditional variation operators of genetic and evolutionary algorithms by (1) building a probabilistic model of promising solutions and (2) sampling the built model to generate new candidate solutions.
Replacing traditional crossover and mutation operators by building and sampling a probabilistic model of promising solutions enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. There are many successful applications of PMBGAs, for example, Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
This tutorial provides a gentle introduction to PMBGAs with an overview of major research directions in this area. Strengths and weaknesses of different PMBGAs will be discussed and suggestions will be provided to help practitioners to choose the best PMBGA for their problem.
The video of this tutorial presented at GECCO-2008 can be found at
http://medal.cs.umsl.edu/blog/?p=293
Approximate dynamic programming using fluid and diffusion approximations with... (Sean Meyn)
https://netfiles.uiuc.edu/meyn/www/spm_files/TD5552009/TD555.html
Presentation by Dayu Huang, based on the paper of the same name in Proc. of the 48th IEEE Conference on Decision and Control, December 16-18, 2009.
Effects of a Deterministic Hill climber on hBOA (Martin Pelikan)
Hybridization of global and local search algorithms is a well-established technique for enhancing the efficiency of search algorithms. Hybridizing estimation of distribution algorithms (EDAs) has been repeatedly shown to produce better performance than either the global or local search algorithm alone. The hierarchical Bayesian optimization algorithm (hBOA) is an advanced EDA which has previously been shown to benefit from hybridization with a local searcher. This paper examines the effects of combining hBOA with a deterministic hill climber (DHC). Experiments reveal that allowing DHC to find the local optima makes model building and decision making much easier for hBOA. This reduces the minimum population size required to find the global optimum, which substantially improves overall performance.
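A DHC of this kind is typically a steepest-ascent single-bit-flip search run until it reaches a local optimum. A minimal sketch, assuming a generic bit-string fitness function rather than the paper's exact implementation:

```python
def dhc(bits, fitness):
    """Deterministic hill climber sketch: repeatedly flip the single bit
    that most improves fitness, until no flip helps (a local optimum)."""
    bits = list(bits)
    improved = True
    while improved:
        improved = False
        base = fitness(bits)
        best_gain, best_i = 0, -1
        for i in range(len(bits)):
            bits[i] ^= 1                  # try flipping bit i
            gain = fitness(bits) - base
            bits[i] ^= 1                  # undo the flip
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i >= 0:
            bits[best_i] ^= 1             # commit the best single flip
            improved = True
    return bits
```

Running every sampled solution through such a climber means the EDA's model only needs to distinguish between local optima, which is the effect the experiments above describe.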
Intelligent Bias of Network Structures in the Hierarchical BOA (Martin Pelikan)
One of the primary advantages of estimation of distribution algorithms (EDAs) over many other stochastic optimization techniques is that they supply us with a roadmap of how they solve a problem. This roadmap consists of a sequence of probabilistic models of candidate solutions of increasing quality. The first model in this sequence would typically encode the uniform distribution over all admissible solutions whereas the last model would encode a distribution that generates at least one global optimum with high probability. It has been argued that exploiting this knowledge should improve EDA performance when solving similar problems. This paper presents an approach to bias the building of Bayesian network models in the hierarchical Bayesian optimization algorithm (hBOA) using information gathered from models generated during previous hBOA runs on similar problems. The approach is evaluated on trap-5 and 2D spin glass problems.
Simplified Runtime Analysis of Estimation of Distribution Algorithms (Per Kristian Lehre)
We demonstrate how to estimate the expected optimisation time of UMDA, an estimation of distribution algorithm, using the level-based theorem. The talk was given at the GECCO 2015 conference in Madrid, Spain.
Fitness inheritance in the Bayesian optimization algorithm (Martin Pelikan)
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions, but also for estimating their fitness. The results indicate that fitness inheritance is a promising concept in BOA, because population-sizing requirements for building appropriate models of promising solutions lead to good fitness estimates even if only a small proportion of candidate solutions is evaluated using the actual fitness function. This can lead to a reduction of the number of actual fitness evaluations by a factor of 30 or more.
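The accounting behind fitness inheritance can be sketched as follows. This toy surrogate simply inherits the mean fitness of the selected parents, whereas BOA extends its Bayesian network to produce the estimate; the function and the `p_eval` parameter are illustrative names, not the paper's:

```python
import random

def evaluate_with_inheritance(population, fitness, parents_fitness,
                              p_eval=0.3, rng=None):
    """Only a proportion `p_eval` of new solutions gets a true evaluation;
    the rest inherit an estimate (here, the mean parental fitness).
    Returns the fitness list and the number of true evaluations spent."""
    rng = rng or random.Random(0)
    inherited = sum(parents_fitness) / len(parents_fitness)
    evaluations = 0
    fitnesses = []
    for ind in population:
        if rng.random() < p_eval:
            fitnesses.append(fitness(ind))  # actual, expensive evaluation
            evaluations += 1
        else:
            fitnesses.append(inherited)     # cheap inherited estimate
    return fitnesses, evaluations
```

With `p_eval` near 0.03, the bookkeeping above corresponds to the factor-of-30 reduction in actual evaluations that the abstract reports.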
Empirical Analysis of ideal recombination on random decomposable problems (kknsastry)
This paper analyzes the behavior of a selectorecombinative genetic algorithm (GA) with an ideal crossover on a class of random additively decomposable problems (rADPs). Specifically, additively decomposable problems of order k whose subsolution fitnesses are sampled from the standard uniform distribution U[0,1] are analyzed. The scalability of the selectorecombinative GA is investigated for 10,000 rADP instances. The validity of facetwise models in bounding the population size, run duration, and the number of function evaluations required to successfully solve the problems is also verified. Finally, rADP instances that are easiest and most difficult are also investigated.
Towards billion bit optimization via parallel estimation of distribution algo... (kknsastry)
This paper presents a highly efficient, fully parallelized implementation of the compact genetic algorithm to solve very large scale problems with millions to billions of variables. The paper presents principled results demonstrating the scalable solution of a difficult test function on instances of over a billion variables using a parallel implementation of the compact genetic algorithm (cGA). The problem addressed is a noisy, blind problem over a vector of binary decision variables. The added noise has a variance of up to one tenth of the variance of the problem's deterministic objective function, making it difficult for simple hill climbers to find the optimal solution. The compact GA, on the other hand, is able to find the optimum in the presence of noise quickly, reliably, and accurately, and the solution scalability follows known convergence theories. These results on a noisy problem, together with other results on problems involving varying modularity, hierarchy, and overlap, foreshadow routine solution of billion-variable problems across the landscape of search problems.
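The update rule that makes the cGA so memory-frugal (a single probability vector instead of a population) can be sketched in a few lines. This is a serial toy version on onemax, not the parallel implementation from the paper; all parameter settings are illustrative:

```python
import random

def cga_onemax(n, pop_size, max_steps=20000, seed=1):
    """Compact GA sketch: sample two solutions from the probability vector,
    let them compete, and shift the vector by 1/pop_size toward the winner
    at every position where the two differ."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(max_steps):
        a = [1 if rng.random() < q else 0 for q in p]
        b = [1 if rng.random() < q else 0 for q in p]
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)
        for i in range(n):
            if winner[i] != loser[i]:
                step = 1.0 / pop_size
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(max(p[i], 0.0), 1.0)
        if all(q in (0.0, 1.0) for q in p):
            break  # the model has fully converged
    return [1 if q >= 0.5 else 0 for q in p]
```

Because the state is just `n` floats, the vector can be partitioned across processors, which is what makes billion-variable instances feasible.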
iBOA: The Incremental Bayesian Optimization Algorithm (Martin Pelikan)
This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure as well as the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably.
Using Previous Models to Bias Structural Learning in the Hierarchical BOA (Martin Pelikan)
Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. While the primary goal of applying EDAs is to discover the global optimum or at least its accurate approximation, besides this, any EDA provides us with a sequence of probabilistic models, which in most cases hold a great deal of information about the problem. Although using problem-specific knowledge has been shown to significantly improve performance of EDAs and other evolutionary algorithms, this readily available source of problem-specific information has been practically ignored by the EDA community. This paper takes the first step towards the use of probabilistic models obtained by EDAs to speed up the solution of similar problems in future. More specifically, we propose two approaches to biasing model building in the hierarchical Bayesian optimization algorithm (hBOA) based on knowledge automatically learned from previous hBOA runs on similar problems. We show that the proposed methods lead to substantial speedups and argue that the methods should work well in other applications that require solving a large number of problems with similar structure.
Transfer Learning, Soft Distance-Based Bias, and the Hierarchical BOA (Martin Pelikan)
An automated technique has recently been proposed to transfer learning in the hierarchical Bayesian optimization algorithm (hBOA) based on distance-based statistics. The technique enables practitioners to improve hBOA efficiency by collecting statistics from probabilistic models obtained in previous hBOA runs and using the obtained statistics to bias future hBOA runs on similar problems. The purpose of this paper is threefold: (1) test the technique on several classes of NP-complete problems, including MAXSAT, spin glasses and minimum vertex cover; (2) demonstrate that the technique is effective even when previous runs were done on problems of different size; (3) provide empirical evidence that combining transfer learning with other efficiency enhancement techniques can often yield nearly multiplicative speedups.
Analyzing Probabilistic Models in Hierarchical BOA on Traps and Spin Glasses (Martin Pelikan)
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying optimization problem, the models do not change significantly in subsequent iterations of BOA, and creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
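Concatenated traps, one of the two test problems above, are easy to state. A sketch of the standard trap-5 fitness, assuming non-overlapping 5-bit blocks: each block scores 5 when all five bits are ones, and otherwise 4 minus the number of ones, which deceives bitwise search toward all zeros:

```python
def trap5(bits):
    """Concatenated trap-5 sketch. Each non-overlapping 5-bit block
    contributes 5 if all ones, else 4 minus its number of ones."""
    total = 0
    for i in range(0, len(bits), 5):
        u = sum(bits[i:i + 5])          # number of ones in the block
        total += 5 if u == 5 else 4 - u
    return total
```

Because the blocks must be optimized jointly, a model that captures the 5-bit linkage (as hBOA's Bayesian networks do) is essential for scalable solving.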
The Bayesian Optimization Algorithm with Substructural Local Search (Martin Pelikan)
This work studies the utility of using substructural neighborhoods for local search in the Bayesian optimization algorithm (BOA). The probabilistic model of BOA, which automatically identifies important problem substructures, is used to define the structure of the neighborhoods used in local search. Additionally, a surrogate fitness model is considered to evaluate the improvement of the local search steps. The results show that performing substructural local search in BOA significantly reduces the number of generations necessary to converge to optimal solutions and thus provides substantial speedups.
Order Or Not: Does Parallelization of Model Building in hBOA Affect Its Scala... (Martin Pelikan)
It has been shown that model building in the hierarchical Bayesian optimization algorithm (hBOA) can be efficiently parallelized by randomly generating an ancestral ordering of the nodes of the network prior to learning the network structure and allowing only dependencies consistent with the generated ordering. However, it has not been thoroughly shown that this approach to restricting probabilistic models does not affect scalability of hBOA on important classes of problems. This presentation demonstrates that although the use of a random ancestral ordering restricts the structure of considered models to allow efficient parallelization of model building, its effects on hBOA performance and scalability are negligible.
Graph mining 2: Statistical approaches for graph mining (tuxette)
Workshop "Advanced mathematics for network analysis"
organized by Institut des Systèmes Complexes de Toulouse
http://isc-t.fr/evenements/?event_id1=2
Luchon, France
May 3rd, 2016
Population Dynamics in Conway’s Game of Life and its Variants (Martin Pelikan)
Presentation for the project of high-school students Yonatan Biel and David Hua, carried out in the Students and Teachers As Research Scientists (STARS) program at the Missouri Estimation of Distribution Algorithms Laboratory (MEDAL). To see the animations, please download the PowerPoint presentation.
Image segmentation using a genetic algorithm and hierarchical local search (Martin Pelikan)
This paper proposes a hybrid genetic algorithm to perform image segmentation based on applying the q-state Potts spin glass model to a grayscale image. First, the image is converted to a set of weights for a q-state spin glass and then a steady-state genetic algorithm is used to evolve candidate segmented images until a suitable candidate solution is found. To speed up the convergence to an adequate solution, hierarchical local search is used on each evaluated solution. The results show that the hybrid genetic algorithm with hierarchical local search is able to efficiently perform image segmentation. The necessity of hierarchical search for these types of problems is also clearly demonstrated.
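The energy the genetic algorithm minimizes can be illustrated with a tiny Potts-energy function. The pair weights standing in for the image-derived couplings are a simplified, hypothetical representation of what the grayscale conversion above would produce:

```python
def potts_energy(labels, weights):
    """Potts-model energy sketch for segmentation. `labels` assigns a state
    (segment) to each pixel index; `weights` maps neighboring pixel pairs
    (i, j) to a coupling strength. Neighbors with equal labels contribute
    -w, so strongly coupled (similar) pixels prefer the same segment."""
    energy = 0.0
    for (i, j), w in weights.items():
        if labels[i] == labels[j]:
            energy -= w
    return energy
```

In this framing, segmentation becomes a search for the labeling of minimum energy, which is exactly what the hybrid genetic algorithm evolves.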
Distance-based bias in model-directed optimization of additively decomposable... (Martin Pelikan)
For many optimization problems it is possible to define a distance metric between problem variables that correlates with the likelihood and strength of interactions between the variables. For example, one may define a metric so that the dependencies between variables that are closer to each other with respect to the metric are expected to be stronger than the dependencies between variables that are further apart. The purpose of this paper is to describe a method that combines such a problem-specific distance metric with information mined from probabilistic models obtained in previous runs of estimation of distribution algorithms with the goal of solving future problem instances of similar type with increased speed, accuracy and reliability. While the focus of the paper is on additively decomposable problems and the hierarchical Bayesian optimization algorithm, it should be straightforward to generalize the approach to other model-directed optimization techniques and other problem classes. Compared to other techniques for learning from experience put forward in the past, the proposed technique is both more practical and more broadly applicable.
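A minimal sketch of the kind of statistic such a method might mine from previous runs, purely illustrative (the paper's actual statistics are more involved): the fraction of previous models in which a dependency appeared between variables at each distance, usable as a prior edge probability when building the next model:

```python
def distance_bias(dependency_counts, run_counts):
    """For each distance d, estimate the probability that a dependency
    exists between variables at distance d, from counts gathered in
    previous runs. Both argument names are hypothetical."""
    return {d: dependency_counts[d] / run_counts[d]
            for d in dependency_counts if run_counts.get(d, 0) > 0}
```

Dependencies between nearby variables would then receive a higher prior than long-range ones, matching the intuition about the metric described above.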
Pairwise and Problem-Specific Distance Metrics in the Linkage Tree Genetic Al... (Martin Pelikan)
The linkage tree genetic algorithm (LTGA) identifies linkages between problem variables using an agglomerative hierarchical clustering algorithm and linkage trees. This enables LTGA to solve many decomposable problems that are difficult with more conventional genetic algorithms. The goal of this paper is two-fold: (1) Present a thorough empirical evaluation of LTGA on a large set of problem instances of additively decomposable problems and (2) speed up the clustering algorithm used to build the linkage trees in LTGA by using a pairwise and a problem-specific metric.
http://medal.cs.umsl.edu/files/2011001.pdf
Finding Ground States of Sherrington-Kirkpatrick Spin Glasses with Hierarchic... (Martin Pelikan)
This study focuses on the problem of finding ground states of random instances of the Sherrington-Kirkpatrick (SK) spin-glass model with Gaussian couplings. While the ground states of SK spin-glass instances can be obtained with branch and bound, its computational complexity limits tractable instances to about 90 spins. We describe several approaches based on the hierarchical Bayesian optimization algorithm (hBOA) for reliably identifying ground states of SK instances intractable with branch and bound, and present a broad range of empirical results on such problem instances. We argue that the proposed methodology holds great promise for reliably solving large SK spin-glass instances to optimality with practical time complexity. The proposed approaches to reliably identifying global optima can also be applied to other problems and used with many other evolutionary algorithms. Performance of hBOA is compared to that of the genetic algorithm with two common crossover operators.
Computational complexity and simulation of rare events of Ising spin glasses (Martin Pelikan)
We discuss the computational complexity of random 2D Ising spin glasses, which represent an interesting class of constraint satisfaction problems for black box optimization. Two extremal cases are considered: (1) the +/- J spin glass, and (2) the Gaussian spin glass. We also study a smooth transition between these two extremal cases. The computational complexity of all studied spin glass systems is found to be dominated by rare events of extremely hard spin glass samples. We show that the complexity of all studied spin glass systems is closely related to the Fréchet extreme-value distribution. In a hybrid algorithm that combines the hierarchical Bayesian optimization algorithm (hBOA) with a deterministic bit-flip hill climber, the numbers of steps performed by both the global searcher (hBOA) and the local searcher follow Fréchet distributions. Nonetheless, unlike in methods based purely on local search, the parameters of these distributions confirm good scalability of hBOA with local search. We further argue that standard performance measures for optimization algorithms---such as the average number of evaluations until convergence---can be misleading. Finally, our results indicate that for highly multimodal constraint satisfaction problems, such as Ising spin glasses, recombination-based search can provide qualitatively better results than mutation-based search.
Hybrid Evolutionary Algorithms on Minimum Vertex Cover for Random Graphs (Martin Pelikan)
This work analyzes the hierarchical Bayesian optimization algorithm (hBOA) on minimum vertex cover for standard classes of random graphs and transformed SAT instances. The performance of hBOA is compared with that of the branch-and-bound problem solver (BB), the simple genetic algorithm (GA) and the parallel simulated annealing (PSA). The results indicate that BB is significantly outperformed by all the other tested methods, which is expected as BB is a complete search algorithm and minimum vertex cover is an NP-complete problem. The best performance is achieved by hBOA; nonetheless, the performance differences between hBOA and other evolutionary algorithms are relatively small, indicating that mutation-based search and recombination-based search lead to similar performance on the tested classes of minimum vertex cover problems.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber-threat-intelligence farming facilities spread across more than 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and new malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days, June 6, 2024.
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Initial-Population Bias in the Univariate Estimation of Distribution Algorithm
1. Initial-Population Bias in the Univariate
Estimation of Distribution Algorithm
Martin Pelikan and Kumara Sastry
Missouri Estimation of Distribution Algorithms Laboratory (MEDAL)
University of Missouri, St. Louis, MO
http://medal.cs.umsl.edu/
pelikan@cs.umsl.edu
Download MEDAL Report No. 2009001
http://medal.cs.umsl.edu/files/2009001.pdf
Martin Pelikan and Kumara Sastry Initial-Population Bias in UMDA
2. Motivation
Importance of bias
Efficiency enhancements of EDAs may introduce bias.
Examples
Local search.
Injection of prior full or partial solutions.
Bias based on prior knowledge about the problem.
Bias may have positive or negative effects.
It is important to understand these effects.
This study
Study the effects of biasing the initial population.
Consider UMDA on onemax and noisy onemax.
Theory and experiment.
3. Outline
1. UMDA.
2. Basic model for bias.
3. Population size.
4. Number of generations.
5. Compare to hill climber.
6. Conclusions.
7. Future work.
4. Probability Vector as a Model
Probability vector, p
Store probability of 1 in each position.
p = (p1, p2, ..., pn).
pi is probability of 1 in position i.
Replace crossover/mutation by model building and sampling
Learn the probability vector from selected points.
Sample new points according to the learned vector.
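The two model operations can be sketched in a few lines of Python (function names are my own, not from the slides):

```python
import random

def learn_probability_vector(selected):
    """Estimate p_i as the proportion of 1s at position i in the selected set."""
    n = len(selected[0])
    return [sum(s[i] for s in selected) / len(selected) for i in range(n)]

def sample_model(p, count, rng=random):
    """Sample `count` new binary strings; bit i is 1 with probability p_i."""
    return [[1 if rng.random() < pi else 0 for pi in p] for _ in range(count)]

# Example: four selected strings over n = 3 positions.
p = learn_probability_vector([[1, 1, 0], [1, 0, 0], [1, 1, 1], [1, 0, 1]])
print(p)  # [1.0, 0.5, 0.5]
```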
5. Univariate Marginal Distribution Algorithm (UMDA)
UMDA (Mühlenbein & Paaß, 1996; Baluja, 1994).
1. Generate random population of binary strings.
2. Selection (e.g. tournament selection).
3. Learn probability vector from the selected solutions.
4. Sample probability vector to generate new solutions.
5. Incorporate new solutions into original population.
Example: probability vector
Current population   Selected population   Probability vector        New population
11001                11001                                            10101
10101                10101                 1.0  0.5  0.5  0.0  1.0    10001
01011                01011                                            11101
11000                11000                                            11001
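Putting the five steps together, a minimal UMDA on onemax might look as follows (a sketch, not the exact experimental setup: the tournament here is drawn with replacement):

```python
import random

def onemax(x):
    return sum(x)

def umda_onemax(n=50, pop_size=100, max_generations=100, seed=0):
    """Minimal UMDA loop following steps 1-5 above."""
    rng = random.Random(seed)
    # 1. Random initial population of binary strings.
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(max_generations):
        # 2. Binary tournament selection.
        selected = []
        for _ in range(pop_size):
            a, b = rng.choice(pop), rng.choice(pop)
            selected.append(a if onemax(a) >= onemax(b) else b)
        # 3. Learn the probability vector from the selected solutions.
        p = [sum(s[i] for s in selected) / pop_size for i in range(n)]
        # 4. Sample new solutions; 5. full replacement of the population.
        pop = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop_size)]
        if max(onemax(s) for s in pop) == n:
            break
    return max(pop, key=onemax)

best = umda_onemax()
print(onemax(best))
```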
6. Assumptions
Algorithm
UMDA with binary tournament selection and full replacement.
Results should generalize to other selection methods with
fixed selection intensity.
Fitness
Deterministic onemax:
onemax(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} X_i
Noisy onemax:
onemax_{noisy}(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} X_i + N(0, \sigma^2)
Results should generalize to other separable problems of
bounded order (if good model is used).
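Both fitness functions are easy to sketch in Python; this uses the fact that the fitness variance of onemax over a uniform random population is n/4, so the noise variance for a given β is βn/4 (the parameterization used later in the slides):

```python
import random

def onemax(x):
    """Deterministic onemax: the number of ones in the string."""
    return sum(x)

def onemax_noisy(x, beta=1.0, rng=random):
    """Noisy onemax: onemax plus Gaussian noise N(0, sigma^2), with the
    noise variance expressed as beta times the onemax fitness variance n/4."""
    sigma2 = beta * len(x) / 4
    return sum(x) + rng.gauss(0.0, sigma2 ** 0.5)

print(onemax([1, 1, 0, 1]))  # 3
```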
7. Basic Model for Bias
Basic model
Introduce bias in the initial population.
Increase or decrease the initial proportion pinit of optimal bits.
Use the same bias for all string positions.
Examples
pinit = 0.2 pinit = 0.5 pinit = 0.8
00001 11110 11110
00001 01010 01011
01000 11101 01111
00010 00010 11111
10000 11011 10111
What to expect?
pinit grows ⇒ UMDA performance improves.
pinit decreases ⇒ UMDA performance suffers.
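Generating such a biased initial population is straightforward; a sketch for onemax, where 1 is the optimal value at every position:

```python
import random

def biased_initial_population(n, pop_size, p_init, seed=None):
    """Initial population in which each bit is 1 (the optimal value for
    onemax) with probability p_init, independently at every position."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p_init else 0 for _ in range(n)]
            for _ in range(pop_size)]

pop = biased_initial_population(n=5, pop_size=5, p_init=0.8, seed=0)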
8. Theoretical Model for Deterministic Onemax
Population size
Gambler’s ruin population-sizing model (Harik et al., 1997).
Population sizing bound
N = -\frac{1}{4 p_{init}} \ln(\alpha) \sqrt{\pi n}
Number of generations
Convergence model (Thierens & Goldberg, 1994).
Number of generations bound
G = \left( \frac{\pi}{2} - \arcsin(2 p_{init} - 1) \right) \sqrt{\pi n}
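Both bounds are easy to evaluate numerically; a sketch (α is the admissible failure probability in the gambler's-ruin model; the value 0.01 below is just an illustration):

```python
import math

def population_size_bound(n, p_init, alpha=0.01):
    """N = -ln(alpha) / (4 * p_init) * sqrt(pi * n)."""
    return -math.log(alpha) / (4 * p_init) * math.sqrt(math.pi * n)

def generations_bound(n, p_init):
    """G = (pi/2 - arcsin(2 * p_init - 1)) * sqrt(pi * n)."""
    return (math.pi / 2 - math.asin(2 * p_init - 1)) * math.sqrt(math.pi * n)

# Doubling p_init halves the population-size bound; a larger p_init
# also shrinks the number of generations.
for p in (0.2, 0.5, 0.8):
    print(p, round(population_size_bound(500, p)), round(generations_bound(500, p)))
```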
9. Deterministic Onemax: Theoretical Speedup
Speedup factors
Factor by which each quantity changes compared to pinit = 0.5
(a factor below 1 means the algorithm becomes that many times faster).
Population size:
\eta_N = \frac{1}{2 p_{init}}
Number of generations:
\eta_G = 1 - \frac{2 \arcsin(2 p_{init} - 1)}{\pi}
Number of evaluations:
\eta_E = \frac{1}{2 p_{init}} \left( 1 - \frac{2 \arcsin(2 p_{init} - 1)}{\pi} \right)
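These factors can be computed directly in Python (a sketch; values below 1 mean fewer resources than the unbiased case, i.e., a speedup):

```python
import math

def eta_population(p_init):
    """Factor by which the population size changes vs. p_init = 0.5."""
    return 1 / (2 * p_init)

def eta_generations(p_init):
    """Factor by which the number of generations changes vs. p_init = 0.5."""
    return 1 - 2 * math.asin(2 * p_init - 1) / math.pi

def eta_evaluations(p_init):
    """Product of the two factors above (change in number of evaluations)."""
    return eta_population(p_init) * eta_generations(p_init)

print(eta_evaluations(0.8))  # below 1: positive bias reduces evaluations
print(eta_evaluations(0.2))  # above 1: negative bias increases them
```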
10. Experimental Setup
Basic setup
Binary tournament selection without replacement.
Full replacement (no elitism or niching).
Problems of n = 100 to n = 500 tested (focus on n = 500).
Population size set using bisection so that 10 out of 10 independent
runs succeed (each reaching a solution with at least 95% of bits optimal).
Bisection repeated 10 times for each setting.
Observed statistics
Population size.
Number of generations.
Number of evaluations.
11. Deterministic Onemax: Speedup and Slowdown
[Figure 2: Speedup (left) and slowdown (right) curves for the number of
evaluations, the population size, and the number of generations: the factor
by which each quantity should change with varying pinit compared to the base
case pinit = 0.5, based on the population-sizing and time-to-convergence
models.]
Empirical results confirm intuition.
Positive bias improves performance.
Negative bias worsens performance.
12. Deterministic Onemax: Experiments vs. Theory
[Figure 3: Effects of initial-population bias on UMDA performance without
external noise, experiment vs. theory over pinit: (a) population size,
(b) number of generations, (c) number of evaluations.]
Empirical results match theory.
Theory makes conservative estimates.
13. Theoretical Model for Noisy Onemax: Population Size
Population size
Gambler’s ruin population-sizing model (Harik et al., 1997).
Variance of external noise given in terms of fitness variance:
\sigma^2_{noise} = \beta \times \sigma^2_{fitness}
Population sizing bound becomes
N = -\frac{1}{4 p_{init}} \ln(\alpha) \sqrt{\pi n (1 + \beta)}
14. Theoretical Model for Noisy Onemax: Generations
Number of generations
Convergence model (Miller & Goldberg, 1994; Sastry, 2001;
Goldberg, 2002).
Difficult to solve analytically for arbitrary pinit .
Effects of pinit modeled by an empirical fit.
Number of generations bound
G = \frac{\pi}{2} \sqrt{\pi n} \sqrt{1 + \beta} \left( 1 - \frac{2 \arcsin(2 p_{init} - 1)}{\pi} \right)
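A sketch evaluating both noisy-case bounds; setting β = 0 recovers the deterministic formulas, and noise scales both bounds by sqrt(1 + β):

```python
import math

def population_size_noisy(n, p_init, beta, alpha=0.01):
    """N = -ln(alpha) / (4 * p_init) * sqrt(pi * n * (1 + beta))."""
    return -math.log(alpha) / (4 * p_init) * math.sqrt(math.pi * n * (1 + beta))

def generations_noisy(n, p_init, beta):
    """G = (pi/2) * sqrt(pi*n) * sqrt(1+beta) * (1 - 2*arcsin(2*p_init - 1)/pi)."""
    return (math.pi / 2) * math.sqrt(math.pi * n) * math.sqrt(1 + beta) \
        * (1 - 2 * math.asin(2 * p_init - 1) / math.pi)

# Noise with beta = 1 multiplies both bounds by sqrt(2).
print(generations_noisy(500, 0.5, 1.0) / generations_noisy(500, 0.5, 0.0))
```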
15. Noisy Onemax: Theoretical Speedup
Speedup factors same as for deterministic case!
Population size:
\eta_N = \frac{1}{2 p_{init}}
Number of generations:
\eta_G = 1 - \frac{2 \arcsin(2 p_{init} - 1)}{\pi}
Number of evaluations:
\eta_E = \frac{1}{2 p_{init}} \left( 1 - \frac{2 \arcsin(2 p_{init} - 1)}{\pi} \right)
16. Noisy Onemax: Experiments vs. Theory for β = 1
Noise variance: \sigma^2_N = \sigma^2_F = 0.25n.
[Figure: Effects of initial-population bias on UMDA performance with external
noise, experiment vs. theory over pinit: (a) population size, (b) number of
generations, (c) number of evaluations.]
Empirical results match theory.
Population sizing remains a conservative estimate.
Note: β = 1 is a lot of noise (noise variance equal to overall fitness variance).
17. Compare to Hill Climber on Deterministic Case
[Figure: Number of evaluations for UMDA and the hill climber (HC) over pinit
on 500-bit deterministic onemax.]
Performance of HC is great regardless of bias.
This agrees with theory; (Mühlenbein, 1992) is used to provide an upper bound
on the HC running time.
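For concreteness, a minimal bit-flip hill climber of the sort compared here (my own sketch; the exact HC variant analyzed by Mühlenbein (1992) may differ in details). Under noisy fitness, a spuriously high noisy evaluation can lock in bad moves, which is why HC degrades with noise:

```python
import random

def hill_climber(n, fitness, max_evals=200000, seed=None):
    """Single-bit-flip hill climber: flip a random bit and keep the flip
    only if the (possibly noisy) fitness does not get worse. For this demo
    the loop also stops once the known onemax optimum (all ones) is hit."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    evals = 1
    while evals < max_evals and sum(x) < n:
        i = rng.randrange(n)
        x[i] ^= 1              # flip one bit
        fy = fitness(x)
        evals += 1
        if fy >= fx:
            fx = fy            # accept (improvement or tie)
        else:
            x[i] ^= 1          # reject: undo the flip
    return x, evals

# On deterministic onemax the climber reaches the optimum quickly.
best, used = hill_climber(20, sum, seed=1)
print(sum(best), used)
```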
18. Compare to Hill Climber on Noisy Case
Performance of HC becomes poor with noise!
 β     n   pinit   HC evaluations   UMDA evaluations
0.5   10   0.1              4,449              1,210
0.5   25   0.1          2,125,373              1,886
0.5   10   0.5             11,096                 66
0.5   25   0.5          8,248,140                169
1.0    5   0.1                215                574
1.0   15   0.1          5,691,725              1,210
1.0    5   0.5                 64                 20
1.0   15   0.5         15,738,168                 64
19. Conclusions
We now have a good theoretical understanding of the effects of one
type of initial-population bias on the performance of UMDA on
deterministic and noisy onemax.
Effects of bias match intuition
Good bias improves performance.
Bad bias worsens performance.
Effects of bias are independent of noise.
Experimental results match theory.
20. Future Work
Study specific efficiency enhancement techniques and the bias
they introduce, and apply the theory developed here to
estimate the final effects.
Extend this work to other types of bias.
Extend this work to other evolutionary algorithms, especially
the standard genetic algorithms with two-parent
recombination and EDAs with multivariate models (e.g. BOA
and ecGA).
Eliminate the empirical fit from the model for the noisy
onemax.
21. Acknowledgments
NSF; NSF CAREER grant ECS-0547013.
U.S. Air Force, AFOSR; FA9550-06-1-0096.
University of Missouri; High Performance Computing
Collaboratory sponsored by Information Technology Services;
Research Award; Research Board.