This paper presents a new algorithm called the translational propagation algorithm for generating near-optimal Latin hypercube designs (LHDs) without using formal optimization. The algorithm exploits patterns in optimal LHDs by translating small building blocks consisting of points within the design space. It was found that the algorithm represents a computationally fast strategy for obtaining good LHDs, especially for medium-dimensional design spaces. The algorithm divides the design space into blocks based on the number of points in the seed design and final LHD. It then propagates the seed design through translations to fill the blocks while maintaining Latin hypercube properties. Monte Carlo simulations evaluated the performance of the algorithm for different configurations and compared it to formal optimization approaches.
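To make the translation idea concrete, here is a minimal Python sketch that grows an LHD from a one-point seed for the special case of n = m^nv points. The shift bookkeeping is our own reconstruction of the translation pattern, not the paper's exact procedure (which accepts multi-point seeds and point counts that are not powers of the block count).

```python
import numpy as np

def translational_lhd(m, n_dims):
    """Grow an n = m**n_dims point Latin hypercube (levels 1..n) by translating
    a one-point seed block by block, one dimension at a time."""
    n = m ** n_dims
    design = np.ones((1, n_dims), dtype=int)          # seed: the point (1, ..., 1)
    for i in range(n_dims):                           # propagate along dimension i
        copies = [design]
        for k in range(1, m):
            shifted = design.copy()
            shifted[:, i] += k * m ** (n_dims - 1)    # block-sized jump along dim i
            for t in range(1, n_dims):                # small offsets keep columns Latin
                shifted[:, (i + t) % n_dims] += k * m ** (t - 1)
            copies.append(shifted)
        design = np.vstack(copies)
    assert all(len(set(design[:, d])) == n for d in range(n_dims))  # Latin check
    return design

print(translational_lhd(3, 2))  # 9 points in 2 dimensions, each level used once
```

Each propagation step contributes a distinct power of m to each column, so every column is a base-m enumeration of 1..n, which is exactly the Latin hypercube property.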
We present a modified version of a heuristic from the literature for the capacitated facility
location problem. A numerical experiment is performed to compare the two heuristics. The study should
help in designing heuristics for different generalizations of the problem.
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi... - SSA KPI
The document describes efficient solution methods for two-stage stochastic linear programs (SLPs) using interior point methods. Interior point methods require solving large, dense systems of linear equations at each iteration, which can be computationally difficult for SLPs due to their structure leading to dense matrices. The paper reviews methods for improving computational efficiency, including reformulating the problem, exploiting special structures like transpose products, and explicitly factorizing the matrices to solve smaller independent systems in parallel. Computational results show explicit factorizations generally require the least effort.
Sampling-Based Planning Algorithms for Multi-Objective Missions - Md Mahbubur Rahman
Multi-objective path planning is in increasing demand in military missions, rescue operations, and construction job sites.
There is a lack of robotic path planning algorithms that trade off multiple
objectives, and commonly no single solution optimizes all of the objective functions. Here we modify the RRT and RRT* sampling-based algorithms.
In this work, we propose to apply trust region optimization to deep reinforcement
learning using a recently proposed Kronecker-factored approximation to
the curvature. We extend the framework of natural policy gradient and propose
to optimize both the actor and the critic using Kronecker-factored approximate
curvature (K-FAC) with trust region; hence we call our method Actor Critic using
Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this
is the first scalable trust region natural gradient method for actor-critic methods.
It is also a method that learns non-trivial tasks in continuous control as well as
discrete control policies directly from raw pixel inputs. We tested our approach
across discrete domains in Atari games as well as continuous domains in the MuJoCo
environment. With the proposed methods, we are able to achieve higher
rewards and a 2- to 3-fold improvement in sample efficiency on average, compared
to previous state-of-the-art on-policy actor-critic methods. Code is available at
https://github.com/openai/baselines.
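For intuition about the Kronecker-factored preconditioning, here is a toy numpy sketch of one K-FAC-style update for a single linear layer. This is our own simplified illustration, not the baselines implementation; ACKTR additionally controls the step size with a trust region.

```python
import numpy as np

def kfac_step(W, a, g, lr=0.1, damping=1e-2):
    """One K-FAC-preconditioned update for a linear layer y = W a.
    a: layer inputs (N x d_in); g: gradients of the loss w.r.t. y (N x d_out).
    The Fisher is approximated as A (x) G with A = E[a a^T], G = E[g g^T],
    so the natural gradient is G^{-1} (dL/dW) A^{-1}."""
    N = a.shape[0]
    A = a.T @ a / N + damping * np.eye(a.shape[1])   # input second-moment factor
    G = g.T @ g / N + damping * np.eye(g.shape[1])   # output-gradient factor
    grad = g.T @ a / N                               # Euclidean gradient, (d_out x d_in)
    nat = np.linalg.solve(G, grad) @ np.linalg.inv(A)
    return W - lr * nat
```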
Elementary Landscape Decomposition of the Hamiltonian Path Optimization Problem - jfrchicanog
The document describes research on decomposing optimization problem landscapes into elementary components. It defines key landscape concepts like configuration space, neighborhood operators, and objective functions. It then introduces the idea of elementary landscapes, where the objective function is a linear combination of eigenfunctions. The paper discusses decomposing general landscapes into a sum of elementary components and proposes using average neighborhood fitness for selection in non-elementary landscapes. It applies these concepts to the Hamiltonian Path Optimization problem, analyzing the problem's reversal and swap neighborhoods.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... - ijceronline
The document summarizes a study that proposes a hybrid approach called HDMNS (Hybrid Data Mining Neighbourhood Search) to solve the NP-hard p-median problem. HDMNS combines a neighbourhood search metaheuristic with a data mining technique called frequent itemset mining. In experimental tests on datasets of various sizes, HDMNS produced better results than existing GRASP and neighbourhood search approaches, finding optimal or near-optimal solutions with similar or less execution time. The study concludes that HDMNS outperforms other methods for solving the p-median problem.
Design of Quadrature Mirror Filter Bank using Particle Swarm Optimization (PSO) - IDES Editor
In this paper, the particle swarm optimization
technique is used for the design of a two-channel quadrature
mirror filter (QMF) bank. A new method is developed to
optimize the prototype filter response in the passband, the stopband,
and the overall filter bank response. The design problem is
formulated as nonlinear unconstrained optimization of an
objective function, which is a weighted sum of the squared errors
in the passband, the stopband, and the overall filter bank response at
frequency ω = 0.5π. For solving the given optimization
problem, the particle swarm optimization (PSO) technique
is used. As compared to the conventional design techniques,
the proposed method gives better performance in terms of
reconstruction error, mean square error in passband,
stopband, and computational time. Various design examples
are presented to illustrate the benefits provided by the
proposed method.
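For readers unfamiliar with PSO, the core loop is short. The sketch below is a generic minimizer over a box constraint; the paper's actual objective is the weighted QMF error described above and is not reproduced here.

```python
import numpy as np

def pso_minimize(f, dim, n=30, iters=200, lo=-1.0, hi=1.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm minimizer: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

# e.g. minimize the sphere function in 5 dimensions
print(pso_minimize(lambda z: float(np.sum(z * z)), dim=5, lo=-5, hi=5))
```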
CVPR2010: Advanced ITinCVPR in a Nutshell: part 6: Mixtures - zukun
1. Gaussian mixtures are commonly used in computer vision and pattern recognition tasks like classification, segmentation, and probability density function estimation.
2. The document reviews Gaussian mixtures, which model a probability distribution as a weighted sum of Gaussian distributions. It discusses estimating Gaussian mixture models with the EM algorithm and techniques for model order selection like minimum description length and Gaussian deficiency.
3. Gaussian mixtures can model images and perform color-based segmentation. The EM algorithm estimates the parameters of a Gaussian mixture by alternating between expectation and maximization steps; a minimal sketch of this alternation follows the list.
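A minimal one-dimensional version of that E/M alternation, purely for illustration (model-order selection such as MDL is out of scope here):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100, seed=0):
    """EM for a 1-D Gaussian mixture: the E-step computes responsibilities,
    the M-step re-estimates weights, means, and variances from them."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                # mixing weights
    mu = rng.choice(x, k, replace=False)   # initialize means from the data
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: r[n, j] proportional to w_j * N(x_n | mu_j, var_j)
        r = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12
    return w, mu, var

x = np.concatenate([np.random.default_rng(1).normal(0, 1, 300),
                    np.random.default_rng(2).normal(5, 1, 300)])
print(em_gmm_1d(x))
```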
RuleML 2015 Constraint Handling Rules - What Else? - RuleML
Constraint Handling Rules (CHR) is both a versatile theoretical formalism based on logic and an efficient practical high-level programming language based on rules and constraints.
Procedural knowledge is often expressed by if-then rules, events and actions are related by reaction rules, and change is expressed by update rules. Algorithms are often specified using inference rules, rewrite rules, transition rules, sequents, proof rules, or logical axioms. All these kinds of rules can be written directly in CHR. The clean logical semantics of CHR facilitates non-trivial program analysis and transformation. About a dozen implementations of CHR exist, in Prolog, Haskell, Java, JavaScript, and C; some of them can apply millions of rules per second. CHR is also available as WebCHR for online experimentation with more than 40 example programs. More than 200 academic and industrial projects worldwide use CHR, and about 2000 research papers reference it.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Approximation Algorithms for the Directed k-Tour and k-Stroll Problems - Sunny Kr
In the Asymmetric Traveling Salesman Problem (ATSP), the input is a directed n-vertex graph G = (V, E) with nonnegative edge lengths, and the goal is to find a minimum-length tour, visiting
each vertex at least once. ATSP, along with its undirected counterpart, the Traveling Salesman
Problem, is a classical combinatorial optimization problem.
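To pin down the objective, here is a brute-force solver for tiny instances. It enumerates single-visit tours; under the "at least once" definition one would first replace edge lengths with shortest-path distances.

```python
import itertools

def atsp_brute_force(dist):
    """Exact ATSP by enumeration -- only viable for tiny n, but it makes the
    objective concrete: a minimum-length directed tour over all vertices."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

dist = [[0, 2, 9], [1, 0, 6], [7, 3, 0]]   # asymmetric example lengths
print(atsp_brute_force(dist))              # -> ((0, 2, 1), 13)
```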
CARI-2020, Application of LSTM architectures for next frame forecasting in Se... - Mokhtar SELLAMI
This document presents a study comparing Long Short-Term Memory (LSTM) architectures for next frame forecasting in satellite image time series data. Three models - ConvLSTM, Stack-LSTM and CNN-LSTM - were implemented and evaluated based on training loss, time and structural similarity between predicted and actual images. The CNN-LSTM architecture was found to provide the best performance, achieving accurate predictions while requiring less processing time than ConvLSTM for higher resolution images. Overall, the study demonstrates the suitability of deep learning models like CNN-LSTM for predictive tasks using earth observation satellite imagery time series data.
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC - VLSICS Design
In VLSI technology, designers' main concerns have been the area required and the performance of the
device. In VLSI design, power consumption is one of the major concerns due to the continuous increase in chip
density, the shrinking size of CMOS circuits, and the frequencies at which circuits operate. Considering
these parameters, logical circuits are designed using quaternary voltage mode logic and quaternary current
mode logic. The power consumption of quaternary voltage mode logic is 51.78% less than that of
binary. The area, in terms of the number of transistors, required for quaternary voltage mode logic is 3 times
more than for binary. Although the quaternary voltage mode circuit requires a larger area than the
quaternary current mode circuit, its power consumption is lower than that of the quaternary current
mode circuit.
Cari2020 Parallel Hybridization for SAT: An Efficient Combination of Search S... - Mokhtar SELLAMI
This document summarizes a research paper on parallel hybridization techniques for solving Boolean satisfiability problems. It describes combining search space splitting and portfolio approaches to leverage their strengths while avoiding weaknesses. Experimental results on hard SAT instances showed the proposed approach solved more problems faster than a baseline portfolio solver. Future work could investigate optimizing parameters like the "jump factor" and addressing unsatisfiable instances.
The retrieval algorithms in remote sensing generally involve complex physical forward models that are nonlinear and computationally expensive to evaluate. Statistical emulation provides an alternative with cheap computation and can be used to calibrate model parameters and to improve computational efficiency of the retrieval algorithms. We introduce a framework of combining dimension reduction of input and output spaces and Gaussian process emulation
technique. Functional principal component analysis (FPCA) is chosen to reduce the output space of thousands of dimensions by orders of magnitude. In addition, instead of making restrictive assumptions regarding the correlation structure of the high-dimensional input space,
we identify and exploit the most important directions of this space and thus construct a Gaussian process emulator with feasible computation. We will present preliminary results obtained from applying our method to OCO-2 data, and discuss how our framework can be generalized in
distributed systems. This is joint work with Jon Hobbs, Alex Konomi, Pulong Ma, Anirban Mondal, and Joon Jin Song.
This document describes a new machine learning algorithm called the Balancing Board Machine (BBM). BBM approximates the centroid of the polyhedral cone that represents the version space in kernel machines. It does this by computing the centroid of the intersection between the version space cone and a hyperplane, and then iteratively "balancing" the hyperplane towards the centroid. This process improves the generalization performance over support vector machines. The document provides mathematical details on exactly computing polytope centroids and deriving efficient approximation algorithms for implementing BBM.
This document proposes an optimized version of the AO* algorithm called OAO* that improves upon AO* in two ways:
1) It performs a depth-limited search on nodes before assigning heuristic values, allowing previously underestimated nodes to become promising.
2) It handles interacting subproblems, where the optimal solution to one subproblem depends on the solution to another. This provides a more realistic solution.
The OAO* algorithm is shown to always provide an optimal or better solution than AO* by re-evaluating nodes and accounting for interactions between subproblems. It also explores fewer nodes overall through the use of an explored list to avoid re-exploring known nodes.
Development of Reliability Analysis and Multidisciplinary Design Optimization... - Altair
This document summarizes the capabilities of the RAMDO software for reliability-based design optimization (RBDO). It discusses both sensitivity-based and sampling-based RBDO approaches in RAMDO. It also provides examples of multidisciplinary applications of RAMDO in areas like fatigue analysis, casting process design, vehicle crashworthiness, and more. Several published case studies demonstrate how RAMDO has been used to optimize designs while accounting for input variability and reliability constraints.
This document presents and compares three approximation methods for thin plate spline mappings that reduce the computational complexity from O(p^3) to O(m^3), where m is the size of a small subset of the p points. Method 1 uses only the subset of points to estimate the mapping. Method 2 uses the subset of basis functions with all target values. Method 3 approximates the full matrix using the Nyström method. Experiments on synthetic grids show Method 3 has the lowest error, followed by Method 2, with Method 1 having the highest error. The three methods trade off accuracy, computation time, and the ability to do principal warp analysis.
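The Nyström step behind Method 3 is compact enough to state in code; a sketch (function and argument names are ours):

```python
import numpy as np

def nystrom_approx(K_nm, K_mm):
    """Reconstruct the full n x n kernel/basis matrix from its n x m slab and the
    m x m landmark block: K ~= K_nm K_mm^+ K_nm^T. Inverting only the m x m
    block is what drops the cost from O(p^3) to O(m^3)."""
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
```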
IRJET-A Design Of Modified SSTBC Encoder to Noise Free Mimo Communication in ... - IRJET Journal
The document proposes a modified semi-orthogonal space-time block code (SSTBC) encoder for noise-free MIMO communication in OFDM systems. It summarizes previous research on space-time coding techniques and identifies limitations. The proposed SSTBC encoder is evaluated and found to have a lower bit error rate than previous designs, indicating it provides more effective error correction and spectral efficiency for optimal MIMO communication. Key aspects of the proposed encoder include a new quasi-orthogonal fading matrix and use of discrete wavelet transform instead of fast Fourier transform.
This document describes a quadratic assignment problem (QAP) formulation with 358 constraints and 50 variables. It provides an example of a QAP with 3 facilities and 3 locations. The QAP assigns facilities to locations so as to minimize total cost, which is a function of the flow between facilities and the distance between locations; a worked cost evaluation follows. Several applications of QAP are discussed, including facility location, scheduling, and ergonomic design problems.
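A sketch of that cost function with a brute-force solve of a 3x3 instance (the flow and distance numbers are illustrative, not taken from the document):

```python
import itertools

def qap_cost(flow, dist, perm):
    """Total cost of assigning facility i to location perm[i]:
    sum over i, j of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(flow)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

flow = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]      # illustrative inter-facility flows
dist = [[0, 8, 15], [8, 0, 13], [15, 13, 0]]  # illustrative inter-location distances
best = min(itertools.permutations(range(3)), key=lambda p: qap_cost(flow, dist, p))
print(best, qap_cost(flow, dist, best))
```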
Key points:
- The min-entropy of a state ρ_AB can be expressed as the optimal value of a semidefinite program (SDP)
- The SDP has a primal and a dual problem
- Solving the dual problem gives an operator X_AB that achieves the maximum of the dual objective ⟨ρ_AB, X_AB⟩
- The optimal dual states satisfy X_B = 1_B and correspond to Choi-Jamiolkowski states of unital CPMs
- This formulation allows the min-entropy to be computed by solving an SDP, which can be done efficiently for small systems (see the sketch after this list)
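As an illustration of the primal form of this SDP (our own choice of library and example state, not from the document), cvxpy solves the two-qubit case directly:

```python
import numpy as np
import cvxpy as cp

# Primal SDP for the conditional min-entropy H_min(A|B):
#   minimize Tr(sigma_B)  subject to  I_A (x) sigma_B >= rho_AB,
# with H_min = -log2 of the optimal value.
d = 2
phi = np.zeros(d * d)
phi[0] = phi[3] = 1 / np.sqrt(2)      # |Phi+>, maximally entangled
rho = np.outer(phi, phi)

sigma = cp.Variable((d, d), symmetric=True)   # use hermitian=True for complex states
prob = cp.Problem(cp.Minimize(cp.trace(sigma)),
                  [cp.kron(np.eye(d), sigma) >> rho])
prob.solve()
print(-np.log2(prob.value))   # -> -1.0, i.e. H_min(A|B) = -log2(d)
```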
Cari 2020: A minimalistic model of spatial structuration of humid savanna veg... - Mokhtar SELLAMI
The document describes a minimalistic spatial model of vegetation structuring in humid savanna environments. It aims to illustrate the emergence of spatial patterns and highlight fire resistance strategies of trees. The model builds on previous work, incorporating a spatial component to represent how tree and grass biomass change over space and time based on logistic growth, nonlocal competition/cooperation, and fire mortality probabilities dependent on tree density. Mathematical analysis and numerical illustration of the model will provide insights into savanna vegetation structuring.
This document presents distributed algorithms for k-truss decomposition of large graphs. It begins with introducing the problem of k-truss decomposition and definitions. It then describes how k-truss decomposition can be performed using traditional sequential algorithms. It proposes two distributed algorithms: MRTruss, which uses MapReduce but has limitations; and i-MRTruss, an improved version that aims to address the issues with MRTruss. The document outlines the different sections that will evaluate these distributed algorithms and experimentally analyze their performance.
Parallel hybrid chicken swarm optimization for solving the quadratic assignme... - IJECEIAES
In this research, we suggest a new method based on parallel hybrid chicken swarm optimization (PHCSO), integrating the constructive procedure of GRASP and an effective modified version of tabu search. The goal of this adaptation is to prevent stagnation of the search. Furthermore, the proposed contribution aims at an optimal trade-off between the two key components of bio-inspired metaheuristics, local intensification and global diversification, which affect the efficiency of our proposed algorithm and the choice of its parameters. Moreover, the results of exhaustive experiments applying our algorithm to diverse QAPLIB instances were promising. Finally, we briefly highlight perspectives for further research.
Model-Based User Interface Optimization: Part IV: ADVANCED TOPICS - At SICSA ... - Aalto University
The document discusses optimization techniques for user interfaces, focusing on metaheuristics and ant colony optimization. Metaheuristics provide intelligent, black-box optimization by learning and updating models of the problem environment through cooperation of multiple search agents. Ant colony optimization is well-suited for user interface design as layouts are constructed iteratively. The document outlines challenges like robustness to noise, multi-objective optimization, and dynamic problems. Techniques for addressing complex tasks include decomposition, screening, space reduction, and sub-space elimination.
[AAAI-16] Tiebreaking Strategies for A* Search: How to Explore the Final Fron... - Asai Masataro
This is the presentation used in the oral session at AAAI-16. The original paper is available at http://guicho271828.github.io/publications/ .
# Abstract
Despite recent improvements in search techniques for cost-optimal classical planning, the exponential growth of
the size of the search frontier in A* is unavoidable. We investigate tiebreaking strategies for A*,
experimentally analyzing the performance of standard tiebreaking strategies that break ties according to the heuristic value of the nodes. We find that tiebreaking has a significant impact on search algorithm performance when there are zero-cost operators that induce large plateau regions in the search space. We develop a new framework for tiebreaking based on a depth metric which measures distance from the entrance to the plateau, and propose a new, randomized strategy which significantly outperforms standard strategies on domains with zero-cost actions.
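In code, a tiebreaking strategy is just the ordering of the open-list sort keys. Below is a minimal sketch of the standard strategy that breaks f-ties by smaller h, with a FIFO counter as the last resort; a depth-based strategy would substitute its metric for h.

```python
import heapq
import itertools

tick = itertools.count()          # FIFO order as the last-resort tiebreaker

def push(open_list, g, h, node):
    # Sort by f = g + h first; among equal f, prefer smaller h (the standard
    # [f, h] strategy analyzed in the paper). Depth-based tiebreaking would
    # replace h in the second slot with the distance from the plateau entrance.
    heapq.heappush(open_list, (g + h, h, next(tick), node))
```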
Stochastic computation graphs provide a framework for automatically deriving unbiased gradient estimators. They generalize backpropagation to deal with random variables by treating the computation graph as a DAG with both deterministic and stochastic nodes. This allows gradients to be computed through expectations, enabling techniques like policy gradients for reinforcement learning and variational inference. The document describes several policy gradient methods that use stochastic computation graphs to compute gradients, including SVG(0), SVG(1), and DDPG. These methods have been successfully applied to robotics tasks and driving.
An Exact Branch And Bound Algorithm For The General Quadratic Assignment Problem - Joe Andelija
The document describes an exact branch and bound algorithm for solving the general quadratic assignment problem. It reviews several existing exact algorithms and integer programming formulations for the QAP. The author proposes a new exact algorithm based on linearizing the general QAP into a linear assignment problem that is smaller in size. Computational results and comparisons to other methods are discussed.
This document summarizes a survey paper on using intelligent techniques like expert systems, fuzzy logic, genetic algorithms, and neural networks to solve facility layout problems (FLP). FLP involves assigning facilities to locations to minimize costs associated with facility interactions. The paper discusses conventional algorithms and intelligent techniques for FLP, including optimal algorithms like branch and bound, and suboptimal algorithms like construction and improvement heuristics.
This document proposes a new variant of the differential evolution algorithm called DE New. It presents the basic differential evolution algorithm and the proposed DE New algorithm. The DE New algorithm modifies the mutation strategy to better explore the search space and solve stagnation problems. The performance of DE New is evaluated on 24 benchmark functions from 2D to 40D and is found to outperform GA, DE-PSO, and DE-AUTO on most of the benchmark functions, particularly for unimodal and multi-modal problems.
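For reference, here is a sketch of the classic DE/rand/1/bin baseline that variants such as DE New modify; the paper's altered mutation strategy is not reproduced here.

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin: mutate with a + F*(b - c), binomial crossover,
    greedy selection. bounds is a list of (lo, hi) per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, d))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True          # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                       # greedy replacement
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

print(de_rand_1_bin(lambda z: float(np.sum(z * z)), [(-5, 5)] * 10))
```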
Economic/Emission Load Dispatch Using Artificial Bee Colony Algorithm - IDES Editor
This paper presents an application of the
artificial bee colony (ABC) algorithm to multi-objective
optimization problems in power systems. A new multi-objective
artificial bee colony (MOABC) algorithm to
solve the economic/emission dispatch (EED) problem is
proposed in this paper. Non-dominated sorting is
employed to obtain a Pareto optimal set. Moreover, fuzzy
decision theory is employed to extract the best
compromise solution. A numerical result for the IEEE 30-bus
test system is presented to demonstrate the capability of
the proposed approach to generate well-distributed
Pareto-optimal solutions of the EED problem in a single
run. In addition, the EED problem is also solved using the
weighted sum method using ABC. Results obtained with
the proposed approach are compared with other
techniques available in the literature. Results obtained
show that the proposed MOABC has great potential in
handling multi-objective optimization problems.
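The non-dominated sorting mentioned above reduces to a pairwise dominance test; a minimal sketch with illustrative cost/emission pairs (numbers are ours, not from the paper):

```python
def pareto_front(points):
    """Non-dominated set for minimization: a point survives unless some other
    point is no worse in every objective and strictly better in at least one."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (fuel cost, emission) pairs, purely illustrative
print(pareto_front([(605.0, 0.22), (620.0, 0.19), (640.0, 0.25), (610.0, 0.20)]))
```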
This document discusses using a particle swarm optimization algorithm to solve the K-node set reliability optimization problem for distributed computing systems. The K-node set reliability optimization problem aims to maximize the reliability of a subset of k nodes in a distributed computing system while satisfying a specified capacity constraint. It presents the problem formulation and describes particle swarm optimization, a metaheuristic optimization technique inspired by swarm intelligence. The proposed algorithm applies a discrete particle swarm optimization approach to solve the K-node set reliability optimization problem, which is demonstrated on an example distributed computing system topology.
This paper proposes a method for adapting the dictionary elements in kernel-based nonlinear adaptive filtering algorithms. The dictionary contains a subset of input vectors that are used to approximate the nonlinear system. Typically, elements are added to the dictionary but never removed or adapted. The proposed method considers dictionary elements as adjustable model parameters that can be optimized to minimize the instantaneous output error, while maintaining coherence to control complexity. Gradient-based adaptation is derived for polynomial and radial basis kernels. Dictionary adaptation is incorporated into Kernel Recursive Least Squares, Kernel Normalized Least Mean Squares, and Kernel Affine Projection algorithms. Experiments on simulated and real data demonstrate that dictionary adaptation can reduce error or dictionary size compared to non-adaptive methods.
Natural convection in a differentially heated cavity plays a
major role in the understanding of flow physics and heat
transfer aspects of various applications. Parameters such as
Rayleigh number, Prandtl number, aspect ratio, inclination
angle and surface emissivity are considered to have either
individual or grouped effect on natural convection in an
enclosed cavity. In spite of this, simultaneous study of these
parameters over a wide range is rare. Development of
correlation which helps to investigate the effect of the large
number and wide range of parameters is challenging. The
number of simulations required to generate correlations for
even a small number of parameters is extremely large. To date,
there is no streamlined procedure to optimize the number
of simulations required for correlation development.
Therefore, the present study aims to optimize the number of
simulations by using Taguchi technique and later generate
correlations by employing multiple variable regression
analysis. It is observed that for a wide range of parameters,
the proposed CFD-Taguchi-Regression approach drastically
reduces the total number of simulations for correlation
generation.
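The multiple-variable-regression step reduces to least squares in log space when the correlation is a power law. A sketch with made-up responses, purely to show the mechanics (the variable names and values are our own, not the study's):

```python
import numpy as np

# Fit a power-law correlation Nu = C * Ra**a * Pr**b by linear regression in
# log space. Ra, Pr, Nu values below are fabricated for illustration only.
Ra = np.array([1e4, 1e5, 1e6, 1e7])
Pr = np.array([0.7, 0.7, 7.0, 7.0])
Nu = np.array([2.2, 4.5, 9.4, 19.0])

X = np.column_stack([np.ones_like(Ra), np.log(Ra), np.log(Pr)])
coef, *_ = np.linalg.lstsq(X, np.log(Nu), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"Nu ~= {C:.3f} * Ra^{a:.3f} * Pr^{b:.3f}")
```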
Multiobjective Design of Micro- and Macrostructures.
"To craft and analyze algorithms that search for optimal structures is the subject of the research in the multiobjective optimization and decision analysis group, and in the talk, we will discuss approaches, their theoretical limits, as well as applications to challenging design problems across multiple scales."
Compressing Graphs and Indexes with Recursive Graph Bisection - aftab alam
This document summarizes a paper that proposes a new algorithm called Compression-friendly Graph Reordering (CGR) to reorder graphs and indexes to improve compression ratios. CGR formulates the reordering problem as a bipartite minimum logarithmic arrangement problem (BiMLogA) and uses recursive graph bisection to partition the graph into compression-friendly blocks. The algorithm is evaluated on social networks, web graphs, and search indexes, outperforming prior reordering techniques in terms of compression ratios. Parameters like the initialization procedure and number of bisections are analyzed for their effect on CGR's performance.
The document describes efficient algorithms for projecting a vector onto the l1-ball (sum of absolute values being less than a threshold). It presents two methods: 1) An exact projection algorithm that runs in expected O(n) time, where n is the dimension. 2) A method for vectors with k perturbed elements outside the l1-ball, which projects in O(k log n) time. It demonstrates these algorithms outperform interior point methods on various learning tasks, providing models with high sparsity.
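The simpler O(n log n) sort-based variant of this projection fits in a few lines; the paper's expected O(n) algorithm replaces the sort with randomized pivoting.

```python
import numpy as np

def project_l1_ball(v, z=1.0):
    """Euclidean projection of v onto the l1-ball {w : ||w||_1 <= z}, via the
    standard sort-and-threshold reduction to simplex projection."""
    if np.abs(v).sum() <= z:
        return v.copy()                       # already feasible
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - z))[0][-1]
    theta = (cssv[rho] - z) / (rho + 1.0)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0)

print(project_l1_ball(np.array([0.5, -1.5, 2.0]), z=1.0))
```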
IRJET- K-SVD: Dictionary Developing Algorithms for Sparse Representation ... - IRJET Journal
This document discusses the K-SVD algorithm, which is a dictionary learning method for sparse representations of signals. It alternates between sparse coding to represent signals using the current dictionary, and dictionary updating to learn a new dictionary from the signal representations. The algorithm generalizes the K-means clustering process. The document provides details on the methodology of K-SVD, including its basic steps and application to synthetic and real image signals. It is concluded that K-SVD learns dictionaries that better represent given signal groups compared to alternative methods.
Workflow Allocations and Scheduling on IaaS Platforms, from Theory to Practice - Frederic Desprez
This document summarizes a joint workshop on workflow allocations and scheduling on Infrastructure as a Service (IaaS) platforms. It discusses using on-demand resources to more efficiently allocate workflows compared to static allocations. It proposes two algorithms, Eager and Deferred, to determine allocations within a given budget limit. Simulations using synthetic workflows showed Deferred guarantees budget constraints while Eager is faster. For small applications and budgets, Deferred is preferred. For larger applications and budgets approaching task parallelism saturation, Eager performs better. The document also discusses a prototype system using Nimbus, Phantom and DIET to deploy workflows on IaaS resources.
A Local Branching Heuristic For Solving A Graph Edit Distance Problem - Robin Beregovska
This document summarizes a local branching heuristic called LocBra for solving the GEDEnA (Graph Edit Distance problem with Edges and no Attributes) problem. LocBra iteratively modifies a Mixed Integer Linear Program formulation by adding constraints to define neighborhoods in the solution space. It then explores these neighborhoods using a solver to find efficient solutions. Computational experiments showed LocBra strongly outperforms literature heuristics for the GEDEnA problem in both computational time and solution quality.
This document discusses various methods for interpolating geofield parameters to model the surface of geofields. It analyzes methods like algebraic polynomials, filters, splines, kriging and neural networks. It then focuses on using neural networks to identify parameters for a mathematical model of a geofield by training the network parameters using experimental statistical data. As a result, it finds parameters for a regression equation that satisfy the training data. The application of neural networks is shown to have advantages over traditional statistical methods for modeling geofields, especially when data is limited in the early stages.
The document discusses using the Master's Theorem to analyze the time complexity of recursive algorithms. It provides an overview of the Master's Theorem and its three cases. The paper proposes modifying the theorem to estimate derivatives of iterative functions. It demonstrates applying the modified theorem to examples like the Tower of Hanoi problem and Fibonacci sequence. The modified theorem requires certain conditions on the function, and the paper provides guidelines for identifying these conditions. It discusses potential applications in algorithm analysis and understanding complex functions. Overall, the paper contributes new knowledge on using the Master's Theorem to analyze derivatives and offers a novel perspective.
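For reference, here is the standard Master Theorem the paper starts from, for recurrences of the form T(n) = aT(n/b) + f(n). Note that a recurrence such as the Tower of Hanoi's T(n) = 2T(n-1) + 1 (with solution 2^n - 1) is not of this divide-by-b form, which is the kind of gap a modified theorem must address.

```latex
% Master Theorem (CLRS form): T(n) = a\,T(n/b) + f(n), with a \ge 1, b > 1.
\begin{aligned}
&\text{Case 1: } f(n) = O\big(n^{\log_b a - \varepsilon}\big),\ \varepsilon > 0
  &&\Rightarrow\ T(n) = \Theta\big(n^{\log_b a}\big),\\
&\text{Case 2: } f(n) = \Theta\big(n^{\log_b a}\big)
  &&\Rightarrow\ T(n) = \Theta\big(n^{\log_b a}\log n\big),\\
&\text{Case 3: } f(n) = \Omega\big(n^{\log_b a + \varepsilon}\big)
  \text{ and } a\,f(n/b) \le c\,f(n),\ c < 1
  &&\Rightarrow\ T(n) = \Theta\big(f(n)\big).
\end{aligned}
```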
This document summarizes research on using particle swarm optimization to reconstruct microwave images of two-dimensional dielectric scatterers. It formulates the inverse scattering problem as an optimization problem to find the dielectric parameter distribution that minimizes the difference between measured and simulated scattered field data. Numerical results show that a particle swarm optimization approach can accurately reconstruct the shape and dielectric properties of a test cylindrical scatterer, with lower background reconstruction error than a genetic algorithm approach. The research demonstrates that particle swarm optimization is a suitable technique for high-dimensional microwave imaging problems.
A Geometry Projection Method For Shape Optimization - Joaquin Hamad
The document presents a new geometry projection method for shape optimization that combines the advantages of direct geometry representations and fictitious domain analysis methods. An analytical geometry model defines the design domain, while a projection onto a fictitious domain enables simplified response analysis and sensitivity calculations. The geometry projection converges to the analytical geometry model as the numerical mesh is refined, ensuring optimal designs converge to solutions of well-defined continuum problems. Example computations demonstrate the method for minimum compliance with a volume constraint and minimum volume with a stress constraint.
Optimization of Mechanical Design Problems Using Improved Differential Evolut... - IDES Editor
Differential Evolution (DE) is a novel evolutionary
approach capable of handling non-differentiable, non-linear
and multi-modal objective functions. DE has been consistently
ranked as one of the best search algorithms for solving global
optimization problems in several case studies. This paper
presents an Improved Constraint Differential Evolution
(ICDE) algorithm for solving constrained optimization
problems. The proposed ICDE algorithm differs from the
unconstrained DE algorithm only in the initialization, the
selection of individuals for the next generation, and the
sorting of the final results. We also applied the new idea to
five versions of the DE algorithm. The performance of the ICDE
algorithm is validated on four mechanical engineering problems. The
experimental results demonstrate the performance of the ICDE
algorithm in terms of final objective function value, number
of function evaluations, and convergence time.
Fulltext
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING
Int. J. Numer. Meth. Engng 2010; 82:135–156
Published online 12 October 2009 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nme.2750
An algorithm for fast optimal Latin hypercube design of experiments
Felipe A. C. Viana1,∗,†,‡, Gerhard Venter2,§ and Vladimir Balabanov3,¶
1 Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611, U.S.A.
2 Stellenbosch University, Matieland, Stellenbosch 7602, South Africa
3 The Boeing Company, Seattle, WA 98204, U.S.A.
SUMMARY
This paper presents the translational propagation algorithm, a new method for obtaining optimal or near-optimal Latin hypercube designs (LHDs) without using formal optimization. The procedure requires minimal computational effort, with results virtually provided in real time. The algorithm exploits patterns of point locations for optimal LHDs based on the φ_p criterion (a variation of the maximum distance criterion). Small building blocks, consisting of one or more points each, are used to recreate these patterns by simple translation in the hyperspace. Monte Carlo simulations were used to evaluate the performance of the new algorithm for different design configurations, where both the dimensionality and the point density were studied. The proposed algorithm was also compared against three formal optimization approaches (namely random search, genetic algorithm, and enhanced stochastic evolutionary algorithm). It was found that (i) the distribution of the φ_p values tends to lower values as the dimensionality is increased and (ii) the proposed translational propagation algorithm represents a computationally attractive strategy to obtain near-optimum LHDs up to medium dimensions. Copyright © 2009 John Wiley & Sons, Ltd.
Received 14 January 2009; Revised 22 July 2009; Accepted 14 August 2009
KEY WORDS: design of computer experiments; experimental design; Latin hypercube sampling; translational propagation algorithm
1. INTRODUCTION
Design optimization usually requires a large number of potentially expensive simulations.
Advancements in computational hardware and algorithms have not alleviated much of the resulting
*Correspondence to: Felipe A. C. Viana, Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611, U.S.A.
†E-mail: fchegury@ufl.edu
‡Research Assistant.
§Professor.
¶Structural Analysis Engineer.
Contract/grant sponsor: Vanderplaats Research and Development, Inc.
computational crunch because of the ever-increasing appetite for improved modeling of physical processes and more detailed optimization [1]. To reduce the computational cost, surrogate models, also known as meta-models, are often used in place of the actual simulation models [2–7].
Surrogate-based design optimization begins by identifying locations in the design space where
simulations will be conducted. This process of identifying locations in the design space is known
as design of experiments (DOE) [8, 9]. Response data (often via numerical simulations) are
collected at these locations and one or more candidate surrogate models are fitted to the data
[10–12]. Finally, one or more of the candidate models are selected for calculating responses (and
to facilitate objective and constraint calculation during the optimization process) at points in the
design space where the actual responses are not yet available.
It is well known among designers that the quality of fit, which often defines the performance
of the surrogate model during optimization and design space exploration, strongly depends on the
DOE (point location and density) [13, 14]. By quality of fit, the authors imply the discrepancy
(in a general sense) between the actual response and the value predicted by the corresponding
surrogate model. There exist many measures for the quality of fit, depending on the particular
problem under consideration [14, 15].
A DOE with n_p points and n_v variables is usually written as an n_p × n_v matrix X = [x_1, x_2, ..., x_{n_p}]^T, where each row x_i = [x_{i1}, x_{i2}, ..., x_{in_v}] represents a sample and each column represents a variable. Within the design and analysis of computer experiments, the Latin hypercube design (LHD) proposed by McKay et al. [16] and Iman and Conover [17] is very popular. The LHD presents advantages such as (i) the number of samples (points) is not fixed; (ii) orthogonality of the sampling points (a design is orthogonal if the inner product of any two columns is zero [3, 18]); and (iii) the sampling points do not depend on the surrogate model that will be constructed.
An LHD with n_p points is constructed in such a way that each of the n_v variables is divided into n_p equal levels and that there is only one point (or sample) at each level. A random procedure is used to determine the point locations. Figure 1 shows two examples of LHDs with n_v = 2 and n_p = 16. As the LHD is constructed using a random procedure, there is nothing preventing a design with poor space-filling qualities, as in the extreme case illustrated in Figure 1(a). A better choice is shown in Figure 1(b), where the points are more uniformly distributed in the domain.
The optimization of the space-filling qualities of the LHD is a challenging problem that has resulted in a number of research publications [19–28]. One interpretation of the space-filling property is to consider an n_v-dimensional sphere around each design point in the experimental design. The larger the radius of the smallest sphere that does not cross the boundary of the design space, the better the space-filling property of the design.
Figure 1. Examples of Latin hypercube designs: (a) ill-suited LHD with n_v = 2 and n_p = 16 and (b) reasonable LHD with n_v = 2 and n_p = 16.
Table I. Approaches for constructing the optimal Latin hypercube design.
Researchers                   Year  Algorithm                                      Objective functions
Audze and Eglājs [19]         1977  Coordinates exchange algorithm                 Potential energy
Park [20]                     1994  Two-stage (exchange- and Newton-type)          Integrated mean-squared error
                                    algorithm                                      and entropy criteria
Morris and Mitchell [21]      1995  Simulated annealing                            φ_p criterion
Ye et al. [22]                2000  Columnwise–pairwise                            φ_p and entropy criteria
Fang et al. [23]              2002  Threshold accepting algorithm                  Centered L2-discrepancy
Bates et al. [24]             2004  Genetic algorithm                              Potential energy
Jin et al. [25]               2005  Enhanced stochastic evolutionary algorithm     φ_p criterion, entropy and
                                                                                   L2 discrepancy
Liefvendahl and Stocki [26]   2006  Columnwise–pairwise and genetic algorithms     Minimum distance and
                                                                                   Audze–Eglājs functions
van Dam et al. [27]           2007  Branch-and-bound algorithm                     1-norm and infinity-norm
                                                                                   distances
Grosso et al. [28]            2008  Iterated local search and simulated            φ_p criterion
                                    annealing algorithms
Optimizing the point location in an LHD to improve the uniformity of the point distribution, typically by maximizing the radius of the smallest sphere, results in an optimal LHD. Such designs are usually obtained from time-consuming combinatorial optimization problems, with a search space of the order of (n_p!)^{n_v}. For example, to optimize the location of 20 samples in two dimensions, the algorithm has to select the best design from more than 10^36 possible designs. If the number of variables is increased to 3, the number of possible designs is more than 10^55. Researchers have proposed various optimization algorithms and objective functions to solve this problem. Table I summarizes some strategies found in the literature. As for the computational time, Ye et al. [22] reported several hours on a Sun SPARC 20 workstation for generating an optimal Latin hypercube with 25 points for 4 variables. Jin et al. [25] reported minutes on a PC with a Pentium III 650 MHz CPU for generating an optimal Latin hypercube with 100 points for 10 variables. Section 4 shows a comparison of different optimization methods for obtaining optimal LHDs with contemporary computing power.
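These search-space counts can be checked directly in MATLAB (our illustration; the variable names are arbitrary):

% Quick check (ours) of the search-space sizes quoted above
n2 = factorial(20)^2   % about 5.9e36 designs for np = 20 and nv = 2
n3 = factorial(20)^3   % about 1.4e55 designs for nv = 3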
Owing to its popularity, the φ_p criterion was selected as a performance measure in this paper. Minimizing φ_p leads to the maximization of the point-to-point distance in the design (see [21, 25] for details). Mathematically:

\phi_p = \left( \sum_{i=1}^{n_p-1} \sum_{j=i+1}^{n_p} d_{ij}^{-p} \right)^{1/p}    (1)

where p is a positive integer, n_p is the number of points in the design, and d_{ij} is the inter-point distance between point pairs in the design. The general inter-point distance between any point pair x_i and x_j can be expressed as follows:

d_{ij} = d(x_i, x_j) = \left( \sum_{k=1}^{n_v} |x_{ik} - x_{jk}|^t \right)^{1/t}, \quad t = 1 \text{ or } 2    (2)

In the present work, p = 50 and t = 1 are used following the suggestions of Jin et al. [25].
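As a minimal sketch (ours, not part of the paper), Equations (1) and (2) with t = 1 can be evaluated in MATLAB as follows; the function name phiP is our choice:

function phi = phiP(X, p)
% PHIP  Evaluate the phi_p criterion of Equation (1) for a design X
% given as an np-by-nv matrix of levels, using the rectangular
% distance of Equation (2) with t = 1.
if nargin < 2, p = 50; end % p = 50 as suggested by Jin et al. [25]
np = size(X, 1);
s = 0;
for i = 1 : np - 1
    for j = i + 1 : np
        dij = sum( abs( X(i,:) - X(j,:) ) ); % inter-point distance, t = 1
        s = s + dij^(-p);
    end
end
phi = s^(1/p);
return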
Figure 1(b) shows what point locations one can expect to obtain if the φ_p criterion is used to generate an LHD with n_v = 2 and n_p = 16 (although other criteria may also lead to the same distribution of points).
In this work, the translational propagation algorithm is presented for obtaining optimum or near-optimum LHDs. This algorithm requires minimal computational effort and does not use formal optimization. The aim is to solve the optimization problem in an approximate sense, that is, to obtain a good Latin hypercube quickly rather than finding the best possible solution. The algorithm exploits simple translation of small building blocks (consisting of one or more points) in the hyperspace. The obtained LHD is referred to as a TPLHD (an LHD obtained via the translational propagation algorithm). In general, the obtained TPLHD could be useful by itself as an optimum or near-optimum LHD, or as an initial guess for numerical implementations based on some optimization algorithm.
The remainder of the paper is organized as follows. Section 2 describes the translational propagation algorithm for obtaining LHDs. Section 3 describes the numerical experiments used in this study. Section 4 presents the results and discussion. Finally, the paper closes with a recapitulation of salient points and concluding remarks.
2. LATIN HYPERCUBE VIA TRANSLATIONAL PROPAGATION ALGORITHM
2.1. Basic algorithm
The proposed approach is based on the idea of constructing the optimal n_v-dimensional LHD from a fairly small n_v-dimensional seed design. The simple example of a 16×2 (i.e. 16 points in two dimensions) LHD is used to explain the methodology. Figure 2 shows some possible two-dimensional seed designs. While there is no limitation on the number of points in the seed design, the seed design can be as simple as just a single point (1×n_v design). For this reason, the seed of Figure 2(a) is used in the 16×2 Latin hypercube example.
In order to construct an LHD of n_p points from a seed design of n_s points, the design space is first divided into a total of n_b blocks such that:

n_b = n_p / n_s    (3)
Figure 2. Example of seed designs for 2 variables: (a) 1×2 seed design; (b) 2×2 seed design;
(c) 3×2 seed design; and (d) 4×2 seed design.
Figure 3. 16×2 Latin hypercube mesh divided into blocks (4 divisions in each dimension results
in 16 blocks). The left-lower block is the first one to be picked in the algorithm.
Equation (3) means that each dimension is partitioned into the same number of divisions, n_d, calculated by:

n_d = (n_b)^{1/n_v}    (4)

In the example of the 16×2 LHD (i.e. n_p = 16 and n_v = 2), considering n_s = 1 (seed design selected from Figure 2(a)), one obtains n_b = 16 and n_d = 4 from Equations (3) and (4).
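In MATLAB, this sizing amounts to (our illustration):

% Worked sizing (ours) for the 16x2 example of Figure 3
np = 16; nv = 2; ns = 1;
nb = np/ns       % Equation (3): 16 blocks
nd = nb^(1/nv)   % Equation (4): 4 divisions per dimension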
Next, each block is filled with the previously selected seed design. Figure 3 shows the division
of the design space for the 16×2 LHD and which of the blocks is the first one to be picked in the
algorithm. Figure 4 illustrates the process step by step. First, the seed design is properly scaled
and placed at the origin of the first block, as shown in Figure 4(a). Next, the block with the seed
design is iteratively shifted by n p /n d levels in one of the dimensions. Every time that the seed
design is shifted, a new point is added to the experimental design. Figure 4(b) shows the first
shift of the seed design (chosen to be in the horizontal direction). To preserve the Latin hypercube property of only a single point per level, Figure 4(b) shows that there also has to be a one-level shift in the vertical direction. To preserve the Latin hypercube properties, a displacement vector is built accounting for the shift in the dimension of interest (the horizontal direction in the example) as well as a shift in all other dimensions (the vertical direction in our example). The shifting process
continues in one of the dimensions until all the divisions in this dimension are filled with the seed
design as illustrated in Figure 4(a)–(d). In the next step, the current set of points (newly filled
division) is used as a new seed design and the procedure of shifting the seed design is repeated in
the next dimension. Figure 4(e)–(g) illustrates the shifting procedure in the vertical direction.
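As an illustration (our numbers, anticipating the createTPLHD function of Appendix A), the displacement vectors for the 16×2 example shift the seed by n_p/n_d = 4 levels in the dimension of interest and by one level in the other dimension:

% Worked displacement vectors (ours) for the 16x2 example, ndStar = 4
d_horizontal = [16/4, 1]   % first pass, Figure 4(b)-(d)
d_vertical   = [1, 16/4]   % second pass, Figure 4(e)-(g)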
The biggest advantage of this approach is that there are no calculations to perform. All operations can be viewed as simple translations of the seed design in the n_v-dimensional hypercube. Although efficient for generating large designs, the algorithm proposed up to this point fails to provide flexibility in the total number of points in the final LHD. The approach presented so far is limited in the sense that Equations (3) and (4) must hold. The next section describes a strategy to overcome this limitation and generate designs with an arbitrary number of points.
Figure 4. Process of creating the 16×2 enhanced Latin hypercube design. (a) illustrates
the initial placement of the seed; (b)–(d) show the translation of the seed in the horizontal
direction (which is accompanied by a one-level vertical displacement to preserve Latin
hypercube properties); (d) also represents the newly created ‘seed’ that will be translated in
the vertical direction; and (e)–(g) illustrate the translation in the vertical direction (which
is accompanied by horizontal displacement of one level).
2.2. Generating experimental designs of any size
To generate an LHD with any number of points, the first step is to generate a TPLHD that has
at least the required number of points using the algorithm described above. If this design contains
the required number of points, the process is completed. Otherwise, an experimental design larger
than the required is created and a resizing process is used to reduce the number of points to the
desired one. The points are removed one by one from the initially created TPLHD by discarding the
points that are the furthest from the center of the hypercube and reallocating remaining points to fill
the whole design (preserving the Latin hypercube properties). In the proposed algorithm removing
the points furthest from the center does not reduce the area of exploration. After removing the
points, the final design is rescaled to cover the whole design space.
To illustrate the approach, consider the example where a 12×2 Latin hypercube is created using the one-point seed of Figure 2(a) (i.e. n_s = 1). From Equations (3) and (4), n_d = n_p^{1/n_v} ≈ 3.46, which is not an integer in this case. Rounding n_d ≈ 3.46 up gives n_d = 4, and the next largest design that can be constructed is the 16×2, as illustrated in Figure 4. The resizing
process begins with first calculating the distance between each of the 16 points and the center
of the design space. To create a 12×2 design out of a 16×2 one, the four points furthest from
the center have to be removed. In practice, this means the points of the original TPLHD have to
be ranked according to the distance from the center of the design space. The suggested resizing
algorithm computes these distances only one time, during the very first iteration. When a point is
removed, the levels occupied by its projection along each of the dimensions have to be eliminated.
This preserves the Latin hypercube property that only a single point is found at any level.
Figure 5 illustrates the resizing process step by step. The number of points in the design actually
shrinks, but the final design still represents samples over the same design space. Figure 5(a) shows
that in the 16×2 design, the points in the left-bottom and right-top corners are equally far from
the center. Owing to symmetry, it is not important which of the points will be removed first. The
top-right point is removed first, Figure 5(a); and the bottom-left one is removed next, Figure 5(b).
Figure 5(c) might be the best illustration of how the algorithm guarantees the Latin hypercube
properties because it is the first time that an internal level is removed. Removing a point is as
simple as eliminating it from the experimental design. However, as illustrated in Figure 6, this
would leave empty levels (breaking the Latin hypercube requirements). The correct implementation
of the resizing algorithm (Figure 5(c)) takes care of this limitation by also eliminating the levels
that the point used to occupy. In Figure 5(c) this means that the three points on the right would
move to the left (occupying the empty level that was in between the points). Next, the remaining
points are scaled to cover the original design space. The scaling is as simple as the mapping of the
remaining points to the design space of interest such that the lower and upper bounds are sampled
by one of the points of the design (i.e. simple unidirectional scaling of all points). After each step,
a new near optimum Latin hypercube is obtained with one point less. The process continues until
the 12×2 design is achieved. The point/level removal part of the algorithm reduces the number of points of the experimental design while preserving the Latin hypercube requirements. On the other hand, the rescaling part makes the experimental design fit in the original design space again.
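A minimal sketch (ours) of the level-collapse step described above, for one dimension:

% After a point is removed, every level above the emptied one moves down
levels = [1 2 4 5]';             % level 3 was emptied by the removal
levels = levels - (levels > 3)   % becomes [1 2 3 4]'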
2.3. Seed designs
Using the described algorithm, one can create different instances of the TPLHD by changing the seed design, as illustrated by Figure 7 for the 16×2 TPLHD. As it is not known beforehand which seed design will lead to the best design in terms of the φ_p criterion, one would not know what seed design to use. However, this might be a positive feature of the algorithm. First, it is possible that the different designs are comparably good (when looking at the φ_p criterion). Second, due to the low cost of the algorithm (as we will show in the next sections), it pays to create instances of the Latin hypercube with different seeds and then pick the best one according to the φ_p criterion.
Because of the shifting process, the number of levels in a block is greater than the number of points in the seed design. This means that when creating designs starting from seeds with more than one point (n_s > 1), it is necessary to reshape them to fit into one block of the TPLHD. Figure 8 illustrates this idea with the design of a 16×2 TPLHD starting from a 4×2 seed (i.e. n_s = 4). Figure 8(a) shows the initial 4×2 seed (which respects the Latin hypercube properties). It is necessary to insert as many empty levels between two consecutive projections of points as the number of divisions in a given direction, as shown in Figure 8(b). This way, at the end of the shifting process, there will be only one point per level (Figure 8(c)); a small worked example is given below. The next section summarizes the implementation of what was described so far.
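As announced above, a worked reshaping (our numbers, following the reshapeSeed function of Table AII in Appendix A) for the 16×2 TPLHD built from a 4-point seed: here npStar = 16 and ndStar = 2, so each seed level l maps to 2l − 1, inserting one empty level between consecutive projections (Figure 8(b)):

% Worked reshaping (ours); the 4x2 seed below is hypothetical
seed = [1 3; 2 1; 3 4; 4 2];   % levels 1..4 in each dimension
a = 2; b = -1;                 % from ut = 7, uf = 4: a = (ut-1)/(uf-1), b = ut - a*uf
seed = a*seed + b              % occupied levels become 1, 3, 5, 7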
2.4. Summary of the algorithm
The proposed algorithm is inspired by the optimization of the φ_p criterion, which penalizes designs with close points (due to p = 50 and t = 1 in Equation (1)). Good LHDs are expected to be obtained because the minimum distance is fixed at the level of the seed design. This is particularly clear for a large number of points in low dimensions. Figure 9 illustrates the translational propagation algorithm.
Figure 5. Process of creating a 12×2 enhanced Latin hypercube design from a 16×2 design.
Figure 6. Wrong step of the resizing algorithm.
Figure 7. 16×2 TPLHD obtained with different seeds: (a) from 1-point seed; (b) from 2-points seed;
(c) from 3-points seed; and (d) from 4-points seed.
Figure 8. Illustration of the reshaping of the seed design: (a) original seed; (b) seed reshaped to fit in one block of the TPLHD; and (c) seed design filling different blocks of the TPLHD.
The input parameters are the initial seed design s, the number of points in the seed design, n_s, the number of points of the desired LHD, n_p, and the number of variables, n_v. n*_p and n*_d are the number of points and the number of divisions of the first TPLHD to be created. While n_p can assume any value, n*_p is such that Equation (3) returns an integer number of blocks n_b. The first step is to check whether or not a bigger experimental design is needed. The number of divisions, n_d, and its rounded-up value, n*_d, are compared. If n*_d > n_d, Equations (3) and (4) do not hold and the number of points in the TPLHD to be created, n*_p, is greater than the desired n_p. Next, the seed design is reshaped to fit in one block of the TPLHD (Section 2.3), and a TPLHD of n*_p points is created (Section 2.1). Finally, if n*_p > n_p, the TPLHD previously obtained is resized such that it has n_p points (Section 2.2). X is the final TPLHD (with n_p points).
Appendix A gives the MATLAB implementation of the different building blocks of the trans-
lational propagation algorithm.
Figure 9. Flowchart of the translational propagation algorithm.
3. NUMERICAL EXPERIMENTS
To illustrate the effectiveness of the proposed TPLHD, a set of configurations that covers the typical application range of the LHD is considered. The experimental designs vary from 2 to 12 variables, as summarized in Table II. For each number of variables, three cases representing small, medium, and large numbers of design points were considered. The number of design points was calculated relative to the number of coefficients in a full polynomial model for the specified number of variables. The small designs have two times more points than the number of coefficients in a full quadratic polynomial model, while the large designs have 20 times more points than the number of coefficients in the same model. The medium designs have two times more points than the number of coefficients in a full cubic polynomial model.
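These counts can be verified as follows (our illustration, for the two-variable case of Table II):

% Worked check (ours) of the design sizes in Table II for nv = 2
nv = 2;
nQuad  = (nv + 1)*(nv + 2)/2;   % 6 coefficients in a full quadratic model
nCubic = nchoosek(nv + 3, 3);   % 10 coefficients in a full cubic model
small  = 2*nQuad                % 12 points
medium = 2*nCubic               % 20 points
large  = 20*nQuad               % 120 points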
The influence of the initial seed used to construct the TPLHD was studied first. For this study,
different TPLHDs were generated using seeds with one to five points. The designs were then
Table II. Latin hypercube design configurations considered.
No. of points
No. of variables Small designs Medium designs Large designs
2 12 20 120
4 30 70 300
6 56 168 560
8 90 330 900
10 132 572 1320
12 182 910 1820
compared using the values of the φ_p criterion. Next, the efficiency of the proposed algorithm in approximating the optimal LHD was studied. For this study, an estimate of the range of the φ_p criterion for the LHDs proposed in Table II was created based on Monte Carlo simulation plus the results from 100 simulations of three different Latin hypercube optimization techniques. The Monte Carlo simulation created 200 000 random LHDs for the configurations proposed in Table II (this is possible because the Latin hypercube algorithm allows for the creation of different designs based on a random number generator). The three different Latin hypercube optimization techniques used in the study are:
1. The Enhanced Stochastic Evolutionary algorithm (ESEA) of Jin et al. [25]: set with maximum
number of 20 iterations (but each simulation was allowed to run at most five iterations without
improvement of the objective function).
2. The Genetic algorithm (GA) implementation of Bates et al. [24]: set with maximum number
of 50 iterations (but each simulation was allowed to run at most 20 iterations without
improvement of the objective function).
3. The native MATLAB function lhsdesign [29]: using the maximin criterion (maximization of the minimum distance) and 200 iterations. With these settings, the MATLAB lhsdesign function selects the best LHD from 200 randomly created designs, with the maximin criterion used to select the best design.
The settings for both GA and ESEA were based on a few trials for the design with 560
points and 6 variables. All simulations were conducted using an Intel Core 2 Quad CPU Q6600
at 2.40 GHz, with 3 GB of RAM running MATLAB 7.6 (R2008a) under Windows XP. The
SURROGATES toolbox [30] was used to execute the TPLHD, ESEA, and GA algorithms under
MATLAB [29].
Instead of showing only the φ_p criterion as defined in Equation (1), a normalized version, φ̃_p, is also presented for easier comparison:

\tilde{\phi}_p = \frac{\phi_p - \min(\phi_p)}{\max(\phi_p) - \min(\phi_p)}    (5)

where max(φ_p) and min(φ_p) are the maximum and minimum values of φ_p found in the generated DOEs (including both the TPLHDs and the Monte Carlo simulations), which means that 0 ≤ φ̃_p ≤ 1.
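In MATLAB, this normalization is a one-liner (our sketch; the φ_p values below are hypothetical):

% Normalization of Equation (5) over a set of phi_p values
phiAll   = [2.8 5.6 9.4 3.4];
phiTilde = (phiAll - min(phiAll)) ./ (max(phiAll) - min(phiAll))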
Using the Monte Carlo simulations and the optimization results, it is possible to estimate the range of variation of the φ_p values and thus make a judgment whether a specific design represents a substantial improvement or not. The φ̃_p criterion makes the comparison of the distributions of different designs easier. Although it eliminates the sense of absolute magnitude (because of the normalization), it permits identifying whether the distributions are toward the low or high values of the φ_p criterion.
4. RESULTS AND DISCUSSION
Table III shows the values of the φ_p criterion for TPLHDs obtained using different initial seed designs for the design configurations of Table II. From these results, it can be concluded that no single seed size is always the best (even considering designs with the same number of variables). For example, the single-point seed is the best for the 12×2 and 20×2 designs, while the two-point seed is the best for the 120×2 design. However, there might be cases where different seeds are equally competitive. As an illustration, this is the case for the 30×4 design, where the four-point and five-point seeds produce designs with nearly the same φ_p criterion. These two observations, together with the fact that no time-consuming optimization is required in the translational propagation algorithm, suggest that it is possible to create several instances of the TPLHDs (based on different initial seeds) and select the design of choice based on the φ_p criterion. Table III also shows that an increase in dimensionality seems to favor the single-point seed. The reason is that in higher dimensions a seed with multiple points acts like a cluster of points, hurting the φ_p criterion (this effect is less evident in lower dimensions).
Table IV shows detailed statistics regarding the φ_p criterion, with data obtained during the Monte Carlo and optimization simulations (the 5th percentile is added to give information about the lower values of φ_p).
Table III. φ_p criterion, defined in (1), (p = 50, t = 1), for TPLHDs constructed from different seeds (the best TPLHD is shown in bold face).

                                    Seed size
No. of variables   No. of points    1      2      3      4      5
2 12 2.8 2.9 3.8 3.7 3.8
2 20 4 4.8 4.9 5 4
2 120 11.0 9.4 12.6 13.4 13.8
4 30 1.9 1.8 1.8 1.6 1.6
4 70 2.7 5.0 2.9 2.0 2.1
4 300 7.2 13.6 6.2 3.6 4.0
6 56 1.7 2.1 1.8 3.7 1.7
6 168 3.1 6.0 2.4 2.9 2.5
6 560 3.2 11.7 5.3 7.9 3.6
8 90 1.6 3.4 2.6 5.9 1.9
8 330 3.7 4.6 3.1 3.0 2.5
8 900 4.7 16.4 4.6 2.7 2.6
10 132 1.6 3.7 3.9 6.6 3.5
10 572 2.0 8.8 4.0 3.1 2.9
10 1320 4.2 6.1 3.5 3.9 3.1
12 182 1.7 11.3 3.1 1.9 2.6
12 910 2.0 16.8 3.2 3.1 2.4
12 1820 2.1 17.8 3.7 3.4 2.4
Table IV. TPLHD and statistics of the φ_p criterion, defined in (1), (p = 50, t = 1), for each studied configuration. Bold face shows when TPLHD offers the best design. TPLHD performs very well up to six dimensions.

                                                 Percentiles
No. of variables  No. of points  TPLHD  Min   5     25    50    75    Max   Range
2 12 2.8 2.3 3.7 5.5 5.6 5.6 5.7 3.4
2 20 4.0 3.4 6.4 9.5 9.6 9.7 10.0 6.6
2 120 9.4 9.4 40.2 59.5 60.3 60.8 62.3 52.9
4 30 1.6 1.5 2.3 2.9 3.2 3.7 7.4 5.9
4 70 2.0 2.0 3.8 4.9 5.8 6.9 17.3 15.3
4 300 3.6 3.4 8.6 11.1 13.0 15.7 74.8 71.3
6 56 1.7 1.0 1.5 1.8 2.0 2.3 7.9 6.9
6 168 2.4 1.3 2.4 2.8 3.2 3.6 23.9 22.6
6 560 3.2 1.8 3.7 4.4 4.9 5.7 32.9 31.1
8 90 1.6 0.9 1.1 1.2 1.3 1.5 5.9 5.0
8 330 2.5 1.4 1.6 1.8 2.0 2.2 9.7 8.3
8 900 2.6 1.8 2.1 2.4 2.7 3.0 16.3 14.5
10 132 1.6 0.7 0.8 0.9 1.0 1.1 6.6 5.8
10 572 2.0 1.0 1.2 1.3 1.4 1.5 8.8 7.8
10 1320 3.1 1.3 1.4 1.6 1.7 1.9 6.1 4.8
12 182 1.7 0.6 0.7 0.7 0.8 0.8 11.3 10.7
12 910 2.0 0.8 0.9 1.0 1.1 1.1 16.8 16.0
12 1820 2.1 0.9 1.0 1.1 1.2 1.3 17.8 16.9
For up to six variables, TPLHD delivers exceptionally good designs (within 5% of the best designs for most of the cases). For 8–12 variables, TPLHD is not attractive anymore (with designs located within the last quartile of the φ_p criterion distribution). The reader will notice that for 10 variables the maximum value of φ_p drops when going from 572 to 1320 points (contrary to the trend of the other design configurations). This is because it is difficult to estimate this value for high-dimensional cases (even with a total of more than 200 000 simulations). On the other hand, the percentile information establishes the trend of increasing φ_p with the number of points. As the algorithm was heuristically developed based on the visual observation of the patterns generated by the optimal LHD in two and three dimensions according to the φ_p criterion, it is of no surprise that the efficiency is harmed when the dimensionality increases. The range of the φ_p criterion varies with different design configurations and can also be very large (for example, the range for the 120×2 design is 52.9). Thus, only the normalized criterion, φ̃_p, is used from this point on.
Figure 10 illustrates the box plots of the φ̃_p criterion for each of the studied configurations. A box plot shows the full data range on the y-axis. A box is defined by lines at the lower quartile (25%), median (50%), and upper quartile (75%) values. Lines extend from each end of the box for a distance of 1.5 times the interquartile range or until the data limit is reached. Data points falling outside 1.5 times the interquartile range are outliers, shown using '+' symbols. Figure 10 shows that the distribution of φ̃_p tends to lower values as the dimensionality of the problem grows. This does not necessarily mean that the LHD fills the space better; after all, the sparsity of the points severely increases with the number of variables.
Figure 10. Boxplots of the φ̃_p criterion, defined in (5), (p = 50, t = 1), where circles indicate the best TPLHD. TPLHD represents a good solution in low to medium dimensions. φ̃_p ranges from 0 to 1: (a) low to medium dimensions and (b) medium to high dimensions.
However, this behavior suggests that the probability of generating a design in the lower levels of φ̃_p increases with the dimensionality. The following conclusions can thus be drawn from the numerical experimentation:
• In low dimensions: the optimum LHD is an outlier located in the lower region of the φ_p values, making it difficult to obtain.
• In high dimensions: most of the LHDs are in the lower region of φ̃_p, making it difficult to distinguish the optimal LHD (obviously, according to this criterion only) from several other good representations of the LHD.
Figure 10 makes clear that the TPLHD represents a better approximation of the optimum LHD in low to medium dimensions (up to six variables) and thus we will focus only on the designs with up to six variables.
Table V. Performance comparison between TPLHD and the median out of 100 simulations of different optimization techniques. ESEA is the implementation of the Enhanced Stochastic Evolutionary algorithm of Jin et al. [25]. lhsdesign is a native MATLAB function. GA refers to the Genetic algorithm implementation of Bates et al. [24]. φ̃_p, defined in (5), ranges from 0 to 1.

No. of variables           2                  4                  6
No. of points         12   20   120     30   70   300     56   168   560
φ̃_p      TPLHD       0.1  0.1  0       0    0    0       0.1  0     0
         ESEA        0.2  0.1  0.1     0    0    0       0    0     0
         GA          0.2  0.3  0.3     0.1  0.1  0       0    0     0
         lhsdesign   0.5  0.5  0.6     0.1  0.1  0.1     0.1  0.0   0.1
Time (s) TPLHD       ≈0   ≈0   0.1     ≈0   ≈0   0.7     ≈0   0.5   2
         ESEA        ≈0   0.2  13      0.3  3    173     2    34    1096
         GA          0.4  0.9  24      4    17   275     19   143   1509
         lhsdesign   ≈0   ≈0   0.2     ≈0   0.1  1.8     0.1  1.0   13
Table V allows a comparison between the TPLHD and the median out of 100 simulations of three different optimization techniques: (i) the ESEA of Jin et al. [25]; (ii) the GA implementation of Bates et al. [24]; and (iii) the native MATLAB function lhsdesign [29]. The TPLHD was obtained by generating five candidates from different seeds and then picking the best one according to the φ_p criterion (which means that Table V shows the time needed to generate all five candidates; Appendix B discusses the number of points that needs to be allocated in each of the cases, which directly impacts the computational cost). Results in Table V reinforce that TPLHD offers very good solutions, with most of the designs presenting φ̃_p = 0. This means that TPLHD found the best result of the set of all simulations (including the Monte Carlo and all optimization ones). As for the computational cost, the TPLHD design is superior in most of the cases or in the worst case matches the best time of the traditional optimization techniques. Clearly, the point density has a dramatic impact on the optimization techniques (especially ESEA and GA). For instance, for the ESEA with 6 variables, moving from 56 to 560 points makes the computational cost change from 2 s to a little more than 18 min. The lhsdesign MATLAB function tends to suffer less, as it can be seen as a Monte Carlo simulation with only 200 samples (200 is the number of iterations that lhsdesign was allowed to run). Still, in six dimensions, when moving from 56 to 560 points the computational cost of lhsdesign increases more than 100 times.
Table V gives the impression that increasing the number of variables (no matter the point density) made the three optimization techniques improve their own performance (in terms of the φ̃_p criterion). Considering the φ̃_p criterion of the designs with four and six variables, TPLHD and any of the optimization techniques would offer designs that are below 0.1. Then, the computational cost would leave only TPLHD and lhsdesign as competing strategies. However, a closer look at the distributions of the φ̃_p criterion shows that TPLHD is the best choice. Figure 11 contrasts the box plot of the φ̃_p criterion with the values found by TPLHD and the 0.1 threshold of lhsdesign. Because the distributions tend to lower values, the 0.1 threshold is a bad value for the 300×4 design and a marginal to undesired value in 6 dimensions. It is clear that TPLHD offers a better design (not to mention a faster solution).
Figure 11. Boxplots of the φ̃_p criterion between 0 and 0.5 for the four- and six-dimensional designs. φ̃_p is defined in (5), (p = 50, t = 1), and ranges from 0 to 1. Circles indicate the TPLHD.
One of the reasons for the poor performance of the TPLHD in high dimensions might be the directionality property of the TPLHD. Because of the way the points are created in the TPLHD, they tend to be stretched along one direction. Even the optimal LHD (obtained with ESEA or GA, for example) from Figure 1(b) may be considered as having points placed in a preferred direction (close to 45°). Goel et al. [14] discussed that a single criterion for generating a DOE may lead to large deteriorations in other criteria. In the proposed algorithm, the φ_p criterion is used, and the directionality of the resulting LHDs (for both the TPLHD and the designs obtained with traditional optimization) is a clear loss. There has been work on the optimization of the space-filling properties of the LHD while preserving low correlation between variables. Cioppa and Lucas [31] presented an algorithm that improves the space-filling properties of LHDs at the expense of inducing small correlations between the columns in the design matrix. However, the authors warned that the approach is computationally prohibitive. Hernandez [32] presented an extensive study on a set of methodologies to create design matrices with little or no correlation (including saturated nearly orthogonal LHDs). Franco et al. [33] discussed a radar-shaped statistic for identifying the directionality property in a DOE (especially efficient in low dimension). However, there has been little research on how to employ this statistic for improving a DOE rather than merely identifying directionality in an existing one; accordingly, the directionality statistic was not employed to improve the DOEs in this paper. Nevertheless, the focus of this research is the translational propagation algorithm as a cheap alternative to the traditional optimization of LHDs based on the φ_p criterion.
Other, more general questions arise in higher dimensions: it is not clear what the meaning of space-filling designs is, nor the relative cost of any optimization strategy for experimental designs. The sparsity of the data may make it difficult to judge whether designs given by the optimization of the Latin hypercube are in fact space-filling designs. Even if they are, considering plots such as those in Figure 10, the performance of strategies such as TPLHD may end up being very close to that of random search (such as in lhsdesign). It may happen that in higher dimensions it is not worth investing much in time-consuming optimization (a few iterations of random search may provide satisfactory results). In spite of that, the authors intend to investigate the potential of the radar-shaped statistic to improve the TPLHD properties.
5. CONCLUSIONS
A methodology for creating Latin hypercube designs (LHDs) via the translational propagation algorithm (TPLHD) is proposed. The approach is based on the idea that a simple 'seed design' with a few points can be used as a building block to construct a near-optimal LHD. The main advantage of the proposed methodology is that it requires virtually no computational time. The approach was tested on 18 different experimental design configurations with varying dimensionality and point density. Monte Carlo simulations were used to support the analysis of the algorithm's performance. For these cases it was found that:
• Based on the Monte Carlo simulations, the probability of randomly generating good designs in terms of the φ_p criterion (i.e. designs with low values of φ_p) increases with the dimensionality. This does not mean that the designs will fill the space better; it only means that designs tend to be equally competitive. This fact makes computationally cheap strategies such as random search attractive in high dimensions.
• It was found that for the low-dimensional cases (up to six variables) the TPLHD approximates the optimal LHD (using the φ_p criterion) very well. In higher dimensions (8–12 variables), the TPLHD does not approximate the optimum solution well anymore.
As the time for generating the TPLHD is very small, in small to medium dimensions it is recommended to generate several TPLHDs (using seed designs with different numbers of points) and to pick the best one according to the φ_p criterion.
APPENDIX A: MATLAB IMPLEMENTATION OF THE TRANSLATIONAL
PROPAGATION ALGORITHM
Tables AI to AIV give the MATLAB code of our implementation of the translational propagation
algorithm. For simplicity, the levels of the Latin hypercube are represented by integer values. This
means that each of the points of the created LHD can assume values from one to the number of
points. The created Latin hypercube needs to be properly scaled to the design space of interest.
Table AI. MATLAB implementation of the translational propagation algorithm. The created design ranges
from one to the number of points. The algorithm works with integer indexes for each point. ceil(x)
is the MATLAB function that rounds the elements of x up to the nearest integers. ones(m, n) is the
MATLAB function that creates an m-by-n matrix of ones. See Table AII for details about reshapeSeed.
See Table AIII for details about createTPLHD. See Table AIV for details about resizeTPLHD.
vertcat(A, B) is the MATLAB function that performs the vertical concatenation of matrices A and B.
1: function X = tplhsdesign(np, nv, seed, ns)
2: % inputs: np - number of points of the desired Latin hypercube (LH)
3: %         nv - number of variables in the LH
4: %         seed - initial seed design (points within 1 and ns)
5: %         ns - number of points in the seed design
6: % outputs: X - Latin hypercube created using the translational
7: %          propagation algorithm
8:
9: % define the size of the TPLHD to be created first
10: nd = (np/ns)^(1/nv); % number of divisions, nd
11: ndStar = ceil( nd );
12: if (ndStar > nd)
13:     nb = ndStar^nv; % it is necessary to create a bigger TPLHD
14: else
15:     nb = np/ns; % it is NOT necessary to create a bigger TPLHD
16: end
17: npStar = nb*ns; % size of the TPLHD to be created first
18:
19: % reshape seed to properly create the first design
20: seed = reshapeSeed(seed, ns, npStar, ndStar, nv);
21:
22: % create TPLHD with npStar points
23: X = createTPLHD(seed, ns, npStar, ndStar, nv);
24:
25: % resize TPLHD if necessary
26: % (resizing is needed only when npStar > np)
27: if (npStar > np)
28:     X = resizeTPLHD(X, npStar, np, nv);
29: end
30: return
Table AII. MATLAB implementation of the reshapeSeed function. ones(m, n) is the MATLAB function that creates an m-by-n matrix of ones. round(x) is the MATLAB function that rounds the elements of x to the nearest integers.
1: function seed = reshapeSeed(seed, ns, npStar, ndStar, nv)
2: % inputs: seed - initial seed design (points within 1 and ns)
3: %         ns - number of points in the seed design
4: %         npStar - number of points of the Latin hypercube (LH)
5: %         ndStar - number of divisions of the LH
6: %         nv - number of variables in the LH
7: % outputs: seed - seed design properly scaled
8:
9: if ns == 1
10:     seed = ones(1, nv); % arbitrarily put at the origin
11: else
12:     uf = ns*ones(1, nv);
13:     ut = ( (npStar/ndStar) - ndStar*(nv - 1) + 1 )*ones(1, nv);
14:     rf = uf - 1;
15:     rt = ut - 1;
16:     a = rt./rf; % affine mapping of the seed levels:
17:     b = ut - a.*uf; % new level = a*(old level) + b
18:     for c1 = 1 : ns
19:         seed(c1,:) = a.*seed(c1,:) + b;
20:     end
21:     seed = round(seed); % to make sure that the numbers are integer
22: end
23:
24: return
Table AIII. MATLAB implementation of the createTPLHD function. ones(m, n) is the MATLAB
function that creates an m-by-n matrix of ones. length(a) is the MATLAB function that returns the
number of entries in the vector a. vertcat(A, B) is the MATLAB function that does the vertical
concatenation of the matrices A and B.
1: function X = createTPLHD(seed, ns, npStar, ndStar, nv)
2: % inputs: seed - initial seed design (points within 1 and ns)
3: %         ns - number of points in the seed design
4: %         npStar - number of points of the Latin hypercube (LH)
5: %         ndStar - number of divisions of the LH
6: %         nv - number of variables in the LH
7: % outputs: X - Latin hypercube design created by the translational
8: %          propagation algorithm
9: % we warn that this function has to be properly translated to other
10: % programming languages to avoid problems with memory allocation
11:
12: X = seed;
13: d = ones(1, nv); % just for memory allocation
14: for c1 = 1 : nv % shifting one direction at a time
15:     seed = X; % update seed with the latest points added
16:     d(1 : (c1 - 1)) = ndStar^(c1 - 2);
17:     d(c1) = npStar/ndStar;
18:     d((c1 + 1) : end) = ndStar^(c1 - 1);
19:     for c2 = 2 : ndStar % fill each of the divisions
20:         ns = length(seed(:,1)); % update seed size
21:         for c3 = 1 : ns
22:             seed(c3,:) = seed(c3,:) + d;
23:         end
24:         X = vertcat(X, seed);
25:     end
26: end
27: return
Table AI gives the main function, which has as input parameters the number of points, n_p, and variables, n_v, of the desired LHD, as well as the seed design, s, and its number of points, n_s. The points in the seed design initially assume values from one to n_s. Lines 9–17 of the algorithm shown in Table AI illustrate how to take care of the size of the Latin hypercube to be created using the translational propagation algorithm. At first, a Latin hypercube observing Equations (3) and (4) is created, even if it means creating a design bigger than required. The seed design is then properly placed into the first block to be filled by the algorithm (line 19). Line 23 is the call for the function that creates the initial LHD.
Table AIV. MATLAB implementation of the resizeTPLHD function. ones(m, n) is the MATLAB function that creates an m-by-n matrix of ones. zeros(m, n) is the MATLAB function that creates an m-by-n matrix of zeros. norm(x) is the MATLAB function that computes the Euclidean vector norm, sqrt(sum(x_i^2)). min(x) is the MATLAB function that finds the smallest element in x. sort(a) is the MATLAB function that sorts the vector a in ascending order, returning the sorted and index vectors. sortrows(X, COL) is the MATLAB function that sorts the matrix X based on the columns specified in the vector COL. isequal(A, B) is the MATLAB function that returns logical 1 (TRUE) if arrays A and B are the same size and contain the same values, and logical 0 (FALSE) otherwise.
1: function X = resizeTPLHD(X, npStar, np, nv)
2: % inputs: X - initial Latin hypercube design
3: %         npStar - number of points in the initial X
4: %         np - number of points in the final X
5: %         nv - number of variables
6: % outputs: X - final X, properly shrunk
7:
8: center = npStar*ones(1,nv)/2; % center of the design space
9: % distance between each point of X and the center of the design space
10: distance = zeros(npStar, 1);
11: for c1 = 1 : npStar
12:     distance(c1) = norm( X(c1,:) - center );
13: end
14: [dummy, idx] = sort(distance);
15: X = X( idx(1:np), : ); % resize X to np points
16:
17: % re-establish the LH conditions
18: Xmin = min(X);
19: for c1 = 1 : nv
20:     % place X in the origin
21:     X = sortrows(X, c1);
22:     X(:,c1) = X(:,c1) - Xmin(c1) + 1;
23:     % eliminate empty coordinates
24:     flag = 0;
25:     while flag == 0
26:         mask = ( X(:,c1) ~= (1:np)' );
27:         flag = isequal(mask, zeros(np,1));
28:         X(:,c1) = X(:,c1) - mask; % shift levels down past the gaps
29:     end
30: end
31: return
If necessary, the design is then resized to
the initially set configuration. Table AII gives the implementation of the reshaping of the seed
design (see Section 2.3). Table AIII illustrates the body of the translational propagation algorithm.
Lines 16–18 show how to create the displacement vector that is used to translate the seed in the
hyperspace. Finally, Table AIV presents the implementation of the resizing process. Lines 8–15
show how we reduce the initially large Latin hypercube to the n_p points initially set. Lines 18–30 present
how to restore the Latin hypercube condition of one point per level to the remaining points.
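As an illustration (ours), assuming the functions of Tables AI–AIV are saved on the MATLAB path, a 12-point, 2-variable TPLHD can be created and scaled as follows:

% Example usage (ours) of the appendix functions: build a 12x2 TPLHD
% from the single-point seed of Figure 2(a)
seed = [1 1];                 % 1x2 seed design
X = tplhsdesign(12, 2, seed, 1);
% X holds integer levels 1..12; scale to the unit hypercube if needed:
Xs = (X - 1)/(12 - 1);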
APPENDIX B: NUMBER OF POINTS ALLOCATED BEFORE RESIZING THE TPLHD
As discussed in Section 2, the number of points that needs to be allocated for a given configuration of the TPLHD is dictated by Equations (3) and (4). In the most general case, a resizing process needs to be used. Thus, the total number of points initially generated dictates the computational cost of the TPLHD algorithm. Table BI shows the number of points allocated before the resizing process for each of the configurations studied in this paper. It can be observed that the more variables there are, the more points are discarded. This confirms the fact that design strategies based on domain subdivision (the translational propagation algorithm is an example) tend to have poor performance as dimensionality increases.
Table BI. Size of the TPLHD created before the resizing process for each seed size. The final
best TPLHD is shown in bold face.
Seed size
No. of variables No. of points 1 2 3 4 5
2 12 16 18 12 16 20
2 20 25 32 27 36 20
2 120 121 128 147 144 125
4 30 81 32 48 64 80
4 70 81 162 243 324 80
4 300 625 512 768 324 405
6 56 64 128 192 256 320
6 168 729 1458 192 256 320
6 560 729 1458 2187 2916 3645
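The allocation logic behind Table BI can be reproduced as follows (our sketch):

% Points allocated before resizing (Equations (3)-(4)), e.g. the
% 56-point, 6-variable configuration with a 1-point seed
np = 56; nv = 6; ns = 1;
ndStar = ceil((np/ns)^(1/nv));   % 2 divisions per dimension
npStar = ns*ndStar^nv            % 64 points, as in Table BI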
ACKNOWLEDGEMENTS
The authors thank the members at Vanderplaats Research and Development, Inc. (VR&D) for the encouragement and support. This work was initiated when the authors were employed by VR&D in various positions.
REFERENCES
1. Venkataraman S, Haftka RT. Structural optimization complexity: what has Moore’s law done for us? Structural
and Multidisciplinary Optimization 2004; 28(6):375–387.
2. Sacks J, Welch WJ, Mitchell TJ, Wynn HP. Design and analysis of computer experiments. Statistical Science
1989; 4(4):409–435.
3. Simpson TW, Peplinski JD, Koch PN, Allen JK. Meta-models for computer based engineering design: survey
and recommendations. Engineering with Computers 2001; 17(2):129–150.
4. Queipo NV, Haftka RT, Shyy W, Goel T, Vaidyanathan R, Tucker PK. Surrogate-based analysis and optimization.
Progress in Aerospace Sciences 2005; 41:1–28.
5. Simpson TW, Toropov V, Balabanov V, Viana FAC. Design and analysis of computer experiments in
multidisciplinary design optimization: a review of how far we have come—or not. Proceedings of the 12th
AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Victoria, Canada, 10–12 September 2008.
AIAA 2008-5802.
6. Kleijnen JPC. Design and Analysis of Simulation Experiments. Springer: Berlin, 2007.
7. Forrester AIJ, Keane AJ. Recent advances in surrogate-based optimization. Progress in Aerospace Sciences 2009;
45(1–3):50–79.
Copyright q 2009 John Wiley & Sons, Ltd. Int. J. Numer. Meth. Engng 2010; 82:135–156
DOI: 10.1002/nme
22. 156 F. A. C. VIANA, G. VENTER AND V. BALABANOV
8. Montgomery DC. Design and Analysis of Experiments. Wiley: New York, 2004.
9. Simpson TW, Lin DKJ, Chen W. Sampling strategies for computer experiments: design and analysis. International
Journal of Reliability and Applications 2001; 2(3):209–240.
10. Viana FAC, Haftka RT, Steffen V. Multiple surrogates: how cross-validation errors can help us to obtain the best
predictor. Structural and Multidisciplinary Optimization, DOI: 10.1007/s00158-008-0338-0.
11. Viana FAC, Haftka RT. Using multiple surrogates for metamodeling. Seventh ASMO-UK/ISSMO International
Conference on Engineering Design Optimization, Bath, U.K., 7–8 July 2008.
12. Samad A, Kim K, Goel T, Haftka RT, Shyy W. Multiple surrogate modeling for axial compressor blade shape
optimization. Journal of Propulsion and Power 2008; 24(2):302–310.
13. Giunta AA, Wojtkiewicz SF, Eldred MS. Overview of modern design of experiments methods for computational
simulations. 41st AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, AIAA-2003-0649, 6–9 January 2003.
14. Goel T, Haftka RT, Shyy W, Watson LT. Pitfalls of using a single criterion for selecting experimental designs.
International Journal for Numerical Methods in Engineering 2008; 75(2):127–155.
15. Myers RH, Montgomery DC. Response Surface Methodology. Process and Product Optimization Using Designed
Experiments. Wiley: New York, 1995.
16. McKay MD, Beckman RJ, Conover WJ. A comparison of three methods for selecting values of input variables
from a computer code. Technometrics 1979; 21:239–245.
17. Iman RL, Conover WJ. Small sample sensitivity analysis techniques for computer models, with an application
to risk assessment. Communications in Statistics, Part A. Theory and Methods 1980; 17:1749–1842.
18. Kleijnen JPC, Sanchez SM, Lucas TW, Cioppa TM. A user’s guide to the brave new world of designing simulation
experiments. INFORMS Journal on Computing 2005; 17(3):263–289.
19. Audze P, Eglajs V. New approach for planning out of experiments. Problems of Dynamics and Strengths 1977;
35:104–107. Riga, Zinatne Publishing House (in Russian).
20. Park JS. Optimal Latin-hypercube designs for computer experiments. Journal of Statistical Planning and Inference
1994; 39:95–111.
21. Morris MD, Mitchell TJ. Exploratory designs for computational experiments. Journal of Statistical Planning and
Inference 1995; 43:381–402.
22. Ye KQ, Li W, Sudjianto A. Algorithmic construction of optimal symmetric Latin hypercube designs. Journal of
Statistical Planning and Inference 2000; 90:145–159.
23. Fang KT, Ma CX, Winker P. Centered L2-discrepancy of random sampling and Latin hypercube design and
construction of uniform designs. Mathematics of Computation 2002; 71:275–296.
24. Bates SJ, Sienz J, Toropov VV. Formulation of the optimal Latin hypercube design of experiments using a
permutation genetic algorithm. Forty-Fifth AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and
Materials Conference, Palm Springs, CA, 19–22 April 2004. AIAA-2004-2011.
25. Jin R, Chen W, Sudjianto A. An efficient algorithm for constructing optimal design of computer experiments.
Journal of Statistical Planning and Inference 2005; 134:268–287.
26. Liefvendahl M, Stocki R. A study on algorithms for optimization of Latin hypercubes. Journal of Statistical
Planning and Inference 2006; 136(9):3231–3247.
27. van Dam E, Husslage B, den Hertog D, Melissen H. Maximin Latin hypercube designs in two dimensions.
Operations Research 2007; 55(1):158–169.
28. Grosso A, Jamali A, Locatelli M. Finding maximin Latin hypercube designs by iterated local search heuristics.
European Journal of Operational Research 2009; 197(2):541–547.
29. The MathWorks, Inc. MATLAB: The Language of Technical Computing, Version 7.6.0.324 (R2008a). The MathWorks, Inc., 2008.
30. Viana FAC. SURROGATES ToolBox, http://fchegury.googlepages.com, 2009.
31. Cioppa TM, Lucas TW. Efficient nearly orthogonal and space-filling Latin hypercubes. Technometrics 2007;
49(1):45–55.
32. Hernandez AS. Breaking barriers to design dimensions in nearly orthogonal Latin hypercubes. Ph.D. Thesis,
Naval Postgraduate School, Monterey, CA, U.S.A., 2008.
33. Franco J, Carraro L, Roustant O, Jourdan A. A radar-shaped statistic for testing and visualizing uniformity
properties in computer experiments. Proceedings of the Joint ENBIS-DEINDE 2007 Conference: Computer
Experiments Versus Physical Experiments, Turin, Italy, 11–13 April 2007.