As optimization (or prescriptive analytics) has grown as a tool for business decision-making, a key factor in its success has been the adoption of model-based optimization. Using this approach, an analyst’s major work is to describe a problem of interest by means of an algebraic model, while the computation of a solution is left to general-purpose, off-the-shelf software. Powerful modeling systems manage the difficulties of translating between the human modeler’s ideas and the computer software’s needs. This tutorial introduces model-based optimization and offers a guide to its effective use.
Enhancing Partition Crossover with Articulation Points Analysis (jfrchicanog)
This is the presentation of the paper "Enhancing Partition Crossover with Articulation Points Analysis", given at the ECOM track of GECCO 2018 (Kyoto). The paper received a Best Paper Award.
Linear Programming Problems with Icosikaipentagonal Fuzzy Number (ijtsrd)
The objective of this paper is to introduce a new fuzzy number with twenty-five points, called the Icosikaipentagonal fuzzy number. Fuzzy numbers admit membership functions that are not restricted to any specified form. The paper defines the Icosikaipentagonal fuzzy number and some of its arithmetic operations. Fuzzy linear programming is an active research area in optimization, and many real-world problems are modelled as fuzzy linear programming problems; a ranking function based on the Icosikaipentagonal fuzzy number is proposed to solve them. Dr. S. Paulraj | G. Tamilarasi, "Linear Programming Problems with Icosikaipentagonal Fuzzy Number", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd31357.pdf Paper URL: https://www.ijtsrd.com/mathemetics/applied-mathamatics/31357/linear-programming-problems-with-icosikaipentagonal-fuzzy-number/dr-s-paulraj
Decision Tree Algorithm Implementation Using Educational Data (ijcax)
Data mining tools offer various decision-tree-based algorithms, which classify data objects and support decision making. This study examines the decision-tree-based ID3 algorithm and its implementation on an example of student data.
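The core of ID3 is choosing the attribute with the highest information gain at each split. A minimal sketch of that criterion (the toy attribute/label data here is hypothetical, not the study's student dataset):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    # Entropy reduction from splitting on attribute index `attr`;
    # ID3 picks the attribute with the largest gain at each node.
    total = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return total - remainder
```

For example, an attribute that perfectly separates "pass" from "fail" students yields a gain of 1.0 bit on a balanced two-class sample.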
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document summarizes a study that used fuzzy finite element analysis to solve a heat transfer problem where material properties were treated as fuzzy parameters. The problem considered heat transfer through the periphery of an insulated circular iron rod. Thermal conductivity and heat transfer coefficient were represented as triangular fuzzy numbers to account for uncertainty. The temperatures at various points along the rod were calculated using analytical, finite difference, and finite element methods. The finite element method was also used to solve the problem considering the uncertain material properties as fuzzy values. Results were presented as fuzzy plots showing the temperature ranges associated with different alpha cut values. The fuzzy finite element method was able to provide solutions that closely matched the traditional finite element method while incorporating uncertainty in the material properties.
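The triangular fuzzy numbers and alpha cuts used in the study can be illustrated with a short sketch (a generic illustration of the arithmetic, not the study's finite element code; the function names are mine):

```python
def alpha_cut(tri, alpha):
    # Alpha-cut interval of a triangular fuzzy number (a, m, b): the set of
    # values whose membership is at least alpha. At alpha = 1 it collapses
    # to the peak m; at alpha = 0 it is the full support [a, b].
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def interval_add(x, y):
    # Interval arithmetic, applied level-by-level when fuzzy parameters
    # propagate through a computation such as a finite element solve.
    return (x[0] + y[0], x[1] + y[1])
```

Sweeping alpha from 0 to 1 and solving the model on each resulting interval is what produces the fuzzy temperature plots described above.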
ARCHAEOLOGICAL LAND USE CHARACTERIZATION USING MULTISPECTRAL REMOTE SENSING DATA (grssieee)
1) The document discusses aggregating parallel computing techniques and hardware/software co-design to implement high-performance remote sensing applications in real-time.
2) A methodology is proposed that applies parallelization techniques to remote sensing algorithms and maps computational tasks to super-systolic array co-processor architectures through hardware/software co-design.
3) Case studies demonstrate applying techniques like loop optimization, tiling, and space-time mapping to the matrix vector multiplication algorithm and implementing the results in FPGA and VLSI platforms.
A Neural Autoregressive Approach to Collaborative Filtering (CF-NADE) Slides (Joonyoung Yi)
Slides explaining the paper "A Neural Autoregressive Approach to Collaborative Filtering (CF-NADE)".
Abstract of the paper:
This paper proposes CF-NADE, a neural autoregressive architecture for collaborative filtering (CF) tasks, which is inspired by the Restricted Boltzmann Machine (RBM) based CF model and the Neural Autoregressive Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF tasks. Then we propose to improve the model by sharing parameters between different ratings. A factored version of CF-NADE is also proposed for better scalability. Furthermore, we take the ordinal nature of the preferences into consideration and propose an ordinal cost to optimize CF-NADE, which shows superior performance. Finally, CF-NADE can be extended to a deep model, with only moderately increased computational complexity. Experimental results show that CF-NADE with a single hidden layer beats all previous state-of-the-art methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more hidden layers can further improve the performance.
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTION (sipij)
The aim of this paper is to reformulate the global image thresholding problem as a well-founded statistical method known as change-point detection (CPD). Our proposed CPD thresholding algorithm does not assume any prior statistical distribution of background and object grey levels. Further, the method is less influenced by outliers owing to a judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared to other popular global image thresholding methods. We also propose a performance criterion for comparing thresholding algorithms that does not depend on any ground-truth image, and we use it to compare the results of the proposed algorithm with the most cited global thresholding algorithms in the literature.
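The paper's KL-based criterion is not reproduced here, but the general shape of histogram-based global thresholding (one of the popular baselines such methods are compared against) can be sketched with the classic Otsu criterion, which scans all candidate thresholds and maximizes between-class variance:

```python
def otsu_threshold(pixels, levels=256):
    # Pick the grey level that best separates background and object
    # by maximizing the between-class variance of the histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]              # background pixel count at threshold t
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

The CPD formulation replaces the between-class variance with a KL-divergence-based criterion, but the exhaustive scan over candidate change points is analogous.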
The document presents a modification to the Jaya optimization algorithm. The standard Jaya algorithm seeks guidance from only the best and worst solutions in each iteration. The modification proposes that Jaya should also seek guidance from the top and bottom 10% of solutions, in addition to the best and worst. This allows information to flow more continuously from the extremities.
The proposed algorithm is tested on the sphere function optimization problem. Initial candidate solutions are generated and ranked, and the top and bottom 10% of solutions nearest the best and worst are identified. Each candidate is then modified based on these neighboring solutions, moving toward the top 10% and away from the bottom 10%. Finally, candidates are refined using the standard Jaya equations, seeking guidance from the best and worst solutions.
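A rough sketch of the idea on the sphere function, assuming the usual Jaya update rule and greedy acceptance (the paper's exact form of the top/bottom 10% guidance may differ from this illustration):

```python
import random

def sphere(x):
    # The sphere benchmark: minimize the sum of squares.
    return sum(v * v for v in x)

def modified_jaya(dim=5, pop=20, iters=200, seed=1):
    rng = random.Random(seed)
    P = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    k = max(1, pop // 10)  # size of the top/bottom 10% groups
    for _ in range(iters):
        P.sort(key=sphere)
        # Means of the top and bottom 10% provide the extra group guidance.
        top = [sum(p[d] for p in P[:k]) / k for d in range(dim)]
        bot = [sum(p[d] for p in P[-k:]) / k for d in range(dim)]
        best, worst = P[0], P[-1]
        nxt = []
        for x in P:
            y = []
            for d in range(dim):
                r1, r2, r3, r4 = (rng.random() for _ in range(4))
                v = x[d]
                v += r1 * (top[d] - abs(v)) - r2 * (bot[d] - abs(v))     # group step
                v += r3 * (best[d] - abs(v)) - r4 * (worst[d] - abs(v))  # standard Jaya step
                y.append(v)
            nxt.append(y if sphere(y) < sphere(x) else x)  # greedy acceptance
        P = nxt
    return min(P, key=sphere)
```

Greedy acceptance makes each candidate's objective value non-increasing, so the final best solution is at least as good as the best initial one.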
Reproducibility and differential analysis with selfish (tuxette)
Selfish is a Python tool for identifying differentially interacting chromatin regions from Hi-C contact maps of two conditions with no replicates. It begins by distance-correcting the interaction frequencies. It then computes Gaussian filters over neighboring bins to capture spatial dependencies. It compares the evolution of these filters between conditions and assigns p-values assuming Gaussian differences. Selfish is faster than existing methods and shows enrichment for epigenetic markers near differential regions. However, its statistical justification could be improved as it does not model overdispersion like other methods.
This document summarizes a student project on building machine learning models to recognize handwritten digits. The project involved collecting handwritten digit data, preprocessing the data, implementing logistic regression with gradient descent, evaluating performance on test data, and achieving over 90% accuracy on the test set. The models have applications in areas like banking, postal services, and document digitization.
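The gradient descent step at the heart of such a project can be sketched for binary logistic regression (a minimal illustration; the project's preprocessing and ten-class setup are not shown, and the data here is a hypothetical one-feature toy set):

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    # Batch gradient descent on the binary cross-entropy loss.
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid prediction
            err = p - yi                     # gradient of the loss w.r.t. z
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    # Threshold the linear score at zero (probability 0.5).
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0
```

For multi-class digit recognition the same idea is extended with one-vs-rest classifiers or a softmax output.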
Slides of my doctoral thesis dissertation talk, given on 20 March 2014 at Politecnico di Milano. Title: "Computational prediction of gene functions through machine learning methods and multiple validation procedures"
This document provides an overview of digital electronics and logic gates. It begins with classifying materials as metals, semiconductors, or insulators based on their conductivity and energy bands. Common semiconductor devices like diodes, transistors, and their applications in rectifiers are described. The main logic gates - NOT, AND, OR, NAND, NOR, XOR and XNOR - are defined through their truth tables and symbols. Equivalence of different gates using NAND or NOR is also shown. Circuit analogies are provided to explain gate operations.
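The NAND-equivalences described above can be verified in a few lines: every other gate is built from NAND alone, and its truth table checked.

```python
def NAND(a, b):
    return 1 - (a & b)

# Each gate below uses only NAND, mirroring the universal-gate equivalences.
def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))
```

An analogous construction works with NOR as the universal gate.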
Goodness-of-fit tests for regression models: the functional data case (NeuroMat)
This talk considers goodness-of-fit testing for regression models with functional covariates. Although several papers on checking regression models have been published in the last two decades, the case of functional covariates is quite recent and has become of interest only in the last few years. We review the most recent advances in this area and propose a new goodness-of-fit test for the null hypothesis of a functional linear model with scalar response. Our test generalizes to the functional framework a previous test designed for the goodness-of-fit of regression models with multivariate covariates using random projections. The test statistic is easy to compute using geometrical and matrix arguments, and simple to calibrate in distribution by a wild bootstrap on the residuals. Some theoretical aspects are derived, and the finite-sample properties of the test are illustrated by a simulation study. Finally, the test is applied to real data to check the assumption of the functional linear model, and a graphical tool is introduced. Lecturer: Wenceslao González-Manteiga, Univ. de Santiago de Compostela, Spain.
The assignment problem is a classical problem in mathematics with many applications in the physical world. In this paper we implement a replacement method for solving assignment problems, with an algorithm and solution steps. We analyse a numerical example using the new method and two existing methods, and compare the optimal solutions obtained by the new method against the two current ones. The proposed method is a standardized technique that is simple to apply to assignment problems.
Why biased matrix factorization works well? (Joonyoung Yi)
The document discusses the Netflix Prize competition to improve movie recommendation accuracy, where the winning algorithm was based on biased matrix factorization. It provides background on the Netflix data and evaluation metrics used. Matrix factorization techniques like Funk SVD, probabilistic matrix factorization, and alternating minimization are introduced as important algorithms for collaborative filtering and recommendation systems.
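The biased matrix factorization at the core of the winning approach can be sketched as stochastic gradient descent on squared error with user and item biases, in the spirit of Funk SVD (a minimal illustration with a hypothetical toy rating set, not the Netflix pipeline):

```python
import random

def train_biased_mf(ratings, n_users, n_items, k=2, lr=0.01, reg=0.02,
                    epochs=500, seed=0):
    # ratings: list of (user, item, rating) triples.
    # Prediction: global mean + user bias + item bias + latent dot product.
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    bu, bi = [0.0] * n_users, [0.0] * n_items
    mu = sum(r for _, _, r in ratings) / len(ratings)
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = mu + bu[u] + bi[i] + sum(pf * qf for pf, qf in zip(P[u], Q[i]))
            e = r - pred
            bu[u] += lr * (e - reg * bu[u])        # SGD step with L2 shrinkage
            bi[i] += lr * (e - reg * bi[i])
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)
                Q[i][f] += lr * (e * pu - reg * qi)
    def predict(u, i):
        return mu + bu[u] + bi[i] + sum(pf * qf for pf, qf in zip(P[u], Q[i]))
    return predict
```

The bias terms alone explain much of the signal (some users rate high, some movies are widely liked), which is one reason the biased variant performs so well.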
Automatic Feature Subset Selection using Genetic Algorithm for Clustering (idescitation)
Feature subset selection, the process of selecting a minimal subset of relevant features, is a preprocessing technique for a wide variety of applications. Clustering high-dimensional data is a challenging task in data mining, and a reduced feature set makes the resulting patterns easier to understand, all the more so when the features are application specific. Almost all existing feature subset selection algorithms are neither automatic nor application specific. This paper attempts to find the feature subset that yields optimal clusters during clustering. The proposed Automatic Feature Subset Selection using Genetic Algorithm (AFSGA) identifies the required features automatically and reduces the computational cost of determining good clusters. The performance of AFSGA is tested on public and synthetic datasets of varying dimensionality. Experimental results show the improved efficacy of the algorithm in terms of cluster quality and computational cost.
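The genetic search over feature subsets can be sketched with bitmask chromosomes, one-point crossover, and bit-flip mutation (a generic GA skeleton; AFSGA's cluster-quality fitness function is not reproduced, so any scoring function is plugged in):

```python
import random

def ga_feature_select(fitness, n_features, pop=20, gens=40, pmut=0.1, seed=0):
    # Each chromosome is a 0/1 mask over features; `fitness` scores a mask
    # (higher is better). The top half survives unchanged (elitism).
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < pmut) for g in child]  # bit-flip mutation
            children.append(child)
        P = elite + children
    return max(P, key=fitness)
```

In AFSGA the fitness would be a cluster-validity measure computed on the selected features; here any callable works.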
IRJET- Matrix Multiplication using Strassen’s Method (IRJET Journal)
1) The document discusses matrix multiplication and Strassen's algorithm for matrix multiplication. Strassen's algorithm improves upon the normal O(n^3) matrix multiplication algorithm by reducing the time complexity to O(n^2.81).
2) It provides code to implement basic matrix multiplication in C and explains Strassen's divide and conquer approach which uses 7 matrix operations rather than 8 to multiply 2x2 matrices.
3) Adopting Strassen's algorithm can help reduce time complexity compared to the normal matrix multiplication approach.
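The seven Strassen products for the 2x2 base case mentioned in point 2 can be written out directly (shown here in Python for brevity, rather than the article's C):

```python
def strassen_2x2(A, B):
    # Strassen's seven products replace the eight multiplications of the
    # naive 2x2 scheme; recursing on block matrices gives O(n^2.81).
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p1 + p5 - p3 - p7]]
```

Expanding each entry confirms it equals the standard product, e.g. the top-left entry reduces to a*e + b*g.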
The document discusses arrays and multi-dimensional arrays in C programming. It defines arrays as collections of homogeneous data elements indexed by integers, and multi-dimensional arrays as arrays with more than one dimension where each additional dimension requires another pair of brackets. Examples are provided to demonstrate one-dimensional and two-dimensional arrays, including programs to input, output, sort and manipulate array elements. Common array operations like traversing, accessing, and declaring arrays of various types are also explained.
Domain Examination of Chaos Logistics Function As A Key Generator in Cryptogr... (IJECEIAES)
The use of the logistic function as a random number generator in a cryptography algorithm can accommodate the diffusion property of Shannon's principle. The problem is that the initialization x0 was static and unaffected by changes in the key, so the algorithm would always generate the same random sequence. This study designs three schemes that provide flexibility in the input keys when examining the domain of the logistic function. The results of each scheme show no pattern directly or inversely proportional to the value of x0 or its relative error, and successfully fulfill the butterfly-effect property. Thus, the generation of chaos numbers by the logistic function can be made key-dependent. In addition, the resulting random numbers are distributed evenly over the chaos range, strengthening the algorithm when used as a key in cryptography.
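The key-dependent seeding idea can be illustrated with a minimal sketch (the seed derivation below is my own toy construction, not one of the paper's three schemes, and this is not a cryptographically vetted generator):

```python
def logistic_keystream(key, n, r=3.99):
    # Derive the initial x0 from the key bytes so any key change perturbs
    # the chaotic orbit, then iterate x -> r*x*(1-x) in the chaotic regime.
    x = (sum(key) % 1000 + 1) / 1001.0   # key-dependent x0 in (0, 1)
    for _ in range(100):                 # burn-in amplifies key sensitivity
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize each iterate to a byte
    return out
```

Because the map is chaotic, even a one-byte key change yields a completely different keystream after the burn-in (the butterfly effect the paper verifies), while a fixed key reproduces the same stream deterministically.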
Optimization Software and Systems for Operations Research: Best Practices and... (Bob Fourer)
For a great variety of large-scale optimization problems arising in Operations Research applications, it has become practical to rely on “off-the-shelf” software, without any special programming of algorithms. As a result the use of optimization within business systems has grown dramatically in the past decade.
--
One key factor in this success has been the adoption of model-based optimization. Using this approach, an optimization problem is conceived as a particular minimization or maximization of some function of decision variables, subject to varied equations, inequalities, and other constraints on the variables. A range of computer modeling languages have evolved to allow these optimization models to be described in a concise and readable way, separately from the data that determines the size and shape of the resulting problem that may have thousands (or even millions) of variables and constraints.
--
After an optimization problem is instantiated from the model and data, it is automatically put into a standard mathematical form and solved by sophisticated general-purpose algorithmic software packages. Numerous heuristic routines embedded within these packages enable them to adapt to many problem structures without any special effort from the model builder.
--
The evolution and current state of both modeling and solving software for optimization will be presented in the main part of this talk. The presentation will then conclude with a consideration of current trends and likely future directions.
Optimization Problems Solved by Different Platforms Say Optimum Tool Box (Mat... (IRJET Journal)
The document discusses using MATLAB and Excel Solver to solve optimization problems in engineering. It provides examples of using these tools to solve linear programming problems, including a purchasing optimization problem maximizing profits. Nonlinear programming problems are also demonstrated, such as quadratic and least squares problems. The key benefits of MATLAB and Excel Solver for optimization problems are their ease of use without requiring an in-depth understanding of mathematical algorithms. They allow students and researchers to efficiently model and solve a variety of optimization problems.
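For intuition about what such solvers do on a linear program, a tiny two-variable LP can be solved by enumerating the corner points of the feasible region (a pedagogical sketch only; real work uses a solver such as MATLAB's linprog or Excel Solver, and this vertex enumeration does not scale):

```python
from itertools import combinations

def solve_lp_2var(c, A, b):
    # Maximize c[0]*x + c[1]*y subject to A[i][0]*x + A[i][1]*y <= b[i]
    # and x, y >= 0, by checking every intersection of constraint
    # boundaries (the LP optimum lies at a corner point).
    rows = A + [(1.0, 0.0), (0.0, 1.0)]   # x >= 0 and y >= 0 boundaries
    rhs = b + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a1, b1), (a2, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                       # parallel boundaries: no vertex
        x = (rhs[i] * b2 - rhs[j] * b1) / det
        y = (a1 * rhs[j] - a2 * rhs[i]) / det
        feasible = x >= -1e-9 and y >= -1e-9 and all(
            ai * x + bi * y <= ri + 1e-9 for (ai, bi), ri in zip(A, b))
        if feasible:
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best
```

For example, maximizing 3x + 2y subject to x + y <= 4 and x <= 2 attains its optimum of 10 at the corner (2, 2).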
This document discusses methods in Java programming. It defines methods as collections of statements grouped together to perform operations. Methods can take parameters and return values. When a method is invoked, an activation record is created on the call stack to store its parameters and variables. The document discusses defining methods, passing arguments, return values, and method overloading. It also provides examples of tracing method calls and return values on the call stack.
As optimization methods have been applied more broadly and effectively, a key factor in their success has been the adoption of a model-based approach. A researcher or analyst focuses on modeling the problem of interest, while the computation of a solution is left to general-purpose, off-the-shelf solvers; independent modeling languages and systems manage the difficulties of translating between the human modeler’s ideas and the computer software’s needs. This tutorial introduces model-based optimization with examples from the AMPL modeling language and various popular solvers; the presentation concludes by surveying current software, with special attention to the role of constraint programming.
AMPL Workshop, part 2: From Formulation to Deployment (Bob Fourer)
Optimization is the most widely adopted technology of Prescriptive Analytics, but also the most challenging to implement. In this two-part workshop, we show how AMPL gets you going without elaborate training, extra programmers, or premature commitments. We start by introducing model-based optimization, the key approach to streamlining the optimization modeling cycle and building successful applications today. Then we demonstrate how AMPL’s design of a language and system for model-based optimization is able to offer exceptional power of expression while maintaining ease of use. The remainder of the presentation takes a single example through successive stages of the optimization modeling life cycle: prototyping, integration, and deployment.
TBar: Revisiting Template-based Automated Program Repair (Dongsun Kim)
We revisit the performance of template-based APR to build comprehensive knowledge about the effectiveness of fix patterns, and to highlight the importance of complementary steps such as fault localization or donor code retrieval. To that end, we first investigate the literature to collect, summarize and label recurrently-used fix patterns. Based on the investigation, we build TBar, a straightforward APR tool that systematically attempts to apply these fix patterns to program bugs. We thoroughly evaluate TBar on the Defects4J benchmark. In particular, we assess the actual qualitative and quantitative diversity of fix patterns, as well as their effectiveness in yielding plausible or correct patches. Eventually, we find that, assuming a perfect fault localization, TBar correctly/plausibly fixes 74/101 bugs. Replicating a standard and practical pipeline of APR assessment, we demonstrate that TBar correctly fixes 43 bugs from Defects4J, an unprecedented performance in the literature (including all approaches, i.e., template-based, stochastic mutation-based or synthesis-based APR).
IRJET- The Machine Learning: The method of Artificial Intelligence (IRJET Journal)
This document discusses machine learning and its role in artificial intelligence. It begins with an abstract that explains machine learning is widely used in artificial intelligence to enable systems to learn and make decisions without being explicitly programmed. It then provides an introduction to machine learning, explaining it allows software to learn from data and improve predictions without being explicitly programmed. The document also discusses related work from other researchers on topics like supervised learning, unsupervised learning, and evaluating different machine learning methods. It describes problems that can occur during the learning process like bias, noise, and pattern recognition. Finally, it provides algorithms for hierarchical clustering and k-means clustering as examples of unsupervised learning methods.
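The k-means algorithm mentioned as an unsupervised example can be sketched in its plain Lloyd's-iteration form (a generic illustration, not the document's own listing):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Lloyd's algorithm: assign each point to its nearest centroid, then
    # recompute each centroid as the mean of its cluster, and repeat.
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centroids, clusters
```

Hierarchical clustering differs in that it builds a tree of merges (or splits) rather than iterating on a fixed number of centroids.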
This document discusses packing problems and how to formulate and solve them as mixed integer linear programs (MILPs) using the Gurobi optimization solver. Packing problems involve fitting objects into containers in the most dense packing possible or using the fewest number of containers. The document provides examples of packing problems and presents an MILP formulation for packing rectangles. It explains why MILPs are suitable for packing problems and how to model the constraints, variables, and objective function. Finally, it outlines the basic steps for solving a packing problem MILP using Gurobi: defining the model struct, modifying parameters, running the optimization, and printing the solution.
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTIONsipij
Aim of this paper is reformulation of global image thresholding problem as a well-founded statistical
method known as change-point detection (CPD) problem. Our proposed CPD thresholding algorithm does
not assume any prior statistical distribution of background and object grey levels. Further, this method is
less influenced by an outlier due to our judicious derivation of a robust criterion function depending on
Kullback-Leibler (KL) divergence measure. Experimental result shows efficacy of proposed method
compared to other popular methods available for global image thresholding. In this paper we also propose
a performance criterion for comparison of thresholding algorithms. This performance criteria does not
depend on any ground truth image. We have used this performance criterion to compare the results of
proposed thresholding algorithm with most cited global thresholding algorithms in the literature.
The document presents a modification to the Jaya optimization algorithm. The standard Jaya algorithm seeks guidance from only the best and worst solutions in each iteration. The modification proposes that Jaya should also seek guidance from the top and bottom 10% of solutions, in addition to the best and worst. This allows information to flow more continuously from the extremities.
The proposed algorithm is tested on the sphere function optimization problem. Initial candidate solutions are generated and ranked. The top and bottom 10% solutions near the best and worst are identified. Each candidate is then modified based on these neighboring solutions, moving toward the top 10% and away from the bottom 10%. Finally, candidates are refined using the standard Jaya equations seeking guidance from the
Reproducibility and differential analysis with selfishtuxette
Selfish is a Python tool for identifying differentially interacting chromatin regions from Hi-C contact maps of two conditions with no replicates. It begins by distance-correcting the interaction frequencies. It then computes Gaussian filters over neighboring bins to capture spatial dependencies. It compares the evolution of these filters between conditions and assigns p-values assuming Gaussian differences. Selfish is faster than existing methods and shows enrichment for epigenetic markers near differential regions. However, its statistical justification could be improved as it does not model overdispersion like other methods.
This document summarizes a student project on building machine learning models to recognize handwritten digits. The project involved collecting handwritten digit data, preprocessing the data, implementing logistic regression with gradient descent, evaluating performance on test data, and achieving over 90% accuracy on the test set. The models have applications in areas like banking, postal services, and document digitization.
Slides of my doctoral thesis dissertation talk, given on 20 March 2014 at Politecnico di Milano. Title: "Computational prediction of gene functions through machine learning methods and multiple validation procedures"
This document provides an overview of digital electronics and logic gates. It begins with classifying materials as metals, semiconductors, or insulators based on their conductivity and energy bands. Common semiconductor devices like diodes, transistors, and their applications in rectifiers are described. The main logic gates - NOT, AND, OR, NAND, NOR, XOR and XNOR - are defined through their truth tables and symbols. Equivalence of different gates using NAND or NOR is also shown. Circuit analogies are provided to explain gate operations.
Goodness–of–fit tests for regression models: the functional data caseNeuroMat
In this talk the topic of the goodness–of–fit for regression models with functional covariates is considered. Although several papers have been published in the last two decades for the checking of regression models, the case where the covariates are functional is quite recent and has became of interest in the last years. We will review the very recent advances in this area and we will propose a new goodness–of–fit test for the null hypothesis of a functional linear model with scalar response. Our test is based on a generalization to the functional framework of a previous one, designed for the goodness–of–fit of regression models with multivariate covariates using random projections. The test statistic is easy to compute using geometrical and matrix arguments, and simple to calibrate in its distribution by a wild bootstrap on the residuals. Some theoretical aspects are derived and the finite sample properties of the test are illustrated by a simulation study. Finally, the test is applied to real data for checking the assumption of the functional linear model and a graphical tool is introduced. Lecturer: Wenceslao González-Manteiga, Univ. de Santiago de Compostela, Spain.
The assignment problem is a classical problem in mathematics with many applications in the real physical world. In this paper we implement a replacement method for solving assignment problems, giving the algorithm and its solution steps. We analyse a numerical example using the new method and two existing methods, and compare the optimal solutions obtained. The proposed method is a standardized technique that is simple to apply to assignment problems.
Why biased matrix factorization works well?Joonyoung Yi
The document discusses the Netflix Prize competition to improve movie recommendation accuracy, where the winning algorithm was based on biased matrix factorization. It provides background on the Netflix data and evaluation metrics used. Matrix factorization techniques like Funk SVD, probabilistic matrix factorization, and alternating minimization are introduced as important algorithms for collaborative filtering and recommendation systems.
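The "biased" model in the title predicts each rating as a global mean plus user and item bias terms plus a latent-factor inner product. A minimal pure-Python sketch (dimensions, learning rate, and initialization are illustrative, not the talk's exact settings):

```python
import random

random.seed(0)
n_users, n_items, k = 4, 5, 2

mu = 3.5                      # global mean rating
b_u = [0.0] * n_users         # user biases
b_i = [0.0] * n_items         # item biases
P = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    # biased matrix factorization: mu + b_u + b_i + p_u . q_i
    return mu + b_u[u] + b_i[i] + sum(p * q for p, q in zip(P[u], Q[i]))

def sgd_step(u, i, r, lr=0.05, reg=0.02):
    # one stochastic-gradient update on the squared error (Funk-SVD style)
    e = r - predict(u, i)
    b_u[u] += lr * (e - reg * b_u[u])
    b_i[i] += lr * (e - reg * b_i[i])
    for f in range(k):
        pu, qi = P[u][f], Q[i][f]
        P[u][f] += lr * (e * qi - reg * pu)
        Q[i][f] += lr * (e * pu - reg * qi)
```

Training repeats sgd_step over observed ratings; the bias terms alone often capture much of the signal, which is one common answer to the title's question.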
Automatic Feature Subset Selection using Genetic Algorithm for Clusteringidescitation
Feature subset selection is the process of selecting a minimal subset of relevant features and is a preprocessing technique for a wide variety of applications. High-dimensional data clustering is a challenging task in data mining. A reduced set of features helps make patterns easier to understand, and reduced feature sets are more significant when they are application specific. Almost all existing feature subset selection algorithms are neither automatic nor application specific. This paper attempts to find the feature subset that yields optimal clusters during clustering. The proposed Automatic Feature Subset Selection using Genetic Algorithm (AFSGA) identifies the required features automatically and reduces the computational cost of determining good clusters. The performance of AFSGA is tested using public and synthetic datasets with varying dimensionality. Experimental results show the improved efficacy of the algorithm in terms of cluster quality and computational cost.
IRJET- Matrix Multiplication using Strassen’s MethodIRJET Journal
1) The document discusses matrix multiplication and Strassen's algorithm for matrix multiplication. Strassen's algorithm improves upon the normal O(n^3) matrix multiplication algorithm by reducing the time complexity to O(n^2.81).
2) It provides code to implement basic matrix multiplication in C and explains Strassen's divide and conquer approach which uses 7 matrix operations rather than 8 to multiply 2x2 matrices.
3) Adopting Strassen's algorithm can help reduce time complexity compared to the normal matrix multiplication approach.
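For the 2×2 base case, Strassen's seven products can be sketched directly (entries as scalars here; the full algorithm applies the same identities recursively to matrix blocks):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications instead of 8 (Strassen)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # recombine the seven products into the four result entries
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Recursing on n/2 × n/2 blocks with 7 recursive calls gives the O(n^2.81) = O(n^log2(7)) bound cited above.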
The document discusses arrays and multi-dimensional arrays in C programming. It defines arrays as collections of homogeneous data elements indexed by integers, and multi-dimensional arrays as arrays with more than one dimension where each additional dimension requires another pair of brackets. Examples are provided to demonstrate one-dimensional and two-dimensional arrays, including programs to input, output, sort and manipulate array elements. Common array operations like traversing, accessing, and declaring arrays of various types are also explained.
Domain Examination of Chaos Logistics Function As A Key Generator in Cryptogr...IJECEIAES
The use of logistic functions as a random number generator in a cryptography algorithm is capable of accommodating the diffusion property of the Shannon principle. The problem that occurs is that the initialization x₀ was static and was not affected by changes in the key, so the algorithm would always generate the same random numbers. This study designs three schemes that provide flexibility in the input keys when examining the value domain of the logistic function. The results of each scheme show no pattern that is directly or inversely proportional to the value of x₀ and its relative error, and they successfully fulfill the butterfly-effect property. Thus, the use of logistic functions in generating chaos numbers can be accommodated based on key inputs. In addition, the resulting random numbers are distributed evenly over the chaos range, thus reinforcing the algorithm when used as a key in cryptography.
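The logistic function referred to is the chaotic map xₖ₊₁ = r·xₖ·(1 − xₖ), which behaves chaotically for r = 4. A toy sketch of a key-dependent generator in this spirit (the key-to-x₀ mapping here is hypothetical, not one of the paper's three schemes):

```python
def logistic_stream(key: bytes, n: int, r: float = 4.0) -> bytes:
    """Derive x0 from the key, then iterate the chaotic logistic map
    x_{k+1} = r * x_k * (1 - x_k), emitting one byte per iteration."""
    # key-dependent initialization (hypothetical scheme): map key into (0, 1)
    x = (sum(key) % 251 + 1) / 253.0
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)
```

Because of the butterfly effect, even keys differing in one byte produce streams that diverge after a few iterations.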
Optimization Software and Systems for Operations Research: Best Practices and...Bob Fourer
For a great variety of large-scale optimization problems arising in Operations Research applications, it has become practical to rely on “off-the-shelf” software, without any special programming of algorithms. As a result the use of optimization within business systems has grown dramatically in the past decade.
--
One key factor in this success has been the adoption of model-based optimization. Using this approach, an optimization problem is conceived as a particular minimization or maximization of some function of decision variables, subject to varied equations, inequalities, and other constraints on the variables. A range of computer modeling languages have evolved to allow these optimization models to be described in a concise and readable way, separately from the data that determines the size and shape of the resulting problem that may have thousands (or even millions) of variables and constraints.
--
After an optimization problem is instantiated from the model and data, it is automatically put into a standard mathematical form and solved by sophisticated general-purpose algorithmic software packages. Numerous heuristic routines embedded within these packages enable them to adapt to many problem structures without any special effort from the model builder.
--
The evolution and current state of both modeling and solving software for optimization will be presented in the main part of this talk. The presentation will then conclude with a consideration of current trends and likely future directions.
Optimization Problems Solved by Different Platforms Say Optimum Tool Box (Mat...IRJET Journal
The document discusses using MATLAB and Excel Solver to solve optimization problems in engineering. It provides examples of using these tools to solve linear programming problems, including a purchasing optimization problem maximizing profits. Nonlinear programming problems are also demonstrated, such as quadratic and least squares problems. The key benefits of MATLAB and Excel Solver for optimization problems are their ease of use without requiring an in-depth understanding of mathematical algorithms. They allow students and researchers to efficiently model and solve a variety of optimization problems.
This document discusses methods in Java programming. It defines methods as collections of statements grouped together to perform operations. Methods can take parameters and return values. When a method is invoked, an activation record is created on the call stack to store its parameters and variables. The document discusses defining methods, passing arguments, return values, and method overloading. It also provides examples of tracing method calls and return values on the call stack.
As optimization methods have been applied more broadly and effectively, a key factor in their success has been the adoption of a model-based approach. A researcher or analyst focuses on modeling the problem of interest, while the computation of a solution is left to general-purpose, off-the-shelf solvers; independent modeling languages and systems manage the difficulties of translating between the human modeler’s ideas and the computer software’s needs. This tutorial introduces model-based optimization with examples from the AMPL modeling language and various popular solvers; the presentation concludes by surveying current software, with special attention to the role of constraint programming.
AMPL Workshop, part 2: From Formulation to DeploymentBob Fourer
Optimization is the most widely adopted technology of Prescriptive Analytics, but also the most challenging to implement. In this two-part workshop, we show how AMPL gets you going without elaborate training, extra programmers, or premature commitments. We start by introducing model-based optimization, the key approach to streamlining the optimization
modeling cycle and building successful applications today. Then we demonstrate how AMPL’s design of a language and system for model-based optimization is able to offer exceptional power of expression while maintaining ease of use. The remainder of the presentation takes a single example through successive stages of the optimization modeling life cycle: prototyping, integration, and deployment.
TBar: Revisiting Template-based Automated Program RepairDongsun Kim
We revisit the performance of template-based APR to build comprehensive knowledge about the effectiveness of fix patterns, and to highlight the importance of complementary steps such as fault localization or donor code retrieval. To that end, we first investigate the literature to collect, summarize and label recurrently-used fix patterns. Based on the investigation, we build TBar, a straightforward APR tool that systematically attempts to apply these fix patterns to program bugs. We thoroughly evaluate TBar on the Defects4J benchmark. In particular, we assess the actual qualitative and quantitative diversity of fix patterns, as well as their effectiveness in yielding plausible or correct patches. Eventually, we find that, assuming a perfect fault localization, TBar correctly/plausibly fixes 74/101 bugs. Replicating a standard and practical pipeline of APR assessment, we demonstrate that TBar correctly fixes 43 bugs from Defects4J, an unprecedented performance in the literature (including all approaches, i.e., template-based, stochastic mutation-based or synthesis-based APR).
IRJET- The Machine Learning: The method of Artificial IntelligenceIRJET Journal
This document discusses machine learning and its role in artificial intelligence. It begins with an abstract that explains machine learning is widely used in artificial intelligence to enable systems to learn and make decisions without being explicitly programmed. It then provides an introduction to machine learning, explaining it allows software to learn from data and improve predictions without being explicitly programmed. The document also discusses related work from other researchers on topics like supervised learning, unsupervised learning, and evaluating different machine learning methods. It describes problems that can occur during the learning process like bias, noise, and pattern recognition. Finally, it provides algorithms for hierarchical clustering and k-means clustering as examples of unsupervised learning methods.
This document discusses packing problems and how to formulate and solve them as mixed integer linear programs (MILPs) using the Gurobi optimization solver. Packing problems involve fitting objects into containers in the most dense packing possible or using the fewest number of containers. The document provides examples of packing problems and presents an MILP formulation for packing rectangles. It explains why MILPs are suitable for packing problems and how to model the constraints, variables, and objective function. Finally, it outlines the basic steps for solving a packing problem MILP using Gurobi: defining the model struct, modifying parameters, running the optimization, and printing the solution.
A HYBRID COA/ε-CONSTRAINT METHOD FOR SOLVING MULTI-OBJECTIVE PROBLEMSijfcstjournal
In this paper, a hybrid method for solving multi-objective problems is provided. The proposed method combines the ε-constraint method and the Cuckoo optimization algorithm. First the multi-objective problem is transformed into a single-objective problem using the ε-constraint method; then the Cuckoo optimization algorithm optimizes the problem in each task. Finally the optimized Pareto frontier is drawn. The advantages of this method are its high accuracy and the dispersion of its Pareto frontier. To test the efficiency of the suggested method, a number of test problems were solved with it. Comparing its results with those of other similar methods shows that the Cuckoo algorithm is well suited to solving multi-objective problems.
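The ε-constraint transformation described above can be illustrated without the cuckoo-search component: minimize f₁ subject to f₂ ≤ ε, and sweep ε to trace the Pareto frontier. A toy sketch over a finite candidate grid (the bi-objective problem and the grid are illustrative, not the paper's setup):

```python
def pareto_by_eps_constraint(f1, f2, candidates, eps_values):
    """For each eps, minimize f1 over candidates satisfying f2(x) <= eps."""
    frontier = []
    for eps in eps_values:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            frontier.append((f1(best), f2(best)))
    return sorted(set(frontier))

# toy bi-objective: f1(x) = x^2 vs f2(x) = (x - 2)^2 on a coarse grid
xs = [i / 10 for i in range(0, 21)]
front = pareto_by_eps_constraint(lambda x: x * x,
                                 lambda x: (x - 2) ** 2,
                                 xs, [0.0, 1.0, 2.0, 4.0])
```

In the hybrid method, the inner minimization for each ε is carried out by the Cuckoo optimization algorithm instead of this exhaustive search.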
The document discusses problem solving approaches and techniques in operations research. It defines operations research as using quantitative methods to assist decision-makers in designing, analyzing, and improving systems to make better decisions. The scientific approach involves studying differences between past and present cases while considering new environmental factors. Some quantitative techniques mentioned include break-even point analysis, financial analysis, and decision theory. The document also provides examples of linear programming models and their components.
Selecting Best Tractor Ranking Wise by Software using MADM(Multiple –Attribut...IRJET Journal
This document discusses using a multi-attribute decision making (MADM) technique called TOPSIS to select the best tractor among alternatives. It begins with an introduction to MADM problems and the TOPSIS method. The TOPSIS method involves normalizing a decision matrix, determining ideal and negative ideal alternatives, calculating distances from each alternative to the ideals, and ranking alternatives based on relative closeness. The document then applies this process to select the best tractor among 6 models, using attributes like price, power, fuel efficiency, and maintenance cost to evaluate the alternatives. It aims to help buyers select the tractor best suited to their application needs based on technical specifications.
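The TOPSIS steps listed above can be sketched compactly (weights, attribute values, and the vector-normalization choice are illustrative):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS: normalize, weight, find the ideal and
    negative-ideal alternatives, then score by relative closeness."""
    m, n = len(matrix), len(matrix[0])
    # vector normalization of each attribute column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal: best value per column; negative ideal: worst value per column
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_pos = math.dist(row, ideal)   # distance to the ideal alternative
        d_neg = math.dist(row, worst)   # distance to the negative ideal
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores
```

For the tractor selection, each row is one model, `benefit` marks attributes like power and fuel efficiency as True and cost attributes like price and maintenance as False, and the highest score gives the best-ranked tractor.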
AMPL Workshop, part 1: Model-Based Optimization, Plain and SimpleBob Fourer
Optimization is the most widely adopted technology of Prescriptive Analytics, but also the most challenging to implement. In this two-part workshop, we show how AMPL gets you going without elaborate training, extra programmers, or premature commitments. We start by introducing model-based optimization, the key approach to streamlining the optimization modeling cycle and building successful applications today. Then we demonstrate how AMPL’s design of a language and system for model-based optimization is able to offer exceptional power of expression while maintaining ease of use. The remainder of the presentation takes a single example through successive stages of the optimization modeling life cycle: prototyping, integration, and deployment.
Stock market analysis using supervised machine learningPriyanshu Gandhi
This document summarizes a paper on using machine learning algorithms to predict stock prices. It discusses using open source libraries to build prediction models from historical stock data, including attributes like open, high, low, close prices and volume. Linear regression is used to identify relationships between attributes and predict future prices. The model is trained and tested on preprocessed data, and accuracy is evaluated using metrics like R^2 and RMSE. Common mistakes like data leakage and overfitting are also discussed.
This document discusses generative adversarial networks (GANs) and their applications. It begins with an overview of generative models including variational autoencoders and GANs. GANs use two neural networks, a generator and discriminator, that compete against each other in a game theoretic framework. The generator learns to generate fake samples to fool the discriminator, while the discriminator learns to distinguish real and fake samples. Applications discussed include image-to-image translation using conditional GANs to map images from one domain to another, and text-to-image translation using GANs to generate images from text descriptions.
Visualizing and Forecasting Stocks Using Machine LearningIRJET Journal
This document discusses using machine learning techniques like regression and LSTM models to predict stock market returns. It first provides background on the challenges of predicting the stock market due to its unpredictable nature. It then describes obtaining stock price data from Yahoo Finance to use as the dataset. The document outlines using regression analysis to build a relationship between stock prices and time and using LSTM due to its ability to learn from sequence data. It then reviews related work applying machine learning like neural networks and genetic algorithms to optimize stock prediction. The methodology section provides more detail on preprocessing the dataset and using regression and LSTM models to make predictions and compare results.
This document presents the solutions to 4 optimization problems related to software engineering. The problems involve finding the optimal dimensions of boxes, cylindrical structures, and 3D models to minimize costs or pixel usage. Each problem is solved using calculus techniques like taking derivatives and finding critical points to determine maximum or minimum values. The conclusions emphasize that these optimization problems are direct applications of calculus, computing extremes of functions subject to conditions.
This document provides an overview of algorithms and recursion from a lecture. It discusses performance analysis using Big O notation. Common time complexities like O(1), O(n), O(n^2) are introduced. The document defines an algorithm as a set of well-defined steps to solve a problem and categorizes algorithms as recursive vs iterative, logical, serial/parallel/distributed, deterministic/non-deterministic, exact/approximate, and quantum. Examples of recursive algorithms like factorials, greatest common divisor, and the Fibonacci sequence are presented along with their recursive definitions and code implementations.
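The recursive definitions mentioned (factorial, greatest common divisor, Fibonacci) can be written directly; a short sketch:

```python
def factorial(n):
    # n! = n * (n-1)!, with base case 0! = 1
    return 1 if n == 0 else n * factorial(n - 1)

def gcd(a, b):
    # Euclid: gcd(a, b) = gcd(b, a mod b), with base case gcd(a, 0) = a
    return a if b == 0 else gcd(b, a % b)

def fib(n, memo={0: 0, 1: 1}):
    # naive double recursion is exponential; memoization makes it linear
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]
```

The memoized Fibonacci illustrates the Big O point: the naive recursion is O(2^n), while caching subresults brings it down to O(n).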
Model-Based Optimization for Effective and Reliable Decision-MakingBob Fourer
Optimization originated as an advanced mathematical technique, but it has become an accessible and widely used decision-making tool. A key factor in the spread of successful optimization applications has been the adoption of a model-based approach: A domain expert or operations analyst focuses on modeling the problem of interest, while the computation of a solution is left to general-purpose, off-the-shelf solvers; powerful yet intuitive modeling software manages the difficulties of translating between the human modeler’s formulation and the solver software’s needs. This talk introduces model-based optimization by contrasting it to a method-based approach that relies on customized implementation of rules and algorithms. Model-based implementations are illustrated using the AMPL modeling language and popular solvers. The presentation concludes by surveying the variety of modeling languages and solvers available for model-based optimization today.
Cloud services promising “optimization on demand” have become steadily more numerous and more powerful in recent years. This presentation offers a user-oriented survey, with a focus on the role of the AMPL modeling language in streamlining development and deployment of optimization models using online tools. Starting with the pioneering free NEOS Server, we compare more recent commercial offerings such as Gurobi Instant Cloud and the Satalia SolveEngine; the benefits of these solver services are enhanced through their use with AMPL’s algebraic modeling facilities. We conclude by introducing QuanDec, which turns AMPL models into web-based collaborative decision-making tools.
Identifying Good Near-Optimal Formulations for Hard Mixed-Integer ProgramsBob Fourer
When an exact mixed-integer programming formulation resists attempts at solution, sometimes much better results can be achieved by “cheating” a bit on the formulation. Typically, a judicious choice of reformulation, restriction, or decomposition serves to make the problem easier, in a way not guaranteed to preserve the solution's optimality but highly unlikely to make much of a difference given the model and data of interest. This tutorial illustrates such an approach through a series of case studies. All rely on trial and error, a flexible modeling language, and a good general-purpose solver, and each is seen to be founded on one or two simple ideas that have the potential to be more broadly applied.
The Ascendance of the Dual Simplex Method: A Geometric ViewBob Fourer
First described in the 1950s, the dual simplex evolved in the 1990s to become the method most often used in solving linear programs. Factors in the ascendance of the dual simplex method include Don Goldfarb’s proposal for a steepest-edge variant, and an improved understanding of the bounded-variable extension. The ways that these come together to produce a highly effective algorithm are still not widely appreciated, however. This talk employs a geometric approach to the dual simplex method to provide a unified and straightforward description of the factors that work in its favor.
The Evolution of Computationally Practical Linear ProgrammingBob Fourer
Every student of Operations Research learns that linear programs can be solved efficiently. The fascinating story of how they came to be solved efficiently is little known, however.
---
We begin with an account of how a practicable primal simplex method for linear programming was first successfully implemented on primitive computers, by Dantzig, Orchard-Hays and others at the RAND Corporation in the early 1950s. Combining mathematics and engineering, success emerged from a series of innovative reorganizations of the computational steps. One of the key ideas arose unexpectedly as the by-product of an ill-advised plan for avoiding degenerate cycling.
---
At around the same time, early experts in linear programming developed a compact “tableau” scheme for carrying out the computations as a series of “pivots” on a matrix. We consider how the tableau form, impractical for computer implementations, was adopted by almost all textbooks, while the computationally practical form of the simplex method remained obscure.
---
The remainder of the presentation considers how computational practice in linear programming has continued to evolve to the present day. In fact the primal simplex method is now rarely used. We consider the new interior-point methods, which can outperform the simplex method particularly on very large problems; they also went through a period of innovations and false steps before being made practical. Also we consider how the dual simplex method, long considered only a curiosity, has become the preferred method of linear programming solvers today.
Developing Optimization Applications Quickly and Effectively with Algebraic M...Bob Fourer
Can you negotiate the complexities of the optimization modeling lifecycle, to deliver a working application before the funding runs out or the problem owner loses interest? Algebraic languages streamline the key steps of model formulation, testing, and revision, while offering powerful facilities for incorporating models into larger systems and deploying them to users. This presentation introduces algebraic modeling for optimization by carrying a single illustrative example through multiple contexts, from interactively evolved formulations to scripted iterative schemes to embedded applications. We feature the new Python API for AMPL, and the QuanDec system for creating web-based collaborative applications from AMPL models.
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...sameer shah
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Founder.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
Analysis insight about a Flyball dog competition team's performanceroli9797
Insight of my analysis about a Flyball dog competition team's last year performance. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data LakeWalaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
Learn SQL from basic queries to Advance queriesmanishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
Global Situational Awareness of A.I. and where its headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
1. Model-Based Optimization
INFORMS International 2018, Taipei — 18 June 2018
Model-Based Optimization
Best Practices and Current Trends
Robert Fourer
Northwestern University, AMPL Optimization Inc.
4er@northwestern.edu, 4er@ampl.com
INFORMS International Conference, Taipei, Taiwan
Tutorial Session MB06, 18 June 2018, 11:00-12:30
4. Model-Based Optimization
In general terms,
Given a function of some decision variables
Choose values of the variables to
make the function as large or as small as possible
Subject to restrictions on the values of the variables
In practice,
A paradigm for a very broad variety of problems
A successful approach for finding solutions
Mathematical Optimization
5. Model-Based Optimization
Method-based vs. model-based approaches
Example 1: Unconstrained optimization
Example 2: Balanced assignment
Example 3: Linear regression
Modeling languages for model-based optimization
Executable languages
Declarative languages
Solvers for model-based optimization
“Linear” solvers
“Nonlinear” solvers
Other solver types
Outline: Model-Based Optimization
6. Model-Based Optimization
An example from calculus
Min/Max 𝑓(𝑥₁, . . . , 𝑥ₙ)
. . . where 𝑓 is a smooth (differentiable) function
Approach #1
Form ∇𝑓(𝑥₁, . . . , 𝑥ₙ) = 0
Find an expression for the solution to these equations
Approach #2
Choose a starting point 𝐱⁰ = (𝑥₁⁰, . . . , 𝑥ₙ⁰)
Iterate 𝐱ᵏ⁺¹ = 𝐱ᵏ + 𝐝ᵏ, where ∇²𝑓(𝐱ᵏ) · 𝐝ᵏ = −∇𝑓(𝐱ᵏ)
. . . until the iterates converge
What makes these approaches different?
Example #1: Unconstrained Optimization
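Approach #2 on this slide is Newton's method. A minimal sketch for one variable, where the linear system ∇²f(x)·d = −∇f(x) reduces to a division (the multivariate case replaces it with a linear solve against the Hessian):

```python
def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Iterate x <- x + d with f''(x) * d = -f'(x) until steps are tiny."""
    x = x0
    for _ in range(max_iter):
        d = -df(x) / d2f(x)
        x += d
        if abs(d) < tol:
            break
    return x

# minimize f(x) = (x - 3)^2 + 1:  f'(x) = 2(x - 3),  f''(x) = 2
xmin = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0)
```

The same iteration applies to any smooth f; only the supplied derivative functions change, which is exactly the model-oriented point of Approach #2.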
7. Model-Based Optimization
Approach #1: Method-oriented
Finding an expression that solves ∇f(x_1, . . . , x_n) = 0
. . . requires a different “method” for each new form of 𝑓
Approach #2: Model-oriented
Choosing 𝑓 to model your problem
. . . the same iteration procedure applies to any 𝑓 and 𝐱
. . . can be implemented by general, off-the-shelf software
Am I leaving something out?
To solve ∇²f(x) · d = −∇f(x),
you need to form ∇f and ∇²f for the given f
Where You Put the Most Effort
What makes these approaches different?
Modeling
Describing f as a function of the variables
Evaluation
Computing f(x) from the description
Computing ∇f(x) and ∇²f(x) by automatic differentiation
Solution
Applying the iterative algorithm
Computing the iterates
Testing for convergence
Send to an off-the-shelf solver?
Choice of excellent smooth constrained nonlinear solvers
Differentiable unconstrained optimization is a special case
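The gradient evaluations just mentioned need not be coded by hand; forward-mode automatic differentiation can be sketched with dual numbers. This is a toy illustration of the idea only — not the AD machinery that AMPL or any solver actually uses — and the example function is invented:

```python
class Dual:
    """Dual number a + b*eps (eps^2 = 0): carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(float(o))
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def value_and_deriv(f, x):
    """Evaluate f(x) and f'(x) in a single pass by seeding der = 1."""
    out = f(Dual(x, 1.0))
    return out.val, out.der

# Invented example: f(x) = x*(x - 2) + 3, so f'(x) = 2x - 2
v, d = value_and_deriv(lambda x: x * (x - 2) + 3, 5.0)
```

The same overloading trick extends to multiple variables and second derivatives, which is why the modeler only ever has to write down f itself.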
No . . . Software Can Handle Everything
Am I leaving something out?
Given
m = number of locally optimal points
n = number of variables
and
a_ij for each i = 1, . . . , m and j = 1, . . . , n
c_i for each i = 1, . . . , m
Determine
x_j for each j = 1, . . . , n
to maximize
∑_{i=1..m} 1 / (c_i + ∑_{j=1..n} (x_j − a_ij)²)
Mathematical Formulation
Modeling Language Formulation
Symbolic model (AMPL)
param m integer > 0;
param n integer > 0;
param a {1..m, 1..n};
param c {1..m};
var x {1..n};
maximize objective:
sum {i in 1..m} 1 / (c[i] + sum {j in 1..n} (x[j] - a[i,j])^2);
∑_{i=1..m} 1 / (c_i + ∑_{j=1..n} (x_j − a_ij)²)
Modeling Language Data
Explicit data (independent of model)
param m := 5 ;
param n := 4 ;
param a: 1 2 3 4 :=
1 4 4 4 4
2 1 1 1 1
3 8 8 8 8
4 6 6 6 6
5 3 7 3 7 ;
param c :=
1 0.1
2 0.2
3 0.2
4 0.4
5 0.4 ;
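As a sanity check, the objective — an inverted Shekel-type function — can be evaluated directly in Python at the two solutions reported on the following slides, roughly (1,1,1,1) and (4,4,4,4). Only the data values shown above are used:

```python
# param a and param c exactly as in the data file above
a = [[4, 4, 4, 4],
     [1, 1, 1, 1],
     [8, 8, 8, 8],
     [6, 6, 6, 6],
     [3, 7, 3, 7]]
c = [0.1, 0.2, 0.2, 0.4, 0.4]

def objective(x):
    """sum {i in 1..m} 1 / (c[i] + sum {j in 1..n} (x[j] - a[i,j])^2)"""
    return sum(1.0 / (ci + sum((xj - aij) ** 2 for xj, aij in zip(x, ai)))
               for ai, ci in zip(a, c))

local_opt = objective([1, 1, 1, 1])   # about 5.055, the first Knitro solution
global_opt = objective([4, 4, 4, 4])  # about 10.153, found with multistart
```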
Modeling Language Solution
Model + data = problem instance to be solved
ampl: model shekelEX.mod;
ampl: data shekelEX.dat;
ampl: option solver knitro;
ampl: solve;
Knitro 10.3.0: Locally optimal solution.
objective 5.055197729; feasibility error 0
6 iterations; 9 function evaluations
ampl: display x;
x [*] :=
1 1.00013
2 1.00016
3 1.00013
4 1.00016
;
Modeling Language Solution
. . . again with multistart option
ampl: model shekelEX.mod;
ampl: data shekelEX.dat;
ampl: option solver knitro;
ampl: option knitro_options 'ms_enable=1 ms_maxsolves=100';
ampl: solve;
Knitro 10.3.0: Locally optimal solution.
objective 10.15319968; feasibility error 0
43 iterations; 268 function evaluations
ampl: display x;
x [*] :=
1 4.00004
2 4.00013
3 4.00004
4 4.00013
;
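Multistart simply runs a local solver from many starting points and keeps the best result. The idea can be mimicked with any crude local search; the coordinate-search routine and starting points below are illustrative assumptions, and Knitro's multistart is far more sophisticated:

```python
# Shekel-type objective from the example above
a = [[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8], [6, 6, 6, 6], [3, 7, 3, 7]]
c = [0.1, 0.2, 0.2, 0.4, 0.4]

def f(x):
    return sum(1.0 / (ci + sum((xj - aij) ** 2 for xj, aij in zip(x, ai)))
               for ai, ci in zip(a, c))

def coordinate_search(start, delta0=1.0, delta_min=1e-6):
    """Crude derivative-free local maximizer: try +/-delta moves per
    coordinate; halve delta whenever no move improves the objective."""
    x, delta = list(start), delta0
    while delta > delta_min:
        improved = False
        for j in range(len(x)):
            for step in (delta, -delta):
                trial = list(x)
                trial[j] += step
                if f(trial) > f(x):
                    x, improved = trial, True
        if not improved:
            delta /= 2
    return x

starts = [[0, 0, 0, 0], [5, 5, 5, 5], [9, 9, 9, 9]]   # illustrative start points
best = max((coordinate_search(s) for s in starts), key=f)
```

Different starts land in different basins; keeping the best of the local optima is all "multistart" means here.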
Solution (cont’d)
. . . again with a “global” solver
ampl: model shekelEX.mod;
ampl: data shekelEX.dat;
ampl: option solver baron;
ampl: solve;
BARON 17.10.13 (2017.10.13):
43 iterations, optimal within tolerances.
Objective 10.15319968
ampl: display x;
x [*] :=
1 4.00004
2 4.00013
3 4.00004
4 4.00013
;
Example 1b: Protein Folding
Objective
For a given protein molecule,
find the configuration that has lowest energy
Sum of 6 energy expressions defined in terms of variables
Variables
Decision variables: positions of atoms
Variables defined in terms of other variables: 22 kinds
Data
8 index sets
30 collections of parameters, indexed over various sets
Modeling Language Formulation
Problem variables & some defined variables
var x {i in Atoms, j in D3} := x0[i,j];
var aax {i in Angles, j in D3} = x[it[i],j] - x[jt[i],j];
var abx {i in Angles, j in D3} = x[kt[i],j] - x[jt[i],j];
var a_axnorm {i in Angles} = sqrt(sum{j in D3} aax[i,j]^2);
var a_abdot {i in Angles} = sum {j in D3} aax[i,j]*abx[i,j];
.......
var cosphi {i in Torsions} = (sum{j in D3} ax[i,j]*bx[i,j])
/ sqrt(sum{j in D3} ax[i,j]^2)
/ sqrt(sum{j in D3} bx[i,j]^2);
var term {i in Torsions} = if np[i] == 1 then cosphi[i]
else if np[i] == 2 then 2*cosphi[i]^2 - 1
else 4*cosphi[i]^3 - 3*cosphi[i];
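The term variable exploits the multiple-angle identities cos 2φ = 2 cos²φ − 1 and cos 3φ = 4 cos³φ − 3 cos φ, so cos(np·φ) is built from cosphi alone. A quick numerical check of those identities (standalone Python, not part of the model):

```python
import math

def term(cosphi, np_i):
    # Same case analysis as the AMPL defined variable `term`
    if np_i == 1:
        return cosphi
    elif np_i == 2:
        return 2 * cosphi ** 2 - 1            # cos(2*phi)
    else:
        return 4 * cosphi ** 3 - 3 * cosphi   # cos(3*phi)

phi = 0.7   # arbitrary test angle
errors = [abs(term(math.cos(phi), n) - math.cos(n * phi)) for n in (1, 2, 3)]
```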
Modeling Language Formulation
Some more defined variables
var rinv14{i in Pairs14} =
1. / sqrt( sum{j in D3} (x[i14[i],j] - x[j14[i],j])^2 );
var r614{i in Pairs14} =
((sigma[i14[i]] + sigma[j14[i]]) * rinv14[i]) ^ 6;
var rinv{i in Pairs} =
1. / sqrt( sum{j in D3} (x[inb[i],j] - x[jnb[i],j])^2 );
var r6{i in Pairs} =
((sigma[inb[i]] + sigma[jnb[i]]) * rinv[i]) ^ 6;
Modeling Language Formulation
Components of total energy
var bond_energy = sum {i in Bonds} fcb[i] *
(sqrt( sum {j in D3} (x[ib[i],j] - x[jb[i],j])^2 ) - b0[i] ) ^ 2;
var angle_energy = sum {i in Angles} fct[i] *
(atan2 (sqrt( sum{j in D3}
(abx[i,j]*a_axnorm[i] - a_abdot[i]/a_axnorm[i]*aax[i,j])^2),
a_abdot[i] ) - t0[i]) ^ 2;
var torsion_energy =
sum {i in Torsions} fcp[i]*(1 + cos(phase[i])*term[i]);
var improper_energy =
sum {i in Improper} (fcr[i] * idi[i]^2);
Modeling Language Formulation
Components of total energy (cont’d)
var pair14_energy =
sum {i in Pairs14} ( 332.1667*q[i14[i]]*q[j14[i]]*rinv14[i]*0.5
+ sqrt(eps[i14[i]]*eps[j14[i]])*(r614[i]^2 - 2*r614[i]) );
var pair_energy =
sum{i in Pairs} ( 332.1667*q[inb[i]]*q[jnb[i]]*rinv[i]
+ sqrt(eps[inb[i]]*eps[jnb[i]])*(r6[i]^2 - 2*r6[i]) );
minimize energy:
bond_energy + angle_energy + torsion_energy +
improper_energy + pair14_energy + pair_energy;
Solution
Local optimum from Knitro run
ampl: model pfold.mod;
ampl: data pfold3.dat;
ampl: option solver knitro;
ampl: option show_stats 1;
ampl: solve;
Substitution eliminates 762 variables.
Adjusted problem:
66 variables, all nonlinear
0 constraints
1 nonlinear objective; 66 nonzeros.
Knitro 10.3.0: Locally optimal solution.
objective -32.38835099; feasibility error 0
13 iterations; 20 function evaluations
Solution
Local optimum from Knitro multistart run
ampl: model pfold.mod;
ampl: data pfold3.dat;
ampl: option solver knitro;
ampl: option knitro_options 'ms_enable=1 ms_maxsolves=100';
ampl: solve;
Knitro 10.3.0: ms_enable=1
ms_maxsolves=100
Knitro 10.3.0: Locally optimal solution.
objective -32.39148536; feasibility error 0
3349 iterations; 4968 function evaluations
Solution
Details from Knitro run
Number of nonzeros in Hessian: 2211
Iter Objective FeasError OptError ||Step|| CGits
0 -2.777135e+001 0.000e+000
1 -2.874955e+001 0.000e+000 1.034e+001 5.078e-001 142
2 -2.890054e+001 0.000e+000 1.361e+001 1.441e+000 0
...
11 -3.238833e+001 0.000e+000 6.225e-002 9.557e-002 0
12 -3.238835e+001 0.000e+000 8.104e-004 3.341e-003 0
13 -3.238835e+001 0.000e+000 8.645e-009 1.390e-005 0
# of function evaluations = 20
# of gradient evaluations = 14
# of Hessian evaluations = 13
Total program time (secs) = 0.022
Time spent in evaluations (secs) = 0.009
Example #2: Balanced Assignment
Motivation
a meeting of employees from around the world
Given
several employee categories
(title, location, department, male/female)
a specified number of project groups
Assign
each employee to a project group
So that
the groups have about the same size
the groups are as “diverse” as possible with respect to all categories
Method-Based Approach
Define an algorithm to build a balanced assignment
Start with all groups empty
Make a list of people (employees)
For each person in the list:
Add to the group whose resulting “sameness” will be least
Balanced Assignment
Initialize all groups G = { }
Repeat for each person p
sMin = Infinity
Repeat for each group G
s = total "sameness" in G ∪ {p}
if s < sMin then
sMin = s
GMin = G
Assign person p to group GMin
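The greedy procedure above translates directly into code; a minimal Python version follows. The six people, their category types, and the choice of two groups are all invented toy data for illustration:

```python
from itertools import combinations

# Invented toy data: each person's type in four categories
# (title, location, department, m/f)
people = {
    'Ann': ('mgr', 'NYC', 'ops', 'F'),
    'Bob': ('mgr', 'NYC', 'fin', 'M'),
    'Cai': ('eng', 'TPE', 'ops', 'M'),
    'Dee': ('eng', 'TPE', 'fin', 'F'),
    'Eli': ('mgr', 'TPE', 'ops', 'M'),
    'Fay': ('eng', 'NYC', 'fin', 'F'),
}

def sameness(p, q):
    """Number of categories in which two people are the same."""
    return sum(s == t for s, t in zip(people[p], people[q]))

def group_sameness(group):
    return sum(sameness(p, q) for p, q in combinations(group, 2))

def greedy_assign(num_groups=2):
    groups = [[] for _ in range(num_groups)]
    for person in people:   # the (unrefined) list order drives the heuristic
        best = min(groups, key=lambda g: group_sameness(g + [person]))
        best.append(person)
    return groups

groups = greedy_assign()
total = sum(group_sameness(g) for g in groups)
```

Reordering the people dict changes the answer — which is exactly why the method-based approach goes on to need reordering and local-improvement refinements.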
Method-Based Approach (cont’d)
Define a computable concept of “sameness”
Sameness of a pair of people:
Number of categories in which they are the same
Sameness in a group:
Sum of the sameness of all pairs of people in the group
Refine the algorithm to get better results
Reorder the list of people
Locally improve the initial “greedy” solution
by swapping group members
Seek further improvement through
local search metaheuristics
What are the neighbors of an assignment?
How can two assignments combine to create a better one?
Model-Based Approach
Formulate a “minimal sameness” model
Define decision variables for assignment of people to groups
x_ij = 1 if person i is assigned to group j
x_ij = 0 otherwise
Specify valid assignments through constraints on the variables
Formulate sameness as an objective to be minimized
Total sameness = sum of the sameness of all groups
Send to an off-the-shelf solver
Choice of excellent linear-quadratic mixed-integer solvers
Zero-one optimization is a special case
Model-Based Formulation
Given
P = set of people
C = set of categories of people
t_ik = type of person i within category k, for all i ∈ P, k ∈ C
and
G = number of groups
g_min = lower limit on people in a group
g_max = upper limit on people in a group
Define
s_ii′ = |{k ∈ C: t_ik = t_i′k}|, for all i ∈ P, i′ ∈ P
sameness of persons i and i′
Determine
x_ij ∈ {0,1}: 1 if person i is assigned to group j,
0 otherwise, for all i ∈ P, j = 1, . . . , G
To minimize
∑_{j=1..G} ∑_{i ∈ P} ∑_{i′ ∈ P: i′ > i} s_ii′ x_ij x_i′j
total sameness of all pairs of people in all groups
Subject to
∑_{j=1..G} x_ij = 1, for each i ∈ P
each person must be assigned to one group
g_min ≤ ∑_{i ∈ P} x_ij ≤ g_max, for each j = 1, . . . , G
each group must be assigned an acceptable number of people
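On a tiny instance, this formulation can be verified by brute force: enumerate every assignment that satisfies the constraints and keep the one of least total sameness. The four-person data and group sizes below are invented for illustration:

```python
from itertools import combinations, product

# Invented toy data: each person's type in two categories (title, location)
types = {
    'Ann': ('mgr', 'NYC'), 'Bob': ('mgr', 'TPE'),
    'Cai': ('eng', 'NYC'), 'Dee': ('eng', 'TPE'),
}
G, g_lo, g_hi = 2, 2, 2          # two groups of exactly two people

names = list(types)

def s(p, q):
    return sum(a == b for a, b in zip(types[p], types[q]))

best_val, best_asg = None, None
for asg in product(range(G), repeat=len(names)):     # asg[i] = group of person i
    sizes = [asg.count(j) for j in range(G)]
    if not all(g_lo <= sz <= g_hi for sz in sizes):  # group-size constraints
        continue
    total = sum(s(names[i], names[k])
                for i, k in combinations(range(len(names)), 2)
                if asg[i] == asg[k])                 # sameness within groups
    if best_val is None or total < best_val:
        best_val, best_asg = total, asg
```

Enumeration is hopeless beyond toy sizes, of course; the point of the model-based approach is that the same algebraic statement scales by handing the search to a mixed-integer solver.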
Model-Based Formulation (cont’d)
Model-Based Solution
Optimize with an off-the-shelf solver
Choose among many alternatives
Linearize and send to a mixed-integer linear solver
CPLEX, Gurobi, Xpress; CBC
Send quadratic formulation to a mixed-integer solver
that automatically linearizes products involving binary variables
CPLEX, Gurobi, Xpress
Send quadratic formulation to a nonlinear solver
Mixed-integer nonlinear: Knitro, BARON
Continuous nonlinear (might come out integer): MINOS, Ipopt, . . .
Where Is the Work?
Method-based
Programming an implementation of the method
Model-based
Constructing a formulation of the model
Complications in Balanced Assignment
“Total Sameness” is problematical
Hard for client to relate to goal of diversity
Minimize “total variation” instead
Sum over all types: most minus least assigned to any group
Client has special requirements
No employee should be “isolated” within their group
No group can have exactly one woman
Every person must have a group-mate
from the same location and of equal or adjacent rank
Room capacities are variable
Different groups have different size limits
Minimize “total deviation”
Sum over all types: greatest violation of target range for any group
Method-Based (cont’d)
Revise or replace the original solution approach
Total variation is less suitable to a greedy algorithm
Re-think improvement procedures
Total variation is harder to locally improve
Client constraints are challenging to enforce
Update or re-implement the method
Even small changes to the problem can necessitate
major changes to the method and its implementation
Add variables
y−_kl = fewest people of category k, type l in any group
y+_kl = most people of category k, type l in any group
for each k ∈ C, l ∈ T_k = ⋃_{i ∈ P} {t_ik}
Add defining constraints
y−_kl ≤ ∑_{i ∈ P: t_ik = l} x_ij, for each j = 1, . . . , G; k ∈ C, l ∈ T_k
y+_kl ≥ ∑_{i ∈ P: t_ik = l} x_ij, for each j = 1, . . . , G; k ∈ C, l ∈ T_k
Minimize total variation
∑_{k ∈ C} ∑_{l ∈ T_k} (y+_kl − y−_kl)
. . . generalizes to handle varying group sizes
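For a fixed assignment, the quantity being minimized is easy to compute directly, which helps make the y−/y+ construction concrete. The toy groups below are invented for illustration:

```python
# Invented toy assignment: each group holds (title, location) type tuples
groups = [
    [('mgr', 'NYC'), ('eng', 'TPE')],
    [('mgr', 'TPE'), ('mgr', 'NYC'), ('eng', 'NYC')],
]

def total_variation(groups, num_categories=2):
    total = 0
    for k in range(num_categories):
        for l in {t[k] for g in groups for t in g}:     # every type in category k
            counts = [sum(1 for t in g if t[k] == l) for g in groups]
            total += max(counts) - min(counts)          # y+ minus y- for (k, l)
    return total

tv = total_variation(groups)
```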
Model-Based (cont’d)
Model-Based (cont’d)
To express the client requirement for women in a group, let
Q = {i ∈ P: t_i,m/f = female}
Add constraints
∑_{i ∈ Q} x_ij = 0 or ∑_{i ∈ Q} x_ij ≥ 2, for each j = 1, . . . , G
Model-Based (cont’d)
To express the client requirement for women in a group, let
Q = {i ∈ P: t_i,m/f = female}
Define logic variables
z_j ∈ {0,1}: 1 if any women assigned to group j,
0 otherwise, for all j = 1, . . . , G
Add constraints relating logic to assignment variables
2 z_j ≤ ∑_{i ∈ Q} x_ij ≤ |Q| z_j, for each j = 1, . . . , G
Model-Based (cont’d)
To express the client requirements for group-mates, let
P_l1,l2 = {i ∈ P: t_i,loc = l1, t_i,rank = l2}, for all l1 ∈ T_loc, l2 ∈ T_rank
A_l ⊆ T_rank = ranks adjacent to rank l, for all l ∈ T_rank
Add constraints
∑_{i ∈ P_l1,l2} x_ij = 0 or ∑_{i ∈ P_l1,l2} x_ij + ∑_{l ∈ A_l2} ∑_{i ∈ P_l1,l} x_ij ≥ 2,
for each l1 ∈ T_loc, l2 ∈ T_rank, j = 1, . . . , G
Model-Based (cont’d)
To express the client requirements for group-mates, let
P_l1,l2 = {i ∈ P: t_i,loc = l1, t_i,rank = l2}, for all l1 ∈ T_loc, l2 ∈ T_rank
A_l ⊆ T_rank = ranks adjacent to rank l, for all l ∈ T_rank
Define logic variables
w_l1,l2,j ∈ {0,1}: 1 if group j has anyone from location l1 of rank l2,
0 otherwise, for all l1 ∈ T_loc, l2 ∈ T_rank, j = 1, . . . , G
Add constraints relating logic to assignment variables
w_l1,l2,j ≤ ∑_{i ∈ P_l1,l2} x_ij ≤ |P_l1,l2| w_l1,l2,j,
∑_{i ∈ P_l1,l2} x_ij + ∑_{l ∈ A_l2} ∑_{i ∈ P_l1,l} x_ij ≥ 2 w_l1,l2,j,
for each l1 ∈ T_loc, l2 ∈ T_rank, j = 1, . . . , G
Given
Observed vector y
Regressor vectors x_1, x_2, . . . , x_n
Choose multipliers β_1, β_2, . . . , β_n to . . .
Approximate y by ∑_{j=1..n} x_j β_j
Explain 𝐲 convincingly
Example #3: Linear Regression
Iteratively improve choice of regressors
Repeat
Solve a minimum-error problem using a subset of the regressors
Remove and re-add regressors as suggested by results
Until remaining regressors are judged satisfactory
Results are varied
Depends on interpretation and judgement of results
“As much an art as a science”
Method-Based (traditional)
Define a “best” choice of regressors
Build a mixed-integer optimization model
Objective function minimizes error
Constraints specify desired properties of the regressor set
Optimize with an off-the-shelf solver
Many excellent choices available
Linear-quadratic mixed-integer
General nonlinear mixed-integer
Model-Based
Dimitris Bertsimas and Angela King,
“An Algorithmic Approach to Linear Regression.”
Operations Research 64 (2016) 2–16.
Given
𝑚 number of observations
𝑛 number of regressors
and
y_i = observations, for each i = 1, . . . , m
x_ij = regressor values corresponding to observation i,
for each j = 1, . . . , n and i = 1, . . . , m
Algebraic Formulation
Linear Regression
Determine
β_j = multiplier for regressor j, for each j = 1, . . . , n
z_j = 1 if β_j ≠ 0: regressor j is used,
0 if β_j = 0: regressor j is not used, for each j = 1, . . . , n
to minimize
∑_{i=1..m} (y_i − ∑_{j=1..n} x_ij β_j)² + Γ ∑_{j=1..n} |β_j|
Sum of squared errors
plus “lasso” term for regularization and robustness
Subject to
−M z_j ≤ β_j ≤ M z_j for all j = 1, . . . , n
If the jth regressor is used then z_j = 1
(where M is a reasonable bound on |β_j|)
∑_{j=1..n} z_j ≤ k
At most k regressors may be used
z_j1 = . . . = z_jm for j_1, . . . , j_m ∈ 𝒢𝒮_p, p = 1, . . . , n_𝒢𝒮
All regressors in each group sparsity set 𝒢𝒮_p
are either used or not used
z_j1 + z_j2 ≤ 1 for all (j_1, j_2) ∈ ℋ𝒞
For any pair of highly collinear regressors,
only one may be used
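The cardinality constraint on the z_j is what turns this into best-subset selection; its effect can be checked by brute force on a toy instance. The data, the restriction to k = 1, and the omission of the lasso term are all simplifying assumptions:

```python
# Invented toy data: y is built to track regressor 1 (roughly y = 2 * x1)
y = [2.1, 3.9, 6.2, 7.8, 10.1]
X = [
    [1.0, 1.0, 1.0, 1.0, 1.0],   # regressor 0: constant
    [1.0, 2.0, 3.0, 4.0, 5.0],   # regressor 1
    [5.0, 4.0, 3.0, 2.0, 1.0],   # regressor 2
]

def fit_single(xj):
    """Least-squares fit of y by beta * xj alone; returns (SSE, beta)."""
    beta = sum(p * q for p, q in zip(xj, y)) / sum(p * p for p in xj)
    sse = sum((yi - beta * xi) ** 2 for xi, yi in zip(xj, y))
    return sse, beta

fits = [fit_single(xj) for xj in X]
best_j = min(range(len(X)), key=lambda j: fits[j][0])   # the one regressor kept
```

With k = 1 there are only n candidate subsets; the mixed-integer model handles the combinatorial explosion that appears for general k.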
Subject to
∑_{j ∈ 𝒯_p} z_j ≤ 1 for all p = 1, . . . , n_𝒯
For a regressor and any of its transformations,
only one may be used
z_j = 1 for all j ∈ ℐ
Specified regressors must be used
∑_{j ∈ 𝒮_p} z_j ≤ |𝒮_p| − 1 for all p = 1, . . . , n_𝒮
Exclude previous solutions using β_j, j ∈ 𝒮_p
Problems hard to formulate for off-the-shelf solvers
“Logic” constraints
sequencing, scheduling, cutting, packing
“Black box” functions
simulations, approximations
Large, specialized applications
Routing delivery trucks nationwide
Finding shortest routes in mapping apps
Metaheuristic schemes
Evolutionary methods, simulated annealing, . . .
Artificial intelligence and related computer science
Constraint programming
Training deep neural networks
Method-Based Remains Popular for . . .
Diverse application areas
Operations research & management science
Business analytics
Engineering & science
Economics & finance
Diverse kinds of users
Anyone who took an “optimization” class
Anyone else with a technical background
Newcomers to optimization
These have in common . . .
Good algebraic formulations for off-the-shelf solvers
Users focused on modeling
Model-Based Has Become Standard for . . .
Off-the-shelf solvers keep improving
Solve the same problems faster and faster
Handle broader problem classes
Recognize special cases automatically
Optimization has become more model-based
Off-the-shelf solvers for constraint programming
Model-based metaheuristics (“Matheuristics”)
Hybrid approaches have become easier to build
Model-based APIs for solvers
APIs for algebraic modeling systems
Trends Favor Model-Based Optimization
Background
The modeling lifecycle
Matrix generators
Modeling languages
Algebraic modeling languages
Design approaches: declarative, executable
Example: AMPL vs. gurobipy
Survey of available software
Balanced assignment model in AMPL
Formulation
Solution
Modeling Languages
for Model-Based Optimization
The Optimization Modeling Lifecycle
(Cycle diagram with stages: Communicate with Client; Build Model; Generate Optimization Problem; Submit Problem to Solver; Report & Analyze Results; Prepare Data)
Goals for optimization software
Repeat the cycle quickly and reliably
Get results before client loses interest
Deploy for application
Complication: two forms of an optimization problem
Modeler’s form
Mathematical description, easy for people to work with
Solver’s form
Explicit data structure, easy for solvers to compute with
Challenge: translate between these two forms
Managing the Modeling Lifecycle
Write a program to generate the solver’s form
Read data and compute objective & constraint coefficients
Communicate with the solver via its API
Convert the solver’s solution for viewing or processing
Some attractions
Ease of embedding into larger systems
Access to advanced solver features
Serious disadvantages
Difficult environment for modeling
program does not resemble the modeler’s form
model is not separate from data
Very slow modeling cycle
hard to check the program for correctness
hard to distinguish modeling from programming errors
Matrix Generators
[1980] Over the past seven years we have perceived that the size
distribution of general structure LP problems being run on commercial LP
codes has remained about stable. . . . A 3000 constraint LP model is still
considered large and very few LP problems larger than 6000 rows are being
solved on a production basis. . . . That this distribution has not noticeably
changed despite a massive change in solution economics is unexpected.
We do not feel that the linear programming user’s most pressing need over the
next few years is for a new optimizer that runs twice as fast on a machine that
costs half as much (although this will probably happen). Cost of optimization
is just not the dominant barrier to LP model implementation. The process
required to manage the data, formulate and build the model, report on and
analyze the results costs far more, and is much more of a barrier to effective
use of LP, than the cost/performance of the optimizer.
Why aren’t more larger models being run? It is not because they could not be
useful; it is because we are not successful in using them. . . . They become
unmanageable. LP technology has reached the point where anything that can
be formulated and understood can be optimized at a relatively modest cost.
C.B. Krabek, R.J. Sjoquist and D.C. Sommer, The APEX Systems: Past and Future.
SIGMAP Bulletin 29 (April 1980) 3–23.
Describe your model
Write your symbolic model in a
computer-readable modeler’s form
Prepare data for the model
Let computer translate to & from the solver’s form
Limited drawbacks
Need to learn a new language
Incur overhead in translation
Make formulations clearer and hence easier to steal?
Great advantages
Faster modeling cycles
More reliable modeling
More maintainable applications
Modeling Languages
[1982] The aim of this system is to provide one representation of a model
which is easily understood by both humans and machines. . . . With such a
notation, the information content of the model representation is such that a
machine can not only check for algebraic correctness and completeness, but
also interface automatically with solution algorithms and report writers.
. . . a significant portion of total resources in a modeling exercise . . . is spent
on the generation, manipulation and reporting of models. It is evident that this
must be reduced greatly if models are to become effective tools in planning
and decision making.
The heart of it all is the fact that solution algorithms need a data structure
which, for all practical purposes, is impossible to comprehend by humans,
while, at the same time, meaningful problem representations for humans are
not acceptable to machines. We feel that the two translation processes
required (to and from the machine) can be identified as the main source of
difficulties and errors. GAMS is a system that is designed to eliminate these
two translation processes, thereby lifting a technical barrier to effective
modeling . . .
J. Bisschop and A. Meeraus, On the Development of a General Algebraic Modeling System in
a Strategic Planning Environment. Mathematical Programming Study 20 (1982) 1–29.
[1983] These two forms of a linear program — the modeler’s form
and the algorithm’s form — are not much alike, and yet neither can be
done without. Thus any application of linear optimization involves
translating the one form to the other. This process of translation has long
been recognized as a difficult and expensive task of practical linear
programming.
In the traditional approach to translation, the work is divided between
modeler and machine. . . .
There is also a quite different approach to translation, in which as much
work as possible is left to the machine. The central feature of this
alternative approach is a modeling language that is written by the modeler
and translated by the computer. A modeling language is not a
programming language; rather, it is a declarative language that expresses
the modeler’s form of a linear program in a notation that a computer
system can interpret.
R. Fourer, Modeling Languages Versus Matrix Generators for Linear Programming.
ACM Transactions on Mathematical Software 9 (1983) 143–183.
Designed for a model-based approach
Define data in terms of sets & parameters
Analogous to database keys & records
Define decision variables
Minimize or maximize a function of decision variables
Subject to equations or inequalities
that constrain the values of the variables
Advantages
Familiar
Powerful
Proven
Algebraic Modeling Languages
Design alternatives
Executable: object libraries for programming languages
Declarative: specialized optimization languages
Design comparison
Executable versus declarative using one simple example
Survey
Solver-independent vs. solver-specific
Proprietary vs. free
Notable specific features
Overview
Concept
Create an algebraic modeling language
inside a general-purpose programming language
Redefine operators like + and <=
to return constraint objects rather than simple values
Advantages
Ready integration with applications
Good access to advanced solver features
Disadvantages
Programming issues complicate description of the model
Modeling and programming bugs are hard to separate
Efficiency issues are more of a concern
Executable
Concept
Design a language specifically for optimization modeling
Resembles mathematical notation as much as possible
Extend to command scripts and database links
Connect to external applications via APIs
Disadvantages
Adds a system between application and solver
Does not have a full object-oriented programming framework
Advantages
Streamlines model development
Promotes validation and maintenance of models
Works with many popular programming languages
Declarative
Two representative widely used systems
Executable: gurobipy
Python modeling interface for Gurobi solver
http://gurobi.com
Declarative: AMPL
Specialized modeling language with multi-solver support
http://ampl.com
Comparison: Executable vs. Declarative
Data
gurobipy
Assign values to Python
lists and dictionaries
commodities = ['Pencils', 'Pens']
nodes = ['Detroit', 'Denver',
'Boston', 'New York', 'Seattle']
arcs, capacity = multidict({
    ('Detroit', 'Boston'): 100,
    ('Detroit', 'New York'): 80,
    ('Detroit', 'Seattle'): 120,
    ('Denver', 'Boston'): 120,
    ('Denver', 'New York'): 120,
    ('Denver', 'Seattle'): 120 })
Comparison
AMPL
Define symbolic model
sets and parameters
set COMMODITIES := Pencils Pens ;
set NODES := Detroit Denver
Boston 'New York' Seattle ;
param: ARCS: capacity:
Boston 'New York' Seattle :=
Detroit 100 80 120
Denver 120 120 120 ;
set COMMODITIES;
set NODES;
set ARCS within {NODES,NODES};
param capacity {ARCS} >= 0;
Provide data later
in a separate file
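gurobipy's multidict, used above, returns the list of keys together with one dict per value position. A rough pure-Python mimic clarifies what the call builds; this sketch is illustrative, not gurobipy's actual implementation:

```python
def multidict_like(data):
    """Split {key: scalar_or_list} into a key list plus one dict per
    value position -- mimicking the shape of gurobipy's multidict."""
    keys = list(data)
    first = data[keys[0]]
    if not isinstance(first, (list, tuple)):
        return keys, dict(data)
    return (keys, *({k: data[k][p] for k in keys} for p in range(len(first))))

arcs, capacity = multidict_like({
    ('Detroit', 'Boston'): 100,
    ('Detroit', 'New York'): 80,
    ('Detroit', 'Seattle'): 120,
    ('Denver', 'Boston'): 120,
    ('Denver', 'New York'): 120,
    ('Denver', 'Seattle'): 120})
```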
Data (cont’d)
AMPL
param cost {COMMODITIES,ARCS} >= 0;
param cost
[Pencils,*,*] (tr) Detroit Denver :=
Boston 10 40
'New York' 20 40
Seattle 60 30
[Pens,*,*] (tr) Detroit Denver :=
Boston 20 60
'New York' 20 70
Seattle 80 30 ;
Model
gurobipy
m = Model('netflow')
flow = m.addVars(commodities, arcs, obj=cost, name="flow")
m.addConstrs(
    (flow.sum('*',i,j) <= capacity[i,j] for i,j in arcs), "cap")
m.addConstrs(
    (flow.sum(h,'*',j) + inflow[h,j] == flow.sum(h,j,'*')
     for h in commodities for j in nodes), "node")
alternatives
for i,j in arcs:
    m.addConstr(sum(flow[h,i,j] for h in commodities) <= capacity[i,j],
                "cap[%s,%s]" % (i,j))
m.addConstrs(
    (quicksum(flow[h,i,j] for i,j in arcs.select('*',j)) + inflow[h,j] ==
     quicksum(flow[h,j,k] for j,k in arcs.select(j,'*'))
     for h in commodities for j in nodes), "node")
(Note on Summations)
gurobipy quicksum
m.addConstrs(
    (quicksum(flow[h,i,j] for i,j in arcs.select('*',j)) + inflow[h,j] ==
     quicksum(flow[h,j,k] for j,k in arcs.select(j,'*'))
     for h in commodities for j in nodes), "node")
Model (cont’d)
AMPL
var Flow {COMMODITIES,ARCS} >= 0;
minimize TotalCost:
sum {h in COMMODITIES, (i,j) in ARCS} cost[h,i,j] * Flow[h,i,j];
subject to Capacity {(i,j) in ARCS}:
sum {h in COMMODITIES} Flow[h,i,j] <= capacity[i,j];
subject to Conservation {h in COMMODITIES, j in NODES}:
sum {(i,j) in ARCS} Flow[h,i,j] + inflow[h,j] =
sum {(j,i) in ARCS} Flow[h,j,i];
Solution
gurobipy
m.optimize()
if m.status == GRB.Status.OPTIMAL:
    solution = m.getAttr('x', flow)
    for h in commodities:
        print('\nOptimal flows for %s:' % h)
        for i,j in arcs:
            if solution[h,i,j] > 0:
                print('%s -> %s: %g' % (i, j, solution[h,i,j]))
gurobipy
Works closely with the Gurobi solver:
callbacks during optimization, fast re-solves after problem changes
Offers convenient extended expressions:
min/max, and/or, if-then-else
AMPL
Supports all popular solvers
Extends to general nonlinear and logic expressions
Connects to nonlinear function libraries and user-defined functions
Automatically computes nonlinear function derivatives
Integration with Solvers
gurobipy
Everything can be developed in Python
Extensive data, visualization, deployment tools available
Limited modeling features also in C++, C#, Java
AMPL
Modeling language extended with loops, tests, assignments
Application programming interfaces (APIs) for calling AMPL
from C++, C#, Java, MATLAB, Python, R
Efficient methods for data interchange
Add-ons for streamlined deployment
QuanDec by Cassotis
Opalytics Cloud Platform
Integration with Applications
Solver-specific
Associated with popular commercial solvers
Executable and declarative alternatives
Solver-independent
Support multiple solvers and solver types
Mostly commercial/declarative and free/executable
Software Survey
Declarative, commercial
OPL for CPLEX (IBM)
MOSEL* for Xpress (FICO)
OPTMODEL for SAS/OR (SAS)
Executable, commercial
Concert Technology C++ for CPLEX
gurobipy for Gurobi
sasoptpy for SAS Optimization
77
Solver-Specific
Survey
Commercial, declarative modeling systems
Established lineup of solver-independent modeling systems
that represent decades of development and support
Continuing trend toward integration with
popular programming languages and data science tools
Commercial, executable modeling systems
Increasingly essential to commercial solver offerings
Becoming the recommended APIs for solvers
Free, executable modeling systems
A major current focus
of free optimization software development
Interesting new executable modeling languages
have become easier to develop than interesting new solvers
Trends
Algebraic Modeling Languages
Notable cases not detailed earlier . . .
AIMMS (solver-independent, commercial, declarative)
has extensive application development tools built in
CMPL (solver-independent, free, declarative)
has an IDE, Python and Java APIs, and remote server support
GMPL/MathProg (solver-independent, free, declarative)
is a free implementation of, essentially, a subset of AMPL
JuMP (solver-independent, free, executable) claims greater
efficiency through use of a new programming language, Julia
MOSEL for Xpress (solver-specific, commercial)
a hybrid of declarative and executable,
has recently been made free and may accept other solvers
Special Notes
Algebraic Modeling Languages
Balanced Assignment Revisited in AMPL
Sets, parameters, variables (for people)
set PEOPLE; # individuals to be assigned
set CATEG;
param type {PEOPLE,CATEG} symbolic;
# categories by which people are classified;
# type of each person in each category
param numberGrps integer > 0;
param minInGrp integer > 0;
param maxInGrp integer >= minInGrp;
# number of groups; bounds on size of groups
var Assign {i in PEOPLE, j in 1..numberGrps} binary;
# Assign[i,j] is 1 if and only if
# person i is assigned to group j
Modeling Language Formulation
Variables, constraints (for variation)
set TYPES {k in CATEG} := setof {i in PEOPLE} type[i,k];
# all types found in each category
var MinType {k in CATEG, TYPES[k]};
var MaxType {k in CATEG, TYPES[k]};
# fewest and most people of each type, over all groups
subj to MinTypeDefn {j in 1..numberGrps, k in CATEG, l in TYPES[k]}:
MinType[k,l] <= sum {i in PEOPLE: type[i,k] = l} Assign[i,j];
subj to MaxTypeDefn {j in 1..numberGrps, k in CATEG, l in TYPES[k]}:
MaxType[k,l] >= sum {i in PEOPLE: type[i,k] = l} Assign[i,j];
# values of MinType and MaxType variables
# must be consistent with values of Assign variables
$\underline{y}_{kl} \le \sum_{i \in P:\, t_{ik} = l} x_{ij} \le \overline{y}_{kl}$, for each $j = 1, \ldots, G$; $k \in C$, $l \in T_k$
Balanced Assignment
Modeling Language Formulation
Objective, constraints (for assignment)
minimize TotalVariation:
sum {k in CATEG, l in TYPES[k]} (MaxType[k,l] - MinType[k,l]);
# Total variation over all types
subj to AssignAll {i in PEOPLE}:
sum {j in 1..numberGrps} Assign[i,j] = 1;
# Each person must be assigned to one group
subj to GroupSize {j in 1..numberGrps}:
minInGrp <= sum {i in PEOPLE} Assign[i,j] <= maxInGrp;
# Each group must have an acceptable size
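The declarations above have a direct computational reading: check the group-size constraints, then take the objective as the sum over types of (most − fewest) members per group. A minimal Python sketch on a hypothetical 6-person, 2-group toy instance (the data here is invented for illustration, not drawn from the 210-person example):

```python
from collections import Counter

# Toy instance: each person's type in one category, plus a candidate
# group assignment (the role of Assign[i,j] = 1 in the AMPL model).
type_of = {
    'BIW': 'NNE', 'KRS': 'WSW', 'TLR': 'NNW',
    'VAA': 'NNW', 'JRT': 'NNE', 'AMR': 'SSE',
}
assign = {
    'BIW': 1, 'KRS': 1, 'TLR': 1,
    'VAA': 2, 'JRT': 2, 'AMR': 2,
}
number_grps, min_in_grp, max_in_grp = 2, 3, 3

# GroupSize: each group's size must lie within [minInGrp, maxInGrp]
sizes = Counter(assign.values())
assert all(min_in_grp <= sizes[j] <= max_in_grp
           for j in range(1, number_grps + 1))

# TotalVariation: sum over types of (most - fewest) members per group
counts = Counter((assign[i], t) for i, t in type_of.items())
total_variation = sum(
    max(counts[j, t] for j in range(1, number_grps + 1)) -
    min(counts[j, t] for j in range(1, number_grps + 1))
    for t in set(type_of.values()))
print(total_variation)      # 2: WSW and SSE each appear in only one group
```

The optimization model's job is to choose `assign` so that this quantity is as small as possible; the sketch only evaluates one candidate.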
$g_{\min} \le \sum_{i \in P} x_{ij} \le g_{\max}$, for each $j = 1, \ldots, G$
Balanced Assignment
Modeling Language Data
210 people
set PEOPLE :=
BIW AJH FWI IGN KWR KKI HMN SML RSR TBR
KRS CAE MPO CAR PSL BCG DJA AJT JPY HWG
TLR MRL JDS JAE TEN MKA NMA PAS DLD SCG
VAA FTR GCY OGZ SME KKA MMY API ASA JLN
JRT SJO WMS RLN WLB SGA MRE SDN HAN JSG
AMR DHY JMS AGI RHE BLE SMA BAN JAP HER
MES DHE SWS ACI RJY TWD MMA JJR MFR LHS
JAD CWU PMY CAH SJH EGR JMQ GGH MMH JWR
MJR EAZ WAD LVN DHR ABE LSR MBT AJU SAS
JRS RFS TAR DLT HJO SCR CMY GDE MSL CGS
HCN JWS RPR RCR RLS DSF MNA MSR PSY MET
DAN RVY PWS CTS KLN RDN ANV LMN FSM KWN
CWT PMO EJD AJS SBK JWB SNN PST PSZ AWN
DCN RGR CPR NHI HKA VMA DMN KRA CSN HRR
SWR LLR AVI RHA KWY MLE FJL ESO TJY WHF
TBG FEE MTH RMN WFS CEH SOL ASO MDI RGE
LVO ADS CGH RHD MBM MRH RGF PSA TTI HMG
ECA CFS MKN SBM RCG JMA EGL UJT ETN GWZ
MAI DBN HFE PSO APT JMT RJE MRZ MRK XYF
JCO PSN SCS RDL TMN CGY GMR SER RMS JEN
DWO REN DGR DET FJT RJZ MBY RSN REZ BLW ;
Balanced Assignment
Modeling Language Data
4 categories, 18 types
set CATEG := dept loc rate title ;
param type:
dept loc rate title :=
BIW NNE Peoria A Assistant
KRS WSW Springfield B Assistant
TLR NNW Peoria B Adjunct
VAA NNW Peoria A Deputy
JRT NNE Springfield A Deputy
AMR SSE Peoria A Deputy
MES NNE Peoria A Consultant
JAD NNE Peoria A Adjunct
MJR NNE Springfield A Assistant
JRS NNE Springfield A Assistant
HCN SSE Peoria A Deputy
DAN NNE Springfield A Adjunct
.......
param numberGrps := 12 ;
param minInGrp := 16 ;
param maxInGrp := 19 ;
Balanced Assignment
Modeling Language Solution
Model + data = problem instance to be solved (CPLEX)
ampl: model BalAssign.mod;
ampl: data BalAssign.dat;
ampl: option solver cplex;
ampl: option show_stats 1;
ampl: solve;
2556 variables:
2520 binary variables
36 linear variables
654 constraints, all linear; 25632 nonzeros
210 equality constraints
432 inequality constraints
12 range constraints
1 linear objective; 36 nonzeros.
CPLEX 12.8.0.0: optimal integer solution; objective 16
59597 MIP simplex iterations
387 branch-and-bound nodes 8.063 sec
Balanced Assignment
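The statistics that show_stats reports follow directly from the model dimensions: 210 people, 12 groups, and 18 types across the 4 categories. A quick arithmetic check in Python:

```python
# Instance-size check for the balanced assignment model.
people, groups, types = 210, 12, 18

binary_vars  = people * groups        # Assign[i,j]:      210 * 12 = 2520
linear_vars  = 2 * types              # MinType, MaxType:   2 * 18 =   36
equalities   = people                 # AssignAll:                    210
inequalities = 2 * groups * types     # Min/MaxTypeDefn: 2 * 12 * 18 = 432
ranges       = groups                 # GroupSize (double-bounded):    12

print(binary_vars + linear_vars)              # 2556 variables
print(equalities + inequalities + ranges)     # 654 constraints
```

Both totals match the solver's reported 2556 variables and 654 constraints, a useful sanity check that the generated instance is the intended one.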
Modeling Language Solution
Model + data = problem instance to be solved (Gurobi)
ampl: model BalAssign.mod;
ampl: data BalAssign.dat;
ampl: option solver gurobi;
ampl: option show_stats 1;
ampl: solve;
2556 variables:
2520 binary variables
36 linear variables
654 constraints, all linear; 25632 nonzeros
210 equality constraints
432 inequality constraints
12 range constraints
1 linear objective; 36 nonzeros.
Gurobi 7.5.0: optimal solution; objective 16
338028 simplex iterations
1751 branch-and-cut nodes 66.344 sec
Balanced Assignment
Modeling Language Solution
Model + data = problem instance to be solved (Xpress)
ampl: model BalAssign.mod;
ampl: data BalAssign.dat;
ampl: option solver xpress;
ampl: option show_stats 1;
ampl: solve;
2556 variables:
2520 binary variables
36 linear variables
654 constraints, all linear; 25632 nonzeros
210 equality constraints
432 inequality constraints
12 range constraints
1 linear objective; 36 nonzeros.
XPRESS 8.4(32.01.08): Global search complete
Best integer solution found 16
6447 branch and bound nodes 61.125 sec
Balanced Assignment
Modeling Language Formulation (revised)
Add bounds on variables
var MinType {k in CATEG, t in TYPES[k]}
<= floor (card {i in PEOPLE: type[i,k] = t} / numberGrps);
var MaxType {k in CATEG, t in TYPES[k]}
>= ceil (card {i in PEOPLE: type[i,k] = t} / numberGrps);
ampl: include BalAssign+.run
Presolve eliminates 72 constraints.
...
Gurobi 7.5.0: optimal solution; objective 16
2203 simplex iterations 0.203 sec
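The effect of those bounds is easy to see in plain Python: if a type has n members spread over G groups, the smallest per-group count can be at most ⌊n/G⌋ and the largest must be at least ⌈n/G⌉. A small sketch (the function name is illustrative, not part of the AMPL model):

```python
import math

def type_bounds(n_of_type, number_grps):
    """Valid bounds for one type spread over number_grps groups:
    the minimum per-group count cannot exceed the floored average,
    and the maximum cannot fall below the ceiling average."""
    return (math.floor(n_of_type / number_grps),
            math.ceil(n_of_type / number_grps))

# 53 people of one type among 12 groups: MinType <= 4, MaxType >= 5
print(type_bounds(53, 12))      # (4, 5)
```

Adding these redundant-looking bounds tightens the relaxation enough that presolve and the solver do dramatically less work, as the run above shows (2203 iterations versus 338028 without them).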
Balanced Assignment
Off-the-shelf solvers for broad problem classes
Two main problem classes
“Linear” solvers
“Nonlinear” solvers
Other useful classes
“Constraint” programming solvers
“Global” optimization solvers
Solvers for Model-Based Optimization
Algorithms
Provisions for integer-valued variables
Extensions of the technology to related problem classes
Parallel implementation on multiple processor cores
Support for . . .
Model-based optimization
Application deployment
Cloud-based services
Optimization on demand
Server clusters
Typical Enhancements
Off-the-Shelf Solvers
Linear objective and constraints
Continuous variables
Primal simplex, dual simplex, interior-point
Integer (including zero-one) variables
Branch-and-bound + feasibility heuristics + cut generation
Automatic transformations to integer:
piecewise-linear, discrete variable domains, indicator constraints
“Callbacks” to permit problem-specific algorithmic extensions
Quadratic extensions
Convex elliptic objectives and constraints
Convex conic constraints
Objective terms with products of binary variables
Transformed to linear (or convex) equivalents
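One such transformation can be made concrete. For binary x and continuous y with 0 ≤ y ≤ u, the product z = x·y is standardly replaced by the linear constraints z ≤ u·x, z ≤ y, z ≥ y − u·(1 − x), z ≥ 0. A small sketch verifying that z = x·y always satisfies this system (an illustration of the standard linearization device, not any particular solver's internal code):

```python
def linearization_holds(x, y, u, tol=1e-9):
    """For binary x and 0 <= y <= u, check that z = x*y satisfies
    the linear system z <= u*x, z <= y, z >= y - u*(1 - x), z >= 0.
    (Conversely, that system forces z = 0 when x = 0 and z = y
    when x = 1, so it encodes the product exactly.)"""
    z = x * y
    return (z <= u * x + tol and z <= y + tol
            and z >= y - u * (1 - x) - tol and z >= -tol)

# Exhaustive check over binary x and a grid of y values in [0, 10]
u = 10.0
ok = all(linearization_holds(x, y, u)
         for x in (0, 1)
         for y in (0.0, 2.5, 7.0, 10.0))
print(ok)       # True
```

Solvers apply such devices automatically, which is why a modeler can write the product directly and still receive a mixed-integer linear problem.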
“Linear” Solvers
CPLEX, Gurobi, Xpress
Dominant commercial solvers
Similar features
Supported by many modeling systems
SAS Optimization, MATLAB intlinprog
Components of widely used commercial analytics packages
SAS performance within 2x of the “big three”
MOSEK
Commercial solver strongest for conic problems
CBC, MIPCL, SCIP
Fastest noncommercial solvers
Effective alternatives for easy to moderately difficult problems
MIPCL within 7x on some benchmarks
“Linear” Solvers (cont'd)
Special abilities of certain solvers . . .
CPLEX has an option to handle nonconvex quadratic objectives
MOSEK extends to general semidefinite optimization problems
SCIP extends to certain logical constraints
Special Notes
“Linear” Solvers
Continuous variables
Smooth objective and constraint functions
Locally optimal solutions
Variety of methods
Interior-point, sequential quadratic, reduced gradient
Extension to integer variables
“Nonlinear” Solvers
Knitro
Most extensive commercial nonlinear solver
Choice of methods; automatic choice of multiple starting points
Parallel runs and parallel computations within methods
Continuous and integer variables
CONOPT, LOQO, MINOS, SNOPT
Highly regarded commercial solvers for continuous variables
Implement a variety of methods
Bonmin, Ipopt
Highly regarded free solvers
Ipopt for continuous problems via interior-point methods
Bonmin extends to integer variables
“Nonlinear” Solvers
Nonlinear + global optimality
Substantially harder than local optimality
Smooth nonlinear objective and constraint functions
Continuous and integer variables
BARON
Dominant commercial global solver
Couenne
Highly regarded noncommercial global solver
LGO
High-quality solutions, may be global
Objective and constraint functions may be nonsmooth
“Global” Solvers
Motivated by “constraint programming”
Logic directly in constraints using and, or, not operators
Range of other nonsmooth and logic operators
“All different” and other special constraints
Variables in subscripts of other variables and parameters
Encoding of logic in binary variables not necessary
Alternative solvers employed
Globally optimal solutions
Branching search like other “integer” solvers
Not originally model-based,
but trend has been toward model-based implementations
More general modeling languages needed
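The variables-in-subscripts feature (the "element" constraint of constraint programming) can be illustrated by brute force. In the hypothetical toy problem below, the cost of each decision depends on a variable index, `cost[v]`, and an all-different constraint links the two variables; a MIP formulation would have to encode both with binary variables, while a CP model states them directly:

```python
from itertools import product

# Element constraint: the objective reads cost[v], where v is itself
# a decision variable -- a variable used as a subscript.
cost = [4, 9, 2, 7]                  # parameter indexed 0..3
domains = [range(4), range(4)]       # two integer decision variables

# Minimize cost[v0] + cost[v1] subject to alldifferent(v0, v1),
# solved here by exhaustive enumeration of the (tiny) domains.
best = min(
    (cost[v0] + cost[v1], (v0, v1))
    for v0, v1 in product(*domains)
    if v0 != v1)                     # the "all different" constraint
print(best)                          # (6, (0, 2)): cheapest distinct pair
```

A constraint solver reaches the same answer by propagation and branching rather than enumeration, which is what makes these constructs practical at scale.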
“Constraint” Solvers
IBM ILOG CP
Established commercial constraint programming solver
Solver-specific modeling language support
C++ API combining model-based and method-based approaches
C++ “Concert Technology” executable modeling language
OPL declarative modeling language
Some support from solver-independent languages
Gecode, JaCoP
Notable noncommercial solvers
Limited modeling language support
“Constraint” Solvers
Solvers
18 categories, 60+ solvers
Commercial and noncommercial choices
Almost all of the most popular ones
Inputs
AMPL, GAMS, and others
MPS, LP, and other lower-level problem formats
Interfaces
Web browser
Special solver (“Kestrel”) for AMPL and GAMS
Python API
About the NEOS Server
Limits
8 hours
3 GBytes
Operation
Requests queued centrally,
distributed to various servers for solving
650,000+ requests served in the past year,
about 1800 per day or 75 per hour
17,296 requests on peak day (15 March 2018)
About the NEOS Server (cont’d)