This thesis presents a framework for solving constrained optimization problems using factor graphs and the GTSAM toolbox. It demonstrates how factor graphs combined with the active-set method can solve linear and quadratic programs with inequality constraints, and it implements a line-search sequential quadratic programming method for nonlinear equality-constrained problems. Representing optimization problems as factor graphs allows them to be solved as a series of sparse least-squares problems, providing an open-source approach to constrained optimization problems commonly found in robotics applications such as control and planning.
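The least-squares reduction mentioned above can be illustrated with a toy equality-constrained problem (the matrices below are invented for illustration, not taken from the thesis). This minimal sketch solves min ||Ax - b||^2 subject to Cx = d through its KKT linear system:

```python
import numpy as np

# Toy problem (invented for illustration): minimize ||A x - b||^2
# subject to C x = d, by solving the KKT linear system
#   [ A^T A  C^T ] [ x   ]   [ A^T b ]
#   [ C      0   ] [ lam ] = [ d     ]
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
C = np.array([[1.0, -1.0]])   # single constraint: x0 - x1 = 0
d = np.array([0.0])

n, m = A.shape[1], C.shape[0]
K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
assert np.allclose(C @ x, d)  # the constraint holds at the solution
```

An active-set method for inequality constraints repeatedly solves equality-constrained subproblems of exactly this form over the current working set of constraints.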
The report discusses two decomposition methods: Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD). Two example applications are presented: approximating the pulsating velocity profile of Poiseuille flow and approximating the flow behind a cylinder. The results are obtained using Matlab, and a Graphical User Interface (GUI) developed for performing POD and DMD and for post-processing is also presented.
This technical report describes research into solving a two-layer linear diffusion equation on a GPU. It first discusses GPU hardware and software models. It then explains how to solve a one-layer diffusion equation using LU factorization and how to parallelize this on a GPU using recursive doubling. Finally, it describes how to model diffusion across the interface between the two layers, and presents results from CPU and GPU implementations and compares them.
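For context, the serial tridiagonal LU solve (the Thomas algorithm) that a one-layer 1-D diffusion discretization produces can be sketched as follows; this is an illustrative implementation under that assumption, not the report's code, and the recursive-doubling parallel variant is not shown:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (serial LU forward sweep
    followed by back substitution). a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Invented test system: the 1-D Laplacian stencil (-1, 2, -1)
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([0.0, 0.0, 0.0, 5.0])
x = thomas(a, b, c, d)
```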
This thesis explores algorithms for optimizing soccer lineups in the context of online football management games. It analyzes the lineup optimization problem, representing it as a graph that is similar to the traveling salesman problem. The thesis then proposes and compares several algorithms, including a recursive Monte Carlo tree search algorithm and genetic algorithms using various mutation and crossover operators. Experimental results show the genetic algorithms generally outperform other approaches for this combinatorial optimization problem.
This document presents a study on a posteriori error estimation for the extended finite element method (XFEM). It discusses modeling discontinuities using XFEM for one- and two-dimensional problems. The study develops a recovery-based error estimator to measure the difference between the direct and post-processed approximations of the gradient from XFEM solutions. It aims to provide a smoothed gradient that is superior to the discontinuous gradient from the original finite element approximation. The document includes an introduction covering element approximation techniques, computational fracture mechanics, a posteriori error estimation, and an outline of the following sections on one-dimensional and two-dimensional model problems, XFEM formulations, and implementation results.
This document describes Vivarana, an interactive data visualization tool that generates Complex Event Processing (CEP) rules. It presents background research on multidimensional data visualization techniques and CEP rule generation. The solution section explains the parallel-coordinates visualization used in Vivarana and how user interactions such as brushing filter the data. It also covers the rule generation process, in which decision trees automatically create CEP queries based on user selections in the visualization. The discussion section describes challenges faced during the project, and the report concludes with future work to improve Vivarana's effectiveness.
In this thesis, we propose a novel feature selection framework, the Sparse-Modeling Based Approach for Class-Specific Feature Selection (SMBA-CSFS), which simultaneously exploits sparse modeling and class-specific feature selection. Feature selection plays a key role in several fields (e.g., computational biology): models with fewer variables are easier to interpret, provide insight into the role of each feature, and can speed up experimental validation. Unfortunately, as the no-free-lunch theorems suggest, no single approach in the literature is best at detecting the optimal feature subset for a final model, so the problem remains a challenge. The proposed procedure is a two-step approach: (a) a sparse-modeling-based learning technique first finds the best subset of features for each class of a training set; (b) the discovered feature subsets are then fed to a class-specific feature selection scheme to assess the effectiveness of the selected features in classification tasks. To this end, an ensemble of classifiers is built, where each classifier is trained on the feature subset discovered in the previous phase, and a decision rule combines the ensemble responses. To evaluate the method, extensive experiments were performed on publicly available datasets from computational biology, where feature selection is indispensable: acute lymphoblastic and acute myeloid leukemia, human carcinomas, human lung carcinomas, diffuse large B-cell lymphoma, and malignant glioma. SMBA-CSFS identifies the most representative features that maximize classification accuracy.
With the top 20 and top 80 features, SMBA-CSFS performs promisingly compared to competing methods from the literature on all considered datasets, especially those with many features. Experiments show that the proposed approach can outperform state-of-the-art methods when the number of features is high, making it well suited to selection and classification on data with many features and classes.
Android Application for American Sign Language Recognition (Vishisht Tiwari)
This document describes a final year project that developed an Android application for American Sign Language (ASL) recognition. The application uses image processing techniques like skin color segmentation, morphological operations, and contour analysis to locate the hand and fingertips in images. Pattern recognition is then used to compare extracted fingertip positions to a dataset of ASL letters and identify the sign. The project aims to provide an affordable and portable solution for ASL recognition. Testing showed the application could correctly identify several ASL letters with reasonable accuracy.
Analysis Report: Dimensionality Reduction (Matthieu Cisel)
This document summarizes dimensionality reduction and clustering techniques performed on a dataset of 3000 user profiles from an imaginary dating platform with 16 variables. It first explores the data, checks for correlations between variables, and performs dimensionality reduction using PCA and MCA. It then clusters the data using k-means clustering and hierarchical clustering on the principal components to identify optimal cluster numbers and visualize the clusters.
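The PCA step described above can be sketched in a few lines; the data below are synthetic stand-ins (the report's actual dating-platform dataset is not reproduced here):

```python
import numpy as np

# Illustrative PCA via SVD on synthetic data shaped like the report's
# dataset: 3000 profiles x 16 variables (values are random, not real).
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 16))
Xc = X - X.mean(axis=0)                 # center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)         # variance ratio per component
scores = Xc @ Vt[:2].T                  # project onto first 2 components
```

The rows of `Vt` are the principal directions; projecting onto the first few gives the low-dimensional coordinates that the clustering step (k-means, hierarchical) would then operate on.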
This document summarizes Kanika Anand's master's thesis, which examines global optimization of noisy computer simulators using surrogate models. The thesis compares two improvement functions, one proposed by Picheny et al. and one by Ranjan, for choosing new points to minimize a simulator output observed with noise. Gaussian process and Bayesian additive regression tree models are used as surrogates. Four test functions acting as simulators are optimized using either a one-shot design or a genetic algorithm to find new points. Results show how closely the surrogate minimum matches the true minimum and the distance between the two minimizers under different settings.
The document provides information about MATLAB, including how to contact MathWorks for sales, support, or to access the user community. It lists the company address and provides the copyright and revision history of the MATLAB Primer document.
This thesis describes the implementation of simultaneous localization and mapping (SLAM) on a differential-drive autonomous ground vehicle (AGV). The AGV was designed and built with 3D printing, and uses a laser scanner and motor encoders for localization. Algorithms such as RANSAC and split-and-merge were tested for extracting landmarks from the laser data. An extended Kalman filter estimates the robot's pose and maps the landmarks. Testing showed the robot could map an area, but the loop-closure problem was not solved, resulting in drift over time.
Conversion and Visualization of Business Processes in DSM Format, S. Nicastro (SaraNicastro1)
The document discusses the conversion of business process models created using the Business Process Model and Notation (BPMN) standard into a modified Design Structure Matrix (DSM) and Multiple Domains Matrix (MDM) format. It first provides background on DSMs, MDMs, and BPMN. It then presents a step-by-step approach to converting various BPMN diagrams representing different workflow patterns into a DSM/MDM format. This includes handling gateways, events, iterations, lanes, collaborations, and data objects. Finally, it discusses an implementation of the conversion algorithm in a web application to automatically convert BPMN diagrams into a visualized DSM.
Internship Report: Interaction of Two Particles in a Pipe Flow (Pau Molas Roca)
This document summarizes the development and results of a research internship carried out at the LEGI Laboratory. The study aimed to understand the role of hydrodynamic forces in the interaction between two red blood cells (RBCs) located in a capillary (pipe flow). The problem of RBCs moving through a capillary is treated in two dimensions and is outlined both analytically and numerically, with finite elements used to discretize the geometries considered. Several boundary conditions and geometries were simulated and examined in depth to understand the mechanisms governing hydrodynamic attraction and repulsion between red blood cells; the resulting data are analyzed in this report.
This document provides contact information and revision history for the MATLAB Primer. It lists ways to contact MathWorks for sales, support, and community help. It also provides the copyright information and lists revisions made from 1996 to 2014. The document contains 23 printing revisions and online revisions for MATLAB releases from versions 5 through 8.4 (R2014b).
Calculus of Variations & Solution Manual, Russak (José Pallo)
This document discusses finding minima and maxima of functions of multiple variables. It presents necessary conditions for a point to be an unconstrained relative minimum of such a function: the first partial derivatives must vanish at that point, and the Hessian matrix must be positive semidefinite there. These conditions follow from applying the single-variable optimization conditions to variations of the function along different directions. The document also notes that strengthening the Hessian condition to positive definiteness, together with the vanishing gradient, is sufficient for a strict relative minimum. Overall, the document outlines the basic theory and the necessary and sufficient conditions for optimizing functions of multiple variables without constraints.
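In standard notation (a textbook statement of these conditions, not quoted from the document), the conditions read:

```latex
\textbf{Necessary:}\quad \nabla f(x^*) = 0, \qquad
H(x^*) = \nabla^2 f(x^*) \succeq 0 \quad\text{(positive semidefinite)}.

\textbf{Sufficient:}\quad \nabla f(x^*) = 0, \qquad
H(x^*) \succ 0 \quad\text{(positive definite)}
\;\Longrightarrow\; x^* \text{ is a strict relative minimum}.
```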
Incorporating Learning Strategies in Training of Deep Neural Networks for Au... (Artur Filipowicz)
The majority of machine learning models are trained by presenting examples in random order. Recent research suggests that better performance can be obtained from neural networks if examples are presented in order of increasing difficulty. In this report, I review example-presentation schemes, or learning schemes, that follow this paradigm: curriculum learning, self-paced learning, and self-paced curriculum learning. I then attempt to apply self-paced learning to improve the performance of a car-driving neural network.
In the process, I explore several error measures for determining example difficulty and observe differences in their performance, demonstrating the difficulty of using curriculum learning for this particular application. I develop an error measure, the risk residual, which accounts for collision risk when measuring the error a neural network makes in predicting affordance indicators of a driving scene, and I show that this measure is more holistic than a squared error. I also propose a probability-based measure of example difficulty and explore the computational cost of using such a measure.
Lastly, I develop an algorithm for self-paced learning and use it to train a convolutional neural network for DeepDriving. While the performance of the network degrades compared to normal training, I observe that over-fitting may be the reason for the results. I propose two research paths to resolve the problem.
This document presents a thesis on leader-following consensus control of multi-agent systems considering communication constraints. It introduces the motivation for studying consensus problems in multi-agent systems and provides an overview of relevant literature. It then describes the modeling of multi-agent systems using algebraic graph theory to represent different communication constraints. A Lyapunov-based control approach is developed to achieve consensus under constant and time-varying communication delays and packet data losses. Simulation and experimental results validate the proposed control approach.
This document is the preface to a textbook on algorithmic mathematics. It introduces algorithms as unambiguous instructions to solve mathematical problems. The textbook aims to teach algorithm design and analysis, and provide constructive tools for abstract algebra. Algorithms are written in a simple language using standard notation, with if-statements and while-loops. Exercises similar to exam questions are included to aid learning.
This document is a final year project report that evaluates genetic algorithms and tabu search algorithms for minimizing energy consumption by scheduling household appliances. It develops genetic algorithms and tabu search as alternatives to linear programming approaches. The report tests the algorithms on different appliance scenarios and compares the results to those from a linear programming solver to evaluate the algorithms' performance and ability to reduce runtime. It finds that with proper tuning, the genetic algorithms and tabu search can match or outperform the linear programming solution while providing significant savings in runtime, especially for larger problem instances.
This document is Brian Donhauser's 2012 doctoral dissertation submitted to the University of Washington. It proposes new methods for estimating jump variation (JV) and integrated variance (IV) from high-frequency asset return data. Chapter 2 develops JV and IV estimation in a continuous semimartingale model: it reviews previous work and proposes new estimators based on jump location estimation, which outperform established methods in simulation and empirical analysis. Chapter 3 develops JV and IV estimation in a discrete Bayesian model and derives theoretically optimal estimators of JV and IV within that model; again, in simulation and empirical analysis, the new estimators perform better than established methods.
This document is the dissertation of Davi Correia, submitted in 2006 for the degree of Doctor of Philosophy in Electrical and Computer Engineering. The dissertation proposes a higher-order perfectly matched layer (PML) that combines the advantages of regular and complex frequency shifted (CFS) PMLs. It applies the second-order PML to open-region, waveguide, and periodic electromagnetic problems. Results show the second-order PML outperforms both regular and CFS PMLs, and its performance is independent of the simulation technique used. The dissertation's seven chapters present the formulation, implementation, and numerical results of the higher-order PML applied to these problems.
The document discusses different methods for aggregating data in MongoDB:
1) The aggregation pipeline, which transforms documents through multi-stage pipelines of operations modeled on data-processing pipelines. It is the preferred method.
2) Map-reduce operations, which use custom JavaScript functions to perform aggregation but are less efficient than pipelines.
3) Single-purpose aggregation commands, which provide counts, distinct values, and simple grouping, but with less flexibility than pipelines.
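As a concrete illustration of what a simple pipeline computes, here is a pure-Python sketch of a $match stage followed by a $group stage; the collection, field names, and values are invented, and a real deployment would run the pipeline server-side through a driver such as pymongo:

```python
from collections import defaultdict

# Invented example documents; in MongoDB these would live in a collection.
orders = [
    {"status": "A", "cust_id": "abc1", "amount": 50},
    {"status": "A", "cust_id": "abc1", "amount": 25},
    {"status": "B", "cust_id": "xyz1", "amount": 75},
    {"status": "A", "cust_id": "xyz1", "amount": 100},
]

# Pure-Python equivalent of the pipeline
#   [{"$match": {"status": "A"}},
#    {"$group": {"_id": "$cust_id", "total": {"$sum": "$amount"}}}]
matched = [doc for doc in orders if doc["status"] == "A"]   # $match stage
totals = defaultdict(int)
for doc in matched:                                          # $group stage
    totals[doc["cust_id"]] += doc["amount"]
result = dict(totals)
```

Each stage consumes the documents produced by the previous one, which is what makes the pipeline form both composable and easier for the server to optimize than a custom map-reduce function.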
This document introduces vectors and vector calculus. It defines a vector as a directed line segment with a magnitude and direction. Vectors are used to represent phenomena with both magnitude and direction, such as velocity and force. The document discusses Euclidean spaces R^2 and R^3, which are 2-dimensional and 3-dimensional spaces. It introduces right-handed coordinate systems for labeling points and representing graphs of functions of two or three variables in these spaces.
This document introduces vectors and vector calculus. It defines a vector as a directed line segment with a magnitude and direction. Vectors are used to represent physical quantities that have both magnitude and direction, such as velocity, acceleration, and force. The document discusses Euclidean spaces R^2 and R^3, which are used to graph functions of two and three variables. It introduces right-handed coordinate systems for labeling points and directions in three-dimensional space.
This document introduces vectors and vector operations in Euclidean space. It defines three-dimensional Euclidean space as R^3, with three mutually perpendicular coordinate axes (x, y, z). The graph of a function f(x,y) lies in R^3 and consists of the points (x, y, f(x,y)). It describes the right-handed coordinate system, in which the thumb, index finger, and middle finger of the right hand point along the positive x, y, and z axes. Vector operations such as the dot and cross products are introduced.
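The dot and cross products mentioned above can be illustrated with two invented vectors in R^3:

```python
import numpy as np

# Two invented vectors in R^3 (values chosen only for illustration).
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

dot = float(np.dot(u, v))   # 1*4 + 2*(-1) + 3*2 = 8
cross = np.cross(u, v)      # a vector perpendicular to both u and v

# The cross product is orthogonal to each factor:
assert abs(np.dot(cross, u)) < 1e-12
assert abs(np.dot(cross, v)) < 1e-12
```

The orthogonality checks at the end are exactly the defining geometric property of the cross product in a right-handed coordinate system.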
1999 AP US History DBQ Sample Essay. AP World History Sample DBQ (Wendy Berg)
The document discusses agricultural foreign investments abroad by Gulf Arab countries during the 1970s. Rising populations in Gulf countries led to food shortages and high cereal prices, triggering food-price spikes and "Bread Revolutions." The article focuses on the agricultural foreign investments Gulf nations made abroad during this period to address the growing food gaps in their own countries.
Process Analysis Thesis Statement Examples. How (Wendy Berg)
The document discusses Abraham Lincoln's views and actions regarding slavery over the course of his life and political career. It describes how seeing slavery firsthand in New Orleans as a young man initially shaped his views, and how the Kansas-Nebraska Act later caused him to take an abolitionist stance. As a politician, Lincoln opposed the expansion of slavery and believed in white supremacy. He became a leading abolitionist and fought against the pro-slavery Democrat Stephen Douglas in the 1858 Senate race, further establishing himself as an opponent of slavery.
Similar to A Factor Graph Approach To Constrained Optimization
This document is Brian Donhauser's 2012 doctoral dissertation submitted to the University of Washington. It proposes new methods for estimating jump variation (JV) and integrated variance (IV) from high-frequency asset return data. In Chapter 2, the document develops JV and IV estimation in a continuous semimartingale model. It presents previous work on JV and IV estimation and proposes new estimators based on jump location estimation. In simulation and empirical analysis, the new estimators outperform established methods. In Chapter 3, the document develops JV and IV estimation in a discrete Bayesian model. It derives theoretically optimal estimators of JV and IV within this model. Again in simulation and empirical analysis, the new estimators perform better than
This document is the dissertation of Davi Correia submitted in 2006 for the degree of Doctor of Philosophy in Electrical and Computer Engineering. The dissertation proposes a higher-order perfectly matched layer (PML) that combines advantages of regular and complex frequency shifted PMLs. It applies the second-order PML to open-region, waveguide and periodic electromagnetic problems. Results show the second-order PML outperforms regular and CFS-PMLs and its performance is independent of simulation technique used. The dissertation contains 7 chapters presenting formulation, implementation and numerical results of the higher-order PML applied to various problems.
The document discusses different methods for aggregating data in MongoDB, including:
1) The aggregation pipeline, which allows data to be transformed through multi-stage pipelines of operations modeled on data processing. It is the preferred method.
2) Map-reduce operations, which use custom JavaScript functions to perform aggregation but are less efficient than pipelines.
3) Single-purpose aggregation commands that provide counts, distinct values, and grouping in a simple way but with less flexibility than pipelines.
This document introduces vectors and vector calculus. It defines a vector as a directed line segment with a magnitude and direction. Vectors are used to represent phenomena with both magnitude and direction, such as velocity and force. The document discusses Euclidean spaces R^2 and R^3, which are 2-dimensional and 3-dimensional spaces. It introduces right-handed coordinate systems for labeling points and representing graphs of functions of two or three variables in these spaces.
This document introduces vectors and vector calculus. It defines a vector as a directed line segment with a magnitude and direction. Vectors are used to represent physical quantities that have both magnitude and direction, such as velocity, acceleration, and force. The document discusses Euclidean spaces R^2 and R^3, which are used to graph functions of two and three variables. It introduces right-handed coordinate systems for labeling points and directions in three-dimensional space.
This document introduces vectors and vector operations in Euclidean space. It defines Euclidean space as R3, with three perpendicular coordinate axes (x, y, z). The graph of a function f(x,y) lies in R3, consisting of points (x,y,f(x,y)). It describes the right-handed coordinate system, where pointing thumb, index, and middle fingers represent the positive x, y, z axes. Vector operations like dot and cross products are introduced.
Similar to A Factor Graph Approach To Constrained Optimization (20)
1999 Ap Us History Dbq Sample Essay. AP World History Sample DBQWendy Berg
The document discusses agricultural foreign investments abroad by Gulf Arab countries during the 1970s. Rising populations in Gulf countries led to food shortages and high cereal prices, triggering food price spikes and "Bread Revolutions." The article will focus on agricultural foreign investments made by Gulf nations abroad during this time period to address growing food gaps within their own countries.
Process Analysis Thesis Statement Examples. HowWendy Berg
The document discusses Abraham Lincoln's views and actions regarding slavery over the course of his life and political career. It describes how seeing slavery firsthand in New Orleans as a young man initially shaped his views, and how the Kansas-Nebraska Act later caused him to take an abolitionist stance. As a politician, Lincoln opposed the expansion of slavery and believed in white supremacy. He became a leading abolitionist and fought against the pro-slavery Democrat Stephen Douglas in the 1858 Senate race, further establishing himself as an opponent of slavery.
College Apa Format College Research Paper Outline TWendy Berg
Starbucks started as a small coffee shop in Seattle over 40 years ago and has since expanded to become a global leader in the coffee house industry with thousands of stores worldwide. As the largest coffeehouse company in the world, Starbucks has established itself as a dominant brand by offering high quality coffee and a consistent customer experience across its extensive international footprint. The company's success demonstrates humans' strong affinity for coffee and Starbucks' ability to capitalize on growing worldwide demand through strategic expansion and establishment of its well-known brand.
Exactely How Much Is Often A 2 Web Site EsWendy Berg
The document discusses migration patterns in North Africa and the Eastern Mediterranean region during the 19th century. It outlines how countries like Algeria, Tunisia, Egypt, and what is now Libya experienced significant European immigration and colonial influence during this time period. Broader migration trends across other areas are also touched upon at a high level.
The article discusses Kurt Lewin's model of change which involves three stages: unfreezing, moving to a
new state, and refreezing in the new state. This model is still relevant for understanding organizational
change today. For workplace education programs to be effective in creating behavioral change, they need
to recognize that change is a process, not an event, and address attitudes, skills and situations
simultaneously. Programs also need strategies to "unfreeze" current behaviors and habits in order to prepare
employees for change. Finally, for changes to stick, programs must provide support structures and incentives
to
Writing Music - Stock Photos Motion ArrayWendy Berg
The document provides instructions for requesting writing assistance from the HelpWriting.net website. It outlines a 5 step process: 1) Create an account with a password and email, 2) Complete a form with assignment details and deadline, 3) Review bids from writers and select one, 4) Receive the completed paper, and 5) Request revisions if needed. The website promises original, high-quality content and refunds for plagiarized work.
Writing Legal Essays - Law Research Writing Skills -Wendy Berg
1. The document provides instructions for using the writing assistance service HelpWriting.net. It outlines a 5-step process: register for an account, submit a request form with instructions and deadline, review writer bids and select one, authorize payment after receiving satisfactory work, and request revisions if needed.
2. The service offers original, plagiarism-free assignments and allows multiple revisions to ensure customer satisfaction. Writers are selected through a bidding system based on qualifications.
Writing Paper Clipart Writing Paper Png VectorWendy Berg
Dell is a leading technology company that offers a wide range of products and services including desktop computers, servers, networking equipment, storage, software and IT services. The company mainly operates in the consumer and commercial markets selling directly to customers and through retailers. Dell was founded in 1984 by Michael Dell and has grown to become one of the largest technology companies in the world with a focus on innovation, customer service and delivering the latest technology.
This document discusses how cancer affected one person's family. It describes how the author's mother was diagnosed with stage four cancer that had spread to her bladder, giving her less than 90 days to live. The family decided not to tell the mother about the terminal diagnosis. The author took a month off work to spend time with their mother during her final days. Eventually the mother passed away peacefully surrounded by family remembering past holidays together.
The document outlines Uyen's philosophy on partnerships with children and families in early childhood education. It emphasizes building safe, respectful relationships through listening to children and families, supporting their interests and needs, and involving families in decision making. Partnerships are important for children's learning and sense of identity. The philosophy is focused on respecting children and families' cultures, languages, and ways of developing skills.
Pros:
- Small town feel with a population of just over 12,000 provides a more relaxed environment than nearby Denver.
- Close proximity to Denver (only 7 miles away) allows easy access to the city's amenities and activities.
- Historically served as a railroad town in the late 1920s, giving it industrial roots.
Cons:
- As a suburb of Denver, it has less employment opportunities than the larger city.
- Small population means fewer cultural activities and dining/entertainment options compared to Denver.
- Development did not really take off until 1940, so it lacks the long-established infrastructure
How To Write A Policy Paper For A Nurse -. Online assignment writing service.Wendy Berg
The document discusses rush week activities for a fraternity, noting that the author spoke to potential new members as part of rush week. One conversation was with a friend who was going through rush. A variety of topics were discussed with the group of freshmen and sophomores during rush week conversations.
The document discusses the process of ordering essay writing help from the website HelpWriting.net, including creating an account, submitting a request with instructions and sources, reviewing bids from writers and choosing one, making a deposit, reviewing and authorizing payment for the completed work, and having the option to request revisions. The website promises original, high-quality content and a refund if work is plagiarized, aiming to fully meet customer needs.
The document provides instructions for creating an account and submitting assignment requests on the HelpWriting.net website. It outlines a 5-step process: 1) Create an account with an email and password. 2) Complete a form with assignment details, sources, and deadline. 3) Review bids from writers and choose one. 4) Review the completed paper and authorize payment. 5) Request revisions until satisfied, with a refund option for plagiarism. The purpose is to outline the process for students to receive writing help and ensure satisfaction.
D.J. and I share some key similarities as we are both kind and caring individuals. However, we also have our differences as D.J. is a fictional character from a TV show while I am a real person. Overall, while we share positive traits, D.J. and I ultimately lead very different lives.
4 Best Printable Christmas Borders - Printablee.ComWendy Berg
This document provides instructions for requesting writing assistance from HelpWriting.net. It outlines a 5-step process: 1) Create an account, 2) Complete an order form providing instructions, sources, and deadline, 3) Review bids from writers and select one, 4) Receive the paper and authorize payment if satisfied, 5) Request revisions until satisfied. It emphasizes original, high-quality content and refunds for plagiarized work.
How To Write An Analytical Essay On A Short Story - HoWendy Berg
The document provides instructions on how to request an analytical essay be written through the HelpWriting.net website. It outlines a 5-step process: 1) Create an account with a password and email; 2) Complete a 10-minute order form with instructions, sources, and deadline; 3) Review bids from writers and choose one; 4) Review the completed paper and authorize payment; 5) Request revisions until satisfied. It emphasizes original, high-quality work and refunds for plagiarism.
008 Essay Example Life Changing Experience PsycWendy Berg
Here are the key aspects I would focus on in order to persuade a foreign company to invest in India:
1. Market potential and size. India has a population of over 1.3 billion people, giving foreign companies access to a huge domestic market. With rising incomes and consumer spending, India offers great potential for market expansion.
2. Low costs. Labor costs in India are significantly lower than developed markets. This allows companies to produce goods and deliver services at highly competitive prices. Combined with the large market size, low costs allow good prospects for profits.
3. Skilled workforce. India has a large, English-speaking population that provides a skilled workforce in areas like engineering, IT, finance etc. The workforce is adapted
The document provides information about poverty in the Czech Republic compared to global poverty rates and other European countries. It shows that from 2006 to 2012, less than 0% of the Czech population lived on less than $1.90 per day, compared to around 15% of the world population. Additionally, the percentage of the Czech population living under the at-risk poverty threshold (set at 60% of the national median income) fluctuated between 8-10% from 2005 to 2014, which is lower than the average of 27 European countries. The document also notes that Czech Republic has a lower inequality of income distribution compared to other European nations.
10 Easy Steps How To Write A Thesis For An Essay In 2024Wendy Berg
This document provides a 10 step process for writing a thesis for an essay in 2024. It instructs the reader to create an account on HelpWriting.net to request paper writing assistance. The reader is then told to complete an order form with their instructions and attach any sample work. Writers will bid on the request and the reader can choose a writer based on qualifications. The writer will complete the paper and the reader can request revisions until satisfied.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
How to Manage Reception Report in Odoo 17Celine George
A business may deal with both sales and purchases occasionally. They buy things from vendors and then sell them to their customers. Such dealings can be confusing at times. Because multiple clients may inquire about the same product at the same time, after purchasing those products, customers must be assigned to them. Odoo has a tool called Reception Report that can be used to complete this assignment. By enabling this, a reception report comes automatically after confirming a receipt, from which we can assign products to orders.
A Free 200-Page eBook ~ Brain and Mind Exercise.pptxOH TEIK BIN
(A Free eBook comprising 3 Sets of Presentation of a selection of Puzzles, Brain Teasers and Thinking Problems to exercise both the mind and the Right and Left Brain. To help keep the mind and brain fit and healthy. Good for both the young and old alike.
Answers are given for all the puzzles and problems.)
With Metta,
Bro. Oh Teik Bin 🙏🤓🤔🥰
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
Juneteenth Freedom Day 2024 David Douglas School District
A Factor Graph Approach To Constrained Optimization
A FACTOR GRAPH APPROACH TO CONSTRAINED OPTIMIZATION

A Thesis
Presented to
The Academic Faculty

By
Ivan Dario Jimenez

In Partial Fulfillment
of the Requirements for the Degree
B.S. in Computer Science with the Research Option
in the College of Computing

Georgia Institute of Technology

December 2016

Copyright © Ivan Dario Jimenez 2016
A FACTOR GRAPH APPROACH TO CONSTRAINED OPTIMIZATION

Approved by:

Dr. Frank Dellaert
School of Computer Science,
College of Computing
Georgia Institute of Technology

Dr. Byron Boots
School of Computer Science,
College of Computing
Georgia Institute of Technology

Date Approved: December 15, 2016
ACKNOWLEDGEMENTS

I would like to especially thank Doctor Duy-Nguyen Ta and Jing Dong for their invaluable help and guidance throughout the research process. Without them this thesis would not have been possible.
LIST OF FIGURES

1.1  A sample factor graph on three variables represented by the circle nodes labeled X, Y and Z. The black boxes represent factors: evidence on the value of the variables that they connect to.

2.1  A visual representation of what the active set method does when solving min x + y + z subject to the inequalities that define the polytope shown.

2.2  A visual representation of a working graph on three variables. The red nodes represent the unary factor for ||x − x_k − (−∇_x fᵀx)||²_Σ. The blue, purple and green factors represent unary, binary and ternary constraints on the three variables of the form ||c_i(x)||²_Σ where c_i(x) = Ax − b, i ∈ I ∪ E.

2.3  The dual graph that solves for the Lagrange multipliers of a linear program. Each factor has the form ||∇_x f − λ_i ∇_x c_i||²_Σ where i ∈ I. This dual graph in particular is a special case of a problem with three variables, each with one unary constraint.

2.4  A graph of the feasible region for an optimization problem with cost function x² + y² − 10y − 10x + 50 subject to a set of linear constraints.

2.5  The factor graph used to represent the linearized quadratic program solved at each iteration. The black factor represents the quadratic cost function pᵀ∇²L p + pᵀ∇f. The green factor represents the error on the linearized equality constraints ||∇²c_i(x)||²_Σ, i ∈ E. The red factor represents the error on the inequality constraints ||max(0, ∇²c_i(x))||²_Σ, i ∈ I.
3.1  This factor graph can be used to solve both the linear and quadratic programs in Equation 3.1 and Equation 3.2 respectively. The blue factor can represent the objective function, the red factors represent the binary constraints on the variables and the green factors represent unary constraints on the variables.

3.2  A graph of the feasible region for the optimization problem shown in Equation 3.1.

3.3  A graph of the feasible region with the cost function overlaid for the optimization problem shown in Equation 3.1.

3.4  The starting iterate of the linear program in Equation 3.1. It starts at coordinates (2, 20), marked with a red circle. The inequalities in the active set are highlighted in red.

3.5  When solving Equation 3.2, this is the dual graph solved in the first iteration.

3.6  The second iteration of the Active Set Solver on Equation 3.1. The current iterate moved from the dashed blue circle at (2, 20) to the location of the red circle at (0, 20). The active set is highlighted in red.

3.7  The dual graph solved by the Active Set Solver when solving Equation 3.1 on the third iteration.

3.8  On the fourth iteration of solving Equation 3.1, the active set solver moves the current iterate from the location highlighted by the dashed blue circle at (0, 20) to the red circle at (0, 0). The active constraints at the new iterate are highlighted in red.

3.9  The dual graph solved during the fifth and final iteration of the active set solver when processing Equation 3.1.

3.10 This figure shows the initial iterate of the active set solver at (2, 20) and the active set highlighted in red when solving Equation 3.2.

3.11 The dual graph solved during the first iteration of the active set solver when processing Equation 3.2.

3.12 The active set solver will have the iterate circled in red at (11, 11) when solving Equation 3.2. The active set is highlighted in red. The previous iterate is circled with blue dashes at (2, 20).
3.13 The dual graph solved by the active set solver in the third iteration when solving Equation 3.2.

3.14 In the fourth iteration of solving Equation 3.2, the iterate has moved from the blue dashed circle at (11, 11) to (5, 5), indicated by the red circle. There are no constraints in the active set.

3.15 The working graph solved in the 5th iteration of the algorithm while solving the DUAL2 problem of the Maros-Meszaros problem set.

3.16 The dual graph solved in the 5th iteration of the algorithm while solving the DUAL2 problem of the Maros-Meszaros problem set. Each variable node stands for the Lagrange multiplier of a constraint.

3.17 A visual representation of the problem in Equation 3.3. The background's shade is used to show the cost function while the constraint is shown as a blue line overlaid on top.

3.18 A graph of the iterate as it goes from its starting position to the solution of the program. Notice how the steps get shorter as the iterate approaches the solution.

3.19 A typical factor graph used to represent the cost function of the QP subproblem at each iteration. The black factor represents the quadratic cost pᵀ∇²L_k p + pᵀ∇f_k while the green factor represents the linearization of the error on the constraint.

3.20 During a typical iteration, the algorithm constructs this dual graph to represent the KKT stationarity condition. The algorithm only uses this graph to calculate the error on the KKT condition, since it depends on the QP and the iteration process to compute the actual values of λ.
SUMMARY

Several problems in robotics can be solved using constrained optimization. For example, solutions in areas like control and planning frequently use it. Meanwhile, the Georgia Tech Smoothing and Mapping (GTSAM) toolbox provides a straightforward way to represent sparse least-squares optimization problems as factor graphs. Factor graphs are a popular graphical model for representing a factorization of a probability distribution, allowing for efficient computations. This paper demonstrates the use of GTSAM and factor graphs to solve linear and quadratic constrained optimization programs using the active set method. It also includes an implementation of a line search method for sequential quadratic programming that can solve nonlinear equality constrained problems. The result is an open-source constrained optimization framework that allows the user to think of optimization problems as solving a series of factor graphs.
CHAPTER 1
INTRODUCTION AND BACKGROUND

Factor graphs are probabilistic graphical models that represent random variables and an unconstrained objective function between them. They are a way to express problems in Simultaneous Localization and Mapping (SLAM) [1] and Model Predictive Control (MPC) [2]. As you can see in figure 1.1, these are bipartite graphs where variable nodes represent unknown random variables and factor nodes represent evidence on those variables gathered from measurements or prior knowledge.

Figure 1.1: A sample factor graph on three variables represented by the circle nodes labeled X, Y and Z. The black boxes represent factors: evidence on the value of the variables that they connect to.

The Georgia Tech Smoothing and Mapping (GTSAM) toolbox is frequently used to solve factor graphs. GTSAM exploits the sparse factor graph representation of a problem to solve it as a sum of least squares as shown in Equation 1.1. It is important to note here that this limits the scope of factor graph problems to unconstrained optimization problems. Although there are methods to solve constrained optimization problems with unconstrained optimization methods, they have the drawback of having to manually set the penalty parameters [3].
min_x  Σ_i^n ||r_i(x)||²_Σ                                    (1.1)
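For linear residuals, the unconstrained objective in Equation 1.1 reduces to a single sparse least-squares solve, which is the structure GTSAM exploits. As a rough illustration only (plain NumPy, not the GTSAM API; the factor values are invented), stacking each factor's Jacobian and solving the resulting least-squares system recovers the minimizer:

```python
import numpy as np

# Illustration (not GTSAM): each factor contributes a linear residual
# r_i(x) = A_i @ x - b_i; minimizing sum_i ||r_i(x)||^2 reduces to one
# least-squares solve on the stacked system.
factors = [
    (np.array([[1.0, 0.0]]), np.array([2.0])),   # unary factor on x[0]
    (np.array([[0.0, 1.0]]), np.array([3.0])),   # unary factor on x[1]
    (np.array([[1.0, -1.0]]), np.array([0.0])),  # binary factor linking both
]

A = np.vstack([Ai for Ai, _ in factors])       # stack all factor Jacobians
b = np.concatenate([bi for _, bi in factors])  # stack all factor targets

# Solve the normal equations (A^T A) x = A^T b via lstsq.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # least-squares compromise between the three factors, ≈ [2.33, 2.67]
```

In GTSAM the stacked matrix is never formed densely; elimination on the factor graph produces the same solution while exploiting sparsity.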
The goal of this research is to solve constrained optimization problems using a series of factor graphs. Specifically, this research focuses on inequality constrained optimization problems that take the form of Equation 1.2. In this form, f is the objective function, E is the set of equality constraint indexes and I the set of inequality constraint indexes. If the objective function f or the constraints c are nonlinear, there might be many or no solutions to the problem. Thus, the scope of the problem is restricted to convex feasible regions where a unique solution can be found.

min_x  f(x)
s.t.   c_i(x) = 0,  i ∈ E
       c_i(x) ≤ 0,  i ∈ I                                     (1.2)
Linear programs constitute the basis of a hierarchy of constrained optimization problems where solvers for problems of higher complexity require solvers for programs of lower complexity. In order to solve quadratic constrained optimization problems of the form of Equation 1.4, a solver has to be able to solve linear programs, which take the form of Equation 1.3. Linear programming algorithms are a mature technology that can manage even large versions of these problems.

min_x  cᵀx
s.t.   A_i x = 0,  i ∈ E
       A_i x ≤ 0,  i ∈ I                                      (1.3)
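For comparison, a program in the form of Equation 1.3 can be handed to any off-the-shelf LP solver. A small SciPy sketch (the numbers are invented, and a general right-hand side b is written out explicitly, where the homogeneous form above absorbs it into the constraint definitions):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: min x + y  s.t.  x + y >= 1, x >= 0, y >= 0.
c = np.array([1.0, 1.0])          # objective coefficients
A_ub = np.array([[-1.0, -1.0]])   # -x - y <= -1  encodes  x + y >= 1
b_ub = np.array([-1.0])

# linprog expects A_ub x <= b_ub; bounds encode the nonnegativity constraints.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # the optimum lies on a vertex of the polytope, cost 1.0
```

Exactly this vertex-to-vertex structure is what the active set method of Chapter 2 walks through explicitly.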
In a similar way to linear programming, a quadratic programming solver is required to solve nonlinear programs. Quadratic programs are the same type as Equation 1.2 but they take the form of Equation 1.4. Unlike linear programs, however, quadratic programs are not guaranteed to be convex even when the feasible region is convex. Thus, the scope of this paper only includes quadratic programs where the matrix H is positive definite.

min_x  ½ xᵀHx − xᵀg + ½ f
s.t.   A_i x = 0,  i ∈ E
       A_i x ≤ 0,  i ∈ I                                      (1.4)

The ultimate goal is to use factor graphs to solve nonlinear programs that arise in areas of robotics such as Task Plan Optimization [4], Model Predictive Control [5] and SLAM [6]. This research provides one of the few open-source alternatives to proprietary solver libraries such as SNOPT [7]. It also does so in the expressive language of factor graphs. Combined, these factors will enable GTSAM to solve a wide variety of problems in robotics.
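When only equality constraints are active, a program of the form of Equation 1.4 has a closed-form solution through its KKT system. A hypothetical NumPy instance (H, g, and the constraint are invented for illustration; this is a sketch of the underlying linear algebra, not the thesis implementation):

```python
import numpy as np

# Equality-constrained instance of Equation 1.4:
#   min 1/2 x^T H x - x^T g   s.t.  A x = b.
# The KKT conditions give one symmetric linear system in (x, lambda):
#   [ H  A^T ] [x  ]   [g]
#   [ A   0  ] [lam] = [b]
H = 2.0 * np.eye(2)              # cost x^2 + y^2 - 10x - 10y (+ const)
g = np.array([10.0, 10.0])
A = np.array([[1.0, 1.0]])       # invented constraint: x + y = 4
b = np.array([4.0])

K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([g, b]))
x, lam = sol[:2], sol[2:]
print(x, lam)  # x = [2, 2]: the unconstrained optimum (5, 5) pulled onto the constraint
```

The active set method reduces the inequality-constrained case to a sequence of such equality-constrained solves, one per working set.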
CHAPTER 2
METHODS

2.1 Active Set Method for Linear Programming: The Simplex Method

2.1.1 Overview

This is an implementation of algorithm 16.3 from [3] that solves problems of the form shown in Equation 1.3. The intuition behind iterative optimization algorithms is to improve on a given solution by solving a simpler problem. Specifically, the simpler problem in the case of active set methods can be interpreted as sliding along the vertices of the polytope created by the inequality constraints of the problem in the direction that most decreases the cost function, as seen in figure 2.1.

Figure 2.1: A visual representation of what the active set method does when solving min x + y + z subject to the inequalities that define the polytope shown.
Algorithm 1 The Active Set Method for Linear Programming
 1: procedure Active-Set-Solver(x0, λ0, f, c, I, E)
 2:   W ← {c_i | i ∈ E} ∪ {c_i | c_i(x0) = 0 ∧ i ∈ I}
 3:   xk ← x0
 4:   λk ← λ0
 5:   k ← 1
 6:   converged ← false
 7:   while not converged do
 8:     WG ← BuildWorkingGraphLP(W, xk)
 9:     k ← k + 1
10:     xk ← WG.optimize()
11:     pk ← xk − xk−1
12:     if pk = 0 then
13:       DG ← BuildDualGraph(f, c, xk)
14:       λk ← DG.optimize()
15:       leavingFactor ← {c_i | λ_i > 0 ∧ i ∈ W}
16:       if leavingFactor = ∅ then
17:         converged ← true
18:       else
19:         W ← W − leavingFactor
20:     else
21:       (α, factor) ← ComputeStepSizeLP(xk−1, pk, W)
22:       if factor ≠ ∅ then
23:         W ← W ∪ factor
24:       xk ← xk−1 + α pk
25:   return (xk, λk)
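The same loop can be sketched with plain dense linear algebra in place of the factor graphs: a small dual solve plays the role of BuildDualGraph, and a direction solve plays the role of BuildWorkingGraphLP. This is a toy sketch for min cᵀx s.t. Ax ≤ b starting at a nondegenerate vertex (exactly n active constraints), not the GTSAM implementation; the problem data below are invented.

```python
import numpy as np

def active_set_lp(c, A, b, x0, W, max_iter=50):
    """Toy active set loop mirroring Algorithm 1 for min c^T x s.t. A x <= b."""
    x, W = x0.astype(float), list(W)
    for _ in range(max_iter):
        AW = A[W]
        lam = np.linalg.solve(AW.T, -c)        # dual step: c + A_W^T lam = 0
        if np.all(lam >= -1e-12):
            return x, dict(zip(W, lam))        # KKT satisfied: optimal vertex
        q = W[int(np.argmin(lam))]             # leaving constraint (most negative lam)
        W.remove(q)
        # Direction keeping the remaining active constraints tight while
        # moving off constraint q into the feasible interior.
        rows = A[W + [q]]
        rhs = np.zeros(len(W) + 1)
        rhs[-1] = -1.0
        p = np.linalg.solve(rows, rhs)
        # Ratio test over inactive constraints to find the blocking one.
        alpha, blocking = np.inf, None
        for i in range(len(b)):
            if i not in W and i != q and A[i] @ p > 1e-12:
                step = (b[i] - A[i] @ x) / (A[i] @ p)
                if step < alpha:
                    alpha, blocking = step, i
        if blocking is None:
            raise ValueError("problem is unbounded")
        x = x + alpha * p
        W.append(blocking)
    raise RuntimeError("did not converge")

# Toy problem: min -x - y over the unit box, starting at the origin,
# where constraints 2 and 3 (the nonnegativity rows) are initially active.
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
x_opt, lam = active_set_lp(c, A, b, np.zeros(2), W=[2, 3])
print(x_opt)  # -> [1. 1.]
```

The iterate slides along edges of the box from (0, 0) to (1, 0) to (1, 1), dropping a constraint whenever a multiplier is negative and adding the blocking one, exactly the behavior Algorithm 1 obtains from its working and dual graphs.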
15. 2.1.2 Algorithm
First, the starting iterate and the feasible region of the program are used to create
the first working set. The iterate is defined by a start point x0 and initial λ values λ0.
The feasible region is implicitly defined by the set of constraints c and the indexes of
equality and inequality constraints, E and I respectively. From this information an
active set is selected: the union of all equality constraints and all inequalities such
that ci(x) = 0, i ∈ I. With the initialization complete, the first iteration can begin.
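The active set selection just described can be sketched in a few lines of numpy; the constraint data and tolerance below are illustrative assumptions, with inequalities written as A x ≤ b.

```python
import numpy as np

# Illustrative polytope (the one used in Chapter 3), written as A x <= b.
A = np.array([[-1.0, 0.0],   # -x <= 0      (x >= 0)
              [0.0, -1.0],   # -y <= 0      (y >= 0)
              [0.0, 1.0],    #  y <= 20
              [1.0, 1.0]])   #  x + y <= 22
b = np.array([0.0, 0.0, 20.0, 22.0])

def initial_working_set(x0, A, b, tol=1e-9):
    """Inequalities satisfied with equality at x0 join the working set."""
    residual = A @ x0 - b
    return [i for i in range(len(b)) if abs(residual[i]) < tol]

print(initial_working_set(np.array([2.0, 20.0]), A, b))   # [2, 3]
```

At the starting point (2, 20) of the Chapter 3 examples, constraints y ≤ 20 and x + y ≤ 22 hold with equality, so both enter the working set.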
Figure 2.2: This is a visual representation of a working graph on three variables. The
red nodes represent the unary factor ||x − x_k − (−∇_x f)||²_Σ. The blue, purple and
green factors represent unary, binary and ternary constraints on the three variables
of the form ||c_i(x)||²_Σ where c_i(x) = A_i x − b_i, i ∈ I ∪ E.
After initialization, a working graph WG must be constructed. This factor graph,
shown in figure 2.2, finds a local minimum on the part of the constraint surface defined
by the active set W. Notice that the graph in that figure is a worst case scenario
where there are constraints on all subsets of variables. In reality, constraints are only
present for some subsets of variables, which justifies a sparse representation. Clearly,
the factors of the form ||c_i(x)||²_Σ are there to ensure the result stays within the
constraint surface. On the other hand, the unary factor ||x − x_k − (−∇_x f)||²_Σ tries
to match the step x − x_k with the negative gradient of the cost function −∇_x f.
The result of optimizing this graph is an iterate x which would result from moving
from xk in the direction of the negative gradient of the cost function while staying in
the constraint surface.
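A dense stand-in for what this working-graph optimization computes: a gradient step projected onto the active constraint surface, posed as an equality-constrained least-squares problem and solved through its KKT system. GTSAM performs this as a sparse factor-graph solve; the code below is a mathematically equivalent sketch, not the library's API.

```python
import numpy as np

def working_step(x_k, grad_f, A_w, b_w):
    """Minimize ||x - (x_k - grad_f)||^2 subject to A_w x = b_w."""
    n, m = len(x_k), len(b_w)
    target = x_k - grad_f                      # unconstrained gradient step
    KKT = np.block([[np.eye(n), A_w.T],
                    [A_w, np.zeros((m, m))]])
    rhs = np.concatenate([target, b_w])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                             # drop the multipliers

# Second LP iteration from Chapter 3: x_k = (2, 20), cost x + y, W = {y = 20}.
x = working_step(np.array([2.0, 20.0]), np.array([1.0, 1.0]),
                 np.array([[0.0, 1.0]]), np.array([20.0]))
print(x)   # (1, 20): stays on y = 20, moves one unit in -x
```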
Figure 2.3: The dual graph that solves for the Lagrange multipliers of a linear program.
Each factor has the form ||∇_x f − λ_i ∇_x c_i||²_Σ where i ∈ I. This dual graph in
particular is a special case of a problem with three variables, each with one unary
constraint.
If the result of optimizing the factor graph is the same as the starting point, the
algorithm has reached a local minimum and must either converge or change the active
set. More specifically, if the result is not the global solution, the algorithm removes
a constraint from the current active set that is blocking its advance. In order to
tell which one is the blocking constraint, a dual graph is built to get the Lagrange
multipliers for each constraint. The dual graph shown in figure 2.3 is equivalent to
optimizing the Lagrangian: ∇_x L(x, λ) = 0 where L(x, λ) = f(x) − λc(x).
Generating the dual graph can be thought of as a transformation on the factor
graph of the constraints. In this transformation each constraint factor is replaced by
its corresponding dual variable λ. Each variable node is replaced by a factor corre-
sponding to the gradient of the lagrangian with respect to that variable. Algorithm 2
shows this process.
Algorithm 2 Building the Dual Graph
1: procedure BuildDualGraph(f, c, xk)
2: DG ← ∅
3: for v ∈ VARS(c) do
4: DG ← DG ∪ {||∇_v c · λ − ∇_v f||²}
return DG
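A dense stand-in for the dual graph solve: the multipliers minimize ||∇f − Σᵢ λᵢ∇cᵢ||², which is a least-squares problem over the active constraint gradients. The data below comes from the first iteration of the linear program in Chapter 3 and is used only for illustration.

```python
import numpy as np

def solve_dual(grad_f, C):
    """Least-squares multipliers: minimize ||grad_f - C^T lambda||^2,
    where each row of C is the gradient of one active constraint."""
    lam, *_ = np.linalg.lstsq(C.T, grad_f, rcond=None)
    return lam

# First LP iteration of Chapter 3: cost gradient (1, 1); active constraints
# y <= 20 (normal (0, 1)) and x + y <= 22 (normal (1, 1)).
lam = solve_dual(np.array([1.0, 1.0]),
                 np.array([[0.0, 1.0],
                           [1.0, 1.0]]))
print(lam)   # (0, 1): lambda_3 = 0, lambda_4 = 1, matching Section 3.2
```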
The next step is to interpret the Lagrange multipliers. If λ_i for some constraint
c_i, i ∈ I is positive at the current point x, it is understood that the constraint c_i
must be the leaving constraint and can be removed safely. Furthermore, a zero λ implies
the minimum is on the constraint while a negative λ implies removing the constraint
from the active set would cause the iterate to leave the feasible region.
Algorithm 3 Computing step size α and leavingFactor for linear programs.
1: procedure computeStepSizeLP(xk−1, pk, W)
2: α ← ∞
3: leavingFactor ← ∅
4: for {c_i | i ∈ I ∧ c_i ∉ W} do
5: if A_i p_k > 0 then
6: α_i ← (b_i − A_i x_{k−1}) / (A_i p_k)
7: if αi < α then
8: α ← αi
9: leavingFactor ← Ci
return (α, leavingFactor)
Meanwhile, if the result of the working graph optimization is different from the
current solution, the iterate can advance in the direction pk = xk −xk−1. The working
set only operates in a subset of the problem’s constraints. Thus, the step size pk may
need to be scaled by a factor α before continuing such that the resulting iterate
doesn’t violate any of the other constraints. If α is used to put the resulting iterate
on a constraint, that constraint has to be added to W. More generally, you can think
of computing a step size as a line search in the direction pk.
A_i x_{k−1} − b_i ≤ 0   ∀i ∈ I   (2.1)
A_i(x_{k−1} + αp_k) − b_i ≤ 0   ∀i ∈ I − W   (2.2)
A_i x_{k−1} + A_i αp_k ≤ b_i   ∀i ∈ I − W   (2.3)
α = (b_i − A_i x_{k−1}) / (A_i p_k)   ∀i ∈ I − W   (2.4)
Computing α warrants further explanation. Its derivation starts from the assumption
that x_{k−1} does not violate any constraint, which can be represented as
Equation 2.1. The goal is to ensure that the same holds true for the updated it-
erate as well which is shown in Equation 2.2. From Equation 2.3 it is clear that if
Aiαpk ≤ 0 the constraint is not violated by the current scaled step size. Since the
step size should be as big as possible, α is solved for using the inequality shown in
Equation 2.3 which results in Equation 2.4.
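The ratio test of Algorithm 3 and Equation 2.4 can be written directly in numpy; the constraint data below is the illustrative polytope from Chapter 3.

```python
import numpy as np

def compute_step_size(x_prev, p, A, b, inactive):
    """Largest alpha keeping x_prev + alpha * p feasible for A x <= b
    over the inactive inequalities (Equation 2.4)."""
    alpha, blocking = np.inf, None
    for i in inactive:
        denom = A[i] @ p
        if denom > 0:                      # step moves toward constraint i
            a_i = (b[i] - A[i] @ x_prev) / denom
            if a_i < alpha:
                alpha, blocking = a_i, i
    return alpha, blocking

A = np.array([[-1.0, 0.0], [0.0, -1.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 20.0, 22.0])
# Second LP iteration of Chapter 3: from (2, 20) along p = (-1, 0).
alpha, i = compute_step_size(np.array([2.0, 20.0]), np.array([-1.0, 0.0]),
                             A, b, inactive=[0, 1, 3])
print(alpha, i)   # alpha = 2, blocked by constraint 0 (x >= 0)
```

This reproduces the α of 2 reported in Section 3.2, with x ≥ 0 as the blocking constraint.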
2.1.3 Initialization
min_ℓ  ℓ
s.t.  A_i x + ℓ = 0,  i ∈ E
      A_i x + ℓ ≤ 0,  i ∈ I
(2.5)
Initializing the solver without a given feasible initial value requires solving a different
linear program which changes the objective function to a slack variable ℓ and
adds that same slack variable to the constraints as shown in Equation 2.5. The in-
tuition behind this is that minimizing the slack (or violation of the constraints) will
yield a feasible point that can be used by the algorithm. Notice that any value for
the original set of variables will lead to a feasible point in the new problem.
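One common reading of this phase-1 construction, sketched below: with the inequalities written as A x ≤ b, choosing ℓ as the maximum violation makes (x, ℓ) feasible for the slack problem for any x, and the original x is itself feasible exactly when that slack is zero. The sign convention and data are assumptions for illustration.

```python
import numpy as np

# Illustrative constraint set A x <= b (the Chapter 3 polytope).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 20.0, 22.0])

def phase1_point(x):
    """Smallest slack making (x, l) feasible: l = max violation of A x <= b."""
    return max(np.max(A @ x - b), 0.0)

print(phase1_point(np.array([50.0, 50.0])))  # infeasible x: positive slack
print(phase1_point(np.array([2.0, 20.0])))   # feasible x: zero slack
```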
2.2 Active Set Method for Quadratic Programming
2.2.1 Overview
Quadratic programs can be solved in a similar fashion to Linear Programs with one
added caveat: as shown in figure 2.4, an optimal solution may not necessarily fall
on a vertex of the constraint surface. Thus, the algorithm is modified slightly to
ensure convergence to the right location.
Figure 2.4: This is a graph of the feasible region for an optimization problem with
cost function x² + y² − 10y − 10x + 50 subject to a set of linear constraints.
2.2.2 Algorithm
Building WG
The Working Graph WG changes to include the objective function as a factor to
minimize instead of ||x − x_k − (−∇_x f)||²_Σ. This takes advantage of the fact that
GTSAM can directly optimize quadratic factors like those used to express the objective
function in a quadratic program.
Computing α
Unlike linear programs, where α could be greater than 1, step sizes in quadratic
programs are guaranteed to be no larger than the initial estimate. This comes from
the fact that linear objective functions are monotonically decreasing in the direction
−∇_x f. Meanwhile, quadratic objective functions lose monotonicity, meaning that a
solution to WG must be at a minimum on the active set's constraint surface. Thus, α
can be initialized to 1 instead of ∞.
2.2.3 Initialization
Since linear constraints are also being used, the same method for initializing linear
programs can be used to initialize quadratic programs.
2.2.4 Implementation
Evidently, the methods for solving quadratic and linear programs are almost identical.
Thus, the GTSAM implementation of these algorithms is a single templated
class that takes the differing functions as template parameters. More specifically,
GTSAM uses the exact same code to solve quadratic and linear programs except for
the generation of the working graph and computing α.
2.3 Line Search Sequential Quadratic Programming for Nonlinear Con-
strained Optimization
min_x  f(x)
s.t.  c_i(x) = 0,  i ∈ E
      c_i(x) ≤ 0,  i ∈ I
(2.6)
This section presents an implementation of algorithm 18.3 from [3] that solves prob-
lems of the form shown in Equation 2.6 based on the algorithms presented previously.
This algorithm belongs to the line search family: it uses a QP sub-problem at each
iteration to determine a step direction, and a line search to determine the magnitude
of the step. Notice that this algorithm solves an inequality constrained quadratic
problem at each iteration.
2.3.1 Algorithm
Line Search Sequential Quadratic Programming starts by solving the inequality con-
strained quadratic program shown in line 10 of Algorithm 4. The results are p_k and
λ̂_k, which represent the raw step direction and a new dual value used to calculate
the direction of the dual step. With the information given by solving the inequality
constrained quadratic program, we begin the line search at line 16.
The line search ensures that there is sufficient decrease in the merit function to
justify the step. The quadratic program gives the algorithm a direction in which
progress can be made, but due to the approximate nature of the linearization of the
constraints (especially inequality constraints), the step may need to be scaled down
to ensure actual progress. The merit function attempts to balance the two often
conflicting goals of ensuring the feasibility of the constraints and minimizing the
error function.
The line search terminates when the value of the merit function φ₁ at the result of
taking the step scaled by α is no greater than the current value of the merit function
plus the directional derivative of φ₁ in the direction p_k, Dφ₁, scaled by α and a
factor η.
After the line search has successfully concluded, the next iterate is defined by the
current iterate plus the step determined by the quadratic program solved scaled by
the alpha determined during the line search.
2.3.2 Merit Function
φ1(x; µ) := f(x) + µ||c(x)||1 (2.7)
In this implementation I have used Equation 2.7 where it is assumed all inequalities
are treated as equalities with a slack variable that is not taken into account by the
Algorithm 4 Line Search Sequential Quadratic Programming for Nonlinear Programs
1: procedure LineSearchSQP Solver(x0, λ0, f, c, I, E)
2: µk ← 0
3: ρ ← 0.7
4: η ← 0.3
5: xk ← x0
6: λk ← λ0
7: k ← 1
8: while !checkConvergence(f, c, I, E, xk, λk) do
9: k ← k + 1
10: (p_k, λ̂_k) ← arg min_p  pᵀ∇²L p + pᵀ∇f_k
        s.t.  ∇c_i(x_k)p + c_i(x_k) = 0,  i ∈ E
              ∇c_i(x_k)p + c_i(x_k) ≤ 0,  i ∈ I
11: p_λ ← λ̂_k − λ_k
12: µ ← (∇f_k p_k + ½ p_kᵀ∇²L p_k) / ((1 − ρ)||c(x_k)||₁)
13: if µk < µ then
14: µk ← µ
15: α ← 1
16: while φ₁(x_k + αp_k; µ_k) > φ₁(x_k; µ_k) + ηαDφ₁(x_k; µ_k) do
17: α ← ρα
18: if α < 10ρ then
19: FAILURE TO CONVERGE
20: xk+1 ← xk + αpk
21: λk+1 ← λk + αpλ
return (xk, λk)
merit function. Although this function is not differentiable, it does have a directional
derivative.
D_{p_k} φ₁(x; µ) := ∇f(x)p_k − µ||c(x)||₁   (2.8)
From inspection it is clear that µ is a multiplier on the violation of the constraints.
Clearly, it is being used to balance the need to optimize f with the need to avoid
violating the constraints.
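The merit function of Equation 2.7, its directional derivative (Equation 2.8), and the backtracking loop of Algorithm 4 can be sketched for a generic problem; the functions f, grad_f, and c below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def phi1(f, c, x, mu):
    """l1 merit function of Equation 2.7."""
    return f(x) + mu * np.sum(np.abs(c(x)))

def d_phi1(grad_f, c, x, p, mu):
    """Directional derivative of phi1 along p (Equation 2.8)."""
    return grad_f(x) @ p - mu * np.sum(np.abs(c(x)))

def line_search(f, grad_f, c, x, p, mu, rho=0.7, eta=0.3, alpha_min=1e-8):
    """Backtracking line search with the sufficient-decrease test above."""
    alpha = 1.0
    while phi1(f, c, x + alpha * p, mu) > \
          phi1(f, c, x, mu) + eta * alpha * d_phi1(grad_f, c, x, p, mu):
        alpha *= rho
        if alpha < alpha_min:
            raise RuntimeError("line search failed to make progress")
    return alpha

# Toy problem: f = x^2 with one equality constraint c(x) = x - 1.
f = lambda x: float(x[0] ** 2)
grad_f = lambda x: np.array([2.0 * x[0]])
c = lambda x: np.array([x[0] - 1.0])
alpha = line_search(f, grad_f, c, np.array([3.0]), np.array([-2.0]), mu=1.0)
print(alpha)   # 1.0: the full step already gives sufficient decrease
```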
2.3.3 Checking for Convergence
The function checkConvergence must check that the current solution is feasible and
that it satisfies the KKT conditions for optimality. The conditions can be divided
Algorithm 5 Procedure for checking convergence of the algorithm
1: procedure CheckConvergence(f, c, I, E, x_k, λ_k)
return ∀i ∈ E  ||c_i(x)||∞ = 0 ∧
       ∀i ∈ I  c_i(x) ≤ 0 ∧
       ∇L(x) = 0 ∧
       ∀i ∈ I ∪ E  λ_i ≥ 0
into four clauses of a single conjunction, shown in Algorithm 5. The first and second
clauses ensure the feasibility of the solution. The third clause ensures the stationarity
of the Lagrangian. Finally, the fourth clause ensures the dual feasibility of the KKT
system.
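A numerical version of these clauses for the equality constrained case can be sketched as follows; the tolerance in place of exact zeros and the example problem (Equation 3.3 at its constrained minimum) are assumptions for illustration.

```python
import numpy as np

def check_convergence(grad_f, c, jac_c, x, lam, tol=1e-8):
    """Clauses of Algorithm 5 for equality constraints, up to a tolerance."""
    feasible = np.max(np.abs(c(x))) < tol                            # primal feasibility
    stationary = np.linalg.norm(grad_f(x) - jac_c(x).T @ lam) < tol  # grad L = 0
    dual_ok = np.all(lam >= -tol)                                    # dual feasibility
    return bool(feasible and stationary and dual_ok)

# Equation 3.3 at (1, 1) with lambda = 0: feasible and stationary.
grad_f = lambda v: np.array([4.0 * (v[0] - 1) ** 3, 4.0 * (v[1] - 1) ** 3])
c = lambda v: np.array([v[1] - (v[0] - 1) ** 3 - 1.0])
jac_c = lambda v: np.array([[-3.0 * (v[0] - 1) ** 2, 1.0]])
converged = check_convergence(grad_f, c, jac_c,
                              np.array([1.0, 1.0]), np.array([0.0]))
print(converged)   # True
```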
2.4 Factor Graphs
There are two factor graphs that are used in the process of solving the nonlinear
program: The factor graph used to represent the quadratic problem solved at each
iteration and the dual graph used to determine whether the KKT conditions have
been satisfied.
Figure 2.5: This factor graph is used to represent the linearized quadratic program
solved at each iteration. The black factor represents the quadratic cost function:
pᵀ∇²L p + pᵀ∇f. The green factors represent the error on the linearized equality
constraints: ||∇c_i(x_k)p + c_i(x_k)||²_Σ, i ∈ E. The red factors represent the error on
the inequality constraints: ||max(0, ∇c_i(x_k)p + c_i(x_k))||²_Σ, i ∈ I.
The first factor graph is used to represent the quadratic program solved at each
iteration. It is shown in figure 2.5. Notice that there is one green factor for each
equality constraint and one red factor for each inequality constraint. This graph is
then given to the quadratic solver.
The dual graph is the same as the one seen in figure 2.3. Both the variables and
the factors hold the same meaning although the Lagrangian is now nonlinear. The
main difference comes from how the factor is used. Whereas before the factor was
optimized to obtain the λ values in QP and LP, during SQP the factor graph is given
the dual variables in order to determine whether the ∇L = 0 necessary optimality
condition is satisfied.
2.4.1 Limitations
Unlike previous iterations, this implementation requires a feasible starting point. The
sequential quadratic programming solver provided can only solve problems where the
Hessian is positive semi-definite. A problem like those used in the previous solvers,
where a slack value is used to initialize the program, is almost guaranteed to have
a negative-definite matrix. That is because the Hessian of the sub-problem is the
Hessian of the Lagrangian of the greater problem: ∇²L = ∇²f − λ∇²c. Unless the
constraints are linear, the Hessian of the Lagrangian of Equation 2.9 is guaranteed to
be negative if the Hessian of the constraints is positive-definite.
min_x  s
s.t.  c_i(x) − s = 0,  i ∈ E
      c_i(x) − s ≤ 0,  i ∈ I
(2.9)
In addition to this, the linear and quadratic approximations of the nonlinear
functions become increasingly worse as the function grows faster. The algorithm thus
will fail to converge on very fast-changing functions such as eˣ or sin(x), but converges
on low degree polynomials.
CHAPTER 3
RESULTS
3.1 Overview
This section will show how GTSAM goes about solving a set of constrained opti-
mization problems. Both the quadratic and linear programs presented will have an
arbitrary starting point (2, 20) and will share the same set of inequality constraints.
Figure 3.1: This factor graph can be used to solve both the linear and quadratic
programs solved in Equation 3.1 and Equation 3.2 respectively. The blue factor can
represent the objective function, the red factors represent the binary constraints on
the variables and the green factors represent unary constraints on the variables.
Figure 3.1 shows the factor graph that can be used to represent both the linear
and quadratic programs solved in this section. The green and red factors are used to
represent the constraints on the variables. The blue factor can represent either the
linear or quadratic objective functions. Notice that this factor is not solved directly.
Instead, different subsets of the factors are selected in each iteration as part of the
active set which eventually converges to the solution of the problem.
3.2 GTSAM Solving a Linear Program
min_x  xᵀ[1  1]ᵀ
s.t.  x ≥ 0
      y ≥ 0
      y ≤ 20
      y ≤ −x + 22
      y ≥ 0.5x − 3
(3.1)
The linear program presented in Equation 3.1 can have its constraints represented
by figure 3.2. Clearly, this gives us a convex feasible region. Furthermore, upon visual
inspection, it is obvious from figure 3.3 that the minimum inside this feasible region
lies at the origin. A good optimization algorithm, however, must be able to converge
from distant locations. To demonstrate this, the algorithm is initialized to (2, 20).
Although this happens to be a vertex between two constraints, it does not need to be.
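The visual claim that the minimum lies at the origin can be brute-force checked by enumerating the vertices of the polytope of Equation 3.1 and evaluating the cost x + y at each; this enumeration is a verification aid, not part of the solver.

```python
import numpy as np
from itertools import combinations

# All constraints of Equation 3.1 rewritten as A x <= b.
A = np.array([[-1.0, 0.0],   # x >= 0
              [0.0, -1.0],   # y >= 0
              [0.0, 1.0],    # y <= 20
              [1.0, 1.0],    # y <= -x + 22
              [0.5, -1.0]])  # y >= 0.5 x - 3
b = np.array([0.0, 0.0, 20.0, 22.0, 3.0])

vertices = []
for i, j in combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                         # parallel constraints, no vertex
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):        # keep only feasible intersections
        vertices.append(v)

best = min(vertices, key=lambda v: v[0] + v[1])
print(best)   # the origin, as found by the active set solver
```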
Upon initialization, Algorithm 1 must determine the active set for the given it-
erate. Figure 3.4 shows the state of the program upon determining that subset of
the inequalities. Since the iterate is at the intersection of two inequalities, further
optimization should not yield improvements (at least not in two dimensions). Thus,
the next iteration must remove one of the active constraints. For that purpose the
dual factor graph shown in figure 3.5 is built.
The dual factor graph is used as a way to solve ∇_x L(x, λ) = 0 to obtain the
Lagrange multipliers for the constraints in the active set. λ₄ corresponds to the
constraint y ≤ −x + 22 while λ₃ corresponds to y ≤ 20. Optimizing this factor graph
results in (λ₄, λ₃) = (1, 0), which means that y ≤ −x + 22, having the maximum
Figure 3.2: This is a graph of the feasible region for the optimization problem shown
in Equation 3.1.
Figure 3.3: This is a graph of the feasible region with the cost function overlaid for
the optimization problem shown in Equation 3.1.
Lagrange multiplier of the two, should be removed from the active set. To understand
the intuition behind this, inspect figure 3.4. You can see that if you projected the
gradient of the cost function onto the normal vector of both constraints, the constraint
removed would have the longest projection.
The algorithm begins the second iteration with only constraint y ≤ 20 in W and
thus the iterate must move. The optimization of WG yields x_k = (1, 20). Notice that
this is not on a vertex of the polytope and isn't actually the next iterate. Instead,
the vector p_k = x_k − x_{k−1} = (−1, 0) defines the direction in which to search for
the next iterate. Algorithm 3 returns an α of 2. Notice that this α can take values
greater than one to ensure every iterate is at a vertex of the polytope by the end of
the iteration.
Finally, the variable x_k can be assigned the final value of the iterate x_{k−1} + αp_k =
(0, 20). This means the iterate has come into contact with constraint x ≥ 0, which
thus has to be added to the active set. This results in the state shown in figure 3.6.
Since there are two intersecting constraints in the working set, further optimization
Figure 3.4: The starting iterate of the linear program in Equation 3.1. It starts at
coordinates (2, 20), marked with a red circle. The inequalities in the active set are
highlighted in red.
Figure 3.5: When solving Equation 3.1, this is the dual graph solved in the first
iteration.
Figure 3.6: This is the second iteration of the Active Set Solver on Equation 3.1. The
current iterate moved from the dashed blue circle at (2, 20) to the location of the red
circle at (0, 20). The active set is highlighted in red.
Figure 3.7: This is the dual graph solved by the Active Set Solver when solving
Equation 3.1 on the third iteration.
of WG will not change the iterate. Instead, the factor graph shown in figure 3.7 is
used to choose a constraint to deactivate. From inspection of the graph, one can see
that only constraint y ≤ 20 is stopping the optimization from following the gradient
of the cost function. As expected, it has a dual variable λ3 of 1 which is greater
than λ2 with a value of −1 for x ≥ 0. Thus the constraint from W with the greatest
lambda is removed and the loop continues.
Figure 3.8: On the fourth iteration of solving Equation 3.1, the active set solver moves
the current iterate from the location highlighted by the dashed blue circle at (0, 20)
to the red circle at (0, 0). The active constraints at the new iterate are highlighted
in red.
Figure 3.9: This is the dual graph solved during the fifth and final iteration of the
active set solver when processing Equation 3.1.
In the fourth iteration of the active set solver, the optimization of the working graph
yields x_k = (0, 19). Since it is different from the current iterate, a direction p_k =
(0, −1) is defined. With an α of 20, the iterate arrives at the origin: the minimum
for this problem. Still, in this iteration x_k has collided with the constraint y ≥ 0,
which thus must be added to the working set.
In the final iteration of the algorithm, it checks the Lagrange multipliers of the
two constraints active at the origin by creating the dual graph shown in figure 3.9.
Both of the multipliers are negative, which passes the convergence condition and thus
terminates the algorithm.
3.3 GTSAM Solving a Quadratic Program
min_x  xᵀ[1 0; 0 1]x − xᵀ[10  10]ᵀ + 50
s.t.  x ≥ 0
      y ≥ 0
      y ≤ 20
      y ≤ −x + 22
      y ≥ 0.5x − 3
(3.2)
The quadratic solver concludes its initialization by determining the active set W
highlighted in figure 3.10. As expected, the optimization on the working graph WG
will yield no improvement on the current iterate. The algorithm then builds the dual
graph shown in figure 3.11, which solves for the Lagrange multipliers of the active
constraints. Since the constraints are linear, the process of building and solving this
dual graph is exactly the same as in the linear programming version. Constraints
y ≤ 20 and y ≤ −x + 22 have Lagrange multipliers λ₃ = 36 and λ₄ = −6 respectively.
Thus, the constraint with the largest Lagrange multiplier in W has to be removed.
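These multipliers can be checked by hand: at (2, 20) the gradient of x² + y² − 10x − 10y + 50 is (2x − 10, 2y − 10) = (−6, 30), and solving ∇f = λ₃n₃ + λ₄n₄ for the active constraint normals n₃ = (0, 1) and n₄ = (1, 1) recovers the reported values. A small verification sketch:

```python
import numpy as np

grad_f = np.array([-6.0, 30.0])     # gradient of the cost at (2, 20)
N = np.array([[0.0, 1.0],           # normal of y <= 20
              [1.0, 1.0]])          # normal of x + y <= 22
lam = np.linalg.solve(N.T, grad_f)  # columns of N.T are the normals
print(lam)   # (36, -6): lambda_3 = 36, lambda_4 = -6, matching the text
```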
In the second iteration, though, the quadratic programming solver starts exhibiting
its unique behavior. The solution of the working graph suggests a new iterate at
xk = (11, 11) as shown on figure 3.12. Clearly there are no constraints blocking the
advance of this solution so with α = 1 the step-length is preserved and the loop
proceeds to the next iteration.
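The iterate (11, 11) is what an equality-constrained minimization of the quadratic over the single active constraint x + y = 22 produces; the KKT system below is a dense stand-in for the working-graph optimization, used here only to verify the number.

```python
import numpy as np

# Minimize x^2 + y^2 - 10x - 10y + 50 subject to x + y = 22:
# stationarity H x + A^T nu = g together with A x = b.
H = 2.0 * np.eye(2)                  # Hessian of the quadratic cost
g = np.array([10.0, 10.0])           # so the gradient is H x - g
A = np.array([[1.0, 1.0]])
b = np.array([22.0])
KKT = np.block([[H, A.T],
                [A, np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.concatenate([g, b]))
print(sol[:2])   # (11, 11), the iterate shown in figure 3.12
```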
The only way to continue advancing is to remove the last remaining constraint.
The dual graph seen in figure 3.13 checks for a positive multiplier (λ4 = 12) to be
removed from W.
Figure 3.10: This figure shows the initial iterate of the active set solver at (2, 20)
and the active set highlighted in red when solving Equation 3.2.
Figure 3.11: This is the dual graph solved during the first iteration of the active set
solver when processing Equation 3.2.
Figure 3.12: When solving Equation 3.2, the active set solver moves the iterate to
(11, 11), circled in red. The active set is highlighted in red. The previous iterate is
circled with blue dashes at (2, 20).
Figure 3.13: This shows the dual graph solved by the active set solver in the third
iteration when solving Equation 3.2.
In the fourth iteration of the algorithm the iterate is translated to the center of the
quadratic at (5, 5) where it has reached the minimum. In the fifth and final iteration
the algorithm will converge because there are no remaining constraints in the active
set which is a terminating condition.
Figure 3.14: In the fourth iteration of solving Equation 3.2, the iterate has moved
from the blue dashed circle at (11, 11) to (5, 5) indicated by the red circle. There are
no constraints in the active set.
3.4 Solving Large Quadratic Programs
As part of testing the algorithm, problems from the Maros–Mészáros Convex
Quadratic Programming Test Problem Set were used [8]. Figure 3.15 and figure 3.16
are the working graph and dual graph, respectively, solved during the 5th iteration of
the algorithm while it solves the problem DUAL2. This problem has 96 variables, one
equality constraint, and 192 linear inequality constraints.
Currently, GTSAM actively uses 13 problems from the data set as part of the unit
tests for this tool. It also includes 4 disabled unit tests with larger problems that
may be used to profile much larger problems.
Figure 3.15: This factor graph represents the working graph solved in the 5th iteration
of the algorithm while solving the DUAL2 problem of the Maros–Mészáros Problem
Set.
Figure 3.16: This factor graph represents the dual graph solved in the 5th iteration
of the algorithm while solving the DUAL2 problem of the Maros–Mészáros Problem
Set. Each variable node stands for the Lagrange multiplier of a constraint.
3.4.1 GTSAM Solving an Equality Constrained Program
min_x  (x − 1)⁴ + (y − 1)⁴
s.t.  y = (x − 1)³ + 1
(3.3)
For this exercise I assume an initial location x₀ = (3, 9) with corresponding dual
λ₀ = 0. The problem is visually represented in figure 3.17. Notice that the initial
location is far away from the minimum. From both the image and the representation
of the problem it can be deduced that the minimum is located at (1, 1).
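A quick sanity check on Equation 3.3 as printed: the cost (x − 1)⁴ + (y − 1)⁴ is bounded below by zero, and the point (1, 1) both achieves zero cost and satisfies the constraint y = (x − 1)³ + 1, so it is the global constrained minimum of the stated problem.

```python
# Verify that (1, 1) is feasible and attains the cost's lower bound of zero.
cost = lambda x, y: (x - 1.0) ** 4 + (y - 1.0) ** 4
constraint = lambda x, y: y - (x - 1.0) ** 3 - 1.0   # zero when satisfied
print(cost(1.0, 1.0), constraint(1.0, 1.0))   # 0.0 0.0
```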
Figure 3.17: This is a visual representation of the problem in Equation 3.3. The
background's shade is used to show the cost function while the constraint is shown as
a blue line overlaid on top of it.
Figure 3.18: This is a graph of the iterate as it goes from its starting position to the
solution of the program. Notice how the steps get shorter as the iterate approaches
the solution.
You can see in figure 3.18 how the algorithm converges through its 13 iterations.
Notice that it stays very close to, or exactly on, the constraint through most of the
process.
In the last few iterations the algorithm overshoots slightly but quickly reaches the
minimum.
At every iteration a linearized factor graph must be constructed to be solved by
the QP solver. The objective function of this subproblem can be represented with
the factor graph shown in figure 3.19. In that factor graph, the cost function is
represented by the black factor while each equality constraint would be a green
factor.
Figure 3.19: This is a typical factor graph used to represent the cost function of
the QP subproblem at each iteration. The black factor represents the quadratic cost
pᵀ∇²L_k p + pᵀ∇f_k while the green factor represents the linearization of the error
on the constraint.
Figure 3.20: During a typical iteration, the algorithm will construct this dual graph
used to represent the KKT stationarity condition. The algorithm only uses this graph
to calculate the error on the KKT condition since it depends on the QP and the
iteration process to compute the actual values of λ.
Finally, during the convergence check the SQP solver has to build a dual graph
like the one shown in figure 3.20. In this case the values of λ are given and the factors
are used to determine the error on the KKT stationarity condition.
CHAPTER 4
CONCLUSION
As it stands, I have presented two implementations of the active set method that are
capable of solving linear and quadratic optimization problems in GTSAM. Although
both of them have exponential worst-case complexity, they work well in practice [3]. This is
especially true in applications where you have a good initial guess. More importantly,
this demonstrates that GTSAM can also be used as a general-purpose constrained
optimization framework.
Such a framework should be able to solve nonlinear constrained optimization prob-
lems. To this end I have presented a Sequential Quadratic Programming solver based
on a Line Search method that can solve some nonlinear constrained programs.
Constrained nonlinear optimization is the most general form of optimization pro-
gram which can be used to solve robotics problems such as Model Predictive Control
and Trajectory Optimization [9]. Although the current implementation is limited in
the scope of problems it can solve, I have demonstrated the potential of GTSAM and
factor graphs as a constrained optimization framework to both solve and understand
these types of problems.
REFERENCES
[1] F. Dellaert, “Factor graphs and GTSAM: A hands-on introduction,” 2012.
[2] D.-N. Ta, “A factor graph approach to estimation and model predictive control on unmanned aerial vehicles,” in International Conference on Unmanned Aircraft Systems (ICUAS), 2014.
[3] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New York: Springer, 2006.
[4] D. Hadfield-Menell, C. Lin, R. Chitnis, S. Russell, and P. Abbeel, “Sequential quadratic programming for task plan optimization,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2016.
[5] B. Houska, H. J. Ferreau, and M. Diehl, “An auto-generated real-time iteration algorithm for nonlinear MPC in the microsecond range,” Automatica, vol. 47, no. 10, pp. 2279–2285, 2011.
[6] A. Cunningham, B. Paluri, and F. Dellaert, “Fully distributed SLAM using constrained factor graphs,” in Proceedings of the IEEE Conference on Intelligent Robots and Systems, 2010.
[7] P. E. Gill, W. Murray, and M. A. Saunders, “SNOPT: An SQP algorithm for large-scale constrained optimization,” SIAM Rev., vol. 47, pp. 99–131, 2005.
[8] I. Maros and C. Mészáros, “A repository of convex quadratic programming problems,” Optimization Methods and Software, vol. 11, no. 1–4, pp. 671–681, 1999.
[9] J. Schulman, J. Ho, A. X. Lee, I. Awwal, H. Bradlow, and P. Abbeel, “Finding locally optimal, collision-free trajectories with sequential convex optimization,” in Robotics: Science and Systems, vol. 9, 2013, pp. 1–10.