The document discusses approximation algorithms for solving hard combinatorial optimization problems. It defines optimization problems and covers NP-hard problems like the clique, independent set, vertex cover, and traveling salesman problems. Approaches for solving NP-hard problems include exact algorithms, approximation algorithms that provide guaranteed good solutions, and heuristics without guarantees. Approximation algorithms aim to settle for good enough solutions rather than optimal ones.
Presentation of DAA on approximation algorithm and vertex cover problem, by Sumit Gyawali
This document presents an approximation algorithm for the vertex cover problem. It first defines approximation algorithms as techniques for dealing with NP-complete optimization problems that provide near-optimal solutions in polynomial time. It then defines the vertex cover problem as finding a subset of vertices that covers all graph edges, and provides an example 2-approximation algorithm that iteratively adds vertices covering arbitrary remaining edges to the cover set.
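The 2-approximation described in that summary is short enough to sketch directly. The following Python sketch is illustrative (the function name and edge-list representation are assumptions, not taken from the document): it picks an arbitrary uncovered edge and adds both endpoints to the cover.

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for vertex cover: repeatedly pick an
    arbitrary edge not yet covered and add BOTH of its endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path graph 1-2-3-4 yields cover {1, 2, 3, 4};
# the optimum {2, 3} has size 2, so the ratio 4/2 = 2 is met.
cover = vertex_cover_2approx([(1, 2), (2, 3), (3, 4)])
```

The factor-2 guarantee follows because the picked edges share no endpoints, so any cover (including the optimal one) must contain at least one vertex per picked edge, while the algorithm takes two.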
The document discusses approximation algorithms for NP-complete problems. It introduces the idea of finding near-optimal solutions in polynomial time for problems where optimal solutions cannot be found efficiently. It provides examples of the vertex cover problem and set cover problem, describing greedy approximation algorithms that provide performance guarantees for finding near-optimal solutions for these problems. The document also discusses some open questions around whether these approximation ratios can be improved.
This document discusses NP-complete problems and their properties. Some key points:
- The best known algorithms for NP-complete problems have exponential upper bounds on runtime, but only polynomial lower bounds have been proven, so these problems appear intractable even though their intractability cannot currently be proven.
- NP-complete problems are reducible to each other in polynomial time. Solving one would solve all NP-complete problems.
- NP refers to problems that can be verified in polynomial time. P refers to problems that can be solved in polynomial time.
- A problem is NP-complete if it is in NP and all other NP problems can be reduced to it in polynomial time. Proving a problem is NP-complete involves showing both conditions: membership in NP, and a polynomial-time reduction to it from a known NP-complete problem.
1) NP-Completeness refers to problems that are in NP (can be verified in polynomial time) and are as hard as any problem in NP.
2) The first problem proven to be NP-Complete was the Circuit Satisfiability problem, which asks whether there exists an input assignment that makes a Boolean circuit output 1.
3) To prove a problem P is NP-Complete, it must be shown that P is in NP and that any problem in NP can be reduced to P in polynomial time. This establishes P as at least as hard as any problem in NP.
This document discusses propositional logic and knowledge representation. It introduces propositional logic as the simplest form of logic that uses symbols to represent facts that can then be joined by logical connectives like AND and OR. Truth tables are presented as a way to determine the truth value of propositions connected by these logical operators. The document also discusses concepts like models of formulas, satisfiable and valid formulas, and rules of inference like modus ponens and disjunctive syllogism that allow deducing new facts from initial propositions. Examples are provided to illustrate each concept.
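The truth-table method mentioned above mechanizes directly. As a minimal Python sketch (the helper name `implies` is an assumption), the following checks that modus ponens corresponds to a valid formula, i.e., one that is true under every assignment:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Modus ponens as a single formula: ((p -> q) and p) -> q.
# A formula is valid (a tautology) if it holds in every model.
valid = all(implies(implies(p, q) and p, q)
            for p, q in product([False, True], repeat=2))
print(valid)  # True: modus ponens is a valid rule of inference
```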
This document discusses using the Branch and Bound technique to solve the traveling salesman problem and water jug problem. Branch and Bound is a method for solving discrete and combinatorial optimization problems by breaking the problem into smaller subsets, calculating bounds on the objective function, and discarding subsets that cannot produce better solutions than the best found so far. The document provides examples of applying Branch and Bound to find the optimal path between states for the water jug problem and the shortest route between cities for the traveling salesman problem.
The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices in a weighted graph. It works by computing the shortest path between every pair of vertices through dynamic programming. The algorithm proceeds in steps, where in each step it considers all vertices as potential intermediate vertices to find even shorter paths between vertex pairs. This is done by comparing the newly computed shortest paths to the values stored in the previous matrix.
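The stepwise scheme described above, where each vertex in turn is allowed as an intermediate, is the classic triple loop. A minimal Python sketch (matrix representation assumed, with `INF` marking absent edges):

```python
INF = float('inf')

def floyd_warshall(dist):
    """All-pairs shortest paths. dist is an n x n matrix of edge weights
    (0 on the diagonal, INF where there is no edge). Returns a new matrix."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):              # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Each pass compares the best path found so far, `d[i][j]`, against the detour through the newly admitted intermediate vertex `k`, exactly as the summary describes.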
Greedy algorithms work by making locally optimal choices at each step to arrive at a globally optimal solution. They require that the problem exhibit the greedy-choice property and optimal substructure. Problems that can be solved with greedy algorithms include the fractional knapsack problem, minimum spanning trees, and activity selection. The fractional knapsack problem is solved greedily by sorting items by value-to-weight ratio and filling the knapsack completely. The 0/1 knapsack problem differs in that items are indivisible.
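The sort-by-ratio strategy for the fractional knapsack can be sketched in a few lines of Python (the function name and `(value, weight)` tuple format are illustrative assumptions):

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack. items is a list of (value, weight)
    pairs; take items in decreasing value/weight order, splitting the
    last item taken if it does not fit entirely."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or just a fraction
        total += value * take / weight
        capacity -= take
    return total
```

On the classic instance of items (60, 10), (100, 20), (120, 30) with capacity 50, the greedy takes the first two whole and two-thirds of the third, for a value of 240.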
Fuzzy logic was introduced by Lotfi Zadeh in 1965 to address problems with classical logic being too precise. Fuzzy logic allows for truth values between 0 and 1 rather than binary true/false. It involves fuzzy sets, membership functions, linguistic variables, and fuzzy rules. Fuzzy logic can be applied to knowledge representation and inference using concepts like fuzzy predicates, relations, modifiers and quantifiers. It has various applications including household appliances, animation, industrial automation, and more.
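The membership functions mentioned above map a crisp input to a degree of truth in [0, 1]. A minimal Python sketch of the common triangular shape (the function name and parameters are illustrative, not from the document):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], rising linearly
    from a to a peak of 1 at b, then falling linearly back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

For instance, with a "warm" set defined by `triangular(x, 15, 22, 30)`, a temperature of 22 is fully warm (membership 1.0) while 26 is only partially warm.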
The document discusses approximation algorithms and genetic algorithms for solving optimization problems like the traveling salesman problem (TSP) and vertex cover problem. It provides examples of approximation algorithms for these NP-hard problems, including algorithms that find near-optimal solutions within polynomial time. Genetic algorithms are also presented as an approach to solve TSP and other problems by encoding potential solutions and applying genetic operators like crossover and mutation.
The document discusses approximation algorithms for NP-hard problems. It begins with an introduction that defines approximation algorithms as algorithms that find feasible but not necessarily optimal solutions to optimization problems in polynomial time.
It then discusses different types of approximation schemes - absolute approximation where the approximate solution is within a constant of optimal, epsilon (ε)-approximation where the approximate solution is within a factor of ε times optimal, and polynomial time approximation schemes that run in polynomial time for any fixed ε.
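Writing $\hat{F}(I)$ for the value of the approximate solution and $F^*(I)$ for the optimum on instance $I$, the three guarantees just described can be stated as:

```latex
\text{absolute: } \bigl|\hat{F}(I) - F^*(I)\bigr| \le k \ \text{for a fixed constant } k,
\qquad
\varepsilon\text{-approximation: } \frac{\bigl|\hat{F}(I) - F^*(I)\bigr|}{F^*(I)} \le \varepsilon ,
```

and a polynomial-time approximation scheme supplies, for every fixed $\varepsilon > 0$, an $\varepsilon$-approximation whose running time is polynomial in the input size.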
The document provides examples of problems that admit absolute approximation algorithms, such as planar graph coloring and maximum programs stored on disks. It also discusses Graham's theorem, which proves that the largest-processing-time scheduling rule generates schedules whose finish time is within one-third of optimal, i.e., at most 4/3 times the optimal finish time.
This document discusses P, NP and NP-complete problems. It begins by introducing tractable and intractable problems, and defines problems that can be solved in polynomial time as tractable, while problems that cannot are intractable. It then discusses the classes P and NP, with P containing problems that can be solved deterministically in polynomial time, and NP containing problems that can be solved non-deterministically in polynomial time. The document concludes by defining NP-complete problems as those in NP that are as hard as any other problem in the class, in that any NP problem can be reduced to an NP-complete problem in polynomial time.
NP completeness. Classes P and NP are two frequently studied classes of problems in computer science. Class P is the set of all problems that can be solved by a deterministic Turing machine in polynomial time.
The document discusses the Travelling Salesman Problem (TSP). TSP aims to find the shortest possible route for a salesman to visit each city in a list only once and return to the origin city. It describes the problem as finding the optimal or least cost Hamiltonian circuit in a graph where cities are nodes and distances between cities are edge costs. The document provides an example problem with 5 cities, calculates possible routes and costs, and illustrates the branch and bound algorithm to solve TSP by systematically eliminating suboptimal routes until the optimal route is found.
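The "systematically eliminating suboptimal routes" idea can be sketched as a depth-first branch and bound in Python (the function name and cost-matrix format are illustrative assumptions; real branch and bound uses tighter lower bounds than this simple running-cost prune):

```python
def tsp_branch_and_bound(cost):
    """Branch and bound for TSP on a cost matrix. Partial tours starting
    at city 0 are extended depth-first; any branch whose accumulated cost
    already meets or exceeds the best complete tour found is pruned."""
    n = len(cost)
    best = [float('inf')]

    def dfs(city, visited, so_far):
        if so_far >= best[0]:
            return                         # bound: cannot beat incumbent
        if len(visited) == n:              # close the tour back to city 0
            total = so_far + cost[city][0]
            if total < best[0]:
                best[0] = total
            return
        for nxt in range(n):
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, so_far + cost[city][nxt])

    dfs(0, {0}, 0)
    return best[0]
```

On the standard 4-city example with cost matrix [[0,10,15,20],[10,0,35,25],[15,35,0,30],[20,25,30,0]], the optimal tour cost is 80.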
The document discusses various concepts in predicate logic including:
1. Universal and existential quantification allow representing statements like "for all" or "there exists".
2. Syntax of first-order logic includes constants, variables, functions, predicates, and quantifiers.
3. A predicate is satisfiable if true for some values, valid if true for all values, and unsatisfiable if false for all values.
4. Negating quantifiers flips the quantifier and negates the predicate. Free variables can be substituted while bound variables cannot. Restrictions filter domains.
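The quantifier-negation rule in point 4 is the standard duality:

```latex
\neg \forall x\, P(x) \;\equiv\; \exists x\, \neg P(x),
\qquad
\neg \exists x\, P(x) \;\equiv\; \forall x\, \neg P(x).
```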
- NP-hard problems are at least as hard as problems in NP. A problem is NP-hard if any problem in NP can be reduced to it in polynomial time.
- Cook's theorem states that if the SAT problem can be solved in polynomial time, then every problem in NP can be solved in polynomial time.
- Vertex cover problem is proven to be NP-hard by showing that independent set problem reduces to it in polynomial time, meaning there is a polynomial time algorithm that converts any instance of independent set into an instance of vertex cover.
- Therefore, if there were a polynomial-time algorithm for vertex cover, it could be used to solve independent set in polynomial time. Since independent set is NP-complete, no such algorithm is expected to exist unless P = NP.
P, NP, NP-Complete, and NP-Hard
Reductionism in Algorithms
NP-Completeness and Cook's Theorem
NP-Complete and NP-Hard Problems
Travelling Salesman Problem (TSP)
Travelling Salesman Problem (TSP) - Approximation Algorithms
PRIMES is in P - (A hope for NP problems in P)
Millennium Problems
Conclusions
The document discusses the knapsack problem, which involves selecting a subset of items that fit within a knapsack of limited capacity to maximize the total value. There are two versions - the 0-1 knapsack problem where items can only be selected entirely or not at all, and the fractional knapsack problem where items can be partially selected. Solutions include brute force, greedy algorithms, and dynamic programming. Dynamic programming builds up the optimal solution by considering all sub-problems.
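The dynamic-programming solution for the 0-1 version builds a table of sub-problem optima, as the summary says. A compact one-dimensional Python sketch (names are illustrative):

```python
def knapsack_01(values, weights, capacity):
    """0-1 knapsack via dynamic programming. dp[w] is the best total
    value achievable with capacity w; iterating capacities in reverse
    ensures each item is taken at most once."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

On values (60, 100, 120) with weights (10, 20, 30) and capacity 50, the optimum is 220 (the two heaviest items), whereas the fractional greedy would report 240 by splitting an item.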
This document discusses algorithms for finding minimum and maximum elements in an array, including simultaneous minimum and maximum algorithms. It introduces dynamic programming as a technique for improving inefficient divide-and-conquer algorithms by storing results of subproblems to avoid recomputing them. Examples of dynamic programming include calculating the Fibonacci sequence and solving an assembly line scheduling problem to minimize total time.
This document discusses greedy algorithms and dynamic programming techniques for solving optimization problems. It covers the activity selection problem, which can be solved greedily by repeatedly selecting the compatible activity with the earliest finish time. It also discusses the knapsack problem and how the fractional version can be solved greedily while the 0-1 version requires dynamic programming, since it has optimal substructure but lacks the greedy-choice property. Dynamic programming builds up solutions by combining optimal solutions to overlapping subproblems.
This presentation discusses the knapsack problem and its two main versions: 0/1 and fractional. The 0/1 knapsack problem involves indivisible items that are either fully included or not included, and is solved using dynamic programming. The fractional knapsack problem allows items to be partially included, and is solved using a greedy algorithm. Examples are provided of solving each version using their respective algorithms. The time complexity of these algorithms is also presented. Real-world applications of the knapsack problem include cutting raw materials and selecting investments.
The document discusses various optimization problems that can be solved using the greedy method. It begins by explaining that the greedy method involves making locally optimal choices at each step that combine to produce a globally optimal solution. Several examples are then provided to illustrate problems that can and cannot be solved with the greedy method. These include shortest path problems, minimum spanning trees, activity-on-edge networks, and Huffman coding. Specific greedy algorithms like Kruskal's algorithm, Prim's algorithm, and Dijkstra's algorithm are also covered. The document concludes by noting that the greedy method can only be applied to solve a small number of optimization problems.
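Of the algorithms named above, Dijkstra's is a good illustration of the greedy method: it repeatedly commits to the unsettled vertex with the smallest tentative distance. A minimal Python sketch using a heap (adjacency-dict format assumed):

```python
import heapq

def dijkstra(adj, source):
    """Greedy single-source shortest paths for non-negative weights.
    adj maps each vertex to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                        # stale heap entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The greedy-choice property holds here precisely because edge weights are non-negative: once a vertex is popped with its final distance, no later path can improve it.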
Artificial intelligence and knowledge representation, by Sajan Sahu
The document discusses artificial intelligence and knowledge representation. It describes how computers can be made intelligent through speed of computation, filtering responses, using algorithms and neural networks. It also discusses knowledge representation techniques in AI like propositional logic, semantic networks, frames, predicate logic and nonmonotonic reasoning. The document provides examples and applications of AI like pattern recognition, robotics and natural language processing. It also discusses some fundamental problems of AI.
The document describes algorithms for solving geometric problems in computational geometry. It discusses algorithms for determining if line segments intersect in O(n log n) time using a sweep line approach. It also describes using the cross product to compare orientations of segments and determine if consecutive segments make a left or right turn.
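The cross-product orientation test mentioned above is only a few lines of Python (point tuples and function names are illustrative):

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o);
    its sign gives the turn direction at o when going o -> a -> b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def turn(p, q, r):
    """Classify the turn made at q when walking p -> q -> r."""
    c = cross(p, q, r)
    return "left" if c > 0 else "right" if c < 0 else "collinear"

print(turn((0, 0), (1, 0), (1, 1)))  # left: counter-clockwise turn
```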
1. Finite fields are algebraic structures that are both fields and finite sets. They have important applications in computer science, coding theory, and cryptography.
2. For each prime p and positive integer n, there exists a unique finite field of order p^n, denoted GF(p^n).
3. GF(p^n) contains a subfield of order p^m for each divisor m of n, and these are its only subfields.
This document provides an introduction to automata theory and finite automata. It defines an automaton as an abstract computing device that follows a predetermined sequence of operations automatically. A finite automaton has a finite number of states and can be deterministic or non-deterministic. The document outlines the formal definitions and representations of finite automata. It also discusses related concepts like alphabets, strings, languages, and the conversions between non-deterministic and deterministic finite automata. Methods for minimizing deterministic finite automata using Myhill-Nerode theorem and equivalence theorem are also introduced.
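A deterministic finite automaton of the kind described above can be simulated in a few lines of Python (the transition-dict encoding and state names are illustrative assumptions):

```python
def run_dfa(transitions, start, accepting, string):
    """Simulate a DFA. transitions maps (state, symbol) -> next state;
    the string is accepted iff the final state is in the accepting set."""
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

# Two-state DFA over {0, 1} accepting strings with an even number of 1s.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd',  ('odd', '1'): 'even'}
print(run_dfa(delta, 'even', {'even'}, '1011'))  # False: three 1s
```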
Propositional Resolution is a powerful rule of inference for Propositional Logic. Using Propositional Resolution (without axiom schemata or other rules of inference), it is possible to build a theorem prover that is sound and complete for all of Propositional Logic. What's more, the search space using Propositional Resolution is much smaller than for standard Propositional Logic.
Reoptimization techniques for solving hard problems, by Jhoirene Clemente
Unless P=NP, we cannot obtain a polynomial-time algorithm that solves hard combinatorial problems exactly. One practical approach to such problems is to relax the condition of always finding the optimal solution for an instance and settle for "good enough" solutions. Algorithms that are guaranteed to obtain a solution of a certain quality are called approximation algorithms. However, not all hard problems are approximable, i.e., for some problems no polynomial-time algorithm can guarantee the quality of the solution it produces.
In this lecture, we will present the concept of reoptimization. In this approach, given an instance I of some problem Π, an optimal solution OPT for Π in I, and a modified instance I' resulting from a local perturbation of I, we wish to use OPT in order to solve Π in I'. With this additional information, reoptimization may help to improve the approximability of the problem or the running time of the solution to it. In fact, we can obtain a polynomial-time approximation scheme (PTAS) for a reoptimization variant of a problem given that the unmodified problem is approximable.
Lecture 1 from https://irdta.eu/deeplearn/2022su/
Covers concepts from Part 1 of my new book, https://meyn.ece.ufl.edu/2021/08/01/control-systems-and-reinforcement-learning/
This document describes a regularized version of the simplex method for solving linear programming problems. It discusses interpreting the simplex method as a cutting-plane method for approximating a convex objective function when solving a special class of problems known as ball-fitting problems. A ball-fitting problem involves finding the largest ball that fits within a given polyhedron. The document proposes regularizing the simplex method by applying regularization in the dual space during the pricing mechanism. It presents Algorithm 2 as a regularized simplex method for solving ball-fitting problems and interpreting the results as approximating a convex objective function. Computational test results comparing the basic simplex method to the regularized version are also mentioned.
This document discusses optimization problems and their solutions. It begins by defining optimization problems as seeking to maximize or minimize a quantity given certain limits or constraints. Both deterministic and stochastic models are discussed. Examples of discrete optimization problems include the traveling salesman and shortest path problems. Solution methods mentioned include integer programming, network algorithms, dynamic programming, and approximation algorithms. The document then focuses on convex optimization problems, which can be solved efficiently. It discusses using tools like CVX for solving convex programs and the duality between primal and dual problems. Finally, it presents the collaborative resource allocation algorithm for solving non-convex optimization problems in a suboptimal way.
Introduction to Optimization revised.ppt - JahnaviGautam
The document provides an introduction to optimization problems. It defines optimization as involving an objective function to minimize or maximize, subject to constraints on variables. It categorizes problems as continuous or discrete and with or without objectives/constraints. Examples covered include shortest path problems, maximum flow problems, transportation problems, and task assignment problems. Algorithms for some problems are also mentioned.
The document discusses linear programming and the simplex method for solving linear programming problems. It begins with definitions of linear programming and its history. It then provides an example production planning problem that can be formulated as a linear programming problem. The document goes on to describe the standard form of a linear programming problem and terminology used. It explains how the simplex method works through iterative improvements to find the optimal solution. This is illustrated both geometrically and through an algebraic example solved using the simplex method.
The document discusses primal-dual algorithms as an approximation technique for optimization problems. It provides an overview and introduction to primal-dual algorithms. As an example, it describes how the minimum weighted vertex cover problem can be solved using a primal-dual strategy. Specifically, it formulates the problem as an integer linear program and linear program. Rather than fully solving the linear program, the primal-dual algorithm maintains feasible integer and dual solutions and iteratively derives more feasible solutions until the integer solution is feasible.
- The document discusses approximation algorithms for NP-complete problems. Approximation algorithms aim to find near-optimal solutions in polynomial time, rather than guaranteed optimal solutions.
- It provides examples of approximation algorithms for the vertex cover problem and traveling salesman problem (TSP). The greedy vertex cover algorithm is a 2-approximation algorithm, while the minimum spanning tree approach provides a TSP tour within a factor of 2 of optimal.
- Performance ratios and terminology used in analyzing approximation algorithms are defined, such as absolute, f(n)-approximation, and ε-approximation. The document also discusses approximation schemes.
The document discusses computational complexity problems that are solvable in polynomial time but for which no significantly faster algorithms are known. It presents several such problems from areas like graph algorithms, computational biology, and computational geometry. It then discusses recent work that aims to establish conditional lower bounds for the runtime of such problems by relating their hardness to standard conjectures like 3SUM, APSP, SETH, orthogonal vectors, and small universe hitting set. Fine-grained reductions are used to show relationships between problems. Overall, the document outlines an approach for proving conditional lower bounds for problems solvable in polynomial time based on reasonable complexity theoretic conjectures.
This document summarizes support vector machines (SVMs), a machine learning technique for classification and regression. SVMs find the optimal separating hyperplane that maximizes the margin between positive and negative examples in the training data. This is achieved by solving a convex optimization problem that minimizes a quadratic function under linear constraints. SVMs can perform non-linear classification by implicitly mapping inputs into a higher-dimensional feature space using kernel functions. They have applications in areas like text categorization due to their ability to handle high-dimensional sparse data.
The document discusses approximation algorithms for NP-hard optimization problems. It provides examples of approximation algorithms for problems like set cover, vertex cover, traveling salesman problem (TSP), and knapsack. For set cover, it shows that a greedy algorithm provides a (1+ln n)-approximation. For vertex cover and TSP, it describes 2-approximation algorithms. It also presents a fully polynomial-time approximation scheme (FPTAS) for knapsack that provides a solution within (1-eps) of optimal.
Molodtsov's Soft Set Theory and its Applications in Decision Making - inventionjournals
Molodtsov's soft set theory was originally proposed as a general mathematical tool for dealing with uncertainty. In this paper, we apply the theory of soft set to solve a decision making problem in terms of rough mathematics
1) The document discusses generalized inequality constraints in convex optimization problems, where the constraints are defined by proper cones instead of just non-negativity constraints.
2) It provides examples of conic forms of optimization problems including second-order cone programming, semidefinite programming, and moment problems.
3) Vector optimization problems aim to optimize multiple objectives simultaneously and the concepts of Pareto optimality and scalarization techniques for finding Pareto optimal solutions are introduced. Trade-off analysis is discussed for analyzing the trade-offs between objectives.
Skiena algorithm 2007 lecture19 introduction to np complete - zukun
1. The document introduces the concept of NP-completeness and discusses how it can be used to show that many problems that cannot be solved efficiently are essentially the same problem.
2. It describes how reductions can be used to show that if one problem can be transformed or reduced to another problem, then finding an efficient algorithm for one would imply an efficient algorithm for the other.
3. The traveling salesman problem and the satisfiability problem are used as examples to illustrate decision problems, instances, encodings, and reductions.
This document discusses combinatorial design problems and approaches to solving them. It introduces combinatorial problems and challenges like huge search spaces. Methods covered include modeling problems as constraint satisfaction problems (CSPs) and using constraint propagation to reduce the search space. Specific problems discussed include ternary Steiner systems, Hamming distance optimization, and modeling the Data Encryption Standard (DES) cipher as a SAT problem for cryptanalysis.
Introduction to Approximation Algorithms
1. Approximation Algorithms
Jhoirene B Clemente
Algorithms and Complexity Lab
Department of Computer Science
University of the Philippines Diliman
October 14, 2014
2. 50
Approximation
Algorithms
Jhoirene B Clemente
Optimization Problems
Hard Combinatorial
Optimization Problems
Clique
Independent Set Problem
Vertex Cover
Approaches in Solving
Hard Problems
Approximation
Algorithms
Vertex Cover
Traveling Salesman Problem
References
CS 397
October 14, 2014
Overview
1. Combinatorial Optimization Problems
2. NP-Hard Problems
2.1 Clique
2.2 Independent Set Problem
2.3 Vertex Cover
2.4 Traveling Salesman Problem
3. Approaches in Solving Hard Problems
4. Approximation Algorithms
5. Approximable Problems
6. Inapproximable Problems
Optimization Problems
[Papadimitriou and Steiglitz, 1998]
Definition (Instance of an Optimization Problem)
An instance of an optimization problem is a pair (F, c), where F is any set, the domain of feasible points; c is the cost function, a mapping
c : F → R
The problem is to find an f ∈ F for which
c(f) ≤ c(y) for all y ∈ F
Such a point f is called a globally optimal solution to the given instance, or, when no confusion can arise, simply an optimal solution.
Definition (Optimization Problem)
An optimization problem is a set of instances of an optimization
problem.
Optimization Problems
[Papadimitriou and Steiglitz, 1998]
Two categories:
1. with continuous variables, where we look for a set of real numbers or a function;
2. with discrete variables, which we call combinatorial, where we look for an object from a finite, or possibly countably infinite, set: typically an integer, set, permutation, or graph.
Combinatorial Optimization Problem
Definition (Combinatorial Optimization Problem [Papadimitriou and Steiglitz, 1998])
An optimization problem Π = (D, R, cost, goal) consists of
1. A set of valid instances D; let I ∈ D denote an input instance.
2. Each I ∈ D has a set of feasible solutions, R(I).
3. An objective function, cost, that assigns a nonnegative rational number to each pair (I, SOL), where I is an instance and SOL is a feasible solution to I.
4. Either a minimization or a maximization problem: goal ∈ {min, max}.
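To make the four-tuple Π = (D, R, cost, goal) concrete, the following minimal Python sketch encodes one Min Vertex Cover instance (the 4-cycle graph is a hypothetical example, not from the slides): a feasibility check plays the role of R(I), the subset size is cost, and goal = min is realized by exhaustive search.

```python
from itertools import combinations

def is_vertex_cover(edges, subset):
    """R(I): a vertex subset is feasible iff it touches every edge."""
    return all(u in subset or v in subset for u, v in edges)

def cost(subset):
    """Objective function: number of chosen vertices (nonnegative)."""
    return len(subset)

# One instance I in D: the 4-cycle a-b-c-d-a.
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]

# goal = min: enumerate R(I) and take the cheapest feasible solution.
feasible = [set(s) for r in range(len(vertices) + 1)
            for s in combinations(vertices, r)
            if is_vertex_cover(edges, set(s))]
opt = min(feasible, key=cost)
print(cost(opt))  # 2, e.g. {'a', 'c'}
```

Exhaustive enumeration is exponential in |V|; it only serves to pin down the definitions, not as a practical algorithm.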
NP-Hard Problems
1. Clique Problem
2. Independent Set Problem
3. Vertex Cover Problem
4. Traveling Salesman Problem
Max Clique Problem
Definition (Max Clique Problem)
Given a graph G = (V, E), find the largest clique.
Theorem
Max Clique is NP-hard [Garey and Johnson, 1979].
Proposition
The decision variant of MAX-SAT is NP-Complete
[Garey and Johnson, 1979].
Max Clique Reduction from SAT
Example
(a ∨ b ∨ c) ∧ (b ∨ ¬c ∨ ¬d) ∧ (¬a ∨ c ∨ d) ∧ (a ∨ ¬b ∨ ¬d)
Max Clique Reduction from SAT
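A minimal sketch of this reduction, using the textbook construction: one vertex per literal occurrence, with edges between non-complementary literals of different clauses; the formula is satisfiable iff the graph has a clique of size m, the number of clauses. The brute-force clique search is for illustration only.

```python
from itertools import combinations

# The example formula; a negated variable is written with a leading "-".
clauses = [("a", "b", "c"), ("b", "-c", "-d"),
           ("-a", "c", "d"), ("a", "-b", "-d")]

def complementary(l1, l2):
    return l1 == "-" + l2 or l2 == "-" + l1

# One vertex per literal occurrence; connect literals from different
# clauses that can be true at the same time.
vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
edges = {frozenset((u, v)) for u, v in combinations(vertices, 2)
         if u[0] != v[0] and not complementary(u[1], v[1])}

# Satisfiable iff there is a clique of size m = number of clauses.
m = len(clauses)
has_clique = any(all(frozenset((u, v)) in edges for u, v in combinations(c, 2))
                 for c in combinations(vertices, m))
print(has_clique)  # True: e.g. a = c = True, d = False satisfies every clause
```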
Independent Set
Definition (Max Independent Set Problem)
Given a graph G = (V, E), find the largest independent set.
Theorem
Independent Set Problem is NP-hard [Garey and Johnson, 1979].
Vertex Cover Problem
Definition (Min Vertex Cover Problem)
Given a graph G = (V, E), find a minimum vertex cover.
Theorem
Vertex Cover Problem is NP-hard [Garey and Johnson, 1979].
Traveling Salesman Problem (TSP)
Definition
INPUT: an edge-weighted graph G = (V, E)
OUTPUT: a minimum-cost Hamiltonian cycle
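As a sanity check of the definition, an exact brute-force solver fixes a start city and tries all (n-1)! orderings of the rest; it is exponential and only viable for tiny n. The 4-city weight matrix below is a hypothetical example.

```python
from itertools import permutations

# Symmetric edge weights of a complete graph on 4 cities (illustrative data).
w = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
n = len(w)

def tour_cost(order):
    """Cost of the Hamiltonian cycle 0 -> order[0] -> ... -> order[-1] -> 0."""
    cycle = (0,) + order + (0,)
    return sum(w[cycle[i]][cycle[i + 1]] for i in range(len(cycle) - 1))

best = min(permutations(range(1, n)), key=tour_cost)
print((0,) + best, tour_cost(best))  # (0, 1, 3, 2) 18
```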
Approaches in Solving NP-hard Problems
- Exact algorithms always obtain an optimal solution.
- Approximation algorithms settle for good-enough solutions. The quality of the solution is guaranteed and measured using the approximation ratio.
- Heuristic algorithms produce solutions that are not guaranteed to be close to the optimum. The performance of heuristics is often evaluated empirically.
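The contrast between the exact and heuristic approaches can be seen on the vertex cover problem; the graph and the highest-degree greedy rule below are illustrative choices, not taken from the slides.

```python
from itertools import combinations

# Illustrative instance: a star a-b, a-c, a-d plus the edge b-c.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
vertices = sorted({v for e in edges for v in e})

def covers(cover):
    return all(u in cover or v in cover for u, v in edges)

def exact_vc():
    """Exact: try smaller subsets first; always optimal, exponential time."""
    for r in range(len(vertices) + 1):
        for s in combinations(vertices, r):
            if covers(set(s)):
                return set(s)

def greedy_vc():
    """Heuristic: repeatedly take a highest-degree vertex; fast, no guarantee."""
    cover, remaining = set(), list(edges)
    while remaining:
        degree = {}
        for u, v in remaining:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        pick = max(degree, key=degree.get)
        cover.add(pick)
        remaining = [e for e in remaining if pick not in e]
    return cover

print(len(exact_vc()), len(greedy_vc()))  # 2 2 (greedy happens to be optimal here)
```

On other instances the highest-degree rule can be far from optimal, which is exactly why its performance is evaluated empirically rather than guaranteed.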
Approaches in Solving NP-hard Problems
Design techniques have a well-specified structure that even provides a framework for possible implementations:
- Dynamic Programming
- Greedy Schema
- Divide and Conquer
- Branch and Bound
- Local Search
- ...
Concepts formulate ideas and rough frameworks about how to attack hard algorithmic problems:
- Approximation Algorithms
- Parameterized Algorithms
- Randomized Algorithms
- ...
Approximation Algorithms
Definition (Approximation Algorithm [Williamson and Shmoys, 2010])
An α-approximation algorithm for an optimization problem is a polynomial-time algorithm that, for all instances of the problem, produces a solution whose value is within a factor of α of the value of an optimal solution.
Given a problem instance I with an optimal solution Opt(I), i.e., the value cost(Opt(I)) of the cost function is minimum or maximum.
- An algorithm for a minimization problem is called an α-approximative algorithm, for some α ≥ 1, if the algorithm obtains a cost of at most α · cost(Opt(I)), for any input instance I.
23. 50
Approximation
Algorithms
Jhoirene B Clemente
Optimization Problems
Hard Combinatorial
Optimization Problems
Clique
Independent Set Problem
Vertex Cover
Approaches in Solving
Hard Problems
15 Approximation
Algorithms
Vertex Cover
Traveling Salesman Problem
References
CS 397
October 14, 2014
Approximation Algorithms
Definition (Approximation Algorithm
[Williamson and Shmoy, 2010] )
An -approximation algorithm for an optimization problem is a
polynomial-time algorithm that for all instances of the problem
produces a solution whose value is within a factor of of the
value of an optimal solution.
Given an problem instance I with an optimal solution Opt(I ), i.e.
the cost function cost(Opt(I )) is minimum/maximum.
▶ An algorithm for a minimization problem is called an α-approximation algorithm for some α ≥ 1 if, for any input instance I, the algorithm obtains a cost of at most α · cost(Opt(I)).
▶ An algorithm for a maximization problem is called an α-approximation algorithm for some α ≤ 1 if, for any input instance I, the algorithm obtains a cost of at least α · cost(Opt(I)).
Approximation Ratio
▶ Minimization, α ≥ 1:
cost(Opt(I)) ≤ cost(SOL) ≤ α · cost(Opt(I))
▶ Maximization, α ≤ 1:
α · cost(Opt(I)) ≤ cost(SOL) ≤ cost(Opt(I))
Approximation Ratio
▶ Additive Approximation Algorithms:
SOL ≤ OPT + c
▶ Constant Approximation Algorithms:
SOL ≤ c · OPT
▶ Logarithmic Approximation Algorithms:
SOL = O(log n) · OPT
▶ Polynomial Approximation Algorithms:
SOL = O(n^c) · OPT,
where c ≥ 1
Example: Vertex Cover Problem
Definition (Vertex Cover [Vazirani, 2001])
Given a graph G = (V,E), a vertex cover is a subset C ⊆ V such that every edge has at least one endpoint in C.
Definition (Minimum Vertex Cover Problem)
Given a graph G = (V,E), find a vertex cover C of minimum cardinality.
Maximal Matching
Definition (Matchings)
Given a graph G = (V,E), a subset M ⊆ E is called a matching if no two edges in M are adjacent in G. A matching is maximal if no edge of E can be added to it without breaking this property.
Figure: (a) Maximal matching (b) Perfect matching
2-Approximation Algorithm
1. Find a maximal matching M in G, and output the set of matched vertices.
Algorithm 1: 2-approximation algorithm for the minimum vertex cover problem
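Algorithm 1 can be sketched in a few lines of Python. The greedy left-to-right edge scan below is one way to build a maximal matching (an assumption for illustration; any maximal matching gives the same guarantee):

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover:
    greedily build a maximal matching and return all matched vertices."""
    cover = set()
    for u, v in edges:
        # Take the edge into the matching only if both endpoints are still free.
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path a-b-c-d; an optimal cover {b, c} has size 2.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # every edge is covered
assert len(cover) <= 2 * 2                              # within factor 2 of OPT
```

On this path the algorithm outputs all four vertices, exactly twice the optimum, showing the factor-2 bound is tight.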
2-Approximation Algorithm
Theorem
Algorithm 1 is a 2-approximation algorithm.
Proof.
Let M be a maximal matching in G = (V,E) and OPT be a minimum vertex cover in G. Since M is maximal, every edge has a matched endpoint, so the output is indeed a cover. The edges of M are pairwise non-adjacent, so OPT must contain a distinct vertex for each of them; hence
|M| ≤ |OPT|
The cover picked by the algorithm has cardinality |SOL| = 2 · |M|. Therefore
|SOL| ≤ 2 · |OPT|
2-Approximation Algorithm
Example:
Figure: Algorithm 1 applied to a sample graph (shown step by step across two slides)
Solving the Traveling Salesman Problem
INPUT: Edge-weighted complete graph G satisfying the triangle inequality
1. Compute a minimum spanning tree T for G
2. Select a vertex r ∈ V(G) to be the root vertex
3. Let L be the list of vertices visited in a preorder walk of T
OUTPUT: Hamiltonian cycle that visits the vertices in the order given by L
A preorder tree walk recursively visits every vertex in the tree, listing a vertex when it is first encountered, before any of its children are visited.
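The three steps above can be sketched in Python. The Prim-style MST routine, the iterative preorder walk, and the Euclidean sample points are illustrative assumptions; the algorithm itself is the MST-and-shortcut scheme described above:

```python
import math
from collections import defaultdict

def mst_prim(points):
    """Prim's algorithm on the complete Euclidean graph over `points`.
    Returns the tree as an adjacency list keyed by point index."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = {0}
    tree = defaultdict(list)
    while len(in_tree) < n:
        # Cheapest edge leaving the current tree.
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        tree[i].append(j)
        tree[j].append(i)
        in_tree.add(j)
    return tree

def preorder(tree, root=0):
    """Preorder walk: list each vertex when it is first encountered."""
    order, stack, seen = [], [root], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            order.append(v)
            stack.extend(reversed(tree[v]))
    return order

def approx_tsp_tour(points):
    """2-approximation for metric TSP: MST, preorder walk, then shortcut."""
    tour = preorder(mst_prim(points))
    return tour + [tour[0]]  # return to the root to close the cycle

points = [(0, 0), (0, 2), (2, 0), (2, 2)]
tour = approx_tsp_tour(points)
cost = sum(math.dist(points[a], points[b]) for a, b in zip(tour, tour[1:]))
# The optimal tour of this square has cost 8; the guarantee is cost <= 16.
assert cost <= 2 * 8
```

Euclidean distances satisfy the triangle inequality, which is what makes shortcutting the walk safe.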
Computing the Minimum Spanning Tree
Figure: step-by-step construction of the minimum spanning tree (shown across several slides)
Preorder
Figure: the preorder walk of the spanning tree (shown across several slides)
Hamiltonian Cycle from the Preorder Walk
Figure: the Hamiltonian cycle obtained by shortcutting the preorder walk
Total tour cost: 4 + 4 + 10 + 15 + 10 + 16 = 59
Total tour cost vs. the optimal cost
Let H* denote an optimal Hamiltonian tour in G and let T be a minimum spanning tree of G. Deleting any edge of H* yields a spanning tree, so
c(T) ≤ c(H*)
Let W be the full preorder walk of T, which traverses every edge of T exactly twice:
c(W) = 2c(T)
Let H be the Hamiltonian cycle output by the algorithm. By the triangle inequality, shortcutting the walk does not increase the cost:
c(H) ≤ c(W)
Then
c(H) ≤ 2c(H*)
Approximation ratio (α) is 2.
Approximable Problems [Vazirani, 2001]
Definition (APX)
▶ An abbreviation for "Approximable"; the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant.
▶ Problems in this class have efficient algorithms that can find an answer within some fixed percentage of the optimal answer.
Polynomial-time Approximation Schemes
Definition (PTAS [Aaronson et al., 2008])
The subclass of NPO problems that admit an approximation scheme in the following sense. For any ε > 0, there is a polynomial-time algorithm that is guaranteed to find a solution whose cost is within a (1 + ε) factor of the optimum cost. Contains FPTAS, and is contained in APX.
Definition (FPTAS [Aaronson et al., 2008])
The subclass of NPO problems that admit an approximation scheme in the following sense. For any ε > 0, there is an algorithm that is guaranteed to find a solution whose cost is within a (1 + ε) factor of the optimum cost. Furthermore, the running time of the algorithm is polynomial in n (the size of the problem) and in 1/ε.
Approximation Ratios of Well-Known Hard Problems
1. FPTAS: Bin Packing Problem
2. PTAS: Makespan Scheduling Problem
3. APX:
3.1 Min Steiner Tree Problem (1.55-approximable) [Robins, 2005]
3.2 Min Metric TSP (3/2-approximable) [Christofides, 1977]
3.3 Max SAT (0.77-approximable) [Asano, 1997]
3.4 Vertex Cover (2-approximable) [Vazirani, 2001]
4. MAX SNP:
4.1 Independent Set Problem
4.2 Clique Problem
4.3 Traveling Salesman Problem
Complexity Classes [Ausiello et al., 2011]
FPTAS ⊊ PTAS ⊊ APX ⊊ NPO (assuming P ≠ NP)
Inapproximable Problems
Many problems have polynomial-time approximation schemes. However, there exists a class of problems that is not so easy [Williamson and Shmoys, 2010]. This class is called MAX SNP.
Theorem
For any MAXSNP-hard problem, there does not exist a polynomial-time approximation scheme, unless P = NP [Williamson and Shmoys, 2010].
Theorem
If P ≠ NP, then for any constant α ≥ 1, there is no polynomial-time approximation algorithm with approximation ratio α for the general traveling salesman problem.
References I
Aaronson, S., Kuperberg, G., and Granade, C. (2008).
The complexity zoo.