# Approx

Approximations for hard problems

Example: Scheduling. Given: n jobs with processing times, m identical machines. Problem: schedule the jobs so as to minimize the makespan.

Topics: a few examples; what an approximation algorithm is; quality of approximations, from arbitrarily bad to arbitrarily good.
Algorithm: List scheduling. Basic idea: from a list of jobs, schedule the next one as soon as a machine is free. Good or bad?

Suppose job f finishes last, at time A, having started at time S. Compare A to the makespan OPT of the best schedule:

(1) Job f must be scheduled in the best schedule at some time, so its processing time satisfies A – S <= OPT.
(2) Up to time S, all machines were busy all the time, OPT cannot beat that, and job f was not yet included: S < OPT.
(3) Both together: A = (A – S) + S < 2 OPT.

This is a "2-approximation" (Graham, 1966).
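The list-scheduling rule above can be sketched as follows; a minimal Python sketch, with function and variable names of my own choosing, not from the slides. A heap tracks when each machine next becomes free.

```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling: assign each job, in list order, to the
    machine that becomes free first. Returns the resulting makespan."""
    machines = [0.0] * m          # current finish time of each machine
    heapq.heapify(machines)
    for p in jobs:
        t = heapq.heappop(machines)   # machine that is free earliest
        heapq.heappush(machines, t + p)
    return max(machines)
```

For example, `list_schedule([2, 2, 2, 3], 2)` yields makespan 5, which here equals OPT; Graham's bound guarantees the result is always below 2 OPT.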
Approximations in more generality. P = set of problems with polynomial-time solutions. NP = set of problems with polynomial-time certificates ("guess and verify").

Example Problem: Clique. Given: graph G = (V,E); positive integer k. Decide: does G contain a clique of size k?
Hardness, with respect to transformations in P. A problem Π is NP-hard if every problem Π' in NP can be transformed into Π in polynomial time. Π is NP-complete if Π is NP-hard and Π is in NP. Basis for transformations (reductions): 3-SAT.
Theorem: Clique is NP-complete.
Proof: (1) Clique is in NP: guess and verify. (2) Clique is NP-hard, by reduction from 3-SAT, e.g. for the clauses (a or b or not c), (a or not b or not c), (d or b or c): each literal becomes a vertex, each edge encodes compatibility of two literals, and the requested clique size k is the number of clauses.
NP is defined for decision problems. Related versions of Clique: the value problem (compute the largest k for which G contains a clique of size k) and the optimization problem (compute a largest clique in G); the latter is more interesting and most realistic. The three are polynomially related: if the decision problem is in P, then the value problem is in P, and then the optimization problem is in P.
Problem: Independent set. Given: G = (V,E); positive integer bound k. Decide: is there a subset V' of V of at least k vertices such that no two vertices in V' share an edge?
Theorem: Independent set is NP-complete.
Proof: (1) it is in NP: guess and verify; (2) it is NP-hard, by reduction from Clique: build the complement graph, turning each edge into a non-edge and each non-edge into an edge.
Problem: Minimum vertex cover. Given: G = (V,E). Minimize: find a smallest subset V' of V such that every edge in E has an incident vertex in V' ("is covered").
Theorem: Vertex cover is NP-hard.
Proof: by reduction from Independent set: vertex v is in an independent set exactly when v is not in the complementary vertex cover.
Lots of hard problems; what can we do?
Solve the problem approximately: minimum vertex cover. First idea: repeat greedily: pick the vertex that covers the largest number of not-yet-covered edges, and remove it and its incident edges. Not too good, as we will see later.
Second idea: repeat greedily: pick both endpoints of an arbitrary edge, and remove them together with their incident edges. Great:
Theorem: This is a 2-approximation.
Proof: The picked edges form a matching; any cover needs at least one vertex per picked edge, and the algorithm takes two.
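The second idea is the classic maximal-matching cover; a minimal Python sketch (the function name and edge-list representation are mine):

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover: repeatedly take both
    endpoints of an edge that is not yet covered. The taken edges form
    a maximal matching, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover
```

On the path 0-1-2-3 this returns all four vertices while the optimum {1, 2} has size two, exactly matching the factor-2 bound.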
Solve Independent set approximately, by the reduction we know? That does not work: assume a graph with 1000 vertices, a minimum vertex cover of 499 vertices, and an approximate vertex cover of 998 vertices. Then the maximum independent set has 501 vertices, but the approximate independent set has only 2 vertices. Polynomial reductions need not preserve approximability; be careful, and use special reductions.
Decision versus optimization? NPO = set of "NP-hard optimization problems": roughly, those for which one can verify that a proposed solution is feasible and compute its value in polynomial time.
What is an approximation algorithm? A is an approximation algorithm for a problem Π in NPO if, for any input I, A runs in time polynomial in the length of I, and if I is a legal input, A outputs a feasible solution A(I).

The approximation ratio of A for Π on input I is value(A(I)) / value(OPT(I)); it is at least 1 for minimization problems and at most 1 for maximization problems.

The approximation ratio of A is the maximum of this ratio over all inputs for minimization problems, and the minimum for maximization problems (sometimes only the asymptotic ratio, for large problem instances, is of interest).
Example Problem: k-center. Given: G = (V,E) with E = V x V; costs c(i,j) for all edges (i,j) with i ≠ j; positive integer number k of clusters. Compute a set S of k cluster centers, S a subset of V, such that the largest distance of any point to its closest cluster center is minimum.

Theorem: k-center (decision version) is NP-complete.
Proof: Vertex cover reduces to dominating set, which reduces to k-center. Dominating set: for G = (V,E), find a smallest subset V' of V such that every vertex is either itself in V' or has a neighbor in V'.
[Figures: the reduction from vertex cover (VC) to dominating set (DS), and from DS to k-center: edges of the original graph get cost 1, non-edges cost 2, and a dominating set of size k exists exactly when k centers suffice for cluster radius 1.]
Non-approximability.
Theorem: For any fixed M, finding a ratio-M approximation for k-center is NP-hard.
Proof: Replace the cost 2 in the construction above by a cost greater than M.

Theorem: Finding a ratio-less-than-2 approximation is NP-hard for k-center with triangle inequality.
Proof: Exactly as in the reduction above.
Theorem: A 2-approximation for k-center with triangle inequality exists.
Proof: Gonzalez' algorithm. Pick v1 arbitrarily as the first cluster center. Pick v2 farthest from v1. Pick v3 farthest from the closer of v1 and v2. In general, pick vi farthest from the closest of v1, …, v(i-1), until vk is picked.
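Gonzalez' farthest-first traversal can be sketched as follows; a minimal Python version (the function name and the generic `dist` callback are my own):

```python
def gonzalez(points, k, dist):
    """Gonzalez' farthest-first traversal: a 2-approximation for
    k-center under the triangle inequality. Returns the chosen
    centers and the achieved cluster radius."""
    centers = [points[0]]                 # arbitrary first center
    d = [dist(p, centers[0]) for p in points]   # distance to closest center
    for _ in range(k - 1):
        i = max(range(len(points)), key=lambda j: d[j])   # farthest point
        centers.append(points[i])
        d = [min(d[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers, max(d)
```

For points 0, 1, 2, 10, 11 on a line with k = 2 it returns radius 2, while the optimum (centers 1 and 10) achieves radius 1, so the factor-2 bound is tight here.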
Example Problem: Traveling salesperson with triangle inequality. Given: G = (V,E) with E = V x V; costs c(i,j) for all edges (i,j) with i ≠ j. Compute a round trip that visits each vertex exactly once and has minimum total cost. Comment: this is NP-hard.

Approximation algorithm: Find a minimum spanning tree. Run around it and take shortcuts to avoid repeated visits.
Quality? Theorem: This is a 2-approximation.
Proof: (1) The optimal tour minus one edge is a spanning tree. (2) The MST is not longer than any spanning tree. (3) Hence APX-TSP <= 2 MST <= 2 ST <= 2 OPT-TSP, where the first inequality uses the triangle inequality for the shortcuts.
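The "run around the MST with shortcuts" tour is exactly a preorder walk of the tree. A minimal Python sketch for Euclidean points (Prim's algorithm and all names are my own; this is one concrete way to realize the idea, not the slides' code):

```python
import math

def tsp_2approx(points):
    """MST-based 2-approximation for metric TSP: build a minimum
    spanning tree (Prim), then shortcut its Euler tour, which is
    the same as visiting vertices in preorder."""
    n = len(points)

    def dist(i, j):
        return math.dist(points[i], points[j])

    # Prim's algorithm rooted at vertex 0
    children = {i: [] for i in range(n)}
    best = {i: (dist(0, i), 0) for i in range(1, n)}  # (cost, nearest tree vertex)
    while best:
        i = min(best, key=lambda j: best[j][0])
        _, p = best.pop(i)
        children[p].append(i)
        for j in best:
            if dist(i, j) < best[j][0]:
                best[j] = (dist(i, j), i)

    # preorder walk of the tree = Euler tour with shortcuts
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour
```

Closing the returned vertex order into a cycle gives a tour of cost at most twice the MST, hence at most twice OPT.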
Better quality? Christofides' algorithm:
1. Find the MST.
2. Find all odd-degree vertices V' in the MST. Comment: there is an even number of them.
3. Find a minimum-cost perfect matching M for V' in the subgraph of G induced by V' (no even-degree vertices present).
4. In MST + M, find an Euler circuit.
5. Take shortcuts in the Euler circuit.
Theorem: Christofides' algorithm is a 1.5-approximation of TSP with triangle inequality.
Proof:
1. MST <= TSP.
2. M <= TSP / 2, as we see next.
3. Shortcuts make the tour only shorter.
Proof of M <= TSP / 2: Consider the subgraph induced by the odd-degree vertices V', and restrict the optimal tour to V' by taking shortcuts; call the resulting tour sub-TSP(V'). By the triangle inequality, sub-TSP(V') <= TSP. Picking alternate edges of sub-TSP(V') gives two perfect matchings M1 and M2 for V' (V' has even size). The minimum-cost matching M is at most the cheaper of the two, so M <= min(M1, M2) <= sub-TSP(V') / 2 <= TSP / 2.
Notes: Christofides' algorithm may really be as bad as 1.5. It is unknown whether this is best possible. For Euclidean TSP a better bound is known: any ratio above 1 can be achieved. For TSP without the triangle inequality, no fixed approximation ratio is possible.
Example Problem: Set cover. Given: a universe U of elements e1, e2, …, en; a collection of subsets S1, S2, …, Sk of U, with a nonnegative cost per Si. Find a collection of the subsets that covers U at minimum cost.

Idea for an approximation: repeat greedily choosing a best set (cheapest per newly covered element) until all of U is covered.
Quality? Consider one step of the iteration: the greedy algorithm has already selected some sets, with C = union of all sets selected so far, and now chooses Si because its price per new element, price(i) = cost(Si) / (number of new elements of Si), is minimum.

Consider the elements in the order they are covered, and rename them e1, e2, …, ek, …, en. Consider ek, and let Si be the set chosen when ek is covered. What is the price of ek? Bound it from above by comparing with the optimum: instead of Si, the greedy algorithm could have picked any set of OPT not picked yet (there must be one, since ek is still uncovered), but at what price? The elements not yet covered can all be covered by OPT, at total cost at most OPT. Across the sets of OPT not picked yet, the average price per uncovered element is therefore at most OPT / |U – C|, so at least one such set has price at most this average, and that set could have been picked. Since greedy picks a cheapest set, price(ek) <= OPT / |U – C|.
When the k-th element is covered, |U – C| >= n – k + 1, so price(ek) <= OPT / (n – k + 1) for each ek. Summing the prices of all ek gives the total cost of the greedy solution:
SUM(k=1..n) price(ek) <= SUM(k=1..n) OPT / (n – k + 1) = OPT SUM(k=1..n) 1/k <= OPT (1 + ln n).
Theorem: Greedy set cover gives a (1 + ln n)-approximation. Notes: it can really be that bad, and that is also a best possible approximation for set cover.
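The greedy rule analyzed above can be sketched in a few lines of Python (function name and input representation are my own; the instance is assumed coverable):

```python
def greedy_set_cover(universe, sets, cost):
    """Greedy (1 + ln n)-approximation for weighted set cover:
    repeatedly pick the set with the cheapest price per newly
    covered element. Returns the indices of the chosen sets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # cheapest price = cost / number of newly covered elements
        i = min((i for i in range(len(sets)) if sets[i] & uncovered),
                key=lambda i: cost[i] / len(sets[i] & uncovered))
        chosen.append(i)
        uncovered -= sets[i]
    return chosen
```

Each iteration charges the chosen set's cost evenly to its newly covered elements, which is exactly the price(ek) used in the proof.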
Example Problem: Knapsack. Given: n objects with weights and values, and a weight bound: positive integers w1, w2, …, wn (weights) and W (weight bound); positive integers v1, v2, …, vn (values). Find a subset of the objects with total weight at most W and maximum total value. This is NP-hard.
An exact algorithm for knapsack: dynamic programming over a table A with rows j = 1, …, n and columns v' up to n·vmax, where A(j,v') = the smallest total weight of a subset of objects 1, …, j with total value exactly v'.
48. 48. A(j,v’) = smallest weight subset of objects 1,…,j with total value =v’. inductively: A(1,v) = If v = v1 then w1 else infinity A(i+1,v) = min ( A(i,v) , A(i, v – v(i+1) ) + w(i+1) ) if >= 0 … the result is: max v such that A(n,v) <= W … the runtime is: O(n 2 vmax) …. pseudopolynomial
Pseudopolynomial? The algorithm is polynomial if the numbers are small, i.e., if vmax is polynomial in the input length. Idea: scale the numbers down, i.e., ignore less significant digits.
Approximation algorithm for knapsack (FPTAS):
1. Given an error bound eps > 0, define K := eps · vmax / n.
2. For each object i, define vi' := floor(vi / K).
3. Use dynamic programming to find an optimal solution S for the rounded problem version.
4. Return the better of (original value of S) and vmax.
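A minimal Python sketch of these four steps (names are mine; it assumes every single object fits, i.e. wi <= W for all i, the usual cleanup for step 4):

```python
def knapsack_fptas(weights, values, W, eps):
    """Knapsack FPTAS sketch: round values down by K = eps*vmax/n,
    solve the rounded instance exactly by a value-indexed DP, and
    return the original value of the computed set (or vmax if the
    single most valuable object is better)."""
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v // K) for v in values]          # vi' = floor(vi / K)
    total = sum(scaled)
    INF = float('inf')
    # A[t] = (minimum weight, chosen objects) reaching scaled value t
    A = [(0, ())] + [(INF, ())] * total
    for i in range(n):
        for t in range(total, scaled[i] - 1, -1):   # each object used once
            w, items = A[t - scaled[i]]
            if w + weights[i] < A[t][0]:
                A[t] = (w + weights[i], items + (i,))
    best = max(t for t in range(total + 1) if A[t][0] <= W)
    result = sum(values[i] for i in A[best][1])
    return max(result, max(values))                 # step 4
```

With eps = 0.1 the instance from the exact DP example still yields the optimal value 22; in general only (1 – eps) of the optimum is guaranteed.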
Theorem: Let A be the set of objects so computed. Then value(A) >= (1 – eps) · value(OPT).

Proof: Write vi' = floor(vi / K) for the rounded values and value'(·) for the total rounded value of a set. Observe the rounding effect:
(1) K · vi' <= vi;
(2) vi – K <= K · vi'.
Summing (2) over all objects in OPT gives value(OPT) – K · value'(OPT) <= K · n. But for the rounded values, S is at least as good as OPT.
For the rounded values, S is at least as good as OPT:
value(S) >= K · value'(S) >= K · value'(OPT) >= value(OPT) – K · n = value(OPT) – eps · vmax.

But the dynamic programming step delivers A (and not necessarily S), with vmax <= value(A), so:
value(A) >= value(S) >= value(OPT) – eps · vmax >= value(OPT) – eps · value(A).
Hence value(A) >= (1 / (1 + eps)) · value(OPT), and that's all, since 1 / (1 + eps) >= 1 – eps.
Theorem: This is a fully polynomial-time approximation scheme (FPTAS): given eps, it delivers a solution with ratio (1 – eps) for maximization (respectively (1 + eps) for minimization) and runs in time polynomial in the input size and 1/eps.
Proof: Quality of the solution: above. Runtime of the dynamic programming: O(n^2 · vmax / K) = O(n^2 · floor(n / eps)) = O(n^3 / eps).
Comment: nothing better can exist unless P = NP.
Example: use set cover to solve other problems approximately. Shortest superstring: given a set of strings s1, s2, …, sn, find a shortest superstring s. Cleanup: no si is a substring of an sj. For a superstring instance, create a set cover instance:
- each si is an element ei of the universe in set cover;
- each legal nonzero overlap of two strings defines a set, representing all given strings that the overlap string contains. For example, TCGCG and GCGAA overlap as TCGCGAA (overlap GCG) and as TCGCGCGAA (overlap G).
Cost of a set = string length of its "overlap string". Algorithm: solve set cover approximately, then concatenate the strings of the chosen sets in any order. The result is a superstring. Quality of the solution?

Quality Lemma: For the OPT solution of the superstring problem, there is a set cover of "string length" at most 2 OPT.
Proof: Group the strings s1, s2, s3, … by their first occurrence in the OPT superstring.
Per group, the corresponding overlap set covers the group, and all these group sets together form a set cover. Its total string length is at most 2 OPT, since only adjacent groups can overlap. Hence: approximate set cover <= 2 (1 + ln n) OPT superstring.
Bin packing. Given n items with sizes a1, a2, …, an in [0,1], minimize the number of unit-size bins needed to pack all items.

Algorithm First Fit (FF): pack the next item into the leftmost bin that can take it.

Quality: Of the m bins that FF uses, at least m – 1 are more than half full (two bins at most half full could not both have been opened). So (m – 1)/2 < sum of all ai <= OPT, hence m – 1 < 2 OPT, and since m is an integer, m <= 2 OPT: a 2-approximation.
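First Fit is a one-pass loop; a minimal Python sketch (function name is mine; a small tolerance guards against floating-point rounding):

```python
def first_fit(items):
    """First Fit bin packing: place each item into the leftmost bin
    with enough room, opening a new unit-size bin if none fits.
    Uses at most 2 * OPT bins. Returns the number of bins."""
    bins = []                          # remaining capacity of each open bin
    for a in items:
        for i, free in enumerate(bins):
            if a <= free + 1e-9:       # leftmost bin that can take the item
                bins[i] = free - a
                break
        else:
            bins.append(1.0 - a)       # open a new bin
    return len(bins)
```

For the sizes 0.5, 0.7, 0.5, 0.3 it uses 2 bins, which is optimal here; the analysis above guarantees at most twice the optimum in general.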
Lower bound on the approximation ratio?
Theorem: No polynomial-time bin packing algorithm with ratio less than 1.5 exists (unless P = NP).
Proof: Reduce from PARTITION with items a1, a2, …, an and bins of size half the sum of all ai. Answer "yes" if 2 bins are enough, "no" otherwise. An approximation with ratio better than 3/2 would have to solve this exactly. But: small instances are boring; what about large instances with a high number of bins?
Theorem: For any e(psilon) with 0 < e <= 1/2, there is an algorithm Ae with runtime polynomial in n that finds a packing with at most (1 + 2e) OPT + 1 bins. This is an "asymptotic PTAS": for any eps > 0 there exists N > 0 such that, for instances with OPT >= N, approx <= (1 + eps) OPT.
Proof: (1) the structure of the algorithm; (2) the details and quality analysis.
(1) The structure of the algorithm:
1. Remove all items of size < e(psilon).
2. Round the item sizes [see B] so that only a constant number of different item sizes remains.
3. Find an optimal packing for the rounded items [see A and B].
4. Use this packing for the original items.
5. Pack the items of size < e with First Fit.
(2) The details and quality analysis.
[A] Find an optimal packing for the special case of at most K different item sizes, each item size >= e.
Lemma: This can be done optimally in polynomial time, for fixed values of K and e.
Proof: Per bin, the number m of items is at most floor(1/e), so the number t of different bin types (numbers of items per size) is a function of m and K only, hence constant. The total number of bins used is at most n, so the total number of feasible packings is polynomial in n (but not in 1/e). The algorithm enumerates them all and picks the best.
[B] Lemma: Given items of size >= e, there is an algorithm with runtime polynomial in n and approximation factor 1 + e.
Proof: Let the input instance be I = {a1, a2, …, an}. Sort the items by increasing size and partition them into K = ceiling(1/e^2) groups, each with at most floor(n e^2) items. Round the size of each item up to the largest size in its group: the resulting instance J has at most K different item sizes. By [A], J can be solved optimally in time polynomial in n.
A solution for J is valid for the original items; but is it good?
Quality lemma: OPT(J) <= (1 + e) OPT(I).
Proof: Define J' in analogy with J, but round down to the smallest size per group. Obviously OPT(J') <= OPT(I). Trick: discard the highest group of J and the lowest group of J', and match the remaining groups.
68. 68. J J’ <ul><li>A packing for J’ yields a packing for all except the group </li></ul><ul><li>of largest size items of J. </li></ul><ul><li>… pack each of these floor (n e^2) large items in its own bin </li></ul><ul><li>OPT(J) <= OPT(J’) + floor (n e^2) </li></ul><ul><li> <= OPT(I) + floor (n e^2) </li></ul>
We have OPT(J) <= OPT(I) + floor(n e^2). Since no small items are present, OPT(I) >= n e, so floor(n e^2) <= e OPT(I), and therefore OPT(J) <= OPT(I) + e OPT(I) = (1 + e) OPT(I). This proves the quality lemma.
Remainder of the algorithm: step 5. Situation: we have an approximate solution for the input I without small items. Put the small items of the original input origI into the bins by First Fit.
Case 1: no extra bins are needed. Then the number of bins is <= (1 + e) OPT(I) <= (1 + e) OPT(origI).
Case 2: extra bins are needed, totalling M bins. Then at least M – 1 of the bins are full to at least level 1 – e, so the sum of all item sizes exceeds (M – 1)(1 – e), hence OPT(origI) >= (M – 1)(1 – e), and M <= OPT(origI) / (1 – e) + 1 <= (1 + 2e) OPT(origI) + 1.
This is a PTAS for bin packing, but not an FPTAS.
TSP: a huge difference away from the triangle inequality. A new approximation algorithm for TSP. Idea: build an MST and go around it, but take shortcuts differently. Let e = (p, q) be an MST edge splitting the tree into two parts T1 and T2, with p' a neighbor of p in T1 and q' a neighbor of q in T2. Invariant in the induction: the path within T1 contains edge (p, p') once and every other edge of T1 twice.
Inductive step: adding the path p', p, q, q' preserves the invariant: edge (p, q) appears once, any other edge twice. Induction basis: 1 vertex: vacuously true; 2 vertices p, p': the path is the edge (p, p'). The rest follows by induction as above. Effect: each shortcut spans only three edges at a time.
Algorithm (Sekanina):
1. Build an MST T.
2. Build T^3.
3. Find a round trip in T^3 such that each edge of T appears exactly twice (by the induction above, plus a single extra edge at the very end).

Quality for TSP with a slight violation of the triangle inequality, cost(i,j) <= (1 + r)(cost(i,k) + cost(k,j)) for all i, j, k: each shortcut increases the length by a factor of at most (1 + r)^2, giving approximation ratio 2 (1 + r)^2. This is called stability of the approximation.
Stability of approximation: what about r < 0, e.g. r = –1/2 in the extreme? And what about other problems with stable approximations?
Summary. Problems in NPO have very different approximability properties:
- some are impossible to approximate (k-center, TSP);
- some are hard, with a bound depending on the input size (set cover);
- some can be approximated within some constant ratio (vertex cover, k-center with triangle inequality, TSP with triangle inequality);
- some can be approximated as closely as you like (knapsack).
Approximability has its own hierarchy of complexity classes.