# Analysis of Algorithm

Published in: Education, Technology

1. 1. Approximation Algorithms Presented By Ahlam Ansari
2. 2. <ul><li>1. Approximation Algorithm </li></ul><ul><li>2. Performance Ratio of AA </li></ul><ul><li>3. Examples of AA </li></ul><ul><li>4. The vertex-cover problem </li></ul><ul><li>- Algorithm </li></ul><ul><li>- Example </li></ul><ul><li>5. Traveling salesman problem </li></ul><ul><li>- Algorithm </li></ul><ul><li>- Example </li></ul><ul><li>6. Set-Cover Problem </li></ul><ul><li>- Algorithm </li></ul><ul><li>- Example </li></ul><ul><li>7. Genetic Algorithm </li></ul><ul><li>- Algorithm for GA </li></ul><ul><li>- Pictorial Representation of GA </li></ul><ul><li>- Working Principle </li></ul><ul><li>8. Solving TSP using GA </li></ul><ul><li>- Steps </li></ul><ul><li>- Flowchart </li></ul><ul><li>- Solution with Example </li></ul><ul><li>9. Analysis of TSP Algorithms </li></ul><ul><li>10. Results </li></ul><ul><li>11. Conclusion </li></ul>
3. 3. Abstract <ul><li>In computer science and operations research, approximation algorithms are algorithms used to find approximate solutions to optimization problems. Approximation algorithms are often associated with NP-hard problems. </li></ul><ul><li>The Travelling Salesman Problem (TSP) is a deceptively simple combinatorial problem. It can be stated very simply: a salesman spends his time visiting N cities (or nodes) cyclically. In one tour he visits each city just once, and finishes up where he started. In what order should he visit them to minimize the distance traveled? TSP is applied in many different places such as warehousing, material handling and facility planning. </li></ul><ul><li>Although optimal algorithms exist for solving the TSP, like the IP formulation, industrial engineers have realized that it is computationally infeasible to obtain the optimal solution to TSP. It has been proved that for large-size TSP, it is almost impossible to generate an optimal solution within a reasonable amount of time. Heuristics, instead of optimal algorithms, are extensively used to solve such problems. Many heuristic algorithms have been conceived to get near-optimal solutions, among them the greedy and genetic algorithms, but their efficiency varies from case to case and from size to size. </li></ul><ul><li>In this paper we compare a plain approximation algorithm with the genetic algorithm to see which one gives the better near-optimal solution. </li></ul>
4. 4. Approximation Algorithm
5. 5. <ul><li>In computer science and operations research, approximation algorithms are algorithms used to find approximate solutions to optimization problems. </li></ul>Approximation Algorithm
6. 6. <ul><li>Approximation algorithms are often associated with NP-hard problems. </li></ul><ul><li>Since it is unlikely that there can ever be efficient polynomial time exact algorithms solving NP-hard problems, one settles for polynomial time sub-optimal solutions. </li></ul>Approximation Algorithm
7. 7. <ul><li>Approximation algorithms are increasingly being used for problems where exact polynomial-time algorithms are known but are too expensive due to the input size.[1] </li></ul>Approximation Algorithm
8. 8. Performance ratios of Approximation Algorithm
9. 9. <ul><li>A problem (knapsack, traveling salesperson) is denoted by P. </li></ul><ul><li>Let I be an instance of problem P. </li></ul><ul><li>The cost of an optimal solution for instance I is given by C(I), and C(I) > 0. </li></ul><ul><li>An optimal solution may be defined as the maximum or minimum possible cost (maximization or minimization problem). </li></ul>Performance ratios of Approximation Algorithm
10. 10. <ul><li>If for any input of size n, the cost C of the solution produced by the algorithm is within a factor of ρ(n) of the cost C* of an optimal solution: </li></ul><ul><li>max ( C/C* , C*/C ) ≤ ρ(n) </li></ul><ul><li>we call such an algorithm a ρ(n)-approximation algorithm. </li></ul>Performance ratios of Approximation Algorithm
11. 11. <ul><li>Let the cost of the approximate solution be C*(I). </li></ul><ul><li>C*(I) ≤ C(I) [approximate solution ≤ optimal solution] if P is a maximization problem: 0 &lt; C* ≤ C, ρ(n) = C/C* </li></ul><ul><li>C*(I) ≥ C(I) [approximate solution ≥ optimal solution] if P is a minimization problem: 0 &lt; C ≤ C*, ρ(n) = C*/C [2] </li></ul>Performance ratios of Approximation Algorithm
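As a small illustrative sketch (not from the slides; the helper name is ours), the performance ratio can be computed directly, and the symmetric max makes one formula cover both minimization and maximization:

```python
def approximation_ratio(approx_cost, optimal_cost):
    """max(C/C*, C*/C): always >= 1, and equal to 1 only for an optimal solution."""
    return max(approx_cost / optimal_cost, optimal_cost / approx_cost)

# e.g. a tour of length 15 against an optimal tour of length 13
print(approximation_ratio(15, 13))  # → 1.1538461538461537
```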
12. 12. Examples of Approximation Algorithm
13. 13. <ul><li>Vertex cover problem. </li></ul><ul><li>Traveling salesman problem. </li></ul><ul><li>Set cover problem. </li></ul>Examples of Approximation Algorithm
14. 14. The vertex-cover problem
15. 15. <ul><li>A vertex cover of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. The problem of finding a minimum vertex cover is a classical optimization problem in computer science and is a typical example of an NP-hard optimization problem that has an approximation algorithm. </li></ul>The vertex-cover problem
16. 16. <ul><li>A vertex cover of an undirected graph G = (V, E) is a subset V′ ⊆ V that contains at least one of the two endpoints of each edge. </li></ul><ul><li>The size of a vertex cover is the number of vertices in it.[3] </li></ul>The vertex-cover problem
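The definition translates directly into a membership check; a minimal sketch (the edge list and function name are illustrative, not from the slides):

```python
def is_vertex_cover(cover, edges):
    """Every edge must have at least one endpoint inside the cover."""
    return all(u in cover or v in cover for (u, v) in edges)

edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
print(is_vertex_cover({2, 3}, edges))  # → True
print(is_vertex_cover({1, 4}, edges))  # → False: edge (2, 3) is uncovered
```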
17. 17. Example Fig: 1 The vertex-cover problem (optimal size = 3; near-optimal size = 6)
18. 18. Procedures <ul><li>1. Given a simple graph G with n vertices and a vertex cover C of G, if C has no removable vertices, output C. Else, for each removable vertex v of C, find the number ρ(C−{v}) of removable vertices of the vertex cover C−{v}. Let vmax denote a removable vertex such that ρ(C−{vmax}) is a maximum and obtain the vertex cover C−{vmax}. Repeat until the vertex cover has no removable vertices. </li></ul>The vertex-cover problem
19. 19. Contd… <ul><li>2. Given a simple graph G with n vertices and a minimal vertex cover C of G, if there is no vertex v in C such that v has exactly one neighbor w outside C, output C. Else, find a vertex v in C such that v has exactly one neighbor w outside C. Define Cv,w by removing v from C and adding w to C. Perform procedure 1 on Cv,w and output the resulting vertex cover.[5] </li></ul>The vertex-cover problem
20. 20. Algorithm <ul><li>Given as input a simple graph G with n vertices labeled 1, 2, …, n, search for a vertex cover of size at most k. At each stage, if the vertex cover obtained has size at most k, then stop. </li></ul>The vertex-cover problem
21. 21. Contd… <ul><li>Part 1: For i = 1, 2, ..., n in turn </li></ul><ul><li>Initialize the vertex cover Ci = V−{i}. </li></ul><ul><li>Perform procedure 1 on Ci. </li></ul><ul><li>For r = 1, 2, ..., n−k perform procedure 2 repeated r times. </li></ul><ul><li>The result is a minimal vertex cover Ci. </li></ul>The vertex-cover problem
22. 22. Contd… <ul><li>Part 2: For each pair of minimal vertex covers Ci, Cj found in Part 1 </li></ul><ul><li>Initialize the vertex cover Ci, j = Ci∪Cj . </li></ul><ul><li>Perform procedure 1 on Ci, j. </li></ul><ul><li>For r = 1, 2, ..., n−k perform procedure 2 repeated r times. </li></ul><ul><li>The result is a minimal vertex cover Ci, j.[5] </li></ul>The vertex-cover problem
23. 23. Example <ul><li>The input is the graph [6] shown below with n = 12 vertices labeled </li></ul><ul><li>V = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12}. </li></ul>The vertex-cover problem
24. 24. Contd… Fig: 2 The vertex-cover problem
25. 25. Contd… <ul><li>We search for a vertex cover of size at most k = 7. Part 1 for i = 1 and i = 2 yields vertex covers C1 and C2 of size 8, so we give the details starting from i = 3. </li></ul>The vertex-cover problem
26. 26. Contd… <ul><li>We initialize the vertex cover as </li></ul><ul><li>C3 = V−{3}={1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12}. </li></ul>The vertex-cover problem
27. 27. Contd… <ul><li>We now perform procedure 1. Here are the results in tabular form: </li></ul><ul><li>Vertex Cover C3 = {1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12}. Size: 11. </li></ul>The vertex-cover problem
28. 28. Contd… The vertex-cover problem
32. 32. Traveling salesman problem
33. 33. <ul><li>The traveling salesman problem (TSP) is an NP-hard problem in combinatorial optimization studied in operations research and theoretical computer science. Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once. [3] </li></ul>Traveling salesman problem
34. 34. <ul><li>Given an undirected weighted Graph G we are to find a minimum cost Hamiltonian cycle. </li></ul>Traveling salesman problem
35. 35. Algorithm Traveling salesman problem
36. 36. Example Fig: 3 Traveling salesman problem — solution for TSP (edge weights shown in the figure)
37. 37. The set cover problem
38. 38. <ul><li>The set cover problem is to identify the smallest number of sets whose union still contains all elements in the universe. </li></ul>The set cover problem
39. 39. Algorithm The set cover problem
40. 40. Example <ul><li>Assume we are given the following elements U = {1,2,3,4,5} and sets S = {{1,2,3},{2,4},{3,4},{4,5}}. Clearly the union of all the sets in S contains all elements in U. However, we can cover all of the elements with the following smaller number of sets: SETCOVER = {{1,2,3},{4,5}}. [4] </li></ul>The set cover problem
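The algorithm slide's figure did not survive extraction, but the standard greedy heuristic for set cover — repeatedly take the set covering the most uncovered elements — reproduces the example above. A sketch (function name is ours):

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        cover.append(best)
        uncovered -= best
    return cover

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(U, S))  # → [{1, 2, 3}, {4, 5}]
```

On this instance the greedy choice happens to be optimal; in general the greedy algorithm is only a ln(n)-approximation.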
41. 41. The set cover problem Fig: 4 1 2 5 4 3 SET COVER SET COVER
42. 42. Genetic Algorithm
43. 43. <ul><li>Genetic Algorithms are a family of computational models inspired by evolution. These algorithms encode a potential solution to a specific problem on a simple chromosome-like data structure and apply recombination operators to these structures so as to preserve critical information. Genetic algorithms are often viewed as function optimizers, although the range of problems to which genetic algorithms have been applied is quite broad.[7] </li></ul>Genetic Algorithm
44. 44. <ul><li>An implementation of a genetic algorithm begins with a population of (typically random) chromosomes. One then evaluates these structures and allocates reproductive opportunities in such a way that chromosomes which represent a better solution to the target problem are given more chances to 'reproduce' than those which are poorer solutions. The 'goodness' of a solution is typically defined with respect to the current population. </li></ul>Genetic Algorithm
45. 45. Algorithm GA <ul><li>formulate initial population </li></ul><ul><li>randomly initialize population </li></ul><ul><li>repeat </li></ul><ul><li>evaluate objective function </li></ul><ul><li>find fitness function </li></ul><ul><li>apply genetic operators </li></ul><ul><li>reproduction </li></ul><ul><li>crossover </li></ul><ul><li>mutation </li></ul><ul><li>until stopping criteria </li></ul>Genetic Algorithm
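The loop above can be sketched end to end. The following minimal version is our own illustrative code, assuming bit-string chromosomes, binary tournament selection, one-site crossover, and bit-flip mutation, with the OneMax toy problem (maximize the number of 1-bits) as the objective:

```python
import random

def genetic_algorithm(fitness, length=16, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Minimal GA loop: reproduction, crossover, mutation until the stopping criterion."""
    # formulate and randomly initialize the population
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():
        # reproduction: a binary tournament favours the fitter of two random strings
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):          # stopping criterion: generation budget
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:
                site = random.randrange(1, length)          # one-site crossover
                c1, c2 = p1[:site] + p2[site:], p2[:site] + p1[site:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                for i in range(length):                     # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[i] = 1 - child[i]
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# OneMax: the fitness of a bit string is simply the sum of its bits
best = genetic_algorithm(sum)
print(sum(best))  # usually 16, or very close to it
```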
46. 46. The Pictorial representation of the Working Principle of a Simple Genetic Algorithm Fig: 5
47. 47. <ul><li>The Travelling salesman and Vertex cover can be solved using Genetic Algorithm. In order to use GA’s to solve the above problems we need to follow the working principles of GA. </li></ul>Genetic Algorithm
48. 48. <ul><li>1. Coding </li></ul><ul><li>Variables are first coded in some string structures. </li></ul>Fig: 6 Genetic Algorithm
49. 49. <ul><li>Binary-coded strings having 1s and 0s are mostly used. The length of the string is usually determined according to the desired solution accuracy. For example, if four bits are used to code each variable in a two-variable optimization problem, the strings (0000 0000) and (1111 1111) would represent the two extreme points of the search space, respectively, </li></ul>Genetic Algorithm
50. 50. <ul><li>because the sub-strings (0000) and (1111) have the minimum and the maximum decoded values. Any other eight-bit string can be found to represent a point in the search space according to a fixed mapping rule. Usually, the following linear mapping rule is used: </li></ul>Genetic Algorithm
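The linear mapping rule itself appeared as a figure on the slide; the standard form scales the decoded integer of an l-bit substring into the variable's interval [x_min, x_max]. A hedged sketch of that rule (our own names):

```python
def decode(bits, x_min, x_max):
    """Linear mapping rule: x = x_min + (x_max - x_min) * DV / (2**l - 1),
    where DV is the decoded (base-2) value of the l-bit substring."""
    dv = int("".join(map(str, bits)), 2)
    return x_min + (x_max - x_min) * dv / (2 ** len(bits) - 1)

print(decode([0, 0, 0, 0], 2.0, 6.0))  # → 2.0 (lower bound of the interval)
print(decode([1, 1, 1, 1], 2.0, 6.0))  # → 6.0 (upper bound of the interval)
```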
51. 51. 2. Fitness function <ul><li>A Fitness function F(i) is first derived from the objective function and used in successive genetic operations. Fitness in biological sense is a quality value which is a measure of the reproductive efficiency of chromosomes. </li></ul>Genetic Algorithm
52. 52. <ul><li>Individuals with a higher fitness value have a higher probability of being selected as candidates for further examination. Certain genetic operators require that the fitness function be non-negative, although others do not have this requirement. </li></ul>Genetic Algorithm
53. 53. Two commonly adopted fitness mappings are presented below: 1. 2. Genetic Algorithm
54. 54. 3. GA operator <ul><li>The operation of GAs begins with a population of random strings representing design or decision variables. The population is then operated on by three main operators: reproduction, crossover and mutation, to create a new population of points. GAs can be viewed as trying to maximize the fitness function by evaluating several solution vectors. The purpose of these operators is to create new solution vectors by selection, combination or alteration of the current solution vectors that have been shown to be good temporary solutions. </li></ul>Genetic Algorithm
55. 55. 4. Reproduction <ul><li>Reproduction (or selection) is an operator that makes more copies of better strings in a new population. Reproduction is usually the first operator applied on a population. Reproduction selects good strings in a population and forms a mating pool. This is one of the reasons the reproduction operation is sometimes known as the selection operator. Thus, in the reproduction operation the process of natural selection causes those individuals that encode successful structures to produce copies more frequently. To sustain the generation of a new population, the reproduction of the individuals in the current population is necessary, and these copies should come from the fittest individuals of the previous population. </li></ul>Genetic Algorithm
56. 56. 5. Crossover <ul><li>A crossover operator is used to recombine two strings to get a better string. In the crossover operation, the recombination process creates different individuals in the successive generations by combining material from two individuals of the previous generation. In reproduction, good strings in a population are probabilistically assigned a larger number of copies and a mating pool is formed. It is important to note that no new strings are formed in the reproduction phase. In the crossover operator, new strings are created by exchanging information among strings of the mating pool. </li></ul><ul><li>The two strings participating in the crossover operation are known as parent strings and the resulting strings are known as children strings. </li></ul>Genetic Algorithm
57. 57. <ul><li>One site crossover and two site crossover are the most common ones adopted. In most crossover operators, two strings are picked from the mating pool at random and some portion of the strings are exchanged between the strings. Crossover operation is done at string level by randomly selecting two strings for crossover operations. A one site crossover operator is performed by randomly choosing a crossing site along the string and by exchanging all bits on the right side of the crossing site. </li></ul>Genetic Algorithm
58. 58. One site crossover operation Fig: 7 Genetic Algorithm
59. 59. <ul><ul><li>Two site crossover operation </li></ul></ul>Fig: 8 Genetic Algorithm
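Figures 7 and 8 show the two operators; in code they reduce to a few slices. A minimal sketch operating on bit strings (function names are ours):

```python
import random

def one_site_crossover(p1, p2):
    """Exchange all bits to the right of one randomly chosen crossing site."""
    site = random.randrange(1, len(p1))
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

def two_site_crossover(p1, p2):
    """Exchange only the segment between two randomly chosen crossing sites."""
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

c1, c2 = one_site_crossover("00000000", "11111111")
# whatever the site, the two children together still hold eight 0s and eight 1s
```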
60. 60. 6. Mutation <ul><li>Mutation adds new information in a random way to the genetic search process and ultimately helps to avoid getting trapped at local optima. It is an operator that introduces diversity in the population whenever the population tends to become homogeneous due to repeated use of reproduction and crossover operators. Mutation may cause the chromosomes of individuals to be different from those of their parent individuals. </li></ul>Genetic Algorithm
61. 61. <ul><li>Mutation in a way is the process of randomly disturbing genetic information. It operates at the bit level; when the bits are being copied from the current string to the new string, there is a probability that each bit may become mutated. This probability is usually quite small and is called the mutation probability p_m.[7] </li></ul>Genetic Algorithm
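Bit-flip mutation is essentially a one-liner; a sketch under the same bit-list representation (the function name is ours):

```python
import random

def mutate(bits, p_m=0.01):
    """Flip each bit independently with the small mutation probability p_m."""
    return [1 - b if random.random() < p_m else b for b in bits]

print(mutate([0, 1, 0, 1], p_m=0.0))  # → unchanged: [0, 1, 0, 1]
print(mutate([0, 1, 0, 1], p_m=1.0))  # → fully flipped: [1, 0, 1, 0]
```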
62. 62. Solving TSP using GA
63. 63. Steps to be followed <ul><li>i. Randomly create the initial population of individual strings for the given TSP and create a matrix representation of each; every matrix must satisfy the two basic conditions described below. </li></ul><ul><li>ii. Assign a fitness to each individual in the population using the fitness measure </li></ul><ul><li>F(t) = (value of the assignment of the given problem) / (value of the string) </li></ul><ul><li>The selection criterion favours strings whose fitness value is close to 1. </li></ul>Solving TSP using GA
64. 64. <ul><li>iii. Create a new offspring population of strings from pairs of strings in the parent population by applying the crossover operation. </li></ul><ul><li>iv. Mutate the resultant offspring if required. </li></ul><ul><li>Note: after crossover and mutation, the offspring population generally has higher fitness than the parents. </li></ul><ul><li>v. Treat the new offspring as the parent population and continue steps (iii) and (iv) until we get a single offspring, which will be an optimal or near-optimal solution to the problem. </li></ul>Solving TSP using GA
65. 65. Fig 9: Flow chart of TSP using GA
66. 66. 1. Matrix Representation: <ul><li>According to the algorithm we represent the tour as a binary matrix. In the figure (A, B, C, D, E) every gene is represented as binary bits; if element (i, j) of the matrix is set to 1, it means there is an edge (directed path) from city i to city j in the tour.[8] </li></ul>Solving TSP using GA
67. 67. Fig: 10 Solving TSP using GA
68. 68. <ul><li>According to the algorithm we represent the tour as a binary matrix. In Fig. 10 each gene is represented as a binary bit; if element (i, j) is set to 1, it means there is an edge (directed path) between city i and city j in the tour. This representation is for the symmetric TSP, where d_ij = d_ji. For the asymmetric case, where d_ij ≠ d_ji, the representation is instead, </li></ul>Solving TSP using GA
69. 69. Fig: 11 Solving TSP using GA
70. 70. <ul><li>The left upper triangular matrix (LUTM) represents the movement from left to right, that is, the forward movement. </li></ul><ul><li>In the LUTM we have the paths A->B, B->C, C->E, whereas the upper right triangular matrix (URTM) represents the movement from bottom to top; in the URTM there are the paths E->D, D->A. </li></ul><ul><li>In this way the complete tour is A->B->C->E->D->A </li></ul>Solving TSP using GA
71. 71. <ul><li>The matrix representation must satisfy two conditions for a legal tour: </li></ul><ul><li>symmetric case </li></ul><ul><li>asymmetric case </li></ul>Solving TSP using GA
72. 72. <ul><li>For the symmetric case: </li></ul><ul><li>(i) The number of matrix elements that have the value (1) must equal the number of vertices in the tour. </li></ul><ul><li>(ii) The number of matrix elements that have the value (1) in the row and column of the same vertex must be equal to two. </li></ul>Solving TSP using GA
73. 73. <ul><li>For the asymmetric case: </li></ul><ul><li>(i) The total number of elements that have the value (1) in the LUTM and URTM together must equal the number of vertices in the tour. </li></ul><ul><li>(ii) For the same vertex, the matrix elements that have the value (1) must sum to 2. </li></ul>Solving TSP using GA
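The legality conditions for the symmetric representation can be checked mechanically; a sketch assuming an n×n 0/1 matrix M with the tour stored in the upper triangle (names and the example tour are ours):

```python
def is_legal_symmetric_tour(M):
    """Check the two legality conditions for a symmetric tour matrix."""
    n = len(M)
    # (i) exactly as many 1-entries as vertices in the tour
    if sum(sum(row) for row in M) != n:
        return False
    # (ii) for each vertex, the 1-entries in its row plus its column sum to 2
    return all(sum(M[v]) + sum(M[u][v] for u in range(n)) == 2
               for v in range(n))

# the tour A->B->C->D->E->A stored in the upper triangle of a 5x5 matrix
M = [[0, 1, 0, 0, 1],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]
print(is_legal_symmetric_tour(M))  # → True
```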
74. 74. 2. Cross-over Operation <ul><li>Here the crossover operator applies the OR operation to the two parent matrices to obtain a single offspring matrix. </li></ul>Solving TSP using GA
75. 75. 3. Mutation Operation <ul><li>If the resultant tour (matrix) is an illegal tour (i.e., it does not satisfy the two conditions mentioned above), then it must be repaired. This is done by counting the elements with value (1) in each row and column for each city: if the count is greater than 2, repeatedly delete the longest edge from the resultant tour until the count equals 2. However, if the count is less than 2, then add the missing edges to the tour with a greedy algorithm. Consider two tours </li></ul><ul><li>T1: A->E->C->D->B->A = 17 </li></ul><ul><li>T2: A->B->E->C->D->A = 22 </li></ul>Solving TSP using GA
76. 76. <ul><li>Then cross-over and mutation of these two tours will be, </li></ul>Fig: 12
77. 77. Example <ul><li>Consider the weighted matrix, </li></ul>Solving TSP using GA
78. 78. <ul><li>The value of the assignment of the above problem is 13 </li></ul><ul><li>Initial population: </li></ul><ul><li>A->E->C->D->B->A = 17 </li></ul><ul><li>A->B->E->C->D->A = 22 </li></ul><ul><li>A->C->D->E->B->A = 23 </li></ul><ul><li>A->B->D->C->E->A = 24 </li></ul><ul><li>A->E->B->C->D->A = 25 </li></ul><ul><li>A->B->C->E->D->A = 26 </li></ul><ul><li>A->E->D->C->B->A = 28 </li></ul><ul><li>A->D->C->E->B->A = 29 </li></ul><ul><li>A->E->C->B->D->A = 30 </li></ul><ul><li>According to the fitness criteria we select the tours having the values </li></ul><ul><li>17, 22, 25, 29 </li></ul>Solving TSP using GA
79. 79. The resultant tour will be A->B->C->D->E->A = 15 Solving TSP using GA
80. 80. Analysis of TSP Algorithms
81. 81. <ul><li>The Greedy algorithm is the simplest improvement algorithm. It starts with the departure Node 1. The algorithm then calculates the distances to the other n-1 nodes and goes to the next closest node. Taking the current node as the departing node, it selects the next nearest node from the remaining n-2 nodes. The process continues until all the nodes are visited once and only once and the tour returns to Node 1. When the algorithm terminates, the sequence is returned as the best tour; that is the final solution. The advantage of this algorithm is its simplicity to understand and implement. Sometimes it may lead to a very good solution when the problem size is small. Also, because this algorithm does not entail any exchange of nodes, it saves considerable computational time. </li></ul>Analysis of TSP Algorithms
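The greedy (nearest-neighbour) tour construction described above can be sketched as follows, assuming a square distance matrix `dist`; the function name and example data are ours:

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP: repeatedly visit the closest unvisited city, then return home."""
    n = len(dist)
    tour, current = [start], start
    unvisited = set(range(n)) - {start}
    while unvisited:
        current = min(unvisited, key=lambda c: dist[current][c])
        tour.append(current)
        unvisited.remove(current)
    tour.append(start)  # close the cycle back at the departure node
    return tour

dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
print(nearest_neighbor_tour(dist))  # → [0, 1, 2, 3, 0], total length 7
```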
82. 82. <ul><li>The Genetic Algorithm, as the name implies, gets its idea from genetics. The algorithm works this way: obtain the maximum number of individuals in the population and the maximum number of generations from the user, generate solutions for the first generation's population randomly, and represent each solution as a string. Mutation generates new individuals by swapping any two randomly selected genes. The Genetic Algorithm works better than the Greedy method but is more complicated and requires a large amount of computation time. [9] </li></ul>Analysis of TSP Algorithms
83. 83. Result
84. 84. Result Based on Length Comparison

| Algorithm | City Size 30 | City Size 100 | City Size 500 |
| --- | --- | --- | --- |
| Greedy | 2330 | 4478 | 10402 |
| Genetic Algorithm | 2295 | 4302 | 9906 |
85. 85. Result Based on Time Comparison

| Algorithm | City Size 30 | City Size 100 | City Size 500 |
| --- | --- | --- | --- |
| Greedy | 6 | 8 | 34 |
| Genetic Algorithm | 2 | 6 | 27 |
86. 86. Conclusion
87. 87. <ul><li>The Greedy algorithm takes a very long time for a large number of cities, such as 500. It is not efficient to use the Greedy algorithm for problems with more than 100 cities. </li></ul><ul><li>For small-size TSP, the improved greedy algorithm is recommended. For medium- and large-size problems, the genetic algorithm is recommended. </li></ul>Conclusion
88. 88. Reference <ul><li>[1] http://www.cs.uiuc.edu/class/sp09/cs598csc/ </li></ul><ul><li>[2] Thomas H. Cormen et al., Introduction to Algorithms </li></ul><ul><li>[3] http://www.cs.umsl.edu/~sanjiv/classes/cs5130/lectures/approx.pdf </li></ul><ul><li>[4] http://en.wikipedia.org/wiki/Set_cover_problem </li></ul><ul><li>[5] Hybrid Genetic Algorithm for Minimum Vertex Cover Problem, in Proceedings of the Metaheuristics International Conference, 2003, pp. 13-17 </li></ul><ul><li>[6] R. Frucht, Graphs of degree three with a given abstract group, Canad. J. Math., 1949 </li></ul><ul><li>[7] http://www.civil.iitb.ac.in/tvm/2701_dga/2701-ga-notes/gadoc.pdf </li></ul><ul><li>[8] http://www.ijens.org/96110-1414%20IJBAS-IJENS.pdf </li></ul><ul><li>[9] D. Whitley, "Modeling Hybrid Genetic Algorithms", in Genetic Algorithms in Engineering and Computer Science, Winter, Periaux, Galan and Cuesta, eds., pp. 191-201, Wiley, 1995. </li></ul><ul><li>[10] Traveling Salesman Problem Based on Genetic Algorithm, Third International Conference on Natural Computation (ICNC 2007) </li></ul>
89. 89. <ul><li>Thank you for your attendance and attention. </li></ul>