# Discrete ad

3,630 views

Published on

0 Comments
0 Likes
Statistics
Notes
• Full Name
Comment goes here.

Are you sure you want to Yes No
Your message goes here
• Be the first to comment

• Be the first to like this

No Downloads
Views
Total views
3,630
On SlideShare
0
From Embeds
0
Number of Embeds
1
Actions
Shares
0
Downloads
41
Comments
0
Likes
0
Embeds 0
No embeds

No notes for slide

### Discrete ad

GRAPHS

A graph is a pictorial presentation of numerical data. Numerical data can also be presented in a table of values, or sometimes in a formula, but graphic presentations usually have the advantage of displaying at a glance much of the qualitative behavior of the data, as well as its overall features. A disadvantage is that a high degree of accuracy is often not possible in graphs.

In mathematics, a graph is an abstract representation of a set of objects where some pairs of the objects are connected by links. The interconnected objects are represented by mathematical abstractions called vertices, and the links that connect some pairs of vertices are called edges. Typically, a graph is depicted in diagrammatic form as a set of dots for the vertices, joined by lines or curves for the edges. Graphs are one of the objects of study in discrete mathematics.

In the most common sense of the term, a graph is an ordered pair G = (V, E) comprising a set V of vertices or nodes together with a set E of edges or lines, which are 2-element subsets of V (i.e., an edge is related to two vertices, and the relation is represented as an unordered pair of the vertices with respect to the particular edge). To avoid ambiguity, this type of graph may be described precisely as undirected and simple.

Other senses of graph stem from different conceptions of the edge set. In one more generalized notion, E is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of (not necessarily distinct) vertices. Many authors call this type of object a multigraph or pseudograph.

All of these variants and others are described more fully below.

The vertices belonging to an edge are called the ends, endpoints, or end vertices of the edge.
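The ordered-pair definition G = (V, E) can be sketched directly with Python sets, modelling each edge as a 2-element frozenset (the vertex labels below are illustrative):

```python
# A minimal sketch of the definition G = (V, E): V is a set of
# vertices, and each edge in E is an unordered 2-element subset of V,
# modelled here with frozenset.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}), frozenset({1, 3})}

# In an undirected simple graph, every edge is a 2-element subset of V.
assert all(len(e) == 2 and e <= V for e in E)
print(len(V), len(E))  # order |V| and size |E|
```

Using frozensets makes the edges genuinely unordered: frozenset({1, 2}) and frozenset({2, 1}) are the same object, matching the definition above.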
A vertex may exist in a graph and not belong to an edge.

V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs, because many of the arguments fail in the infinite case. The order of a graph is |V| (the number of vertices). A graph's size is |E|, the number of edges. The degree of a vertex is the number of edges that connect to it, where an edge that connects to the vertex at both ends (a loop) is counted twice.

For an edge {u, v}, graph theorists usually use the somewhat shorter notation uv.

THE EDGES OF THE GRAPHS

The edges may be directed (asymmetric) or undirected (symmetric). For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this is an undirected graph, because if person A shook hands with person B, then person B also shook hands with person A. On the other hand, if the vertices represent people at a party, and there is an edge from person A to person B when person A knows of person B, then this graph is directed, because knowing of someone is not necessarily a symmetric relation (that is, one person knowing of another does not necessarily imply the reverse; for example, many fans may know of a celebrity, but the celebrity is unlikely to know of all their fans). This latter type of graph is called a directed graph, and the edges are called directed edges or arcs; in contrast, a graph where the edges are not directed is called undirected.

A. ASYMMETRIC GRAPHS

A semi-symmetric graph is an undirected graph that is edge-transitive and regular, but not vertex-transitive. In other words, a graph is semi-symmetric if each vertex has the same number of incident edges, and there is a symmetry taking any of its edges to any other of its edges, but there is some pair of vertices that cannot be mapped into each other by a symmetry.
A semi-symmetric graph must be bipartite, and its automorphism group must act transitively on each of the two vertex sets of the bipartition. In the diagram on the right, green vertices cannot be mapped to red ones by any automorphism. Semi-symmetric graphs were first studied by Folkman in 1967, who discovered the smallest semi-symmetric graph, the Folkman graph on 20 vertices.

The smallest cubic semi-symmetric graph is the Gray graph on 54 vertices. It was first observed to be semi-symmetric by Bouwer (1968). It was proven to be the smallest cubic semi-symmetric graph by Dragan Marušič and Aleksander Malnič.

All the cubic semi-symmetric graphs on up to 768 vertices are known. According to Conder, Malnič, Marušič and Potočnik, the four smallest possible cubic semi-symmetric graphs after the Gray graph are the Iofinova–Ivanov graph on 110 vertices, the Ljubljana graph on 112 vertices, a graph on 120 vertices with girth 8, and the Tutte 12-cage.

Example of Asymmetric Graphs

Folkman Graph

The Folkman graph is the smallest semi-symmetric graph, discovered in 1967 by J. Folkman.

As a semi-symmetric graph, the Folkman graph is bipartite, and its automorphism group acts transitively on each of the two vertex sets of the bipartition. In the diagram below indicating the chromatic number of the graph, the green vertices cannot be mapped to red ones by any automorphism, but any red vertex can be mapped to any other red vertex, and any green vertex can be mapped to any other green vertex.

The characteristic polynomial of the Folkman graph is (x − 4) x^10 (x + 4) (x^2 − 6)^4.

B. SYMMETRIC GRAPHS

A graph G is symmetric (or arc-transitive) if, given any two pairs of linked vertices u1—v1 and u2—v2 of G, there is an automorphism

f : V(G) -> V(G)

such that

f(u1) = u2 and f(v1) = v2.
In other words, a graph is symmetric if its automorphism group acts transitively upon ordered pairs of linked vertices (that is, upon edges considered as having a direction). Such a graph is sometimes also called 1-arc-transitive or flag-transitive.

By definition (ignoring u1 and u2), a symmetric graph without isolated vertices must also be vertex-transitive. Since the definition above maps one edge to another, a symmetric graph must also be edge-transitive. However, an edge-transitive graph need not be symmetric, since a—b might map to c—d, but not to d—c. Semi-symmetric graphs, for example, are edge-transitive and regular, but not vertex-transitive.

Every connected symmetric graph must thus be both vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree. However, for even degree, there exist connected graphs which are vertex-transitive and edge-transitive, but not symmetric. Such graphs are called half-transitive. The smallest connected half-transitive graph is Holt's graph, with degree 4 and 27 vertices. Confusingly, some authors use the term "symmetric graph" to mean a graph which is vertex-transitive and edge-transitive, rather than an arc-transitive graph. Such a definition would include half-transitive graphs, which are excluded under the definition above.

A distance-transitive graph is one where, instead of considering pairs of linked vertices (i.e. vertices a distance of 1 apart), the definition covers any two pairs of vertices that are the same distance apart. Such graphs are automatically symmetric, by definition.

Example of Symmetric Graphs

Petersen Graph

The Petersen graph is a (cubic) symmetric graph. Any pair of linked vertices can be mapped to another by an automorphism, since any five-vertex ring can be mapped to any other.

The Petersen graph is an undirected graph with 10 vertices and 15 edges.
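One standard construction of the Petersen graph is as the Kneser graph K(5,2): the vertices are the 2-element subsets of a 5-element set, with two vertices adjacent exactly when the subsets are disjoint. A sketch verifying the counts above:

```python
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are 2-element
# subsets of {0,...,4}; edges join pairs of disjoint subsets.
vertices = [frozenset(c) for c in combinations(range(5), 2)]
edges = {frozenset({u, v}) for u, v in combinations(vertices, 2) if not (u & v)}

# Check the vertex count, edge count, and 3-regularity (cubic).
degree = {v: sum(1 for e in edges if v in e) for v in vertices}
print(len(vertices), len(edges), set(degree.values()))  # 10 15 {3}
```

The check also confirms the graph is cubic (every vertex has degree 3), as stated above.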
It is a small graph that serves as a useful example and counterexample for many problems in graph theory. The Petersen graph is named for Julius Petersen, who in 1898 constructed it to be the smallest bridgeless cubic graph with no three-edge-coloring. Although the graph is generally credited to Petersen, it had in fact first appeared 12 years earlier, in 1886.

Donald Knuth states that the Petersen graph is "a remarkable configuration that serves as a counterexample to many optimistic predictions about what might be true for graphs in general."

The Petersen graph is also the unique (3,5)-Moore graph: a 3-regular graph of girth 5 that achieves the naive lower bound on the number of vertices. (Pick a vertex: it has 3 neighbors, and there are 3 · 2 = 6 vertices at distance two, giving at least 1 + 3 + 6 = 10 vertices.) The dodecahedron also has degree 3 and girth 5, but with 20 vertices it does not achieve this bound.

TYPES OF GRAPHS

A. BAR GRAPH

Bar graphs are used to display data in a similar way to line graphs. However, rather than using a point on a plane to define a value, a bar graph uses a horizontal or vertical rectangular bar that levels off at the appropriate level.

There are many characteristics of bar graphs that make them useful. Some of these are that:

They make comparisons between different variables very easy to see.

They clearly show trends in data, meaning that they show how one variable is affected as the other rises or falls.

Given one variable, the value of the other can be easily determined.

Bar charts are used for plotting discrete (or 'discontinuous') data, i.e. data which takes distinct values and is not continuous.
Some examples of discontinuous data include 'shoe size' or 'eye color', for which you would use a bar chart. In contrast, some examples of continuous data would be 'height' or 'weight'. A bar chart is very useful when you are trying to record certain information, whether it is continuous or discontinuous data.

Example of Bar Graph

A bar graph is a really good way to show relative sizes: it is easy to see which types of movie are most liked, and which are least liked, at a glance.

You can use bar graphs to show the relative sizes of many things, such as what type of car people have, how many customers a shop has on different days, and so on.

B. PICTOGRAPH

A pictograph (also called pictogram or pictogramme) is an ideogram that conveys its meaning through its pictorial resemblance to a physical object. The earliest examples of pictographs include ancient or prehistoric drawings or paintings found on rock walls. Pictographs are also used in writing and graphic systems in which the characters are to a considerable extent pictorial in appearance.

Pictography is a form of writing which uses representational, pictorial drawings. It is a basis of cuneiform and, to some extent, hieroglyphic writing, which also uses drawings as phonetic letters or determinative rhymes.

You first encounter pictographs during childhood and bump into them all through life: at school, at work, and all over magazines and on TV. These diagrams, which use small picture symbols to compare information, are a media favorite; statisticians, though, do not share the sentiment.

Example of Pictograph

The pictograph shows the number of varieties of apples stored at a supermarket, where each full apple symbol stands for 10 apples.

Choices:
A. 150
B. 120
C. 140
D. 200

Correct Answer: A

Solution:

Step 1: The pictograph shows 14 full apples and 2 half apples.

Step 2: So, there are 140 + 10 = 150 apples stored in the supermarket.
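The solution's arithmetic assumes each full apple symbol stands for 10 apples (so a half symbol stands for 5); the count can be checked directly:

```python
# Pictograph count: 14 full symbols at 10 apples each,
# plus 2 half symbols at 5 apples each.
full_symbols, half_symbols = 14, 2
total = full_symbols * 10 + half_symbols * 5
print(total)  # 150
```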
As you can see, the pictograph is a way of representing statistical data using symbolic figures to match the frequencies of different kinds of data.

DIGRAPH

A digraph or digram (from the Greek: δίς, dís, "double" and γράφω, gráphō, "write") is a pair of characters used to write one phoneme (distinct sound), or else a sequence of phonemes that does not correspond to the normal values of the two characters combined. The sound is often, but not necessarily, one which cannot be expressed using a single character in the orthography used by the language. Usually, the term "digraph" is reserved for graphemes whose pronunciation is always or nearly always the same.

When digraphs do not represent a distinct phoneme, they may be relics from an earlier period of the language when they did have a different pronunciation, or represent a distinction which is made only in certain dialects, like wh in English. They may also be used for purely etymological reasons, like rh in English.

In some language orthographies, like that of Croatian (lj, nj, dž) or Czech (ch), digraphs are considered individual letters, meaning that they have their own place in the alphabet, in the standard orthography, and cannot be separated into their constituent graphemes, e.g. when sorting, abbreviating or hyphenating. In others, like English, this is not the case.

In graph theory, a digraph is short for directed graph: a diagram composed of points called vertices (nodes) and arrows called arcs going from a vertex to a vertex. For example, the figure G1 below is a digraph with 3 vertices and 4 arcs.

Example of Digraph

In the example G1 given above, V = {1, 2, 3} and A = {<1, 1>, <1, 2>, <1, 3>, <2, 3>}.

Digraph representation of binary relations

A binary relation on a set can be represented by a digraph. Let R be a binary relation on a set A, that is, R is a subset of A × A.
Then the digraph G representing R can be constructed as follows:

1. The vertices of the digraph G are the elements of A, and
2. <x, y> is an arc of G from vertex x to vertex y if and only if <x, y> is in R.

Example: The less-than relation R on the set of integers A = {1, 2, 3, 4} is the set {<1, 2>, <1, 3>, <1, 4>, <2, 3>, <2, 4>, <3, 4>}, and it can be represented by the digraph G2.

Let us now define some of the basic concepts on digraphs.

Definition (loop): An arc from a vertex to itself, such as <1, 1>, is called a loop (or self-loop).

Definition (degree of vertex): The in-degree of a vertex is the number of arcs coming to the vertex, and the out-degree is the number of arcs going out of the vertex. For example, the in-degree of vertex 2 in the digraph G2 shown above is 1, and the out-degree is 2.

Definition (path): A path from a vertex x0 to a vertex xn in a digraph G = (V, A) is a sequence of vertices x0, x1, ..., xn such that for each i, 0 ≤ i ≤ n − 1, either <xi, xi+1> ∈ A or <xi+1, xi> ∈ A; that is, between any consecutive pair of vertices there is an arc connecting them. A path is called a directed path if <xi, xi+1> ∈ A for every i, 0 ≤ i ≤ n − 1. If no arcs appear more than once in a path, the path is called a simple path. A path is called elementary if no vertices appear more than once in it.

Types of Digraph

A. Pan-dialectical digraphs

Some languages have a unified orthography with digraphs that represent distinct pronunciations in different dialects. For example, in Breton there is a digraph zh that is pronounced [z] in most dialects, but [h] in Vannetais. Similarly, the Saintongeais dialect of French has a digraph jh that is pronounced [h] in words that correspond to [ʒ] in standard French.
Similarly, Catalan has a digraph ix that is pronounced [ʃ] in Eastern Catalan, but [jʃ] or [js] in Western Catalan or Valencian.

B. Ambiguity

Some letter pairs should not be interpreted as digraphs, but appear due to compounding, as in hogshead and cooperate. This is often not marked in any way (it is an exception which must simply be memorized), but some authors indicate it either by breaking up the digraph with a hyphen, as in hogs-head, co-operate, or with a diaeresis mark, as in coöperate, though usage of the diaeresis has declined in English within the last century. This also occurs in names such as Clapham, Townshend, and Hartshorne, and is not marked there either.

In the romanization of Japanese, the constituent sounds (morae) are usually indicated by digraphs, but some are indicated by a single letter, and some with a trigraph. A case of ambiguity is the syllabic ん, which is written as n (or sometimes m), except before vowels or y, where it is followed by an apostrophe as n'. For example, the given name じゅんいちろう is romanized as Jun'ichirō, so that it is parsed as /ju/n/i/chi/ro/u/, rather than as /ju/ni/chi/ro/u/.

C. Discontinuous digraphs

The pair of letters making up a phoneme are not always adjacent. This is the case with English silent e. For example, the sequence a…e has the sound /eɪ/ in English cake. This is the result of historical sound changes: cake was originally /kakə/, the open syllable /ka/ came to be pronounced with a long vowel, and later the final schwa dropped off, leaving /kaːk/. Later still, the vowel /aː/ became /eɪ/.

However, alphabets may also be designed with discontinuous digraphs. In the Tatar Cyrillic alphabet, for example, the letter ю is used to write both /ju/ and /jy/.
Usually the difference is evident from the rest of the word, but when it is not, the sequence ю...ь is used for /jy/, as in юнь /jyn/ 'cheap'.

BIPARTITE GRAPHS AND PERFECT MATCHING

A bipartite graph (or bigraph) is a graph whose vertices can be divided into two disjoint sets U and V such that every edge connects a vertex in U to one in V; that is, U and V are independent sets. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles.

The two sets U and V may be thought of as a coloring of the graph with two colors: if we color all nodes in U blue, and all nodes in V green, each edge has endpoints of differing colors, as is required in the graph coloring problem. In contrast, such a coloring is impossible in the case of a non-bipartite graph, such as a triangle: after one node is colored blue and another green, the third vertex of the triangle is connected to vertices of both colors, preventing it from being assigned either color.

One often writes G = (U, V, E) to denote a bipartite graph whose partition has the parts U and V. If |U| = |V|, that is, if the two subsets have equal cardinality, then G is called a balanced bipartite graph.

If a bipartite graph is connected, its bipartition can be defined by the parity of the distances from any arbitrarily chosen vertex v: one subset consists of the vertices at even distance from v, and the other subset consists of the vertices at odd distance from v.

Thus, one may efficiently test whether a graph is bipartite by using this parity technique to assign vertices to the two subsets U and V, separately within each connected component of the graph, and then examining each edge to verify that it has endpoints assigned to different subsets.

Maximum matchings in bipartite graphs

Matching problems are often concerned with bipartite graphs.
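One standard method, the augmenting-path algorithm, can be sketched as follows (the adjacency lists and vertex names here are illustrative):

```python
# A minimal sketch of maximum bipartite matching by augmenting paths.
# `adj` maps each vertex of the left part X to its neighbours in Y.
def max_bipartite_matching(adj):
    match = {}  # maps each matched y to its partner x

    def try_augment(x, seen):
        # Try to match x, possibly re-matching partners along an
        # alternating path of previously matched vertices.
        for y in adj[x]:
            if y in seen:
                continue
            seen.add(y)
            if y not in match or try_augment(match[y], seen):
                match[y] = x
                return True
        return False

    return sum(try_augment(x, set()) for x in adj)

adj = {"a": ["p", "q"], "b": ["p"], "c": ["q", "r"]}
print(max_bipartite_matching(adj))  # 3
```

Each call to try_augment scans the edges once, matching the O(E)-per-path bound discussed below.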
Finding a maximum bipartite matching (often called a maximum cardinality bipartite matching) in a bipartite graph G = (V = (X, Y), E) is perhaps the simplest problem. The augmenting path algorithm finds it by searching for an augmenting path from each free vertex in X to Y and adding it to the matching if one exists. As each path can be found in O(E) time, the running time is O(VE). This solution is equivalent to adding a super source s with edges to all vertices in X, and a super sink t with edges from all vertices in Y, and finding a maximal flow from s to t. All edges with flow from X to Y then constitute a maximum matching. An improvement over this is the Hopcroft–Karp algorithm, which runs in O(√V · E) time. Another approach is based on the fast matrix multiplication algorithm and gives O(V^2.376) complexity, which is better in theory for sufficiently dense graphs, but in practice the algorithm is slower.

In a weighted bipartite graph, each edge has an associated value. A maximum weighted bipartite matching is defined as a perfect matching where the sum of the values of the edges in the matching has a maximal value. If the graph is not complete bipartite, missing edges are inserted with value zero. Finding such a matching is known as the assignment problem. It can be solved by using a modified shortest path search in the augmenting path algorithm. If the Bellman–Ford algorithm is used, the running time becomes O(V^2 E); alternatively, the edge costs can be shifted with a potential to achieve O(V^2 log(V) + VE) running time with the Dijkstra algorithm and a Fibonacci heap. The remarkable Hungarian algorithm solves the assignment problem, and it was one of the beginnings of combinatorial optimization algorithms. The original version of this algorithm needs O(V^2 E) running time, but it can be improved to O(V^2 log(V) + VE) time with extensive use of priority queues.

A perfect matching (a.k.a. 1-factor) is a matching which matches all vertices of the graph.
That is, every vertex of the graph is incident to exactly one edge of the matching. Figure (b) above is an example of a perfect matching. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only part (b) shows a perfect matching. A perfect matching is also a minimum-size edge cover.

EXAMPLE

Given a graph G = (V, E), a matching M in G is a set of pairwise non-adjacent edges; that is, no two edges share a common vertex.

A vertex is matched (or saturated) if it is incident to an edge in the matching. Otherwise the vertex is unmatched.

A maximal matching is a matching M of a graph G with the property that if any edge not in M is added to M, it is no longer a matching; that is, M is maximal if it is not a proper subset of any other matching in graph G. In other words, a matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M. The following figure shows examples of maximal matchings (red) in three graphs.

A maximum matching is a matching that contains the largest possible number of edges. There may be many maximum matchings. The matching number ν(G) of a graph G is the size of a maximum matching. Note that every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in three graphs.

As defined above, a perfect matching matches all vertices of the graph; part (b) of the figure shows one, and a perfect matching is also a minimum-size edge cover.
Thus ν(G) ≤ ρ(G); that is, the size of a maximum matching is no larger than the size of a minimum edge cover.

A near-perfect matching is one in which exactly one vertex is unmatched. This can only occur when the graph has an odd number of vertices, and such a matching must be maximum. In the above figure, part (c) shows a near-perfect matching. If, for every vertex in a graph, there is a near-perfect matching that omits only that vertex, the graph is also called factor-critical.

Given a matching M,

an alternating path is a path in which the edges belong alternately to the matching and not to the matching;

an augmenting path is an alternating path that starts from and ends on free (unmatched) vertices.

One can prove that a matching is maximum if and only if it does not have any augmenting path. (This result is sometimes called Berge's lemma.)

Matching polynomials

A generating function of the number of k-edge matchings in a graph is called a matching polynomial. Let G be a graph and mk be the number of k-edge matchings. One matching polynomial of G is

m(G, x) = Σ_{k ≥ 0} mk x^k.

Another definition gives the matching polynomial as

μ(G, x) = Σ_{k ≥ 0} (−1)^k mk x^(n − 2k),

where n is the number of vertices in the graph. Each type has its uses; for more information see the article on matching polynomials.

EULER'S FORMULA AND COLOURINGS OF GRAPHS

Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that demonstrates the deep relationship between the trigonometric functions and the complex exponential function. Euler's formula states that, for any real number x,

e^(ix) = cos x + i sin x,

where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. This complex exponential function is sometimes called cis(x).
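The identity e^(ix) = cos x + i sin x is easy to check numerically with Python's cmath module:

```python
import cmath
import math

# Numeric check of Euler's formula e^(ix) = cos x + i sin x
# for a handful of sample angles (in radians).
for x in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12
print("ok")
```

At x = π this gives the famous special case e^(iπ) + 1 = 0.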
The formula is still valid if x is a complex number, and so some authors refer to the more general complex version as Euler's formula.

Richard Feynman called Euler's formula "our jewel" and "one of the most remarkable, almost astounding, formulas in all of mathematics."

It was Johann Bernoulli who noted [1702] that

1/(1 + x^2) = (1/2) (1/(1 − ix) + 1/(1 + ix)).

And since

∫ dx/(1 + ax) = (1/a) ln(1 + ax) + C,

the above equation tells us something about complex logarithms. Bernoulli, however, did not evaluate the integral. His correspondence with Euler (who also knew the above equation) shows that he did not fully understand logarithms. Euler also suggested that complex logarithms can have infinitely many values.

Meanwhile, Roger Cotes, in 1714, discovered

ln(cos x + i sin x) = ix

(where "ln" means natural logarithm, i.e. log with base e). We now know that the above equation is only true modulo integer multiples of 2πi, but Cotes missed the fact that a complex logarithm can have infinitely many values, which owes to the periodicity of the trigonometric functions.

Applications in complex number theory

Three-dimensional visualization of Euler's formula

This formula can be interpreted as saying that the function e^(ix) traces out the unit circle in the complex plane as x ranges through the real numbers. Here, x is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians.

The original proof is based on the Taylor series expansions of the exponential function e^z (where z is a complex number) and of sin x and cos x for real numbers x. In fact, the same proof shows that Euler's formula is even valid for all complex numbers z.

A point in the complex plane can be represented by a complex number written in cartesian coordinates. Euler's formula provides a means of conversion between cartesian coordinates and polar coordinates.
The polar form reduces the number of terms from two to one, which simplifies the mathematics when used in multiplication or powers of complex numbers. Any complex number z = x + iy can be written as

z = r (cos φ + i sin φ) = r e^(iφ),

where

x = Re z is the real part,
y = Im z is the imaginary part,
r = |z| = √(x^2 + y^2) is the magnitude of z, and
φ = atan2(y, x)

is the argument of z, i.e., the angle between the x axis and the vector z, measured counterclockwise and in radians, which is defined up to addition of 2π. Many texts write tan^(−1)(y/x) instead of atan2(y, x), but this needs adjustment when x ≤ 0.

Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the definition of the logarithm (as the inverse operator of exponentiation), namely

a = e^(ln a),

and the exponential law

e^a e^b = e^(a + b),

both valid for any complex numbers a and b. Therefore, one can write

z = |z| e^(iφ) = e^(ln |z|) e^(iφ) = e^(ln |z| + iφ)

for any z ≠ 0. Taking the logarithm of both sides shows that

ln z = ln |z| + iφ,

and in fact this can be used as the definition for the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because φ is multi-valued.

Finally, the other exponential law

(e^a)^k = e^(ak),

which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities, as well as de Moivre's formula.

Relationship to trigonometry

Euler's formula provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function:

cos x = (e^(ix) + e^(−ix)) / 2,
sin x = (e^(ix) − e^(−ix)) / (2i).

The two equations above can be derived by adding or subtracting Euler's formulas

e^(ix) = cos x + i sin x,
e^(−ix) = cos x − i sin x,

and solving for either cosine or sine.

These formulas can even serve as the definition of the trigonometric functions for complex arguments x. For example, letting x = iy, we have

cos(iy) = (e^(−y) + e^y) / 2 = cosh y,
sin(iy) = (e^(−y) − e^y) / (2i) = i sinh y.

Complex exponentials can simplify trigonometry, because they are easier to manipulate than their sinusoidal components.
One technique is simply to convert sinusoids into equivalent expressions in terms of exponentials; after the manipulations, the simplified result is still real-valued. Another technique is to represent the sinusoids in terms of the real part of a complex expression, and to perform the manipulations on the complex expression. For example, writing cos(nx) = Re(e^(inx)) leads to the recurrence

cos(nx) = 2 cos x · cos((n − 1)x) − cos((n − 2)x),

which is used for recursive generation of cos(nx) for integer values of n and arbitrary x (in radians).

Coloring of Graphs

Consider a graph G. A vertex coloring, or simply a coloring of G, is an assignment of colors to the vertices of G such that adjacent vertices have different colors. We say that G is n-colorable if there exists a coloring of G which uses n colors. (Since the word "color" is used as a noun, we will try to avoid using it as a verb by saying, for example, "paint" G rather than "color" G when we are assigning colors to the vertices of G.) The minimum number of colors needed to paint G is called the chromatic number of G and is denoted by χ(G).

We give an algorithm by Welch and Powell for a coloring of a graph G. We emphasize that this algorithm does not always yield a minimal coloring of G.

Algorithm 1.10 (Welch-Powell): The input is a graph G.

Step 1. Order the vertices of G according to decreasing degrees.

Step 2. Assign the first color C1 to the first vertex and then, in sequential order, assign C1 to each vertex which is not adjacent to a previous vertex which was assigned C1.

Step 3. Repeat Step 2 with the second color C2 and the subsequence of noncolored vertices.

Step 4. Repeat Step 3 with a third color C3, then a fourth color C4, and so on, until all vertices are colored.

Step 5. Exit.

EXAMPLE

(a) Consider the graph G in the figure. We use the Welch-Powell Algorithm 1.10 to obtain a coloring of G.
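Before working the example, Algorithm 1.10 can be sketched in Python (the small test graph below is illustrative, not the graph of the figure):

```python
# A minimal sketch of the Welch-Powell algorithm: order vertices by
# decreasing degree, then sweep the uncolored vertices once per color,
# giving the current color to every vertex with no neighbour already
# holding it.
def welch_powell(adj):
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    c = 0
    while len(color) < len(adj):
        for v in order:
            if v not in color and all(color.get(u) != c for u in adj[v]):
                color[v] = c
        c += 1
    return color

# Illustrative graph: a triangle 1-2-3 with a pendant vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
coloring = welch_powell(adj)
print(max(coloring.values()) + 1)  # number of colors used: 3
```

As the text warns, this greedy sweep does not always achieve the chromatic number, though here it does: the triangle forces three colors.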
Ordering the vertices according to decreasing degrees yields the following sequence:

A5, A3, A7, A1, A2, A4, A6, A8

The first color is assigned to vertices A5 and A1. The second color is assigned to vertices A3, A4, and A8. The third color is assigned to vertices A7, A2, and A6. All the vertices have been assigned a color, and so G is 3-colorable. Observe that G is not 2-colorable, since vertices A1, A2, and A3, which are connected to each other, must be assigned different colors. Accordingly, χ(G) = 3.

(b) Consider the complete graph Kn with n vertices. Since every vertex is adjacent to every other vertex, Kn requires n colors in any coloring. Thus χ(Kn) = n.

There is no simple way to actually determine whether an arbitrary graph is n-colorable. However, the following theorem (proved in Problem 1.22) gives a simple characterization of 2-colorable graphs.

EULER'S FORMULA FOR PLANAR GRAPHS

A planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints.

A planar graph already drawn in the plane without edge intersections is called a plane graph or a planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point in 2D space, and from every edge to a plane curve, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points. Plane graphs can be encoded by combinatorial maps.

It is easily seen that a graph that can be drawn on the plane can be drawn on the sphere as well, and vice versa.

The equivalence class of topologically equivalent drawings on the sphere is called a planar map.
Although a plane graph has an external or unbounded face, none of the faces of a planar map have a particular status.<br />Generalizations of planar graphs are graphs which can be drawn on a surface of a given genus. In this terminology, planar graphs have graph genus 0, since the plane (and the sphere) are surfaces of genus 0. See "graph embedding" for other related topics.<br />Euler's formula states that if a finite, connected, planar graph is drawn in the plane without any edge intersections, and v is the number of vertices, e is the number of edges and f is the number of faces (regions bounded by edges, including the outer, infinitely large region), then<br />v − e + f = 2.<br />As an illustration, in the butterfly graph given above, v = 5, e = 6 and f = 3. If the second graph is redrawn without edge intersections, it has v = 4, e = 6 and f = 4. In general, if the property holds for all planar graphs with f faces, any change to the graph that creates an additional face while keeping the graph planar would leave v − e + f invariant. Since the property holds for all connected planar graphs with f = 1 (the trees), by mathematical induction it holds for all cases. Euler's formula can also be proven as follows: if the graph isn't a tree, then remove an edge which completes a cycle. This lowers both e and f by one, leaving v − e + f constant. Repeat until the remaining graph is a tree; trees have v = e + 1 and f = 1, yielding v − e + f = 2, i.e. the Euler characteristic is 2.<br />In a finite, connected, simple, planar graph, any face (except possibly the outer one) is bounded by at least three edges and every edge touches at most two faces; using Euler's formula, one can then show that these graphs are sparse in the sense that e ≤ 3v − 6 if v ≥ 3.<br />The Goldner–Harary graph is maximal planar. All its faces are bounded by three edges.<br />A simple graph is called maximal planar if it is planar but adding any edge (on the given vertex set) would destroy that property. 
All faces (even the outer one) are then bounded by three edges, explaining the alternative terms triangular and triangulated for these graphs. If a triangular graph has v vertices with v > 2, then it has precisely 3v − 6 edges and 2v − 4 faces.<br />Euler's formula is also valid for simple polyhedra. This is no coincidence: every simple polyhedron can be turned into a connected, simple, planar graph by using the polyhedron's vertices as vertices of the graph and the polyhedron's edges as edges of the graph. The faces of the resulting planar graph then correspond to the faces of the polyhedron. For example, the second planar graph shown above corresponds to a tetrahedron. Not every connected, simple, planar graph belongs to a simple polyhedron in this fashion: the trees do not, for example. Steinitz's theorem says that the polyhedral graphs formed from convex polyhedra (equivalently: those formed from simple polyhedra) are precisely the finite 3-connected simple planar graphs.<br />A planar graph is one that can be drawn on a plane in such a way that there are no "edge crossings," i.e. edges intersect only at their common vertices.<br />Example: Gas, Water, Electricity Problem. Is there any way to connect each of the three houses to each of the three utilities in such a way that none of the supply lines cross?<br />Example: A pictorial representation of the cube graph that makes it easy to see why it is called the cube graph resembles a cube (figure omitted).<br />This representation includes many "edge crossings." It is possible to redraw the cube graph so that no two edges cross. Coloring the vertices and edges shows more clearly how the graph is redrawn.<br /> <br />COLOURING OF GRAPHS<br />A colouring of graphs, or graph coloring, is a special case of graph labeling; it is an assignment of labels traditionally called "colors" to elements of a graph subject to certain constraints. 
In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices share the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges share the same color, and a face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color.<br />Vertex coloring is the starting point of the subject, and other coloring problems can be transformed into a vertex version. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a planar graph is just a vertex coloring of its planar dual. However, non-vertex coloring problems are often stated and studied as is. That is partly for perspective, and partly because some problems are best studied in non-vertex form, as for instance is edge coloring.<br />The convention of using colors originates from coloring the countries of a map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations it is typical to use the first few positive or nonnegative integers as the "colors". In general one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are.<br />A. Vertex coloring<br />(Figure: a graph that can be 3-colored in 12 different ways.)<br />When used without any qualification, a coloring of a graph is almost always a proper vertex coloring, namely a labelling of the graph’s vertices with colors such that no two vertices sharing the same edge have the same color. 
Since a vertex with a loop could never be properly colored, it is understood that graphs in this context are loopless.<br />A coloring using at most k colors is called a (proper) k-coloring. The smallest number of colors needed to color a graph G is called its chromatic number, χ(G). A graph that can be assigned a (proper) k-coloring is k-colorable, and it is k-chromatic if its chromatic number is exactly k. A subset of vertices assigned to the same color is called a color class; every such class forms an independent set. Thus, a k-coloring is the same as a partition of the vertex set into k independent sets, and the terms k-partite and k-colorable have the same meaning.<br />B. Edge coloring<br />An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring.<br />Bounds on the chromatic number<br />Assigning distinct colors to distinct vertices always yields a proper coloring, so<br />1 ≤ χ(G) ≤ n.<br />The only graphs that can be 1-colored are edgeless graphs, and the complete graph Kn of n vertices requires χ(Kn) = n colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so<br />χ(G)(χ(G) − 1) ≤ 2m.<br />If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number:<br />χ(G) ≥ ω(G).<br />For interval graphs this bound is tight.<br />The 2-colorable graphs are exactly the bipartite graphs, including trees and forests. 
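Because the 2-colorable graphs are exactly the bipartite graphs, 2-colorability can be tested directly with a breadth-first search that alternates colors level by level (a sketch; the function name and example graphs are mine, not from the text):

```python
from collections import deque

def is_two_colorable(adj):
    """Try to 2-color the graph by BFS, alternating colors.
    Succeeds exactly when the graph is bipartite (no odd cycle)."""
    color = {}
    for start in adj:                      # loop handles disconnected graphs
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in color:
                    color[u] = 1 - color[v]
                    queue.append(u)
                elif color[u] == color[v]:
                    return False           # two adjacent vertices, same color
    return True

even_cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_two_colorable(even_cycle), is_two_colorable(triangle))  # True False
```

The even cycle is bipartite and hence 2-colorable; the triangle, an odd cycle, is not.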
By the four color theorem, every planar graph can be 4-colored.<br />A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree,<br />χ(G) ≤ Δ(G) + 1.<br />Complete graphs have χ(G) = n and Δ(G) = n − 1, and odd cycles have χ(G) = 3 and Δ(G) = 2, so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks’ theorem states that<br />χ(G) ≤ Δ(G) for a connected, simple graph G, unless G is a complete graph or an odd cycle.<br />Graphs with high chromatic number<br />Graphs with large cliques have high chromatic number, but the opposite is not true. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalised to the Mycielskians.<br />Mycielski’s Theorem (Jan Mycielski 1955): There exist triangle-free graphs with arbitrarily high chromatic number.<br />From Brooks’s theorem, graphs with high chromatic number must have high maximum degree. Another local property that leads to high chromatic number is the presence of a large clique. But colorability is not an entirely local phenomenon: A graph with high girth looks locally like a tree, because all cycles are long, but its chromatic number need not be 2:<br />Theorem (Erdős): There exist graphs of arbitrarily high girth and chromatic number.<br />Bounds on the chromatic index<br />An edge coloring of G is a vertex coloring of its line graph L(G), and vice versa. Thus,<br />χ′(G) = χ(L(G)).<br />There is a strong relationship between edge colorability and the graph’s maximum degree Δ(G). Since all edges incident to the same vertex need their own color, we have<br />χ′(G) ≥ Δ(G).<br />ADJACENCY MATRICES<br />In mathematics and computer science, an adjacency matrix is a means of representing which vertices of a graph are adjacent to which other vertices. 
Another matrix representation for a graph is the incidence matrix.<br />Specifically, the adjacency matrix of a finite graph G on n vertices is the n × n matrix where the nondiagonal entry aij is the number of edges from vertex i to vertex j, and the diagonal entry aii is, depending on the convention, either once or twice the number of edges (loops) from vertex i to itself. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention. There exists a unique adjacency matrix for each graph (up to permuting rows and columns), and it is not the adjacency matrix of any other graph. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected, the adjacency matrix is symmetric.<br />The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory.<br />The adjacency matrix of an undirected simple graph is symmetric, and therefore has a complete set of real eigenvalues and an orthogonal eigenvector basis. The set of eigenvalues of a graph is the spectrum of the graph.<br />Suppose two directed or undirected graphs G1 and G2 with adjacency matrices A1 and A2 are given. G1 and G2 are isomorphic if and only if there exists a permutation matrix P such that<br />P A1 P⁻¹ = A2.<br />In particular, A1 and A2 are similar and therefore have the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace. These can therefore serve as isomorphism invariants of graphs. 
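For example, relabeling the vertices of a graph permutes the rows and columns of its adjacency matrix, so two labelings of the same graph give similar matrices. A small sketch (the helper names and the three-vertex path are mine, not from the text):

```python
def adjacency_matrix(n, edges):
    """Adjacency matrix of a simple undirected graph on vertices 0..n-1."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1    # undirected: symmetric entries
    return A

def relabel(A, perm):
    """Apply the vertex relabeling i -> perm[i]; entrywise this is
    the conjugation P A P^-1 by the corresponding permutation matrix."""
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            B[perm[i]][perm[j]] = A[i][j]
    return B

# The path 0-1-2 and the same path written as 2-0-1 are isomorphic:
A1 = adjacency_matrix(3, [(0, 1), (1, 2)])
A2 = adjacency_matrix(3, [(2, 0), (0, 1)])
print(relabel(A1, [2, 0, 1]) == A2)  # True
```

The matrices A1 and A2 differ entrywise, yet one permutation of rows and columns carries one onto the other, as the isomorphism criterion above requires.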
However, two graphs may possess the same set of eigenvalues but not be isomorphic – one cannot 'hear' (reconstruct, or 'inverse-scatter') the shape of a graph.<br />If A is the adjacency matrix of the directed or undirected graph G, then the matrix Aⁿ (i.e., the matrix product of n copies of A) has an interesting interpretation: the entry in row i and column j gives the number of (directed or undirected) walks of length n from vertex i to vertex j. This implies, for example, that the number of triangles in an undirected graph G is exactly the trace of A³ divided by 6.<br />The main diagonal of every adjacency matrix corresponding to a graph without loops has all zero entries.<br />For d-regular graphs, d is also an eigenvalue of A, for the all-ones vector (1, 1, ..., 1), and G is connected if and only if the multiplicity of d is 1. It can be shown that −d is also an eigenvalue of A if G is a connected bipartite graph. The above are results of the Perron–Frobenius theorem.<br />When used as a data structure, the main alternative to the adjacency matrix is the adjacency list. Because each entry in the adjacency matrix requires only one bit, it can be represented in a very compact way, occupying only n²/8 bytes of contiguous space, where n is the number of vertices. Besides just avoiding wasted space, this compactness encourages locality of reference.<br />On the other hand, for a sparse graph, adjacency lists win out, because they do not use any space to represent edges which are not present. Using a naïve array implementation on a 32-bit computer, an adjacency list for an undirected graph requires about 8e bytes of storage, where e is the number of edges.<br />Noting that a simple graph can have at most n² edges, allowing loops, we can let d = e / n² denote the density of the graph. Then, 8e > n²/8, i.e. the adjacency list representation occupies more space, precisely when d > 1/64. 
Thus a graph must be sparse indeed to justify an adjacency list representation.<br />Besides the space tradeoff, the different data structures also facilitate different operations. Finding all vertices adjacent to a given vertex in an adjacency list is as simple as reading the list. With an adjacency matrix, an entire row must instead be scanned, which takes O(n) time. Whether there is an edge between two given vertices can be determined at once with an adjacency matrix, while requiring time proportional to the minimum degree of the two vertices with the adjacency list.<br /><ul><li>Here is an example of a labeled graph and its adjacency matrix. The convention followed here is that an adjacent edge counts 1 in the matrix for an undirected graph. (Figure: labeled graph with vertices 1–6, and its adjacency matrix.)</li></ul><ul><li>The adjacency matrix of a complete graph is all 1's except for 0's on the diagonal.</li></ul>The adjacency matrix of an empty graph is a zero matrix.<br />Suppose G is a graph with m vertices, and suppose the vertices have been ordered, say, v1, v2, ..., vm. Then the adjacency matrix A = [aij] of the graph G is the m × m matrix defined by<br /><ul><li>aij = 1 if vi is adjacent to vj</li><li>aij = 0 otherwise</li></ul>Figure (b) contains the adjacency matrix of the graph G in Figure (a) where the vertices are ordered A, B, C, D, E. Observe that each edge {vi, vj} of G is represented twice, by aij = 1 and aji = 1. Thus, in particular, the adjacency matrix is symmetric.<br />The adjacency matrix A of a graph G does depend on the ordering of the vertices of G; that is, a different ordering of the vertices yields a different adjacency matrix. However, any two such adjacency matrices are closely related in that one can be obtained from the other by simply interchanging rows and columns. On the other hand, the adjacency matrix does not depend on the order in which the edges (pairs of vertices) are input into the computer.<br />There are variations of the above representation. If G is a multigraph, then we usually let aij denote the number of edges {vi, vj}. Moreover, if G is a weighted graph, then we may let aij denote the weight of the edge {vi, vj}.<br />Consider a directed graph G = (V, E) with n vertices, v1, v2, ..., vn. The simplest graph representation scheme uses an n × n matrix A of zeroes and ones given by aij = 1 if (vi, vj) ∈ E, and aij = 0 otherwise. That is, the (i, j) element of the matrix is a one only if (vi, vj) is an edge in G. The matrix A is called an adjacency matrix.<br />For example, the adjacency matrix for the graph in the Figure is<br />Clearly, the number of ones in the adjacency matrix is equal to the number of edges in the graph.<br />One advantage of using an adjacency matrix is that it is easy to determine the sets of edges emanating from a given vertex. For example, consider vertex vi. Each one in the ith row corresponds to an edge that emanates from vertex vi. Conversely, each one in the ith column corresponds to an edge incident on vertex vi.<br />We can also use adjacency matrices to represent undirected graphs. That is, we represent an undirected graph G = (V, E) with n vertices using an n × n matrix A of zeroes and ones given by aij = 1 if {vi, vj} ∈ E, and aij = 0 otherwise.<br />Since the two pairs (vi, vj) and (vj, vi) denote the same edge {vi, vj}, the matrix A is symmetric about the diagonal. That is, aij = aji. 
Furthermore, all of the entries on the diagonal are zero. That is, aii = 0 for 1 ≤ i ≤ n.<br />For example, the adjacency matrix for the graph in the Figure is<br />In this case, there are twice as many ones in the adjacency matrix as there are edges in the undirected graph.<br />A simple variation allows us to use an adjacency matrix to represent an edge-labeled graph. For example, given numeric edge labels, we can represent a graph (directed or undirected) using an n × n matrix A in which aij is the numeric label associated with edge (vi, vj) in the case of a directed graph, and edge {vi, vj} in an undirected graph.<br />For example, the adjacency matrix for the graph in the Figure is<br />In this case, the array entries corresponding to non-existent edges have all been set to ∞. Here ∞ serves as a kind of sentinel. The value to use for the sentinel depends on the application. For example, if the edges represent routes between geographic locations, then a route of length ∞ is much like one that does not exist.<br />Since the adjacency matrix has n² entries, the amount of space needed to represent the edges of a graph is O(n²), regardless of the actual number of edges in the graph. If the graph contains relatively few edges, e.g., if e ≪ n², then most of the elements of the adjacency matrix will be zero (or ∞). A matrix in which most of the elements are zero (or ∞) is a sparse matrix.<br />WARSHALL’S ALGORITHM<br />In computer science, the Floyd–Warshall algorithm (sometimes known as the WFI Algorithm or the Roy–Floyd algorithm) is a graph analysis algorithm for finding shortest paths in a weighted graph (with positive or negative edge weights, but no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of the shortest paths between all pairs of vertices, though it does not return details of the paths themselves. The algorithm is an example of dynamic programming. It was published in its currently recognized form by Robert Floyd in 1962. 
However, it is essentially the same as algorithms previously published by Bernard Roy in 1959 and also by Stephen Warshall in 1962.<br />The Floyd–Warshall algorithm compares all possible paths through the graph between each pair of vertices. It is able to do this with only Θ(V³) comparisons in a graph. This is remarkable considering that there may be up to Ω(V²) edges in the graph, and every combination of edges is tested. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is optimal.<br />Consider a graph G with vertices V, each numbered 1 through N. Further consider a function shortestPath(i, j, k) that returns the shortest possible path from i to j using vertices only from the set {1, 2, ..., k} as intermediate points along the way. Now, given this function, our goal is to find the shortest path from each i to each j using only vertices 1 to k + 1.<br />There are two candidates for each of these paths: either the true shortest path only uses vertices in the set {1, ..., k}; or there exists some path that goes from i to k + 1, then from k + 1 to j, that is better. We know that the best path from i to j that only uses vertices 1 through k is defined by shortestPath(i, j, k), and it is clear that if there were a better path from i to k + 1 to j, then the length of this path would be the concatenation of the shortest path from i to k + 1 (using vertices in {1, ..., k}) and the shortest path from k + 1 to j (also using vertices in {1, ..., k}).<br />Therefore, we can define shortestPath(i, j, k) in terms of the following recursive formula:<br />shortestPath(i, j, k) = min( shortestPath(i, j, k − 1), shortestPath(i, k, k − 1) + shortestPath(k, j, k − 1) ), with shortestPath(i, j, 0) = edgeCost(i, j).<br />This formula is the heart of the Floyd–Warshall algorithm. The algorithm works by first computing shortestPath(i, j, k) for all (i, j) pairs for k = 1, then k = 2, etc. 
This process continues until k = n, and we have found the shortest path for all (i, j) pairs using any intermediate vertices.<br />Conveniently, when calculating the kth case, one can overwrite the information saved from the computation of k − 1. This means the algorithm uses quadratic memory. Be careful to note the initialization conditions:<br />/* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j<br />   (infinity if there is none).<br />   Also assume that n is the number of vertices and edgeCost(i,i) = 0.<br />*/<br />int path[][];<br />/* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path<br />   from i to j using intermediate vertices (1..k−1). Each path[i][j] is initialized to<br />   edgeCost(i,j), or infinity if there is no edge between i and j.<br />*/<br />procedure FloydWarshall ()<br />   for k := 1 to n<br />      for i := 1 to n<br />         for j := 1 to n<br />            path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );<br />The Floyd–Warshall algorithm typically only provides the lengths of the paths between all pairs of vertices. With simple modifications, it is possible to create a method to reconstruct the actual path between any two endpoint vertices. While one may be inclined to store the actual path from each vertex to each other vertex, this is not necessary, and in fact, is very costly in terms of memory. For each vertex, one need only store the information about which vertex one has to go through if one wishes to end up at any given vertex. Therefore, information to reconstruct all paths can be stored in a single N×N matrix 'next' where next[i][j] represents the vertex one must travel through if one intends to take the shortest path from i to j. Implementing such a scheme is trivial, as when a new shortest path is found between two vertices, the matrix containing the paths is updated. 
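A runnable Python version of the triple loop, including this 'next' bookkeeping, might look as follows (a sketch; the function names and the three-vertex example are mine, and the matrix uses 0 on the diagonal and infinity for missing edges, as the pseudocode assumes):

```python
INF = float('inf')  # sentinel for a non-existent edge

def floyd_warshall(dist):
    """All-pairs shortest path lengths plus a 'next' matrix for
    reconstructing paths. dist is an n x n matrix with dist[i][i] == 0
    and INF where there is no edge; it is improved in place."""
    n = len(dist)
    nxt = [[None] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = k          # k lies on the improved i-j path
    return dist, nxt

def get_path(nxt, i, j):
    """Intermediate vertices on the shortest path from i to j."""
    k = nxt[i][j]
    return [] if k is None else get_path(nxt, i, k) + [k] + get_path(nxt, k, j)

d = [[0, 3, INF],
     [INF, 0, 1],
     [7, INF, 0]]
dist, nxt = floyd_warshall(d)
print(dist[0][2], get_path(nxt, 0, 2))  # 4 [1]
```

Here the direct edge 0→2 is missing, so the shortest route 0→1→2 of length 3 + 1 = 4 is found, and the 'next' matrix recovers the intermediate vertex 1.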
The next matrix is updated along with the path matrix such that at completion both tables are complete and accurate, and any entries which are infinite in the path table will be null in the next table. The path from i to j is then the path from i to next[i][j], followed by the path from next[i][j] to j. These two shorter paths are determined recursively. This modified algorithm runs with the same time and space complexity as the unmodified algorithm.<br />procedure FloydWarshallWithPathReconstruction ()<br />   for k := 1 to n<br />      for i := 1 to n<br />         for j := 1 to n<br />            if path[i][k] + path[k][j] < path[i][j] then<br />               path[i][j] := path[i][k]+path[k][j];<br />               next[i][j] := k;<br /><br />procedure GetPath (i,j)<br />   if path[i][j] equals infinity then<br />      return "no path";<br />   int intermediate := next[i][j];<br />   if intermediate equals 'null' then<br />      return ""; /* there is an edge from i to j, with no vertices between */<br />   else<br />      return GetPath(i,intermediate) + intermediate + GetPath(intermediate,j);<br />Applications and generalizations<br />The Floyd–Warshall algorithm can be used to solve the following problems, among others:<br />Shortest paths in directed graphs (Floyd's algorithm).<br />Transitive closure of directed graphs (Warshall's algorithm). In Warshall's original formulation of the algorithm, the graph is unweighted and represented by a Boolean adjacency matrix. Then the addition operation is replaced by logical conjunction (AND) and the minimum operation by logical disjunction (OR).<br />Finding a regular expression denoting the regular language accepted by a finite automaton (Kleene's algorithm).<br />Inversion of real matrices (Gauss–Jordan algorithm).<br />Optimal routing. In this application one is interested in finding the path with the maximum flow between two vertices. This means that, rather than taking minima as in the pseudocode above, one instead takes maxima. 
The edge weights represent fixed constraints on flow. Path weights represent bottlenecks; so the addition operation above is replaced by the minimum operation.<br />Testing whether an undirected graph is bipartite.<br />Fast computation of Pathfinder networks.<br />Maximum bandwidth paths in flow networks.<br />MINIMUM DISTANCES IN WEIGHTED GRAPHS<br />Given a connected, undirected graph, a spanning tree of that graph is a subgraph which is a tree and connects all the vertices together. A single graph can have many different spanning trees. We can also assign a weight to each edge, which is a number representing how unfavorable it is, and use this to assign a weight to a spanning tree by computing the sum of the weights of the edges in that spanning tree. A minimum spanning tree (MST) or minimum weight spanning tree is then a spanning tree with weight less than or equal to the weight of every other spanning tree. More generally, any undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of minimum spanning trees for its connected components.<br />One example would be a cable TV company laying cable to a new neighborhood. If it is constrained to bury the cable only along certain paths, then there would be a graph representing which points are connected by those paths. Some of those paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights. A spanning tree for that graph would be a subset of those paths that has no cycles but still connects to every house. There might be several spanning trees possible. A minimum spanning tree would be one with the lowest total cost.<br />Weighted Graph<br />We can add attributes to edges.  We call the attributes weights.  For example, if we are using the graph as a map, the vertices are the cities and the edges are highways between the cities. 
<br />Then if we want the shortest travel distance between cities an appropriate weight would be the road mileage. <br />If we are concerned with the dollar cost of a trip and went the cheapest trip then an appropriate weight for the edges would be the cost to travel between the cities. <br />In both examples we want the shortest trip in terms of the edge-weights.  In other words if we designate the weights of the edges as: <br /> w( (vi , vi+1) )<br /> The length of a path, P, is<br /> w(P) = ∑ w((vi , vi+1)) for all edges in P<br /> We call the distance between u and v, d(u, v) = min w(P) for all paths between u and v. <br />Note weights can be negative.  Then if a negative weight edge is in a cycle of the graph, we could make the distance between two vertices as negative as we want by cycling many times through the cycle.  This would generate an unrealistic answer or cause the algorithm to never exit.  So in the case of negative weighted edges we have to be careful.<br />Optimization and Greedy Algorithms<br />Our goal is find the shortest distance from an initial vertex, v, to each vertices of the graph.  (This is not the traveling sales man problem.)  This is an optimization problem. An optimization problem is a problem where we have a goal to achieve but we also want to achieve the goal at a minimum cost.  We want the best solution if there are many solutions to the problem we want the solution that gives the minimum cost.  We can also optimize for maximum benefit.  For example if some one paid us to go from city to city then naturally we would want the path that paid us the most.  Optimization is a typical class of problem for computer scientist. <br />An algorithm that sometimes can solve optimization problems is the Greedy Algorithm. In the greedy algorithm we make several small steps to our goal and at each step we choose the optimal step, greedy-choice.  The solution is built from these small steps with local optimal solutions.  
The greedy algorithm is not guaranteed to give us the optimal solution; if a global optimal solution can be found using a greedy algorithm, then we say that the problem possesses the greedy-choice property.<br />Dijkstra's Algorithm for shortest distance<br />We perform a weighted breadth-first search from the start vertex, v, calculating our best-guess distance for each adjacent vertex. The vertex with the smallest distance is assured to have the correct distance. So we can improve our guess of all the vertices adjacent to the vertex with the minimum distance.<br />The author calls the process of improving the guess relaxation.  He suggests that a metaphor for remembering the term relaxation is a spring.  Our initial guesses in this case are too large, or the spring is stretched.  Then the improvements make the guess/estimate smaller, or the spring relaxes to its proper shape.<br />For the algorithm we let D(u) represent our estimate of the distance of u from v. (When the algorithm is done, D will contain the correct distances.)  Initialize D to<br />D(v) = 0, D(u) = inf   for u != v<br /> <br />Note that the distance is correct for v.  We can improve D for nodes adjacent to v by edge relaxation:<br /> <br />Edge Relaxation:<br /> <br />if D(u) + w((u, z)) < D(z)  then  D(z) = D(u) + w((u, z))<br /> <br />We then add to the cloud the vertex with the smallest guess for the distance.  We will want to keep a priority queue Q of the vertices not in the cloud.<br />Algorithm: ShortestPath(G, v)  // a little misleading since the output is only the distances<br />input: A simple undirected weighted graph G<br />with non-negative edge weights and a start vertex, v.<br />output: D(u), the distance u is from v.<br />Initialize D(v) = 0 and D(u) = inf for u != v. Initialize a priority queue Q of all vertices in G using D as the key. 
while Q is not empty do<br />   u = Q.removeMin()<br />   for each vertex z adjacent to u and in Q do<br />      if D(u) + w((u, z)) < D(z) then<br />          D(z) = D(u) + w((u, z))<br />          update z in Q<br />return D<br />Note how the use of the priority queue makes the algorithm quite easy to understand and implement.<br />The running time of the algorithm depends on the graph, G, and the priority queue, Q, implementation.  We assume that G is implemented by an adjacency list structure.  This allows us to update D by iterating through the adjacent vertices of u in time O(degree(u)).<br />Implementing the priority queue, Q, with a heap makes removal efficient, O(lg n), where n is the number of vertices in G.  We also keep a locator data type which gives us the location of an item in the priority queue in O(1) time, for example an additional reference, locator, kept with the item = (key, element). The locator reference is the Position in the heap.  If we did not have the locator, then the search through the heap for an adjacent vertex would take O(n) instead of O(1) as with the locator. Then when we insert an item we get the locator returned and can access the item in the heap using the locator.  We can update D for the adjacent vertices in O(degree(u) lg n) time.<br />EULERIAN AND HAMILTONIAN CIRCUITS<br />In graph theory, an Eulerian path is a path in a graph which visits each edge exactly once. Similarly, an Eulerian circuit is an Eulerian path which starts and ends on the same vertex. They were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. Mathematically the problem can be stated like this: is it possible to walk through the city crossing each of its seven bridges exactly once?<br />Euler proved that a necessary condition for the existence of Eulerian circuits is that all vertices in the graph have an even degree, and stated without proof that connected graphs with all vertices of even degree have an Eulerian circuit. 
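Euler's degree condition is easy to check in code: test that every degree is even and that the graph is connected. A sketch (the function name and example graphs are mine; the connectivity test assumes every vertex has at least one incident edge):

```python
def has_euler_circuit(adj):
    """A connected graph has an Eulerian circuit iff every vertex has
    even degree. adj maps each vertex to a list of its neighbors."""
    if any(len(nbrs) % 2 for nbrs in adj.values()):
        return False                      # some vertex has odd degree
    # check connectivity with a depth-first search from any vertex
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(adj)

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # all degrees even
path = {0: [1], 1: [0, 2], 2: [1]}                     # two odd-degree vertices
print(has_euler_circuit(square), has_euler_circuit(path))  # True False
```

The path graph fails the circuit test but, having exactly two vertices of odd degree, it does admit an Eulerian path, as described next.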
The first complete proof of this latter claim was published in 1873 by Carl Hierholzer.<br />The term Eulerian graph has two common meanings in graph theory. One meaning is a graph with an Eulerian circuit, and the other is a graph with every vertex of even degree. These definitions coincide for connected graphs.<br />For the existence of Eulerian paths it is necessary that no more than two vertices have an odd degree; this means the Königsberg graph is not Eulerian. If there are no vertices of odd degree, all Eulerian paths are circuits. If there are exactly two vertices of odd degree, all Eulerian paths start at one of them and end at the other. Sometimes a graph that has an Eulerian path, but not an Eulerian circuit (in other words, it is an open path, and does not start and end at the same vertex) is called semi-Eulerian.<br />Constructing Eulerian paths and circuits<br />Consider a graph known to have all edges in the same component and at most two vertices of odd degree. We can construct an Eulerian path out of this graph by using Fleury's algorithm, which dates to 1883. We start with a vertex of odd degree; if the graph has none, then start with any vertex. At each step we move across an edge whose deletion would not disconnect the graph (unless we have no choice), and then we delete that edge. At the end of the algorithm there are no edges left, and the sequence of edges we moved across forms an Eulerian cycle if the graph has no vertices of odd degree, or an Eulerian path if there are exactly two vertices of odd degree.<br />Special cases<br />The asymptotic formula for the number of Eulerian circuits in the complete graphs was determined by McKay and Robinson (1995).<br />A similar formula was later obtained by M.I. 
Isaev (2009) for complete bipartite graphs.[9]<br />A. Eulerian Graph<br />An Eulerian graph is a graph containing an Eulerian cycle. The numbers of Eulerian graphs with n = 1, 2, ... nodes are 1, 0, 1, 1, 4, 8, 37, 184, 1782, ... (Sloane's A003049; Robinson 1969; Liskovec 1972; Harary and Palmer 1973, p. 117). <br />Some care is needed in interpreting this term, however, since some authors define an Euler graph as a different object, namely a graph for which all vertices are of even degree (motivated by the following theorem). <br />Euler showed (without proof) that a connected simple graph is Eulerian iff it has no graph vertices of odd degree (i.e., all vertices are of even degree). The number of connected Euler graphs on n nodes is therefore equal to the number of Eulerian graphs on n nodes. <br />A directed graph is Eulerian iff every graph vertex has equal indegree and outdegree. A planar bipartite graph is dual to a planar Eulerian graph and vice versa. The numbers of Eulerian digraphs on n = 1, 2, ... nodes are 1, 1, 3, 12, 90, 2162, ... (Sloane's A058337). <br />Finding the largest subgraph of a graph having an odd number of vertices which is Eulerian is an NP-complete problem (Skiena 1990, p. 194). <br />B. Hamiltonian path<br />A Hamiltonian graph, also called a Hamilton graph, is a graph possessing a Hamiltonian cycle. A graph that is not Hamiltonian is said to be nonhamiltonian. <br />While it would be easy to make a general definition of "Hamiltonian" that goes either way as far as the singleton graph is concerned, defining "Hamiltonian" to mean "has a Hamiltonian cycle" and taking "Hamiltonian cycles" to be a subset of "cycles" in general would lead to the convention that the singleton graph is nonhamiltonian (B. McKay, pers. comm., Oct. 11, 2006). However, by convention, the singleton graph is generally considered to be Hamiltonian (B. McKay, pers. comm., Mar. 22, 2007). 
The convention in this work and in GraphData is that K_1 is Hamiltonian, while K_2 is nonhamiltonian. <br />The numbers of simple Hamiltonian graphs on n nodes for n = 1, 2, ... are then given by 1, 0, 1, 3, 8, 48, 383, 6196, 177083, ... (Sloane's A003216). <br />A graph can be tested to see if it is Hamiltonian using the command HamiltonianQ[g] in the Mathematica package Combinatorica`. <br />Testing whether a graph is Hamiltonian is an NP-complete problem (Skiena 1990, p. 196). Rubin (1974) describes an efficient search procedure that can find some or all Hamilton paths and circuits in a graph using deductions that greatly reduce backtracking and guesswork. <br />All Hamiltonian graphs are biconnected, although the converse is not true (Skiena 1990, p. 197). Any bipartite graph with unbalanced vertex parity is not Hamiltonian. <br />If the sums of the degrees of nonadjacent vertices in a graph G are greater than the number of nodes n for all pairs of nonadjacent vertices, then G is Hamiltonian (Ore 1960; Skiena 1990, p. 197). <br />All planar 4-connected graphs have Hamiltonian cycles, but not all polyhedral graphs do. For example, the smallest polyhedral graph that is not Hamiltonian is the Herschel graph on 11 nodes. <br />In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected graph which visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a cycle in an undirected graph which visits each vertex exactly once and also returns to the starting vertex. Determining whether such paths and cycles exist in graphs is the Hamiltonian path problem, which is NP-complete.<br />A Hamiltonian cycle in a dodecahedron<br />Hamiltonian paths and cycles are named after William Rowan Hamilton, who invented the Icosian game, now also known as Hamilton's puzzle, which involves finding a Hamiltonian cycle in the edge graph of the dodecahedron. 
Hamilton solved this problem using the Icosian Calculus, an algebraic structure based on roots of unity with many similarities to the quaternions (also invented by Hamilton). This solution does not generalize to arbitrary graphs.<br />A Hamiltonian path (black) over a graph<br />All Platonic solids are Hamiltonian (Gardner 1957), as illustrated above. <br />Although not explicitly stated by Gardner (1957), all Archimedean solids have Hamiltonian circuits as well, several of which are illustrated above.<br />Two points are called adjacent if there is an edge connecting them. <br />Euler path and circuit. If it is possible to start at a vertex and move along the edges so as to pass along each edge without going over any of them more than once, the graph has an Euler path. If the path ends at the same vertex at which you started, it is called an Euler circuit. Some nice problems, explanations, and illustrations are shown at Isaac Reed's wonderful web site. Even some very simple graphs like the one above do not have an Euler path (try it). The reason can be found at the web site just mentioned.<br />Hamiltonian Circuit. A Hamiltonian circuit, named for the Irish mathematician Sir William Rowan Hamilton, is a circuit (a path that ends where it starts) that visits each vertex once without touching any vertex more than once. There may be more than one Hamiltonian path for a graph, and we often wish to find the shortest such path. This is often referred to as a traveling salesman or postman problem. Every complete graph (n > 2) has a Hamiltonian circuit. <br />KNIGHT'S TOUR (64 SQUARES)<br />The Knight's Tour is a mathematical problem involving a knight on a chessboard. The knight is placed on the empty board and, moving according to the rules of chess, must visit each square exactly once. 
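Before turning to tours, it is useful to see how the knight's legal moves are enumerated, since the number of moves shrinks near the board's edge. A small sketch (the 1-based coordinate convention and function name are my own, not from the original):

```python
def knight_moves(square, size=8):
    """All squares a knight can reach from `square` on a size x size board.

    Squares are 1-based (file, rank) pairs, e.g. (1, 1) is a corner.
    """
    x, y = square
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    # Keep only the destinations that stay on the board.
    return [(x + dx, y + dy) for dx, dy in deltas
            if 1 <= x + dx <= size and 1 <= y + dy <= size]

# A corner knight has only two moves; a central knight has all eight.
print(len(knight_moves((1, 1))))  # 2
print(len(knight_moves((4, 4))))  # 8
```

This uneven move count is why heuristics such as Warnsdorff's rule (mentioned below) prefer squares with the fewest onward moves.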
A knight's tour is called a closed tour if the knight ends on a square attacking the square from which it began (so that it may tour the board again immediately, following the same path). Otherwise the tour is open. The exact number of open tours is still unknown. Creating a program to solve the knight's tour is a common problem given to computer science students.[1] Variations of the knight's tour problem involve chessboards of different sizes than the usual 8 × 8, as well as irregular (non-rectangular) boards.<br />The knight's tour problem is an instance of the more general Hamiltonian path problem in graph theory. The problem of finding a closed knight's tour is similarly an instance of the Hamiltonian cycle problem. Note, however, that unlike the general Hamiltonian path problem, the knight's tour problem can be solved in linear time.[2]<br />The earliest known references to the Knight's Tour problem date back to the 9th century CE. The pattern of a knight's tour on a half-board has been presented in verse form (as a literary constraint) in the highly stylized Sanskrit poem Kavyalankara written by the 9th-century Kashmiri poet Rudrata, which discusses the art of poetry, especially in relation to theater (Natyashastra). As was often the practice in ornate Sanskrit poetry, the syllabic patterns of this poem elucidate a completely different motif, in this case an open knight's tour on a half-chessboard.<br />One of the first mathematicians to investigate the knight's tour was Leonhard Euler. The first algorithm for completing the Knight's Tour was Warnsdorff's algorithm, first described in 1823 by H. C. Warnsdorff.<br />In the 20th century the Oulipo group of writers used it, among many others. The most notable example is the 10 × 10 Knight's Tour which sets the order of the chapters in Georges Perec's novel Life: A User's Manual. 
The sixth game of the 2010 World Chess Championship between Viswanathan Anand and Veselin Topalov saw Anand making 13 consecutive knight moves.<br />The Knight's Tour problem is the problem of finding a Hamiltonian cycle (closed loop) for a knight traversing a chess board in the standard manner.<br />This problem is easily implemented as a simple backtracking-based depth-first-search algorithm. We maintain a list of visited squares, visited, and at each iteration we perform the following algorithm to progress from the current square: <br />Find the list of moves available to us <br />If there are none, return visited if it is a valid solution (isSolution) <br />Otherwise, try the moves available to us, collecting solutions using the list monad <br /><<solutions>>=<br />import Data.List ((\\))<br />solutions :: (Int,Int) -> [(Int,Int)] -> [[(Int,Int)]]<br />solutions square visited = <br /> case nextMoves square \\ visited of<br /> [] -> filter isSolution (return (reverse (square : visited)))<br /> moves -><br /> do move <- moves<br /> solutions move (square : visited)<br />Squares are represented as 1-based pairs of x and y co-ordinates, i.e. a full board is: [(x,y) | x <- [1..8], y <- [1..8]]. <br />The list of available moves is found by determining all the moves available to us (nextMoves square), then filtering them to remove those we have visited, using \\ (the set difference operator). 
<br /><<nextMoves>>=<br />nextMoves :: (Int,Int) -> [(Int,Int)]<br />nextMoves (x,y) = <br /> [(x',y') | <br /> (x',y') <- [(x + 1,y + 2),(x + 1, y - 2),<br /> (x - 1,y + 2),(x - 1, y - 2),<br /> (x + 2,y + 1),(x + 2, y - 1),<br /> (x - 2,y + 1),(x - 2, y - 1)],<br /> 1 <= x' && x' <= 8 && 1 <= y' && y' <= 8]<br />Knight's graph showing all possible paths for a Knight's tour on a standard 8×8 chessboard. The numbers on each node indicate the number of possible moves that can be made from that position.<br />Conversions <br />The following functions convert between the two systems: <br /><<conversions>>=<br />fromSquare :: (Int,Int) -> Int<br />fromSquare (x,y) = (x + y * 8 - 9)<br />toSquare :: Int -> (Int,Int)<br />toSquare x = (r+1, q+1)<br /> where (q,r) = x `divMod` 8<br />Writing squares as integers, we now look at the types of the operations we would like to perform, namely: <br />Visit a square: addSquare :: Int -> a -> a <br />Check if we're done: isEmpty :: a -> Bool <br />Filter the list of moves: filterList :: b -> a -> b <br />The question is: what is required of our type a? It must: <br />Quickly determine membership for an integer between 0 and 63 <br />Determine whether no integers between 0 and 63 are members <br />Filter a list of moves, of type b <br />A. A Knight's Tour using words <br />It is easy to observe that these are simply bit operations on a 64-bit word! We therefore maintain a 64-bit word representing available moves. 
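The same bit-twiddling idea can be sketched outside Haskell. Here is an illustrative Python version (the function names are hypothetical, not from the original) that uses an integer as the 64-bit word, with the fromSquare mapping defined above:

```python
def from_square(x, y):
    """Map a 1-based (x, y) square to a bit index 0..63 (mirrors fromSquare)."""
    return x + y * 8 - 9

def visit(board, x, y):
    """Clear the bit for (x, y), marking the square as visited."""
    return board & ~(1 << from_square(x, y))

def is_free(board, x, y):
    """True if (x, y) has not been visited yet (bit still set)."""
    return (board >> from_square(x, y)) & 1 == 1

def is_empty(board):
    """True when every square has been visited (mirrors isEmpty)."""
    return board == 0

full = (1 << 64) - 1       # all 64 squares still available
b = visit(full, 1, 1)      # visit the corner square
print(is_free(b, 1, 1))    # False
print(is_free(b, 2, 1))    # True
print(is_empty(b))         # False
```

Membership, emptiness, and marking a square are each a constant number of bitwise operations, which is exactly the property the text asks of the type a.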
<br />We can therefore rewrite solutions as solutionsF: <br /><<solutionsF>>=<br />solutionsF :: Bits a => Int -> a -> [Int] -> [[(Int,Int)]]<br />solutionsF square visited list =<br /> let visited' = clearBit visited square<br /> in if visited' == 0<br /> then return (reverse $ map toSquare (square : list))<br /> else do m <- findMoves square visited<br /> solutionsF m visited' (square : list)<br />B. Listing the moves <br />This only leaves the challenge of writing findMoves :: Bits a => Int -> a -> [Int]. This is where things get a little tricky... at first glance it seems we need to convert an integer, to a board square, to a list of board squares (nextMoves), to a list of integers, and finally to a filtered list of integers. This all seems rather wasteful. <br /><<findMoves>>=<br />findMoves :: Bits a => Int -> a -> [Int]<br />findMoves sqr visited = <br /> movesList sqr $ filteredMoves sqr visited<br />filteredMoves :: Bits a => Int -> a -> b then finds the available moves and movesList :: Int -> b -> [Int] converts this to a list. The next question is then: what is a suitable type to easily represent a list of moves from a 64-bit word? The answer: a 64-bit word! <br />More precisely: a 64-bit word centered around the 'reference square', (3,3). We can then write filteredMoves as follows: <br /><<filteredMoves>>=<br />filteredMoves :: Bits a => Int -> a -> a<br />filteredMoves sqr visited = <br /> let x = sqr `mod` 8<br /> mask<br /> | x == 0 = makeMask (3 <=)<br /> | x == 1 = makeMask (2 <=)<br /> | x == 7 = makeMask (3 >=)<br /> | x == 6 = makeMask (4 >=)<br /> | otherwise = makeMask (const True)<br /> in mask .&. 
shiftR visited (sqr - fromSquare (3,3))<br />The last line is the most interesting: it shows that we can perform the filter operation as two bitwise operations. <br />The remainder of this function concentrates on ensuring that the mask does not include values 'off the edge of the board'. It does this by filtering the mask (the list of moves) when the x coordinate approaches the edge. <br />makeMask is given as follows: <br /><<makeMask>>=<br />makeMask :: Bits a => (Int -> Bool) -> a<br />makeMask f = <br /> foldl setBit 0 $ map fromSquare $ filter (f . fst) (nextMoves (3,3))<br />C. Performing the Knight's Tour <br />To perform the Knight's Tour, you only need a pencil and paper. Draw an 8x8 grid, and fill in the coordinates (algebraic and linear). You can also carry around a portable chess board, with each square marked. Remember that you'll also need some way to mark squares that have already been landed upon. <br />To begin the performance, you may need to explain the nature of the challenge first. Introduce the board, explain how the knight moves, and explain that you have to hit each square. You can also offer to look away or be blindfolded during the challenge. <br />To start the challenge itself, have your spectator choose a starting square. <br />From this point, simply recall your links, and have the audience member cross out the squares as you call them, until you get to the final square. Don't forget the square on which you started, so you don't overshoot the final square. <br />D. Memorizing the Path <br />To memorize the path, you'll simply link each coordinate to the following coordinate in the path list. <br />With the linear coordinates, you might link TIE (1) to DOVE (18), then DOVE (18) to MULE (35), and so on, finishing with linking TOT (11) to TIE (1). 
<br />With the algebraic coordinates, you might link A VOW (a8) to BEACH (b6), then BEACH (b6) to CAR (c4), and so on, finishing with linking CAKE (c7) to A VOW (a8). <br />