Fundamentals of Data Structures
Fundamentals of Data Structures - Presentation Transcript

  • Fundamentals of Data Structures
  • Data Structures
    "Once you succeed in writing the programs for complicated algorithms, they usually run extremely fast. The computer doesn't need to understand the algorithm; its task is only to run the programs."
    There are a number of facets to good programs: they must
    – run correctly
    – run efficiently
    – be easy to read and understand
    – be easy to debug, and
    – be easy to modify.
  • Data Structure (Cont.)
    What is a Data Structure?
    – A scheme for organizing related pieces of information
    – A way in which sets of data are organized in a particular system
    – An organised aggregate of data items
    – A computer-interpretable format used for storing, accessing, transferring and archiving data
    – The way data is organised to ensure efficient processing: this may be in lists, arrays, stacks, queues or trees
    A data structure is a specialized format for organizing and storing data so that it can be accessed and worked with in appropriate ways to make a program efficient.
  • Data Structures (Cont.)
    Data Structure = Organised Data + Allowed Operations
    There are two design aspects to every data structure:
    – The interface part: the publicly accessible functions of the type, such as creation and destruction of the object, inserting and removing elements (if it is a container), and assigning values.
    – The implementation part: the internal implementation should be independent of the interface. Therefore, the details of the implementation should be hidden from the users.
  • Collections
    Programs often deal with collections of items. These collections may be organised in many ways and use many different program structures to represent them, yet, from an abstract point of view, there will be a few common operations on any collection (sketched in C below):
    – create: create a new collection
    – add: add an item to a collection
    – delete: delete an item from a collection
    – find: find an item matching some criterion in the collection
    – destroy: destroy the collection
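    A minimal C sketch of this abstract collection interface, assuming an opaque Collection handle; the function names echo those used in the implementation slides later in the deck, but the exact signatures here are illustrative assumptions.

    typedef struct t_collection *Collection;   /* opaque handle: implementation stays hidden */

    Collection ConsCollection( void );                           /* create a new collection       */
    int        AddToCollection( Collection c, void *item );      /* add an item                   */
    void      *DeleteFromCollection( Collection c, void *key );  /* delete (returns the item)     */
    void      *FindInCollection( Collection c, void *key );      /* find an item matching a key   */
    void       DestroyCollection( Collection c );                /* destroy the collection        */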
  • Analyzing an Algorithm
    – Simple statement sequence s1; s2; ...; sk: complexity is O(1) as long as k is constant.
    – Simple loop: for (i = 0; i < n; i++) { s; } where s is O(1): complexity is n · O(1), i.e. O(n).
    – Loop index doesn't vary linearly: h = 1; while (h <= n) { s; h = 2 * h; }: complexity is O(log n).
    – Nested loops (the inner loop may also depend on the outer loop index): for (i = 0; i < n; i++) { for (j = 0; j < n; j++) { s; } }: complexity is n · O(n), i.e. O(n²).
  • Arrays
    An array is the simplest way of implementing a collection.
    – Each object in an array is called an array element
    – Each element has the same data type (although they may have different values)
    – Individual elements are accessed by index, using a consecutive range of integers
    One-dimensional array (vector):
    int A[10];
    for ( i = 0; i < 10; i++ )
        A[i] = i + 1;
    After the loop, the elements A[0], A[1], ..., A[9] hold the values 1, 2, ..., 10.
  • Arrays (Cont.)
    Multi-dimensional arrays
    A multi-dimensional array of dimension n (i.e., an n-dimensional array, or simply n-D array) is a collection of items which is accessed via n subscript expressions. For example, in a language that supports it, the (i, j)-th element of the two-dimensional array x is accessed by writing x[i, j].
    [Figure: an m-row by n-column matrix, indexed by row i and column j.]
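    A small, self-contained C example (illustrative only) of declaring and indexing a two-dimensional array; note that C writes x[i][j] rather than the x[i, j] notation used above.

    #include <stdio.h>

    int main( void ) {
        int x[3][4];                    /* 3 rows (i) by 4 columns (j) */
        for ( int i = 0; i < 3; i++ )
            for ( int j = 0; j < 4; j++ )
                x[i][j] = i * 4 + j;    /* stored in row-major order */
        printf( "%d\n", x[2][1] );      /* element in row 2, column 1: prints 9 */
        return 0;
    }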
  • Array: Limitations
    – Simple and fast, but the size must be specified at construction time.
    – If you want to insert/remove an element to/from a fixed position in the list, then you must move the elements already in the list to make room.
    – Thus, on average, you copy half the elements.
    – In the worst case, inserting at position 1 requires moving all the elements.
    – Copying elements can result in longer running times for a program if insert/remove operations are frequent, especially when the cost of copying is high (as when copying strings).
    – An array cannot be extended dynamically; one has to allocate a new array of the appropriate size and copy the old array into the new one. (The shifting cost is sketched below.)
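    A sketch of the copying cost described above, assuming a fixed-capacity int array a with current logical length *n and spare room for one more element; array_insert is an illustrative helper, not part of the slides.

    #include <string.h>

    void array_insert( int a[], int *n, int pos, int value ) {
        /* shift a[pos .. *n-1] one slot to the right: O(n) copying in the worst case */
        memmove( &a[pos + 1], &a[pos], (*n - pos) * sizeof a[0] );
        a[pos] = value;     /* drop the new element into the gap */
        (*n)++;
    }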
  • Linked Lists
    The linked list is a very flexible dynamic data structure: items may be added to it or deleted from it at will.
    – Space for each element is dynamically allocated as needed
    – Each element includes a pointer to the next item
    – The number of items that may be added to a list is limited only by the amount of memory available
    A linked list can be perceived as connected (linked) nodes. Each node of the list contains
    – the data item
    – a pointer to the next node
    The last node in the list contains a NULL pointer to indicate that it is the end, or tail, of the list.
  • Linked Lists (Cont.)
    The collection structure has a pointer to the list head, which is initially NULL. The variable (or handle) which represents the list is simply a pointer to the node at the head of the list.
    To add the first item:
    – Allocate space for a node
    – Set its data pointer to the object
    – Set Next to NULL
    – Set Head to point to the new node
  • Linked Lists (Cont.)
    To add a node (to a non-empty list):
    – Allocate space for the node
    – Set its data pointer to the object
    – Set Next to the current Head
    – Set Head to point to the new node
  • Linked Lists - Add implementation
    struct t_node {
        void *item;
        struct t_node *next;    /* recursive type definition - C allows it! */
    } node;
    typedef struct t_node *Node;

    struct collection {
        Node head;
        ......
    };

    int AddToCollection( Collection c, void *item ) {
        Node new = malloc( sizeof( struct t_node ) );
        new->item = item;
        new->next = c->head;
        c->head = new;
        return TRUE;            /* error checking and asserts omitted for clarity! */
    }
  • Linked Lists - Find implementation
    void *FindinCollection( Collection c, void *key ) {
        Node n = c->head;
        while ( n != NULL ) {
            if ( KeyCmp( ItemKey( n->item ), key ) == 0 )
                return n->item;
            n = n->next;
        }
        return NULL;
    }
    Add time: constant - independent of n.
    Search time: worst case - n.
    A recursive implementation is also possible!
  • Linked Lists - Delete implementation
    void *DeleteFromCollection( Collection c, void *key ) {
        Node n, prev;
        n = prev = c->head;
        while ( n != NULL ) {
            if ( KeyCmp( ItemKey( n->item ), key ) == 0 ) {
                if ( n == c->head )
                    c->head = n->next;    /* deleting the head node */
                else
                    prev->next = n->next; /* unlink n from the list */
                return n;
            }
            prev = n;
            n = n->next;
        }
        return NULL;
    }
  • Linked Lists - Variations
    Simplest implementation:
    – Add to head
    – Last-In-First-Out (LIFO) semantics
    Modifications:
    – First-In-First-Out (FIFO)
    – Keep a tail pointer
    struct t_node {
        void *item;
        struct t_node *next;
    } node;
    typedef struct t_node *Node;
    struct collection {
        Node head, tail;
    };
    By ensuring that the tail of the list always points to the head, we can build a circularly linked list (head is tail->next), giving LIFO or FIFO semantics using ONE pointer.
  • Linked Lists - Doubly linked
    Doubly linked lists can be scanned in both directions, which suits applications requiring search both ways, e.g. name search in a telephone directory.
    struct t_node {
        void *item;
        struct t_node *prev, *next;
    } node;
    typedef struct t_node *Node;
    struct collection {
        Node head, tail;
    };
    (A sketch of adding a node at the head of such a list follows.)
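    A sketch (not from the slides) of adding a node at the head of the doubly linked list declared above, showing how both prev and next pointers are kept consistent; AddToFront is an illustrative name and error handling is minimal.

    #include <stdlib.h>

    int AddToFront( struct collection *c, void *item ) {
        Node new = malloc( sizeof( struct t_node ) );
        if ( new == NULL ) return 0;
        new->item = item;
        new->prev = NULL;             /* new node becomes the head: nothing before it */
        new->next = c->head;
        if ( c->head != NULL )
            c->head->prev = new;      /* old head now points back to the new node */
        else
            c->tail = new;            /* list was empty: new node is also the tail */
        c->head = new;
        return 1;
    }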
  • Binary Tree
    The simplest form of tree is a binary tree. A binary tree consists of
    – a node (called the ROOT node), and
    – left and right sub-trees,
    – where both sub-trees are themselves binary trees (note the recursive definition!).
    The nodes at the lowest levels of the tree (the ones with no sub-trees) are called leaves.
    In an ordered binary tree,
    – the keys of all the nodes in the left sub-tree are less than that of the root,
    – the keys of all the nodes in the right sub-tree are greater than that of the root, and
    – the left and right sub-trees are themselves ordered binary trees.
  • Binary Tree (Cont.)
    – If A is the root of a binary tree and B is the root of its left (or right) subtree, then A is the father of B and B is the left (or right) son of A.
    – Two nodes are brothers if they are left and right sons of the same father.
    – Node n1 is an ancestor of n2 (and n2 is a descendant of n1) if n1 is either the father of n2 or the father of some ancestor of n2.
    – Strictly binary tree: every nonleaf node in the tree has non-empty left and right subtrees.
    – Level of a node: the root has level 0; the level of any other node is one more than the level of its father.
    – Depth: the maximum level of any leaf in the tree.
    A binary tree can contain at most 2^l nodes at level l, so a binary tree of depth d has at most 2^(d+1) - 1 nodes in total.
  • Binary Tree - Implementation
    struct t_node {
        void *item;
        struct t_node *left;
        struct t_node *right;
    };
    typedef struct t_node *Node;
    struct t_collection {
        Node root;
        ......
    };
  • Binary Tree - Implementation
    Find:
    extern int KeyCmp( void *a, void *b );
    /* Returns -1, 0, 1 for a < b, a == b, a > b */

    void *FindInTree( Node t, void *key ) {
        if ( t == (Node)0 ) return NULL;
        switch( KeyCmp( key, ItemKey( t->item ) ) ) {
            case -1 : return FindInTree( t->left, key );   /* less: search left      */
            case 0  : return t->item;                      /* match                  */
            case +1 : return FindInTree( t->right, key );  /* greater: search right  */
        }
    }

    void *FindInCollection( collection c, void *key ) {
        return FindInTree( c->root, key );
    }
  • Binary Tree - Performance
    Find (complete tree):
    – Height h: the number of nodes traversed in a path from the root to a leaf.
    – Number of nodes: n = 1 + 2 + 2^2 + ... + 2^h = 2^(h+1) - 1
    – So h = floor( log2 n )
  • Binary Tree - Traversing
    Traverse: pass through the tree, enumerating each node once.
    – PreOrder (also known as depth-first order):
      1. Visit the root
      2. Traverse the left subtree in preorder
      3. Traverse the right subtree in preorder
    – InOrder (also known as symmetric order):
      1. Traverse the left subtree in inorder
      2. Visit the root
      3. Traverse the right subtree in inorder
    – PostOrder:
      1. Traverse the left subtree in postorder
      2. Traverse the right subtree in postorder
      3. Visit the root
    (An in-order traversal sketch in C follows.)
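    A minimal sketch of in-order traversal using the t_node structure from the implementation slide; VisitNode is an assumed caller-supplied callback, not something the slides define.

    void InOrder( Node t, void (*VisitNode)( void *item ) ) {
        if ( t == NULL ) return;
        InOrder( t->left, VisitNode );    /* 1. traverse the left subtree  */
        VisitNode( t->item );             /* 2. visit the root             */
        InOrder( t->right, VisitNode );   /* 3. traverse the right subtree */
    }
    /* PreOrder and PostOrder differ only in where VisitNode is called:
       before both recursive calls (preorder) or after both (postorder). */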
  • Binary Tree - Applications
    – A binary tree is a useful data structure when two-way decisions must be made at each point in a process, e.g. finding duplicates in a list of numbers.
    – A binary tree can be used for representing an expression containing operands (leaf nodes) and operators (nonleaf nodes). Traversal of the tree results in the infix, prefix or postfix form of the expression.
    – Two binary trees are MIRROR SIMILAR if they are both empty, or if they are nonempty and the left subtree of each is mirror similar to the right subtree of the other.
  • General Tree
    A tree is a finite nonempty set of elements in which one element is called the ROOT and the remaining elements are partitioned into m >= 0 disjoint subsets, each of which is itself a tree.
    Different types of trees: binary tree, n-ary tree, red-black tree, AVL tree.
  • Heaps
    Heaps are based on the notion of a complete tree.
    A binary tree is completely full if it is of height h and has 2^(h+1) - 1 nodes.
    A binary tree of height h is complete iff
    – it is empty, or
    – its left subtree is complete of height h-1 and its right subtree is completely full of height h-2, or
    – its left subtree is completely full of height h-1 and its right subtree is complete of height h-1.
    A complete tree is filled from the left:
    – all the leaves are on the same level or two adjacent ones, and
    – all nodes at the lowest level are as far to the left as possible.
    A binary tree has the heap property iff
    – it is empty, or
    – the key in the root is larger than that in either child and both subtrees have the heap property.
  • Heaps (Cont.)
    A heap can be used as a priority queue: the highest priority item is at the root and is trivially extracted. But if the root is deleted, we are left with two sub-trees and we must efficiently re-create a single tree with the heap property.
    The value of the heap structure is that we can both extract the highest priority item and insert a new one in O(log n) time.
    Example: a deletion will remove the T at the root.
  • Heaps (Cont.)
    To work out how we're going to maintain the heap property, use the fact that a complete tree is filled from the left. So the position which must become empty is the one occupied by the M; put its key in the vacant root position.
    This has violated the condition that the root must be greater than each of its children, so interchange the M with the larger of its children.
    The left subtree has now lost the heap property, so again interchange the M with the larger of its children.
    We need to make at most h interchanges of a root of a subtree with one of its children to restore the heap property.
  • Heaps (Cont.)
    Addition to a Heap
    To add an item to a heap, we follow the reverse procedure: place it in the next leaf position and move it up.
    Again, we require O(h) or O(log n) exchanges. (A sketch of this sift-up step follows.)
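    A sketch of the sift-up step for an array-based max-heap, assuming heap[1..*n] holds the keys with the children of slot i at slots 2i and 2i+1; the names and layout are illustrative assumptions, not from the slides.

    void heap_add( int heap[], int *n, int key ) {
        int i = ++(*n);                        /* next free leaf position */
        heap[i] = key;
        /* move the new key up while it is larger than its parent:
           at most O(log n) exchanges */
        while ( i > 1 && heap[i / 2] < heap[i] ) {
            int t = heap[i / 2];
            heap[i / 2] = heap[i];
            heap[i] = t;
            i = i / 2;
        }
    }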
  • Comparisons

               Arrays                      Linked List                     Trees
               Simple, fast                Simple                          Still simple
               Inflexible                  Flexible                        Flexible
    Add        O(1); O(n) if kept sorted   O(1); sorting gives no          O(log n)
                                           advantage
    Delete     O(n)                        O(1) any item;                  O(log n)
                                           O(n) a specific item
    Find       O(n) (no binary search);    O(n)                            O(log n)
               O(log n) with binary search
  • Queues
    Queues are dynamic collections which have some concept of order.
    – FIFO queue: a queue in which the first item added is always the first one out.
    – LIFO queue: a queue in which the item most recently added is always the first one out.
    – Priority queue: a queue in which the items are sorted so that the highest priority item is always the next one to be extracted.
    Queues can be implemented by linked lists.
  • Stacks
    Stacks are a special form of collection with LIFO semantics (like a plate stacker).
    Two methods:
    – int push( Stack s, void *item );  adds an item to the top of the stack
    – void *pop( Stack s );  removes the most recently pushed item from the top of the stack
    Other methods:
    – int IsEmpty( Stack s );  determines whether the stack has anything in it
    – void *Top( Stack s );  returns the item at the top without deleting it
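    A minimal array-based sketch matching the push/pop/IsEmpty/Top interface above; the fixed capacity and struct layout are assumptions made only for illustration.

    #include <stdlib.h>
    #define STACK_MAX 100

    typedef struct t_stack { void *items[STACK_MAX]; int top; } *Stack;

    Stack NewStack( void )   { return calloc( 1, sizeof( struct t_stack ) ); }
    int   IsEmpty( Stack s ) { return s->top == 0; }

    int push( Stack s, void *item ) {
        if ( s->top == STACK_MAX ) return 0;   /* stack full */
        s->items[s->top++] = item;             /* place the item on top */
        return 1;
    }
    void *pop( Stack s ) { return IsEmpty( s ) ? NULL : s->items[--s->top]; }
    void *Top( Stack s ) { return IsEmpty( s ) ? NULL : s->items[s->top - 1]; }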
  • Stack (Cont.)
    Stacks are very useful for recursion: they are the key to call/return in functions and procedures. Each call pushes a context for the execution of the called function, which is popped on return.
    function f( int x, int y ) {
        int a;
        if ( term_cond ) return ...;
        a = ....;
        return g( a );
    }
    function g( int z ) {
        int p, q;
        p = .... ;
        q = .... ;
        return f( p, q );
    }
  • Searching
    Computer systems are often used to store large amounts of data from which individual records must be retrieved according to some search criterion. Thus the efficient storage of data to facilitate fast searching is an important issue.
    Things to consider:
    – the average time
    – the worst-case time, and
    – the best possible time.
    Sequential searches:
    – Time is proportional to n
    – We call this time complexity O(n)
    – This applies to both (unsorted) arrays and linked lists
  • Binary Search
    – Requires an array sorted on a key
    – First compare the key with the item in the middle position of the array
    – If there's a match, we can return immediately
    – If the key is less than the middle key, then the item sought must lie in the lower half of the array
    – If it's greater, then the item sought must lie in the upper half of the array
    – Repeat the procedure on the lower (or upper) half of the array - RECURSIVE
    Time complexity: O(log n)
  • Binary Search Implementation
    static void *bin_search( collection c, int low, int high, void *key ) {
        int mid, cmp;
        if ( low > high ) return NULL;                    /* termination check */
        mid = (high + low) / 2;
        cmp = memcmp( ItemKey( c->items[mid] ), key, c->size );
        if ( cmp == 0 )
            return c->items[mid];                         /* match, return item found */
        else if ( cmp > 0 )
            return bin_search( c, low, mid - 1, key );    /* search lower half */
        else
            return bin_search( c, mid + 1, high, key );   /* search upper half */
    }

    void *FindInCollection( collection c, void *key ) {
    /* Find an item in a collection
       Pre-condition:  c is a collection created by ConsCollection,
                       c is sorted in ascending order of the key, key != NULL
       Post-condition: returns an item identified by key if one exists,
                       otherwise returns NULL */
        int low, high;
        low = 0; high = c->item_cnt - 1;
        return bin_search( c, low, high, key );
    }
  • Binary Search vs Sequential Search
    Find method:
    – Sequential search: worst case time c1 · n
    – Binary search: worst case time c2 · log2 n
    (Logs: base 2 is by far the most common in this course. Assume base 2 unless otherwise noted!)
    Comparing n with log n: for small problems we're not interested in the difference; for large problems the gap between n and log n is where the interest lies. Binary search is more complex and has a higher constant factor.
    [Figure: plot of n versus log2 n for n up to about 60.]
  • Sorting
    A file is said to be SORTED on the key if i < j implies that k[i] precedes k[j] in some ordering of the keys.
    Different types of sorting:
    – Exchange sorts: Bubble Sort, Quick Sort
    – Insertion sorts
    – Selection sorts: Binary Tree Sort, Heap Sort
  • Insertion Sort
    The first card is already sorted. With all the rest:
    – Scan back from the end until you find the first card smaller than the new one  O(n)
    – Move all the larger ones up one slot  O(n)
    – Insert it  O(1)
    For n cards, the complexity is O(n²). (An implementation follows.)
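    A straightforward C implementation of the procedure just described (illustrative, not taken from the slides): the sorted prefix grows by one element per pass, giving O(n²) overall.

    void insertion_sort( int a[], int n ) {
        for ( int i = 1; i < n; i++ ) {        /* a[0..i-1] is already sorted */
            int key = a[i];
            int j = i - 1;
            while ( j >= 0 && a[j] > key ) {   /* move larger elements up one slot */
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                    /* insert the new element, O(1) */
        }
    }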
  • Bubble Sort
    – From the first element, exchange adjacent pairs if they're out of order
    – Repeat from the first to n-1
    – Stop when you have only one element to check
    /* Bubble sort for integers */
    #define SWAP(a,b) { int t; t=a; a=b; b=t; }

    void bubble( int a[], int n ) {
        int i, j;
        for ( i = 0; i < n; i++ ) {            /* outer loop: n passes thru the array */
            /* inner loop: from start to the end of the unsorted part
               (n-1, n-2, n-3, ..., 1 iterations) */
            for ( j = 1; j < (n - i); j++ ) {
                /* if adjacent items are out of order, swap: O(1) statement */
                if ( a[j-1] > a[j] ) SWAP( a[j-1], a[j] );
            }
        }
    }
    Overall: O(n²)
  • Partition Exchange or Quicksort
    – An example of a Divide and Conquer algorithm
    – Two phases:
      – Partition phase: divides the work into halves (elements < pivot, the pivot, elements > pivot)
      – Sort phase: conquers the halves recursively
    quicksort( void *a, int low, int high ) {
        int pivot;
        if ( high > low ) {                     /* termination condition! */
            pivot = partition( a, low, high );
            quicksort( a, low, pivot - 1 );
            quicksort( a, pivot + 1, high );
        }
    }
  • Heap Sort
    Heaps also provide a means of sorting:
    – construct a heap,
    – add each item to it (maintaining the heap property!),
    – when all items have been added, remove them one by one (restoring the heap property as each one is removed).
    Addition and deletion are both O(log n) operations. We need to perform n additions and deletions, leading to an O(n log n) algorithm.
    Generally slower in practice.
  • Comparisons of Sorting
    The sorting repertoire:
    – Insertion  O(n²)        guaranteed
    – Bubble     O(n²)        guaranteed
    – Heap       O(n log n)   guaranteed
    – Quick      O(n log n)   most of the time! O(n²) worst case
    – Bin        O(n)         keys in a small range; O(n+m)
    – Radix      O(n)         bounded keys/duplicates; O(n log n)
  • Hashing
    – A Hash Table is a data structure that associates each element (e) to be stored in our table with a particular value known as a key (k)
    – We store items (k, e) in our table
    – The simplest form of a hash table is an array
    – A bucket array for a hash table is an array A of size N, where each cell of A is thought of as a bucket and the integer N defines the capacity of the array
  • Bucket Arrays
    – If the keys (k) associated with each element (e) are well distributed in the range [0, N-1], this bucket array is all that is needed.
    – An element (e) with key (k) is simply inserted into bucket A[k]. So A[k] = (Item)(k, e);
    – Any bucket cell associated with a key not present stores a NO_SUCH_KEY object.
    – If keys are not unique, that is, there exist element-key pairs (e1, k) and (e2, k), we will have two different elements mapped to the same bucket.
    – This is known as a collision; we will discuss this later.
    – We generally want to avoid such collisions.
  • Direct Access Table
    – If we have a collection of n elements whose keys are unique integers in (1, m), where m >= n, then we can store the items in a direct address table T[m], where T[i] is either empty or contains one of the elements of our collection.
    – Searching a direct address table is clearly an O(1) operation: for a key k, we access T[k];
    – if it contains an element, return it;
    – if it doesn't, return NULL.
  • Analysis of Bucket Arrays• Drawback 1: The Hash Table uses O(N) space which is not necessarily related to the number of elements n actually present in our set.• If N is large relative to n, then this approach is wasteful of space.• Drawback 2: The bucket array implementation of Hash Tables requires key values (k) associated with elements (e) to be unique and in the range [0, N-1], which is often not the case.
  • Hash Functions
    – Associated with each hash table is a function h, known as a hash function.
    – This hash function maps each key in our set to an integer in the range [0, N-1], where N is the capacity of the bucket array.
    – The idea is to use the hash function value h(k) as an index into our bucket array.
    – So we store the item (k, e) in our bucket at A[h(k)]. That is, A[h(k)] = (Item)(k, e);
    – Unfortunately, finding a perfect hashing function is not always possible. Let's say that we can find a hash function h(k) which maps most of the keys onto unique integers, but maps a small number of keys onto the same integer. If the number of collisions (cases where multiple keys map onto the same integer) is sufficiently small, then hash tables work quite well and give O(1) search times. (A simple hash function is sketched below.)
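    A minimal sketch of one possible hash function for string keys, compressing the result into the range [0, N-1]; the multiplier 31 is a common convention, not something the slides prescribe.

    unsigned hash( const char *key, unsigned N ) {
        unsigned h = 0;
        while ( *key != '\0' )
            h = 31 * h + (unsigned char)*key++;   /* mix each character into h */
        return h % N;                             /* compress into [0, N-1] */
    }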
  • Handling the collisions• In the small number of cases, where multiple keys map to the same integer, then elements with different keys may be stored in the same "slot" of the hash table. It is clear that when the hash function is used to locate a potential match, it will be necessary to compare the key of that element with the search key. But there may be more than one element which should be stored in a single slot of the table. Various techniques are used to manage this problem:• chaining,• overflow areas,• re-hashing,• using neighbouring slots (linear probing),• quadratic probing,• random probing,
  • Chaining
    One simple scheme is to chain all collisions in lists attached to the appropriate slot. This allows an unlimited number of collisions to be handled and doesn't require a priori knowledge of how many elements are contained in the collection. The tradeoff is the same as with linked-list versus array implementations of collections: linked list overhead in space and, to a lesser extent, in time. (A sketch follows.)
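    A sketch of chaining under these assumptions: each slot of the bucket array holds the head of a linked list of (key, item) pairs, and hash() is the string hash sketched earlier; the structure and helper names are illustrative.

    #include <stdlib.h>
    #include <string.h>

    unsigned hash( const char *key, unsigned N );   /* string hash sketched earlier */

    struct chain_node {
        char              *key;
        void              *item;
        struct chain_node *next;
    };
    struct hash_table { struct chain_node **slots; unsigned N; };

    void hash_insert( struct hash_table *t, char *key, void *item ) {
        unsigned i = hash( key, t->N );
        struct chain_node *n = malloc( sizeof *n );
        n->key = key;
        n->item = item;
        n->next = t->slots[i];          /* chain the new node onto this slot's list */
        t->slots[i] = n;
    }

    void *hash_find( struct hash_table *t, const char *key ) {
        for ( struct chain_node *n = t->slots[hash( key, t->N )]; n != NULL; n = n->next )
            if ( strcmp( n->key, key ) == 0 )   /* keys within a chain must be compared */
                return n->item;
        return NULL;
    }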
  • Rehashing• Re-hashing schemes use a second hashing operation when there is a collision. If there is a further collision, we re-hash until an empty "slot" in the table is found. The re-hashing function can either be a new function or a re-application of the original one. As long as the functions are applied to a key in the same order, then a sought key can always be located.• h(j)=h(k), so the next hash function, h1 is used. A second collision occurs, so h2 is used.
  • Overflow
    – Divide the pre-allocated table into two sections: the primary area to which keys are mapped, and an area for collisions, normally termed the overflow area.
    – When a collision occurs, a slot in the overflow area is used for the new element and a link from the primary slot is established, as in a chained system. This is essentially the same as chaining, except that the overflow area is pre-allocated and thus possibly faster to access. As with re-hashing, the maximum number of elements must be known in advance, but in this case two parameters must be estimated: the optimum sizes of the primary and overflow areas.
    – It is also possible to design systems with multiple overflow tables.
  • Comparisons
  • Graph
    A graph consists of a set of nodes (or vertices) and a set of arcs (or edges).
    Example: Graph G = ( Nodes {A, B, C}, Arcs {(A,C), (B,C)} )
    Terminology:
    – V = set of vertices (or nodes)
    – |V| = number of vertices, or cardinality of V (in the usual terminology |V| = n)
    – E = set of edges, where an edge is defined by two vertices
    – |E| = number of edges, or cardinality of E
    – A graph G is a pair G = (V, E)
    Labeled graphs: we may give edges and vertices labels. Graphing applications often require the labeling of vertices. Edges might also be numerically labeled; for instance, if the vertices represent cities, the edges might be labeled to represent distances.
  • Graph Terminology
    Directed (or digraph) and undirected graphs
    A directed graph is one in which every edge (u, v) has a direction, so that (u, v) is different from (v, u). In an undirected graph, there is no distinction between (u, v) and (v, u).
    There are two possible situations that can arise in a directed graph between vertices u and v:
    – i) only one of (u, v) and (v, u) is present
    – ii) both (u, v) and (v, u) are present
    An edge (u, v) is said to be directed from u to v if the pair (u, v) is ordered, with u preceding v (e.g. a flight route).
    An edge (u, v) is said to be undirected if the pair (u, v) is not ordered (e.g. a road map).
  • Graph Terminology• Two vertices joined by an edge are called the end vertices or endpoints of the edge.• If an edge is directed its first endpoint is called the origin and the other is called the destination.• Two vertices are said to be adjacent if they are endpoints of the same edge.• The degree of a vertex v, denoted deg(v), is the number of incident edges of v.• The in-degree of a vertex v, denoted indeg(v) is the number of incoming edges of v.• The out-degree of a vertex v, denoted outdeg(v) is the number of outgoing edges of v.
  • Graph Terminology
    [Example graph with vertices A-F and edges a-j.]
    Vertices A and B are the endpoints of edge a. Vertex A is the origin of edge a and vertex B is its destination. Vertices A and B are adjacent, as they are endpoints of the same edge a.
  • Graph Terminology
    – An edge is said to be incident on a vertex if the vertex is one of the edge's endpoints.
    – The outgoing edges of a vertex are the directed edges whose origin is that vertex.
    – The incoming edges of a vertex are the directed edges whose destination is that vertex.
  • Graph Terminology
    [Example graph with vertices U-Z and edges a-j.]
    Edge a is incident on vertex V, edge h is incident on vertex Z, and edge g is incident on vertex Y. The outgoing edges of vertex W (edges with W as origin) are {d, e, f}. The incoming edges of vertex X (edges with X as destination) are {b, e, g, i}.
  • Graph Terminology
    In the example graph: deg(X) = 5 (the number of edges incident on X), indeg(X) = 4 (edges with X as destination), and outdeg(X) = 1 (edges with X as origin).
  • Graph Terminology
    – Path: a sequence of alternating vertices and edges that begins with a vertex, ends with a vertex, and in which each edge is preceded and followed by its endpoints.
    – Simple path: a path where all its edges and vertices are distinct.
  • Graph Terminology
    P1 = {U, a, V, b, X, h, Z} is a simple path.
    P2 = {U, c, W, e, X, g, Y, f, W, d, V} is not a simple path, as not all its edges and vertices are distinct.
  • Graph Terminology• Cycle: » Circular sequence of alternating vertices and edges. » Each edge is preceded and followed by its endpoints.• Simple Cycle: » A cycle such that all its vertices and edges are unique.
  • Graph Terminology
    Simple cycle: {U, a, V, b, X, g, Y, f, W, c}
    Non-simple cycle: {U, c, W, e, X, g, Y, f, W, d, V, a}
  • Graph Properties
  • Graph Representation
    Adjacency Matrix Implementation
    A |V| × |V| matrix of 0s and 1s; a 1 represents a connection or an edge.
    Storage = |V|² (this is huge!)
    For a non-directed graph there will always be symmetry along the top-left to bottom-right diagonal, and this diagonal will always be filled with zeros. This simplifies coding.
    Coding is concerned with storing the graph in an efficient manner. One way is to take all the bits from the adjacency matrix and concatenate them to form a binary string. For undirected graphs, it suffices to concatenate the bits of the upper right triangle of the adjacency matrix. Graph number zero is a graph with no edges. (A sketch of the matrix representation in C follows.)
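    A sketch of the adjacency-matrix representation in C, assuming a small fixed maximum number of vertices; adj[u][v] is 1 when there is an edge between u and v, and the mirrored entry models an undirected graph. MAX_V and the helper names are illustrative.

    #define MAX_V 100

    struct graph {
        int n;                      /* number of vertices actually in use */
        int adj[MAX_V][MAX_V];      /* |V| x |V| matrix of 0s and 1s */
    };

    void add_edge( struct graph *g, int u, int v ) {
        g->adj[u][v] = 1;
        g->adj[v][u] = 1;           /* symmetric entry for an undirected graph */
    }

    int has_edge( const struct graph *g, int u, int v ) {
        return g->adj[u][v];
    }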
  • Graph Applications
    Some of the applications of graphs are:
    – Networks (computer, cities, ...)
    – Maps (any geographic databases)
    – Graphics: geometrical objects
    – Neighborhood graphs
    – Flow problems
    – Workflow
  • Reachability• Reachability: Given two vertices u and v of a directed graph G, we say that u reaches v if G has a directed path from u to v.• That is v is reachable from u.• A directed graph is said to be strongly connected if for any two vertices u and v of G, u reaches v.
  • Graphs
    – Depth First Search
    – Breadth First Search
    – Directed Graphs (Reachability)
    – Application to Garbage Collection in Java
    – Shortest Paths
    – Dijkstra's Algorithm
  • Depth First Search
    Algorithm DFS(G)
        Input:  graph G
        Output: labeling of the edges of G as discovery edges and back edges
        for all u in G.vertices()
            setLabel(u, Unexplored)
        for all e in G.edges()
            setLabel(e, Unexplored)
        for all v in G.vertices()
            if getLabel(v) = Unexplored
                DFS(G, v)
  • Algorithm DFS(G, v)
        Input:  graph G and a start vertex v of G
        Output: labeling of the edges of G as discovery edges and back edges
        setLabel(v, Visited)
        for all e in G.incidentEdges(v)
            if getLabel(e) = Unexplored
                w <- opposite(v, e)
                if getLabel(w) = Unexplored
                    setLabel(e, Discovery)
                    DFS(G, w)
                else
                    setLabel(e, BackEdge)
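    A compact C sketch of the same search over the adjacency-matrix graph from the earlier representation sketch; the visited[] array plays the role of the Unexplored/Visited labels in the pseudocode, and the names are illustrative.

    /* assumes struct graph { int n; int adj[MAX_V][MAX_V]; } from the
       adjacency-matrix sketch above */
    void dfs( const struct graph *g, int v, int visited[] ) {
        visited[v] = 1;                           /* setLabel(v, Visited) */
        for ( int w = 0; w < g->n; w++ )
            if ( g->adj[v][w] && !visited[w] )    /* unexplored edge leading to w */
                dfs( g, w, visited );             /* follow the discovery edge */
    }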
  • Depth First Search
    Legend: unexplored vertex, visited vertex, unexplored edge, discovery edge, back edge.
  • Depth First Search: trace
    [Example graph with vertices A-E.] Starting at vertex A, the search follows a discovery edge to visit B, then C; an edge from C leading to an already-visited vertex is labeled a back edge. Discovery edges then lead to D and E, with further edges to visited vertices labeled as back edges, until every vertex has been visited.
  • Breadth First Search
    Algorithm BFS(G)
        Input:  graph G
        Output: labeling of the edges and a partitioning of the vertices of G
        for all u in G.vertices()
            setLabel(u, Unexplored)
        for all e in G.edges()
            setLabel(e, Unexplored)
        for all v in G.vertices()
            if getLabel(v) = Unexplored
                BFS(G, v)
  • Algorithm BFS(G, v)
        L0 <- new empty list
        L0.insertLast(v)
        setLabel(v, Visited)
        i <- 0
        while ( ¬Li.isEmpty() )
            Li+1 <- new empty list
            for all v in Li
                for all e in G.incidentEdges(v)
                    if getLabel(e) = Unexplored
                        w <- opposite(v, e)
                        if getLabel(w) = Unexplored
                            setLabel(e, Discovery)
                            setLabel(w, Visited)
                            Li+1.insertLast(w)
                        else
                            setLabel(e, Cross)
            i <- i + 1
  • Breadth First Search: trace
    [Example graph with start vertex A and vertices B-F.]
    – Create a sequence L0 and insert A into L0.
    – While L0 is not empty, create a new empty list L1.
    – For each vertex v in L0, get the incident edges of v.
    – If an incident edge is unexplored, get the vertex w opposite v; if w is unexplored, set the edge as a discovery edge, set w as visited, and insert w into L1.
    – Continue in this fashion until all incident edges of v have been visited.
    – When L0 is exhausted, continue with list L1 (building L2), and so on.
  • Weighted Graphs• In a weighted graph G, each edge e of G has associated with it, a numerical value, known as a weight.• Edge weights may represent distances, costs etc.• Example: In a flight route graph the weights associated with each graph edge could represent the distances between airports.
  • Shortest Paths
    Given a weighted graph G and two vertices u and v of G, we want to find a path between u and v that has minimum total weight, also known as a shortest path. The length of a path is the sum of the weights of the path's edges.
  • Dijkstra's Algorithm
    – The distance of a vertex v from a vertex s is the length of a shortest path between s and v
    – Dijkstra's algorithm computes the distances of all the vertices from a given start vertex s
    Assumptions:
    – the graph is connected
    – the edges are undirected
    – the edge weights are nonnegative
  • Dijkstra's Algorithm
    – We grow a "cloud" of vertices, beginning with s and eventually covering all the vertices
    – We store with each vertex v a label d(v) representing the distance of v from s in the subgraph consisting of the cloud and its adjacent vertices
    At each step:
    – We add to the cloud the vertex u outside the cloud with the smallest distance label d(u)
    – We update the labels of the vertices adjacent to u
  • Edge Relaxation
    Consider an edge e = (u, z) such that
    – u is the vertex most recently added to the cloud, and
    – z is not in the cloud.
    The relaxation of edge e updates distance d(z) as follows: d(z) ← min{ d(z), d(u) + weight(e) }. (Sketched in C below.)
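    A small C sketch of this relaxation step, assuming distances are kept in a plain array d[] indexed by vertex and INF marks a vertex not yet reached; the names are illustrative assumptions.

    #define INF 1000000000

    void relax( int d[], int u, int z, int weight_uz ) {
        /* d(z) <- min{ d(z), d(u) + weight(e) } for edge e = (u, z) */
        if ( d[u] != INF && d[u] + weight_uz < d[z] )
            d[z] = d[u] + weight_uz;    /* a shorter path to z through u was found */
    }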
  • Dijkstra's Algorithm: trace
    [Example weighted graph with vertices A-F.]
    – Add the starting vertex A to the cloud with label d(A) = 0.
    – Each vertex v adjacent to the cloud gets a label d(v), the distance of v from A in the subgraph consisting of the cloud and its adjacent vertices: B(8), C(2), D(4).
    – At each step, add to the cloud the vertex outside it with the smallest distance label, then update the labels of its neighbours using d(z) ← min{ d(z), d(u) + weight(e) }.
    – Successive steps give D(3), E(5), F(11), then F(8), then B(7); the final labels are A(0), C(2), D(3), E(5), B(7), F(8).
  • Dijkstra's Algorithm: second trace
    [Example weighted graph with vertices A-H.] Repeating the insert/update steps from start vertex A yields the final distance labels A(0), B(2), E(4), G(5), F(6), H(8), C(9), D(10).