# Q

_Uploaded May 04, 2009, as a Microsoft Word document. © All Rights Reserved._
## Document Transcript

Questions and exercises for the Review Note. Questions/exercises in green numbers apply to the midterm.

1. What steps should one take when solving a problem using a computer?
A: First, construct an exact model in terms of which we can express allowed solutions. Once we have a suitable mathematical model, we can specify a solution in terms of that model.

2. Explain some issues when dealing with the representation of real-world objects in a computer program.
A: How real-world objects are modeled as mathematical entities; the set of operations that we define over these mathematical entities; how these entities are stored in a computer's memory (e.g. how they are aggregated as fields in records, and how these records are arranged in memory, perhaps as arrays or as linked structures); and the algorithms that are used to perform these operations.

3. Explain the notions: model of computation, computational problem, problem instance, algorithm, and program.
A: Model of computation: an abstract sequential computer, called a Random Access Machine or RAM, with a uniform cost model. Computational problem: a specification in general terms of inputs and outputs, and of the desired input/output relationship. Problem instance: a particular collection of inputs for a given problem. Algorithm: a method of solving a problem which can be implemented on a computer; usually there are many algorithms for a given problem. Program: a particular implementation of some algorithm.

4. Show the algorithm design algorithm.
A:

5. What might be the resources considered in algorithm analysis?
A:
• running time
• memory usage (space)
• number of accesses to secondary storage
• number of basic arithmetic operations
• network traffic

6. Explain the big-oh class of growth.
A: O(g) is the set of functions that grow no faster than g; g(n) describes the worst-case behavior of an algorithm that is O(g). Examples: n lg n + n = O(n²); lgᵏ n = O(n) for all k ∈ N.

7. Explain the big-omega class of growth.
A: Ω(g(n)) is the class of functions f(n) that grow at least as fast as g(n); g(n) describes the best-case behavior of an algorithm that is Ω(g). Example: an² + bn + c = Ω(n), provided a > 0.

8. Explain the big-theta class of growth.
A: Θ(g(n)) is the class of functions f(n) that grow at the same rate as g(n). Example: n²/2 − 3n = Θ(n²). We have to determine c1 > 0, c2 > 0 and n0 ∈ N such that c2·n² ≤ n²/2 − 3n ≤ c1·n² for any n > n0. Dividing by n² yields c2 ≤ 1/2 − 3/n ≤ c1, which is satisfied for c2 = 1/14, c1 = 1/2, n0 = 7.

9. What are the steps in the mathematical analysis of nonrecursive algorithms?
A:
• Decide on a parameter n indicating input size.
• Identify the algorithm's basic operation.
• Determine the worst, average and best case for input of size n.
• Set up a summation for C(n) reflecting the algorithm's loop structure.
• Simplify the summation using standard formulas.

10. What are the steps in the mathematical analysis of recursive algorithms?
A:
• Decide on a parameter n indicating input size.
• Identify the algorithm's basic operation.
• Determine the worst, average and best case for input of size n.
• Set up a recurrence relation and initial condition(s) for C(n), the number of times the basic operation will be executed for an input of size n (alternatively, count recursive calls).
• Solve the recurrence to obtain a closed form, or estimate the order of magnitude of the solution.

11. From lowest to highest, what is the correct order of the complexities O(n²), O(3ⁿ), O(2ⁿ), O(n² lg n), O(1), O(n lg n), O(n³), O(n!), O(lg n), O(n)?
A: O(1), O(lg n), O(n), O(n lg n), O(n²), O(n² lg n), O(n³), O(2ⁿ), O(3ⁿ), O(n!).

12. What are the complexities of T1(n) = 3n lg n + lg n, T2(n) = 2ⁿ + n³ + 25, and T3(n, k) = k + n, where k ≤ n? From lowest to highest, what is the correct order of the resulting complexities?
A: T1 = O(n lg n); T2 = O(2ⁿ); T3 = O(n). Order: T3, T1, T2.
13. Suppose we have written a procedure to add m square matrices of size n × n. If adding two square matrices requires O(n²) running time, what is the complexity of this procedure in terms of m and n?
A: O((m − 1)n²) = O(mn²). For a fixed number of matrices this is O(n²), i.e. quadratic in n.

14. Suppose we have two algorithms to solve the same problem. One runs in time T1(n) = 400n, whereas the other runs in time T2(n) = n². What are the complexities of these two algorithms? For what values of n might we consider using the algorithm with the higher complexity?
A: T1 is O(n); T2 is O(n²). We might use the higher-complexity algorithm for n ≤ 400, since n² ≤ 400n there.

15. How do we account for calls such as memcpy and malloc in analyzing real code? Although these calls often depend on the size of the data processed by an algorithm, they are really more of an implementation detail than part of the algorithm itself.
A: Treat them as implementation details: normally we count them as constant-time operations. Only when the amount of data such a call copies or allocates grows with the input size does its cost need to appear in the analysis.

16. Explain the stack ADT.
A: A stack is an abstract data type (ADT) that supports two main operations:
• push(x): inserts StackElement x onto the top of the stack.
• pop(): removes the top StackElement of the stack and returns it.

17. Explain the list ADT.
A: The list supports three fundamental operations:
• insert(x): insert ListElement x at the front of the list.
• delete(x): remove the ListElement from the front of the list; an error occurs if the list is empty.
• search(k): search for the ListElement with key k on the list. Input: the key to search for; output: a pointer to the ListElement, or nil if not found.

18. There are occasions when arrays have advantages over linked lists. When are arrays preferable?
A: When we need to access an element fast, without having to go through all the elements of the list. This matters especially for large amounts of information, because random access to an element by index is much faster than traversal.

19. Explain the queue ADT.
A: The queue supports two fundamental operations:
• enqueue(o): insert QueueElement o at the rear of the queue.
• dequeue(): remove the QueueElement from the front of the queue and return it; an error occurs if the queue is empty.
20. Sometimes we need to remove an element from a queue out of sequence (i.e. from somewhere other than the head). What would be the sequence of queue operations to do this if, in a queue of five requests req1, ..., req5, we wish to process req1, req3 and req5 immediately, while leaving req2 and req4 in the queue in order? What would be the sequence of linked list operations to do this if we morph the queue into a linked list?
A: Dequeue req1 (process it), dequeue req2 (store it), dequeue req3 (process it), dequeue req4 (store it), dequeue req5 (process it); then enqueue req2 and req4 back, in that order. With a linked list, we can simply remove req1, req3 and req5 in place, leaving req2 and req4 where they are.

21. Recall that each of the linked list data structures presented at the laboratory has a size member. The SLList and DLList data structures also contain a first and a last member. Why is each of these members included?
A: The size member lets us know at all times how many records the list holds. The first and last members are needed to know where to start and where to stop the operations on the list when we traverse it entirely.

22. When would you use a doubly-linked list instead of a singly-linked one? Why?
A: When we need to traverse the list backwards as well as forwards, since each node also stores a pointer to its predecessor.

23. Show the result of inserting the numbers 32, 11, 22, 15, 17, 2, -3 in a doubly linked list with a sentinel.
A: [drawing]

24. Show the result of inserting the numbers 32, 11, 22, 15, 17, 2, -3 in a circular queue of capacity 9.
A: [drawing]

25. Show the result of inserting the numbers 32, 11, 22, 15, 17, 2, -3 in a stack of capacity 12.
A: [drawing]

26. Determine the running time of the following program.

27. Determine the running time of the following program.

28. Define the term "rooted tree" both formally and informally.
A: Informally, a rooted tree is a collection of elements called nodes, one of which is distinguished as the root, along with a relation ("parenthood") that imposes a hierarchical structure on the nodes. Formal definition: a tree T is a set of nodes storing elements, with a parent-child relationship that satisfies the following properties: T is nonempty and has a special node called the root of T; the root has no parent; every node v of T different from the root has a unique parent node w; each node with parent w is a child of w.

29. Define the terms ancestor, descendant, parent, child, sibling as used with rooted trees.
A: Parent of X: the immediately preceding vertex on the path from the root to X. Child of X: an immediately succeeding vertex on a path from the root through X. Ancestor of X: any vertex on the unique path from the root to X. Descendant of X: any vertex Y such that X is on the unique path from the root to Y. Siblings: nodes that share the same parent.

30. Define the terms path, height, depth, level as used with rooted trees.
A: Path: a walk in which all vertices are distinct. The depth of a node n is the length of the path from the root to the node; the set of all nodes at a given depth is sometimes called a level of the tree, and the root is at depth zero. The height of a tree is the length of the path from the root to the deepest node in the tree; a rooted tree with only one node (the root) has height zero. Level: distance in edges from the root to the vertex. JOLDI: for a rooted tree T = (V, E) with root r ∈ V:
• Path: ⟨n1, n2, ..., nk⟩ such that ni = parent(ni+1) for 1 ≤ i < k; length(path) = number of nodes − 1.
• The depth of a vertex v ∈ V is depth(v) = the length of the path from r to v.
• The height of a vertex v ∈ V is height(v) = the length of the longest path from v to a leaf.
• The height of the tree T is height(T) = height(r).
• The level of a vertex v ∈ V is level(v) = height(T) − depth(v).
• The subtree generated by a vertex v ∈ V is the tree consisting of root v and all its descendants in T.
For the tree on the right:
• The root is a.
• The leaves are h, g, i, l, m.
• The proper ancestors of k are a, c, f.
• The proper descendants of d are h, i, j, l.
• The parent of h is d.
• The children of c are f, g.
• The siblings of h are i, j.
• height(T) = height(a) = 4.

31. Show the preorder traversal of the tree given in Fig. 1. (Fig. 1. Example tree.)
A: preorder: 1, 2, 5, 3, 6, 10, 7, 11, 12, 4, 8, 9.

23. Show the postorder traversal of the tree given in Fig. 1.
A: postorder: 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1.

24. Show the inorder traversal of the tree given in Fig. 1.
A: inorder: 5, 2, 1, 10, 6, 3, 11, 7, 12, 8, 4, 9.

25. Construct the tree whose preorder traversal is 1, 2, 5, 3, 6, 10, 7, 11, 12, 4, 8, 9, and whose inorder traversal is 5, 2, 1, 10, 6, 3, 11, 7, 12, 8, 4, 9.
A: see Fig. 1.

26. Construct the tree whose postorder traversal is 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1, and whose inorder traversal is 5, 2, 1, 10, 6, 3, 11, 7, 12, 8, 4, 9.
A: [drawing]

27. Show the vector contents for an implementation of the tree in Fig. 1.
A: 0 1 1 1 2 3 3 4 4 6 7 7 (the parent of each node, in node order; the root's entry is 0).

28. Show the contents of the data structures (in a sketch) for an implementation of the tree in Fig. 1 using lists of children.
A: [drawing]

29. Show the contents of the data structures (in a sketch) for an implementation of the tree in Fig. 1 using the leftmost child - right sibling method.
A: each node holds two pointers: one to its leftmost child and one to its next (right) sibling. [drawing]

30. Show the binary search tree which results after inserting the nodes with keys 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1, in that order, in an empty tree.
31. Show the binary search tree resulting after deleting keys 10, 5 and 6, in that order, from the binary search tree of Fig. 2.

32. How do we find the smallest node in a binary search tree? What is the runtime complexity to do this in both an unbalanced and a balanced binary search tree, in the worst case? How do we find the largest node in a binary search tree? What are the runtime complexities for this?
A: We follow left children until we reach the leftmost node. In the worst case this is O(n) for an unbalanced tree and O(lg n) for a balanced one. For the largest node we follow right children to the rightmost node; the complexities are the same.

33. Compare the performance of operations insert (add), delete and find for arrays, doubly-linked lists and BSTs.
A: performance comparison [table not reproduced].

35. What is the purpose of static BSTs and what criteria are used to build them?
A: To reduce search time: more frequently accessed keys are kept closer to the root.

36. If we had two functions, bitree_rem_left (for removing the left subtree) and bitree_rem_right (for removing the right subtree), why should a postorder traversal be used to remove the appropriate subtree? Could a preorder or inorder traversal have been used instead?
A: With a postorder traversal, both subtrees of a node are removed before the node itself, so we never destroy a node while its children still need to be visited. Preorder or inorder traversals cannot be used directly, because they would remove a node (or its left subtree) before both of its subtrees have been processed.

37. When might we choose to make use of a tree with a relatively large branching factor, instead of a binary tree, for example?
A: When we need fast searches over strictly ordered keys and want a shallow tree: a larger branching factor reduces the height, which pays off when each node access is expensive (for example, trees kept on secondary storage).

38. In a binary search tree, the successor of some node x is the next largest node after x. For example, in a binary search tree containing the keys 24, 39, 41, 55, 87, 92, the successor of 41 is 55. How do we find the successor of a node in a binary search tree? What is the runtime complexity of this operation?
A: If x has a right child, the successor is the minimum (leftmost) node of x's right subtree; otherwise, it is the lowest ancestor of x whose left subtree contains x. The complexity is O(h), where h is the height of the tree: O(lg n) in a balanced tree, O(n) in the worst case.

39. A multiset (see the related topics at the end of the chapter) is a type of set that allows members to occur more than once. How would the runtime complexities of inserting and removing members with a multiset compare with the operations for inserting and removing members of a set?
A: Insertion can be faster with a multiset, because we do not have to check whether the member is already present. Removal can be slower if all occurrences of a member must be removed, rather than just one.

40. The symmetric difference of two sets consists of those members that are in either of the two sets, but not both. The notation for the symmetric difference of two sets S1 and S2 is S1 Δ S2. How could we implement a symmetric difference operation using the set operations union, intersection and difference? Could this operation be implemented more efficiently some other way?
A: S1 Δ S2 = (S1 − S2) ∪ (S2 − S1). It could be computed more efficiently in a single pass over both sets, inserting each member that belongs to exactly one of them, instead of building two differences and then a union.

41. Sketch the algorithm for HashInsert in a hash table using open addressing.
A: Probe the sequence of positions h(k, 0), h(k, 1), ..., h(k, m−1) until an empty slot is found; store the key there, or signal overflow if no empty slot exists.

42. Why are hash tables good for random access but not sequential access? For example, in a database system in which records are to be accessed in a sequential fashion, what is the problem with hashing?
A: Because records that are logically adjacent (consecutive keys) are generally not stored next to each other in the table, so sequential access degenerates into a series of unrelated lookups.

43. What is the worst-case performance of searching for an element in a chained hash table? How do we ensure that this case will not occur?
A: O(n), which occurs when all inserted keys collide into the same bucket. We guard against it by choosing a hash function that distributes keys uniformly across the buckets, and by keeping the load factor low.
44. What is the worst-case performance of searching for an element in an open-addressed hash table? How do we ensure that this case will not occur?
A: In the worst case the whole table must be probed. This happens when many keys have collided and been displaced far from their home positions. We avoid it by using a good hash function, a probing strategy that spreads displaced keys (we always place a key as close as possible to its home position, in the first empty slot), and by keeping the table from getting too full.

45. Explain the generation of hash codes using memory addresses, integer cast and component sum.
A: Memory address: we reinterpret the memory address of the key object as an integer. Good in general, except for numeric and string keys. Integer cast: we reinterpret the bits of the key as an integer. Suitable for keys of length less than or equal to the number of bits of the integer type (e.g. byte, short, int and float in C). Component sum: we partition the bits of the key into components of fixed length (e.g. 16 or 32 bits) and sum the components, ignoring overflows. Suitable for numeric keys of fixed length greater than or equal to the number of bits of the integer type (e.g. long and double in C).

46. Explain the generation of hash codes using polynomial accumulation.
A: We partition the bits of the key into a sequence of components of fixed length (e.g. 8, 16 or 32 bits): a0, a1, ..., an−1. We evaluate the polynomial p(x) = a0 + a1·x + a2·x² + ... + an−1·xⁿ⁻¹ at a fixed value x, ignoring overflows. Especially suitable for strings (e.g. the choice x = 33 gives at most 6 collisions on a set of 50,000 English words).

47. How can one implement a compression function using the MAD technique?
A: Multiply, Add and Divide (MAD): h2(y) = (ay + b) mod m, where a and b are nonnegative integers such that a mod m ≠ 0; otherwise, every integer would map to the same value b.

48. Explain the quadratic hashing rehashing strategy.
A: Quadratic hashing probes h(k, i) = (h'(k) + c1·i + c2·i²) mod m, where h' is an auxiliary hash function, 0 ≤ i ≤ m−1 is the trial number (i = 0 for the first trial), and c1, c2 ≠ 0 are auxiliary constants. The first probe checks B[h'(k)]; subsequently checked locations depend quadratically on i. The strategy suffers from a secondary clustering effect.

49. Explain the double hashing rehashing strategy.
A: Double hashing uses two auxiliary hash functions, h1 and h2. Initially, position B[h1(k)] is checked; successive positions are h2(k) mod m away from the previous position, so the probe sequence depends in two ways on the key k. h2(k) and m must be relatively prime, to allow the whole table to be searched. To ensure this condition, either take m = 2^k and make h2(k) always generate an odd number, or take m prime and make h2(k) return a positive integer m' smaller than m.
50. Show the hash table which results after inserting the values 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1 in a chained hash table with N = 5 and hash function h(x) = x mod N.

51. Show the hash table which results after inserting the values 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1 in an open-addressing hash table with N = 16, N' = 13, using double hashing. The hash functions are h1(x) = x mod N and h2(x) = 1 + (x mod N').

52. What are the operations for the priority queue ADT?
A: insert and deleteMin (as well as the usual createEmpty for initialization of the data structure). min() returns, but does not remove, an entry with the smallest key. Also: size() and empty().

53. Compare the performance of priority queues using sorted and unsorted lists.
A:

| Operation | Sorted list | Unsorted list |
| --- | --- | --- |
| insert | O(n): we have to find the place where to insert the item | O(1): we can insert the item at the beginning or end of the list |
| deleteMin, min | O(1): the item is at the beginning of the list | O(n): we have to scan the entire list to find the smallest key |

54. What is a partially ordered tree?
A: A binary tree in which, at the lowest level, where some leaves may be missing, all missing leaves are to the right of all leaves that are present on that level; and the tree is partially ordered: the priority of node v is no greater than the priority of the children of v.

55. Show the result of inserting the value 14 in the POT of Fig. 2. (Fig. 2. A partially ordered tree.)
A: [drawing]

55. Explain the notion "heap".
A: A binary tree has the heap property if and only if it is empty, or the key in the root is larger than that in either child and both subtrees have the heap property.

56. What is an AVL tree?
A: An AVL tree is a binary search tree with a balance condition: for every node in an AVL tree T, the heights of the left (TL) and right (TR) subtrees can differ by at most 1: |hL − hR| ≤ 1.

57. Draw the AVL tree which results from inserting the keys 52, 04, 09, 35, 43, 17, 22, 11 in an empty tree.
A: see below.

58. Draw the AVL tree resulting after deleting node 09 from the tree of Fig. 3. (Fig. 3. An AVL tree.)

59. Describe the left-right double rotation in an AVL tree.
A: Left-right: a left rotation around the left child of a node, followed by a right rotation around the node itself. With k1 < k2 < k3 (k3 the unbalanced node, k1 its left child, k2 the right child of k1), the two rotations make k2 the topmost node.
60. Draw the AVL tree resulting after deleting node 35 from the tree of Fig. 4. (Fig. 4. Another AVL tree.)

61. What can you say about the running time for AVL tree operations?
A: Using a linked-structure binary tree, a single restructure is O(1). find is O(log n), since the height of the tree is O(log n) and no restructures are needed. insert is O(log n): the initial find is O(log n), and restructuring up the tree while maintaining heights is O(log n). remove is O(log n): the initial find is O(log n), and restructuring up the tree while maintaining heights is O(log n).

62. What is a 2-3 tree?
A: 2-3 tree properties:
• Each interior node has two or three children.
• Each path from the root to a leaf has the same length.
• A tree with zero or one node(s) is a special case of a 2-3 tree.

63. Show the 2-3 tree which results after inserting the key 13 in the tree of Fig. 5. (Fig. 5. A 2-3 tree.)

64. Show the 2-3 tree which results after deleting the key 13 from the tree of Fig. 6. (Fig. 6. Another 2-3 tree.)

65. What is a 2-3-4 tree?
A: The numbers in "2-3-4 tree" refer to how many links to child nodes can be contained in a given node. For non-leaf nodes, three arrangements are possible:
• A node with one data item always has two children.
• A node with two data items always has three children.
• A node with three data items always has four children.
In short, a non-leaf node must always have one more child than it has data items: if the number of child links is L and the number of data items is D, then L = D + 1. Empty nodes are not allowed.

66. What were disjoint sets with union and find designed for?
A: Applicable to problems where we start with a collection of objects, each in a set by itself; combine sets in some order; and, from time to time, ask which set a particular object is in. Equivalence classes: if a set S has an equivalence relation (reflexive, symmetric, transitive) defined on it, then S can be partitioned into disjoint subsets S1, S2, ..., Sk. Equivalence problem: given a set S and a sequence of statements of the form a ≡ b, process the statements in order in such a way that, at any time, we are able to determine in which equivalence class a given element belongs.

67. Define the operations of the union-find set ADT.
A: Operations:
• union(A, B) takes the union of the components A and B and calls the result either A or B, arbitrarily.
• find(x) returns the name of the component of which x is a member.
• initial(A, x) creates a component named A that contains only the element x.

68. Draw a sketch showing a lists implementation for the union-find set ADT with sets 1: {1, 4, 7}; 2: {2, 3, 6, 9}; 8: {8, 11, 10, 12}.

69. Draw a sketch showing a tree forest implementation for the union-find set ADT with sets 1: {1, 4, 7}; 2: {2, 3, 6, 9}; 8: {8, 11, 10, 12}.

70. How can one speed up union-find ADT operations?
A: Union by size (rank): when performing a union, make the root of the smaller tree point to the root of the larger. This implies O(n log n) time for performing n union-find operations: each time we follow a pointer, we move to a subtree of at least double the size of the previous subtree, so we follow at most O(log n) pointers for any find. Path compression: after performing a find, compress all the pointers on the path just traversed so that they all point to the root. Together with union by size, this implies O(n log* n) time for performing n union-find operations.