Master of Computer Application (MCA) – Semester 4 MC0080


MC0080 – Analysis and Design of Algorithms

Question 1 – Describe the following:

a) Well-known Sorting Algorithms

Bubble sort
It is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps occur on the last pass. This algorithm's average and worst-case performance is O(n²), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items.

Selection sort
This is an in-place comparison sort. It has O(n²) complexity, making it inefficient on large lists, and it generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and it also has performance advantages over more complicated algorithms in certain situations. The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.

Insertion sort
It is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and it is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring all following elements to be shifted over by one.

Shell sort
It was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out-of-order elements more than one position at a time.
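This gap-based movement can be illustrated with a short sketch (Python; an illustrative sketch using the simple halving gap sequence, which is one common choice among many):

```python
def shell_sort(a):
    """Shell sort: insertion sort applied over progressively smaller gaps."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            v = a[i]
            j = i
            # Shift earlier gap-separated elements up until v fits.
            while j >= gap and a[j - gap] > v:
                a[j] = a[j - gap]
                j -= gap
            a[j] = v
        gap //= 2
    return a
```

With large gaps, a single move carries an element many positions; by the time the gap reaches 1, the list is nearly sorted and the final insertion-sort pass is cheap.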
One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.

Heap sort
This is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing it at the end (or beginning) of the list, and then continuing with the rest of the list, but it accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so that the largest remaining element moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows heap sort to run in O(n log n) time, and this is also its worst-case complexity.

b) Divide and Conquer Techniques

Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4, ...) and swapping them
if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on, until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n).

Quick sort
This is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in place. The lesser and greater sublists are then recursively sorted. Efficient implementations of quick sort (with in-place partitioning) are typically unstable sorts and somewhat complex, but they are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, quick sort is one of the most popular sorting algorithms and is available in many standard programming libraries. The most complex issue in quick sort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower O(n²) performance. If at each step the median is chosen as the pivot, then the algorithm works in O(n log n) time. Finding the median, however, is an O(n) operation on unsorted lists and therefore exacts its own penalty on the sort.

Question 2 – Explain in your own words the different asymptotic functions and notations.

We often want to know a quantity only approximately, and not necessarily exactly, just to compare it with another quantity. And, in many situations, a correct comparison may be possible even with approximate values of the quantities. The advantage of the possibility of correct comparisons through even approximate values of quantities is that the time required to find approximate values may be much less than the time required to find exact values.
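Before moving on, the merge step of merge sort described above can be sketched briefly (Python; an illustrative sketch, not a library implementation):

```python
def merge_sort(a):
    """Recursively split the list, sort both halves, and merge them."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge two already sorted lists into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Each level of recursion does O(n) merging work over O(log n) levels, which is where the O(n log n) worst-case bound comes from.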
We will introduce five approximation functions and their notations.

The purpose of these asymptotic growth rate functions is to facilitate the recognition of the essential character of a complexity function through some simpler function delivered by these notations. For example, the complexity function f(n) = 5004n³ + 83n² + 19n + 408 has essentially the same behavior as g(n) = n³ as the problem size n becomes larger and larger. But g(n) = n³ is much more comprehensible, and its value is easier to compute than that of f(n).

The five well-known approximation functions, and how they are pronounced:

1. O: O(n²) is pronounced 'big-oh of n²', or sometimes just 'oh of n²'.
2. Ω: Ω(n²) is pronounced 'big-omega of n²', or sometimes just 'omega of n²'.
3. Θ: Θ(n²) is pronounced 'theta of n²'.
4. o: o(n²) is pronounced 'little-oh of n²'.
5. ω: ω(n²) is pronounced 'little-omega of n²'.

These notations denote relations from functions to functions. For example, if the functions f, g: N → N are given by
f(n) = n² – 5n and
g(n) = n²,
then f(n) = O(g(n)), i.e., n² – 5n = O(n²).

To be more precise, each of these notations is a mapping that associates a set of functions to each function under consideration. For example, if f(n) is a polynomial of degree k, then the set O(f(n)) includes all polynomials of degree less than or equal to k.

In the discussion of any one of the five notations, generally two functions, say f and g, are involved. The functions have their domain and codomain as N, the set of natural numbers, i.e.,
f : N → N
g : N → N
These functions may also be considered as having domain and codomain as R.

The Notation O
It provides an asymptotic upper bound for a given function. Let f(x) and g(x) be two functions, each from the set of natural numbers or the set of positive real numbers to the positive real numbers. Then f(x) is said to be O(g(x)) (pronounced 'big-oh of g of x') if there exist two positive integer/real constants C and k such that
f(x) ≤ C g(x) for all x ≥ k.
(The restriction to positive integers/reals is justified, as all complexities are positive numbers.)

Question 3 – Describe the following:

a) Fibonacci Heaps

A Fibonacci heap is a collection of trees satisfying the minimum-heap property, that is, the key of a child is always greater than or equal to the key of its parent. This implies that the minimum key is always at the root of one of the trees. To allow fast deletion and concatenation, the roots of all trees are linked using a circular, doubly linked list. The children of each node are also linked using such a list. For each node, we maintain its number of children and whether the node is marked. Moreover, we maintain a pointer to the root containing the minimum key.

The operation find minimum is now trivial because we keep a pointer to the node containing it. It does not change the potential of the heap; therefore both the actual and the amortized cost are constant. As mentioned above, merge is implemented simply by concatenating the lists of tree roots of the two heaps.
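The root-list concatenation just described amounts to splicing two circular, doubly linked lists with a fixed number of pointer updates. Below is a minimal sketch of only that splice (Python; the names Node, splice and keys are illustrative, and the full heap machinery is deliberately omitted):

```python
class Node:
    """A root-list node in a circular, doubly linked list."""
    def __init__(self, key):
        self.key = key
        self.left = self.right = self  # a one-node circular list

def splice(a, b):
    """Concatenate two circular doubly linked lists in O(1) pointer updates."""
    a_right, b_left = a.right, b.left
    a.right = b
    b.left = a
    b_left.right = a_right
    a_right.left = b_left
    return a

def keys(start):
    """Collect keys by walking the circular list once (for inspection only)."""
    out, node = [start.key], start.right
    while node is not start:
        out.append(node.key)
        node = node.right
    return out
```

Because only four pointers change regardless of list length, the merge of two Fibonacci heaps costs constant time; the minimum pointer of the merged heap is simply whichever of the two old minimum pointers has the smaller key.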
This can be done in constant time, and the potential does not change, leading again to constant amortized time.

The operation insert works by creating a new heap with one element and doing a merge. This takes constant time, and the potential increases by one, because the number of trees increases. The amortized cost is thus still constant.

The operation extract minimum (same as delete minimum) operates in three phases. First, we take the root containing the minimum element and remove it. Its children become roots
of new trees. Second, the new roots are consolidated by repeatedly linking trees of the same degree until no two roots have the same degree. Third, the pointer to the minimum is updated by scanning the remaining roots.

The operation decrease key takes the node and decreases its key; if the heap property becomes violated (the new key is smaller than the key of the parent), the node is cut from its parent. If the parent is not a root, it is marked. If it has already been marked, it is cut as well and its parent is marked. We continue upwards until we reach either the root or an unmarked node.

b) Binomial Heaps

A binomial heap is implemented as a collection of binomial trees (compare with a binary heap, which has the shape of a single binary tree). A binomial tree is defined recursively:
● A binomial tree of order 0 is a single node.
● A binomial tree of order k has a root node whose children are roots of binomial trees of orders k−1, k−2, ..., 2, 1, 0 (in this order).

A binomial heap is implemented as a set of binomial trees that satisfy the binomial heap properties:
● Each binomial tree in the heap obeys the minimum-heap property: the key of a node is greater than or equal to the key of its parent.
● There can be only one or zero binomial trees for each order, including zero order.

The first property ensures that the root of each binomial tree contains the smallest key in the tree, which applies to the entire heap. The second property implies that a binomial heap with n nodes consists of at most log n + 1 binomial trees.

Inserting a new element into a heap can be done by simply creating a new heap containing only this element and then merging it with the original heap.

To find the minimum element of the heap, find the minimum among the roots of the binomial trees.

To delete the minimum element from the heap, first find this element, remove it from its binomial tree, and obtain a list of its subtrees. Then transform this list of subtrees into a separate binomial heap by reordering them from smallest to largest order. Then merge this heap with the original heap.

After decreasing the key of an element, it may become smaller than the key of its parent, violating the minimum-heap property.
If this is the case, exchange the element with its parent, and possibly also with its grandparent, and so on, until the minimum-heap property is no longer violated.

To delete an element from the heap, decrease its key to negative infinity (that is, some value lower than any element in the heap) and then delete the minimum of the heap.

Question 4 – Discuss the process of flow of Strassen's Algorithm and also its limitations.

Strassen's recursive algorithm for multiplying n × n matrices runs in Θ(n^lg 7) = O(n^2.81) time. For sufficiently large values of n, therefore, it outperforms the naive Θ(n³) matrix multiplication algorithm. The idea behind Strassen's algorithm is to multiply 2 × 2 matrices with
only 7 scalar multiplications (instead of 8). Consider the 2 × 2 matrices
A = [[a, b], [c, d]], B = [[e, g], [f, h]], and their product C = [[r, s], [t, u]],
so that r = ae + bf, s = ag + bh, t = ce + df and u = cg + dh. The seven submatrix products used are:
P1 = a · (g − h)
P2 = (a + b) · h
P3 = (c + d) · e
P4 = d · (f − e)
P5 = (a + d) · (e + h)
P6 = (b − d) · (f + h)
P7 = (a − c) · (e + g)
Using these submatrix products, the entries of the matrix product are obtained by:
r = P5 + P4 − P2 + P6
s = P1 + P2
t = P3 + P4
u = P5 + P1 − P3 − P7
That this method works can easily be checked; for example,
s = (ag − ah) + (ah + bh) = ag + bh.
In this method there are 7 multiplications and 18 additions. For n × n matrices, it can be worth replacing one multiplication by 18 additions, since multiplication costs much more than addition.

The recursive algorithm for multiplying n × n matrices is given below:
1. Divide: Partition the two matrices A and B into (n/2) × (n/2) submatrices.
2. Conquer: Perform the 7 submatrix multiplications recursively.
3. Combine: Form the product C from the Pi using + and −.
The running time is given by the recurrence T(n) = 7 T(n/2) + Θ(n²), whose solution is T(n) = Θ(n^lg 7) = O(n^2.81). The current best upper bound for multiplying matrices is approximately O(n^2.376).

Limitations of Strassen's Algorithm
From a practical point of view, Strassen's algorithm is often not the method of choice for matrix multiplication, for the following four reasons:
1. The constant factor hidden in the Θ(n^lg 7) running time of Strassen's algorithm is larger than the constant factor in the naive Θ(n³) method.
2. When the matrices are sparse, methods tailored for sparse matrices are faster.
3. Strassen's algorithm is not quite as numerically stable as the naive method.
4. The submatrices formed at the levels of recursion consume space.

Question 5 – How do you formalize a greedy technique? Discuss the different steps one by one.

In order to develop an algorithm based on the greedy technique to solve a general optimization problem, we need the following data structures and functions:

i) A set or list of given/candidate values from which choices are made to reach a solution. For example, in the case of the Minimum Number of Notes problem, the list of candidate values (in rupees) of notes is {1, 2, 5, 10, 20, 50, 100, 500, 1000}.
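For this candidate set, the greedy choice (always take the largest note that does not overshoot) can be sketched in a few lines (Python; the name min_notes and the list-of-notes return convention are illustrative):

```python
def min_notes(amount, denominations=(1, 2, 5, 10, 20, 50, 100, 500, 1000)):
    """Greedy sketch: repeatedly take the largest denomination that still fits."""
    chosen = []  # the multiset CV of chosen values, as a list
    for d in sorted(denominations, reverse=True):
        # A denomination that overshoots is effectively rejected (put in RV).
        while amount >= d:
            chosen.append(d)
            amount -= d
    return chosen
```

For Rs. 289 this yields the same multiset CV = {100, 100, 50, 20, 10, 5, 2, 2} built up in the discussion below. Note that this greedy choice happens to be optimal for this particular denomination set, but not for arbitrary denomination sets.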
Further, the number of notes of each denomination should be clearly mentioned. Otherwise, it is assumed that each
candidate value can be used as many times as required for the solution using the greedy technique. Let us call this set
GV: Set of Given Values.

ii) A set (rather, a multiset) of considered and chosen values: this structure contains those candidate values which are considered and chosen by the algorithm based on the greedy technique to reach a solution. Let us call this structure
CV: Structure of Chosen Values.
The structure is generally not a set but a multiset, in the sense that values may be repeated. For example, in the case of the Minimum Number of Notes problem, if the amount to be collected is Rs. 289, then CV = {100, 100, 50, 20, 10, 5, 2, 2}.

iii) A set of considered and rejected values: as the name suggests, this is the set of all those values which are considered but rejected. Let us call this set
RV: Set of Considered and Rejected Values.
A candidate value may belong to both CV and RV. But once a value is put in RV, this value cannot be put any more in CV. For example, to make an amount of Rs. 289, once we have chosen two notes of denomination 100, we have CV = {100, 100}. At this stage, we have collected Rs. 200 out of the required Rs. 289, and RV = {1000, 500}. So we can choose a note of any denomination except those in RV, i.e., except 1000 and 500. Thus, at this stage, we can choose a note of denomination 100. However, this choice of 100 again would make the total amount collected so far Rs. 300, which exceeds Rs. 289. Hence we reject the choice of 100 a third time and put 100 in RV, so that now RV = {1000, 500, 100}. From this point onward, we cannot choose even denomination 100.

Next, we consider some of the functions which need to be defined in an algorithm using the greedy technique to solve an optimization problem.

iv) A function, say SolF, that checks whether a solution is reached or not. However, the function does not check for the optimality of the obtained solution.
In the case of the Minimum Number of Notes problem, the function SolF finds the sum of all values in the multiset CV and compares it with the desired amount, say Rs. 289. For example, if at one stage CV = {100, 100}, then the sum of values in CV is 200, which does not equal 289, so the function SolF returns 'Solution not reached'. However, at a later stage, when CV = {100, 100, 50, 20, 10, 5, 2, 2}, then as the sum of values in CV equals the required amount, the function SolF returns the message 'Solution reached'. It may be noted that the function only informs about a possible solution; the solution provided through SolF may not be optimal. For instance, in an example where the denominations include 10, 40 and 60, when we reach CV = {60, 10, 10}, SolF returns 'Solution reached'. However, the solution 80 = 60 + 10 + 10 using three notes is not optimal, because another solution using only two notes, viz., 80 = 40 + 40, is still cheaper.

v) A selection function, say SelF, finds the most promising candidate value out of the values not yet rejected, i.e., those not in RV. In the case of the Minimum Number of Notes problem, for collecting Rs. 289, at the stage when RV = {1000, 500} and CV = {100, 100}, the function SelF first attempts the denomination 100. But when it is found through the function SolF that adding 100 to the values already in CV makes the total 300, which exceeds 289, the value 100 is rejected and put in RV. Next, the function SelF attempts
the next lower denomination, 50. The value 50, when added to the sum of values in CV, gives 250, which is less than 289. Hence, the value 50 is returned by the function SelF.

vi) The feasibility-test function, say FeaF. When a new value, say v, is chosen by the function SelF, the function FeaF checks whether the new set, obtained by adding v to the set CV of already selected values, is a possible part of the final solution. Thus, in the case of the Minimum Number of Notes problem, if the amount to be collected is Rs. 289 and at some stage CV = {100, 100}, then the function SelF returns 50. At this stage, the function FeaF takes control. It adds 50 to the sum of the values in CV, and on finding that the sum 250 is less than the required value 289, informs the main/calling program that {100, 100, 50} can be a part of some final solution and needs to be explored further.

vii) The objective function, say ObjF, gives the value of the solution. For example, in the case of the problem of collecting Rs. 289, as CV = {100, 100, 50, 20, 10, 5, 2, 2} is such that the sum of values in CV equals the required value 289, the function ObjF returns the number of notes in CV, i.e., the number 8.

Having introduced a number of sets and functions that may be required by an algorithm based on the greedy technique, we give below the outline of the greedy technique, say Greedy-Structure. For any actual algorithm based on the greedy technique, the various structures and functions discussed above have to be replaced by actual ones; these depend upon the problem under consideration. The Greedy-Structure outlined below takes the set GV of given values as an input parameter and returns CV, the set of chosen values. For developing any algorithm based on the greedy technique, the following function outline will be used:

Function Greedy-Structure (GV : set) : set
CV ← ∅  {initially, the set of chosen values is empty}
While GV ≠ RV and not SolF (CV) do
begin
  v ← SelF (GV)
  If FeaF (CV ∪ {v}) then CV ← CV ∪ {v}
  else RV ← RV ∪
{v}
end
{The function Greedy-Structure comes out of the while-loop either when GV = RV, i.e., all given values are rejected, or when a solution is found.}
If SolF (CV) then return ObjF (CV)
else return 'No solution is possible'
end function Greedy-Structure

Question 6 – Briefly explain Prim's algorithm.

The algorithm due to Prim builds up a minimum spanning tree by adding edges to form a sequence of expanding subtrees. The sequence of subtrees is represented by the pair (VT, ET), where VT and ET respectively represent the set of vertices and the set of edges of a subtree in the sequence. Initially, the subtree in the sequence consists of just a single vertex, which is selected arbitrarily from the set V of vertices of the given graph. The subtree is built up iteratively by adding an edge that has the minimum weight among the remaining edges (i.e., an edge selected greedily) and which, at the same time, does not form a cycle with the earlier selected edges. We illustrate Prim's algorithm through an example before giving a semi-formal definition of the algorithm.

Example
Let us consider the graph of the original figure; from the walkthrough below, its edges and weights are ab = 1, ac = 5, ad = 2, dc = 3 and de = 1.5.
Initially:
VT = (a)
ET = ( )
In the first iteration, the edge whose weight is the minimum of the weights of the edges having a as one of its vertices is chosen. In this case, the edge ab with weight 1 is chosen out of the edges ab, ac and ad, of weights 1, 5 and 2 respectively. Thus, after the first iteration, we have the given graph with the chosen edge in bold and VT and ET as follows:
VT = (a, b)
ET = ((a, b))

In the next iteration, out of the edges not chosen earlier, not making a cycle with the earlier chosen edge, and having either a or b as one of their vertices, the edge with minimum weight is chosen. In this case the vertex b does not have any further edge originating out of it; in such cases, if required, the weight of a non-existent edge may be taken as ∞. Thus the choice is restricted to the two edges ad and ac, of weights 2 and 5 respectively. Hence, in this iteration the edge ad is chosen. After the second iteration, we have the given graph with chosen edges and VT and ET as follows:
VT = (a, b, d)
ET = ((a, b), (a, d))

In the next iteration, out of the edges not chosen earlier, not making a cycle with the earlier chosen edges, and having a, b or d as one of their vertices, the edge with minimum weight is chosen. The choice is restricted to the edges ac, dc and de, with weights 5, 3 and 1.5 respectively. The edge de with weight 1.5 is selected. Hence, after the third iteration, we have the given graph with chosen edges and VT and ET as follows:
VT = (a, b, d, e)
ET = ((a, b), (a, d), (d, e))

In the next iteration, out of the edges not chosen earlier, not making a cycle with the earlier chosen edges, and having a, b, d or e as one of their vertices, the edge with minimum weight is chosen. The choice is restricted to the edges dc and ac, with weights 3 and 5 respectively. Hence the edge dc with weight 3 is chosen.
Thus, after the fourth iteration, we have the given graph with chosen edges and VT and ET as follows:
VT = (a, b, d, e, c)
ET = ((a, b), (a, d), (d, e), (d, c))
At this stage, it can easily be seen that each of the vertices is on some chosen edge and the chosen edges form a tree.
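The iterative process above can be sketched with a priority queue of crossing edges (Python; the adjacency list below encodes the example graph's weights as read off the walkthrough, and the function name prim is illustrative):

```python
import heapq

def prim(graph, start):
    """Prim's algorithm: grow the tree one minimum-weight crossing edge at a time."""
    visited = {start}
    # Heap of (weight, u, v) for edges leaving the current subtree.
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)
    tree = []  # the edge set ET, in order of selection
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue  # this edge would form a cycle; skip it
        visited.add(v)
        tree.append((u, v, w))
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(edges, (wx, v, x))
    return tree

# The example graph from the walkthrough above.
example_graph = {
    'a': [('b', 1), ('c', 5), ('d', 2)],
    'b': [('a', 1)],
    'c': [('a', 5), ('d', 3)],
    'd': [('a', 2), ('c', 3), ('e', 1.5)],
    'e': [('d', 1.5)],
}
```

Starting from vertex a, the selections come out in the same order as the iterations above: (a, b), (a, d), (d, e), (d, c). The heap stands in for the "minimum weight among the remaining edges" rule, and the visited-set check enforces the no-cycle condition.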
