Thomas H. Cormen
Charles E. Leiserson
Ronald L. Rivest
Clifford Stein

Introduction to Algorithms
Third Edition

The MIT Press
Cambridge, Massachusetts    London, England
© 2009 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

For information about special quantity discounts, please email special email@example.com.

This book was set in Times Roman and MathTime Pro 2 by the authors.

Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Introduction to algorithms / Thomas H. Cormen . . . [et al.].—3rd ed.
    p. cm.
Includes bibliographical references and index.
ISBN 978-0-262-03384-8 (hardcover : alk. paper)—ISBN 978-0-262-53305-8 (pbk. : alk. paper)
1. Computer programming. 2. Computer algorithms. I. Cormen, Thomas H.
QA76.6.I5858 2009
005.1—dc22
2009008593

10 9 8 7 6 5 4 3 2
Contents

Preface xiii

I Foundations
  Introduction 3
  1 The Role of Algorithms in Computing 5
    1.1 Algorithms 5
    1.2 Algorithms as a technology 11
  2 Getting Started 16
    2.1 Insertion sort 16
    2.2 Analyzing algorithms 23
    2.3 Designing algorithms 29
  3 Growth of Functions 43
    3.1 Asymptotic notation 43
    3.2 Standard notations and common functions 53
  4 Divide-and-Conquer 65
    4.1 The maximum-subarray problem 68
    4.2 Strassen's algorithm for matrix multiplication 75
    4.3 The substitution method for solving recurrences 83
    4.4 The recursion-tree method for solving recurrences 88
    4.5 The master method for solving recurrences 93
  ★ 4.6 Proof of the master theorem 97
  5 Probabilistic Analysis and Randomized Algorithms 114
    5.1 The hiring problem 114
    5.2 Indicator random variables 118
    5.3 Randomized algorithms 122
  ★ 5.4 Probabilistic analysis and further uses of indicator random variables 130

II Sorting and Order Statistics
  Introduction 147
  6 Heapsort 151
    6.1 Heaps 151
    6.2 Maintaining the heap property 154
    6.3 Building a heap 156
    6.4 The heapsort algorithm 159
    6.5 Priority queues 162
  7 Quicksort 170
    7.1 Description of quicksort 170
    7.2 Performance of quicksort 174
    7.3 A randomized version of quicksort 179
    7.4 Analysis of quicksort 180
  8 Sorting in Linear Time 191
    8.1 Lower bounds for sorting 191
    8.2 Counting sort 194
    8.3 Radix sort 197
    8.4 Bucket sort 200
  9 Medians and Order Statistics 213
    9.1 Minimum and maximum 214
    9.2 Selection in expected linear time 215
    9.3 Selection in worst-case linear time 220

III Data Structures
  Introduction 229
  10 Elementary Data Structures 232
    10.1 Stacks and queues 232
    10.2 Linked lists 236
    10.3 Implementing pointers and objects 241
    10.4 Representing rooted trees 246
  11 Hash Tables 253
    11.1 Direct-address tables 254
    11.2 Hash tables 256
    11.3 Hash functions 262
    11.4 Open addressing 269
  ★ 11.5 Perfect hashing 277
  12 Binary Search Trees 286
    12.1 What is a binary search tree? 286
    12.2 Querying a binary search tree 289
    12.3 Insertion and deletion 294
  ★ 12.4 Randomly built binary search trees 299
  13 Red-Black Trees 308
    13.1 Properties of red-black trees 308
    13.2 Rotations 312
    13.3 Insertion 315
    13.4 Deletion 323
  14 Augmenting Data Structures 339
    14.1 Dynamic order statistics 339
    14.2 How to augment a data structure 345
    14.3 Interval trees 348

IV Advanced Design and Analysis Techniques
  Introduction 357
  15 Dynamic Programming 359
    15.1 Rod cutting 360
    15.2 Matrix-chain multiplication 370
    15.3 Elements of dynamic programming 378
    15.4 Longest common subsequence 390
    15.5 Optimal binary search trees 397
  16 Greedy Algorithms 414
    16.1 An activity-selection problem 415
    16.2 Elements of the greedy strategy 423
    16.3 Huffman codes 428
  ★ 16.4 Matroids and greedy methods 437
  ★ 16.5 A task-scheduling problem as a matroid 443
  17 Amortized Analysis 451
    17.1 Aggregate analysis 452
    17.2 The accounting method 456
    17.3 The potential method 459
    17.4 Dynamic tables 463

V Advanced Data Structures
  Introduction 481
  18 B-Trees 484
    18.1 Definition of B-trees 488
    18.2 Basic operations on B-trees 491
    18.3 Deleting a key from a B-tree 499
  19 Fibonacci Heaps 505
    19.1 Structure of Fibonacci heaps 507
    19.2 Mergeable-heap operations 510
    19.3 Decreasing a key and deleting a node 518
    19.4 Bounding the maximum degree 523
  20 van Emde Boas Trees 531
    20.1 Preliminary approaches 532
    20.2 A recursive structure 536
    20.3 The van Emde Boas tree 545
  21 Data Structures for Disjoint Sets 561
    21.1 Disjoint-set operations 561
    21.2 Linked-list representation of disjoint sets 564
    21.3 Disjoint-set forests 568
  ★ 21.4 Analysis of union by rank with path compression 573

VI Graph Algorithms
  Introduction 587
  22 Elementary Graph Algorithms 589
    22.1 Representations of graphs 589
    22.2 Breadth-first search 594
    22.3 Depth-first search 603
    22.4 Topological sort 612
    22.5 Strongly connected components 615
  23 Minimum Spanning Trees 624
    23.1 Growing a minimum spanning tree 625
    23.2 The algorithms of Kruskal and Prim 631
  24 Single-Source Shortest Paths 643
    24.1 The Bellman-Ford algorithm 651
    24.2 Single-source shortest paths in directed acyclic graphs 655
    24.3 Dijkstra's algorithm 658
    24.4 Difference constraints and shortest paths 664
    24.5 Proofs of shortest-paths properties 671
  25 All-Pairs Shortest Paths 684
    25.1 Shortest paths and matrix multiplication 686
    25.2 The Floyd-Warshall algorithm 693
    25.3 Johnson's algorithm for sparse graphs 700
  26 Maximum Flow 708
    26.1 Flow networks 709
    26.2 The Ford-Fulkerson method 714
    26.3 Maximum bipartite matching 732
  ★ 26.4 Push-relabel algorithms 736
  ★ 26.5 The relabel-to-front algorithm 748

VII Selected Topics
  Introduction 769
  27 Multithreaded Algorithms 772
    27.1 The basics of dynamic multithreading 774
    27.2 Multithreaded matrix multiplication 792
    27.3 Multithreaded merge sort 797
  28 Matrix Operations 813
    28.1 Solving systems of linear equations 813
    28.2 Inverting matrices 827
    28.3 Symmetric positive-definite matrices and least-squares approximation 832
  29 Linear Programming 843
    29.1 Standard and slack forms 850
    29.2 Formulating problems as linear programs 859
    29.3 The simplex algorithm 864
    29.4 Duality 879
    29.5 The initial basic feasible solution 886
  30 Polynomials and the FFT 898
    30.1 Representing polynomials 900
    30.2 The DFT and FFT 906
    30.3 Efficient FFT implementations 915
  31 Number-Theoretic Algorithms 926
    31.1 Elementary number-theoretic notions 927
    31.2 Greatest common divisor 933
    31.3 Modular arithmetic 939
    31.4 Solving modular linear equations 946
    31.5 The Chinese remainder theorem 950
    31.6 Powers of an element 954
    31.7 The RSA public-key cryptosystem 958
  ★ 31.8 Primality testing 965
  ★ 31.9 Integer factorization 975
  32 String Matching 985
    32.1 The naive string-matching algorithm 988
    32.2 The Rabin-Karp algorithm 990
    32.3 String matching with finite automata 995
  ★ 32.4 The Knuth-Morris-Pratt algorithm 1002
  33 Computational Geometry 1014
    33.1 Line-segment properties 1015
    33.2 Determining whether any pair of segments intersects 1021
    33.3 Finding the convex hull 1029
    33.4 Finding the closest pair of points 1039
  34 NP-Completeness 1048
    34.1 Polynomial time 1053
    34.2 Polynomial-time verification 1061
    34.3 NP-completeness and reducibility 1067
    34.4 NP-completeness proofs 1078
    34.5 NP-complete problems 1086
  35 Approximation Algorithms 1106
    35.1 The vertex-cover problem 1108
    35.2 The traveling-salesman problem 1111
    35.3 The set-covering problem 1117
    35.4 Randomization and linear programming 1123
    35.5 The subset-sum problem 1128

VIII Appendix: Mathematical Background
  Introduction 1143
  A Summations 1145
    A.1 Summation formulas and properties 1145
    A.2 Bounding summations 1149
  B Sets, Etc. 1158
    B.1 Sets 1158
    B.2 Relations 1163
    B.3 Functions 1166
    B.4 Graphs 1168
    B.5 Trees 1173
  C Counting and Probability 1183
    C.1 Counting 1183
    C.2 Probability 1189
    C.3 Discrete random variables 1196
    C.4 The geometric and binomial distributions 1201
  ★ C.5 The tails of the binomial distribution 1208
  D Matrices 1217
    D.1 Matrices and matrix operations 1217
    D.2 Basic matrix properties 1222

Bibliography 1231
Index 1251
Preface

Before there were computers, there were algorithms. But now that there are computers, there are even more algorithms, and algorithms lie at the heart of computing.

This book provides a comprehensive introduction to the modern study of computer algorithms. It presents many algorithms and covers them in considerable depth, yet makes their design and analysis accessible to all levels of readers. We have tried to keep explanations elementary without sacrificing depth of coverage or mathematical rigor.

Each chapter presents an algorithm, a design technique, an application area, or a related topic. Algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The book contains 244 figures—many with multiple parts—illustrating how the algorithms work. Since we emphasize efficiency as a design criterion, we include careful analyses of the running times of all our algorithms.

The text is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Because it discusses engineering issues in algorithm design, as well as mathematical aspects, it is equally well suited for self-study by technical professionals.

In this, the third edition, we have once again updated the entire book. The changes cover a broad spectrum, including new chapters, revised pseudocode, and a more active writing style.

To the teacher

We have designed this book to be both versatile and complete. You should find it useful for a variety of courses, from an undergraduate course in data structures up through a graduate course in algorithms. Because we have provided considerably more material than can fit in a typical one-term course, you can consider this book to be a "buffet" or "smorgasbord" from which you can pick and choose the material that best supports the course you wish to teach.
You should find it easy to organize your course around just the chapters you need. We have made chapters relatively self-contained, so that you need not worry about an unexpected and unnecessary dependence of one chapter on another. Each chapter presents the easier material first and the more difficult material later, with section boundaries marking natural stopping points. In an undergraduate course, you might use only the earlier sections from a chapter; in a graduate course, you might cover the entire chapter.

We have included 957 exercises and 158 problems. Each section ends with exercises, and each chapter ends with problems. The exercises are generally short questions that test basic mastery of the material. Some are simple self-check thought exercises, whereas others are more substantial and are suitable as assigned homework. The problems are more elaborate case studies that often introduce new material; they often consist of several questions that lead the student through the steps required to arrive at a solution.

Departing from our practice in previous editions of this book, we have made publicly available solutions to some, but by no means all, of the problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to these solutions. You will want to check this site to make sure that it does not contain the solution to an exercise or problem that you plan to assign. We expect the set of solutions that we post to grow slowly over time, so you will need to check it each time you teach the course.

We have starred (★) the sections and exercises that are more suitable for graduate students than for undergraduates. A starred section is not necessarily more difficult than an unstarred one, but it may require an understanding of more advanced mathematics. Likewise, starred exercises may require an advanced background or more than average creativity.

To the student

We hope that this textbook provides you with an enjoyable introduction to the field of algorithms. We have attempted to make every algorithm accessible and interesting. To help you when you encounter unfamiliar or difficult algorithms, we describe each one in a step-by-step manner. We also provide careful explanations of the mathematics needed to understand the analysis of the algorithms. If you already have some familiarity with a topic, you will find the chapters organized so that you can skim introductory sections and proceed quickly to the more advanced material.

This is a large book, and your class will probably cover only a portion of its material. We have tried, however, to make this a book that will be useful to you now as a course textbook and also later in your career as a mathematical desk reference or an engineering handbook.
What are the prerequisites for reading this book?

• You should have some programming experience. In particular, you should understand recursive procedures and simple data structures such as arrays and linked lists.

• You should have some facility with mathematical proofs, and especially proofs by mathematical induction. A few portions of the book rely on some knowledge of elementary calculus. Beyond that, Parts I and VIII of this book teach you all the mathematical techniques you will need.

We have heard, loud and clear, the call to supply solutions to problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for a few of the problems and exercises. Feel free to check your solutions against ours. We ask, however, that you do not send your solutions to us.

To the professional

The wide range of topics in this book makes it an excellent handbook on algorithms. Because each chapter is relatively self-contained, you can focus in on the topics that most interest you.

Most of the algorithms we discuss have great practical utility. We therefore address implementation concerns and other engineering issues. We often provide practical alternatives to the few algorithms that are primarily of theoretical interest.

If you wish to implement any of the algorithms, you should find the translation of our pseudocode into your favorite programming language to be a fairly straightforward task. We have designed the pseudocode to present each algorithm clearly and succinctly. Consequently, we do not address error-handling and other software-engineering issues that require specific assumptions about your programming environment. We attempt to present each algorithm simply and directly without allowing the idiosyncrasies of a particular programming language to obscure its essence.

We understand that if you are using this book outside of a course, then you might be unable to check your solutions to problems and exercises against solutions provided by an instructor. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for some of the problems and exercises so that you can check your work. Please do not send your solutions to us.

To our colleagues

We have supplied an extensive bibliography and pointers to the current literature. Each chapter ends with a set of chapter notes that give historical details and references. The chapter notes do not provide a complete reference to the whole field
of algorithms, however. Though it may be hard to believe for a book of this size, space constraints prevented us from including many interesting algorithms.

Despite myriad requests from students for solutions to problems and exercises, we have chosen as a matter of policy not to supply references for problems and exercises, to remove the temptation for students to look up a solution rather than to find it themselves.

Changes for the third edition

What has changed between the second and third editions of this book? The magnitude of the changes is on a par with the changes between the first and second editions. As we said about the second-edition changes, depending on how you look at it, the book changed either not much or quite a bit.

A quick look at the table of contents shows that most of the second-edition chapters and sections appear in the third edition. We removed two chapters and one section, but we have added three new chapters and two new sections apart from these new chapters.

We kept the hybrid organization from the first two editions. Rather than organizing chapters by only problem domains or according only to techniques, this book has elements of both. It contains technique-based chapters on divide-and-conquer, dynamic programming, greedy algorithms, amortized analysis, NP-completeness, and approximation algorithms. But it also has entire parts on sorting, on data structures for dynamic sets, and on algorithms for graph problems. We find that although you need to know how to apply techniques for designing and analyzing algorithms, problems seldom announce to you which techniques are most amenable to solving them.

Here is a summary of the most significant changes for the third edition:

• We added new chapters on van Emde Boas trees and multithreaded algorithms, and we have broken out material on matrix basics into its own appendix chapter.

• We revised the chapter on recurrences to more broadly cover the divide-and-conquer technique, and its first two sections apply divide-and-conquer to solve two problems. The second section of this chapter presents Strassen's algorithm for matrix multiplication, which we have moved from the chapter on matrix operations.

• We removed two chapters that were rarely taught: binomial heaps and sorting networks. One key idea in the sorting networks chapter, the 0-1 principle, appears in this edition within Problem 8-7 as the 0-1 sorting lemma for compare-exchange algorithms. The treatment of Fibonacci heaps no longer relies on binomial heaps as a precursor.
• We revised our treatment of dynamic programming and greedy algorithms. Dynamic programming now leads off with a more interesting problem, rod cutting, than the assembly-line scheduling problem from the second edition. Furthermore, we emphasize memoization a bit more than we did in the second edition, and we introduce the notion of the subproblem graph as a way to understand the running time of a dynamic-programming algorithm. In our opening example of greedy algorithms, the activity-selection problem, we get to the greedy algorithm more directly than we did in the second edition.

• The way we delete a node from binary search trees (which includes red-black trees) now guarantees that the node requested for deletion is the node that is actually deleted. In the first two editions, in certain cases, some other node would be deleted, with its contents moving into the node passed to the deletion procedure. With our new way to delete nodes, if other components of a program maintain pointers to nodes in the tree, they will not mistakenly end up with stale pointers to nodes that have been deleted.

• The material on flow networks now bases flows entirely on edges. This approach is more intuitive than the net flow used in the first two editions.

• With the material on matrix basics and Strassen's algorithm moved to other chapters, the chapter on matrix operations is smaller than in the second edition.

• We have modified our treatment of the Knuth-Morris-Pratt string-matching algorithm.

• We corrected several errors. Most of these errors were posted on our Web site of second-edition errata, but a few were not.

• Based on many requests, we changed the syntax (as it were) of our pseudocode. We now use "=" to indicate assignment and "==" to test for equality, just as C, C++, Java, and Python do. Likewise, we have eliminated the keywords do and then and adopted "//" as our comment-to-end-of-line symbol. We also now use dot-notation to indicate object attributes. Our pseudocode remains procedural, rather than object-oriented. In other words, rather than running methods on objects, we simply call procedures, passing objects as parameters.

• We added 100 new exercises and 28 new problems. We also updated many bibliography entries and added several new ones.

• Finally, we went through the entire book and rewrote sentences, paragraphs, and sections to make the writing clearer and more active.
Web site

You can use our Web site, http://mitpress.mit.edu/algorithms/, to obtain supplementary information and to communicate with us. The Web site links to a list of known errors, solutions to selected exercises and problems, and (of course) a list explaining the corny professor jokes, as well as other content that we might add. The Web site also tells you how to report errors or make suggestions.

How we produced this book

Like the second edition, the third edition was produced in LaTeX 2e. We used the Times font with mathematics typeset using the MathTime Pro 2 fonts. We thank Michael Spivak from Publish or Perish, Inc., Lance Carnes from Personal TeX, Inc., and Tim Tregubov from Dartmouth College for technical support. As in the previous two editions, we compiled the index using Windex, a C program that we wrote, and the bibliography was produced with BibTeX. The PDF files for this book were created on a MacBook running OS 10.5.

We drew the illustrations for the third edition using MacDraw Pro, with some of the mathematical expressions in illustrations laid in with the psfrag package for LaTeX 2e. Unfortunately, MacDraw Pro is legacy software, having not been marketed for over a decade now. Happily, we still have a couple of Macintoshes that can run the Classic environment under OS 10.4, and hence they can run MacDraw Pro—mostly. Even under the Classic environment, we find MacDraw Pro to be far easier to use than any other drawing software for the types of illustrations that accompany computer-science text, and it produces beautiful output.¹ Who knows how long our pre-Intel Macs will continue to run, so if anyone from Apple is listening: Please create an OS X-compatible version of MacDraw Pro!

¹ We investigated several drawing programs that run under Mac OS X, but all had significant shortcomings compared with MacDraw Pro. We briefly attempted to produce the illustrations for this book with a different, well known drawing program. We found that it took at least five times as long to produce each illustration as it took with MacDraw Pro, and the resulting illustrations did not look as good. Hence the decision to revert to MacDraw Pro running on older Macintoshes.

Acknowledgments for the third edition

We have been working with the MIT Press for over two decades now, and what a terrific relationship it has been! We thank Ellen Faran, Bob Prior, Ada Brunstein, and Mary Reilly for their help and support.

We were geographically distributed while producing the third edition, working in the Dartmouth College Department of Computer Science, the MIT Computer
Science and Artificial Intelligence Laboratory, and the Columbia University Department of Industrial Engineering and Operations Research. We thank our respective universities and colleagues for providing such supportive and stimulating environments.

Julie Sussman, P.P.A., once again bailed us out as the technical copyeditor. Time and again, we were amazed at the errors that eluded us, but that Julie caught. She also helped us improve our presentation in several places. If there is a Hall of Fame for technical copyeditors, Julie is a sure-fire, first-ballot inductee. She is nothing short of phenomenal. Thank you, thank you, thank you, Julie! Priya Natarajan also found some errors that we were able to correct before this book went to press. Any errors that remain (and undoubtedly, some do) are the responsibility of the authors (and probably were inserted after Julie read the material).

The treatment for van Emde Boas trees derives from Erik Demaine's notes, which were in turn influenced by Michael Bender. We also incorporated ideas from Javed Aslam, Bradley Kuszmaul, and Hui Zha into this edition.

The chapter on multithreading was based on notes originally written jointly with Harald Prokop. The material was influenced by several others working on the Cilk project at MIT, including Bradley Kuszmaul and Matteo Frigo. The design of the multithreaded pseudocode took its inspiration from the MIT Cilk extensions to C and by Cilk Arts's Cilk++ extensions to C++.

We also thank the many readers of the first and second editions who reported errors or submitted suggestions for how to improve this book. We corrected all the bona fide errors that were reported, and we incorporated as many suggestions as we could. We rejoice that the number of such contributors has grown so great that we must regret that it has become impractical to list them all.

Finally, we thank our wives—Nicole Cormen, Wendy Leiserson, Gail Rivest, and Rebecca Ivry—and our children—Ricky, Will, Debby, and Katie Leiserson; Alex and Christopher Rivest; and Molly, Noah, and Benjamin Stein—for their love and support while we prepared this book. The patience and encouragement of our families made this project possible. We affectionately dedicate this book to them.

Thomas H. Cormen        Lebanon, New Hampshire
Charles E. Leiserson    Cambridge, Massachusetts
Ronald L. Rivest        Cambridge, Massachusetts
Clifford Stein          New York, New York

February 2009
I Foundations

Introduction

This part will start you thinking about designing and analyzing algorithms. It is intended to be a gentle introduction to how we specify algorithms, some of the design strategies we will use throughout this book, and many of the fundamental ideas used in algorithm analysis. Later parts of this book will build upon this base.

Chapter 1 provides an overview of algorithms and their place in modern computing systems. This chapter defines what an algorithm is and lists some examples. It also makes a case that we should consider algorithms as a technology, alongside technologies such as fast hardware, graphical user interfaces, object-oriented systems, and networks.

In Chapter 2, we see our first algorithms, which solve the problem of sorting a sequence of n numbers. They are written in a pseudocode which, although not directly translatable to any conventional programming language, conveys the structure of the algorithm clearly enough that you should be able to implement it in the language of your choice. The sorting algorithms we examine are insertion sort, which uses an incremental approach, and merge sort, which uses a recursive technique known as "divide-and-conquer." Although the time each requires increases with the value of n, the rate of increase differs between the two algorithms. We determine these running times in Chapter 2, and we develop a useful notation to express them.

Chapter 3 precisely defines this notation, which we call asymptotic notation. It starts by defining several asymptotic notations, which we use for bounding algorithm running times from above and/or below. The rest of Chapter 3 is primarily a presentation of mathematical notation, more to ensure that your use of notation matches that in this book than to teach you new mathematical concepts.
Chapter 4 delves further into the divide-and-conquer method introduced in Chapter 2. It provides additional examples of divide-and-conquer algorithms, including Strassen's surprising method for multiplying two square matrices. Chapter 4 contains methods for solving recurrences, which are useful for describing the running times of recursive algorithms. One powerful technique is the "master method," which we often use to solve recurrences that arise from divide-and-conquer algorithms. Although much of Chapter 4 is devoted to proving the correctness of the master method, you may skip this proof yet still employ the master method.

Chapter 5 introduces probabilistic analysis and randomized algorithms. We typically use probabilistic analysis to determine the running time of an algorithm in cases in which, due to the presence of an inherent probability distribution, the running time may differ on different inputs of the same size. In some cases, we assume that the inputs conform to a known probability distribution, so that we are averaging the running time over all possible inputs. In other cases, the probability distribution comes not from the inputs but from random choices made during the course of the algorithm. An algorithm whose behavior is determined not only by its input but by the values produced by a random-number generator is a randomized algorithm. We can use randomized algorithms to enforce a probability distribution on the inputs—thereby ensuring that no particular input always causes poor performance—or even to bound the error rate of algorithms that are allowed to produce incorrect results on a limited basis.

Appendices A–D contain other mathematical material that you will find helpful as you read this book. You are likely to have seen much of the material in the appendix chapters before having read this book (although the specific definitions and notational conventions we use may differ in some cases from what you have seen in the past), and so you should think of the Appendices as reference material. On the other hand, you probably have not already seen most of the material in Part I. All the chapters in Part I and the Appendices are written with a tutorial flavor.
1 The Role of Algorithms in Computing

What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers? In this chapter, we will answer these questions.

1.1 Algorithms

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.

We can also view an algorithm as a tool for solving a well-specified computational problem. The statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.

For example, we might need to sort a sequence of numbers into nondecreasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

Input: A sequence of n numbers ⟨a_1, a_2, ..., a_n⟩.

Output: A permutation (reordering) ⟨a'_1, a'_2, ..., a'_n⟩ of the input sequence such that a'_1 ≤ a'_2 ≤ ... ≤ a'_n.

For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.
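To see the input/output contract in executable form, here is a minimal sketch in Python (a language the book itself names when describing its pseudocode). The function name and structure are our own illustration, not part of the book's text; only the example instance comes from the paragraph above.

    from collections import Counter

    def is_sorting_output(inp, out):
        """Check the sorting problem's output condition: `out` must be a
        permutation of `inp` arranged in nondecreasing order."""
        is_permutation = Counter(inp) == Counter(out)
        is_nondecreasing = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
        return is_permutation and is_nondecreasing

    # The instance from the text:
    print(is_sorting_output([31, 41, 59, 26, 41, 58], [26, 31, 41, 41, 58, 59]))  # True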
Because many programs use it as an intermediate step, sorting is a fundamental operation in computer science. As a result, we have a large number of good sorting algorithms at our disposal. Which algorithm is best for a given application depends on—among other factors—the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, the architecture of the computer, and the kind of storage devices to be used: main memory, disks, or even tapes.

An algorithm is said to be correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an incorrect answer. Contrary to what you might expect, incorrect algorithms can sometimes be useful, if we can control their error rate. We shall see an example of an algorithm with a controllable error rate in Chapter 31 when we study algorithms for finding large prime numbers. Ordinarily, however, we shall be concerned only with correct algorithms.

An algorithm can be specified in English, as a computer program, or even as a hardware design. The only requirement is that the specification must provide a precise description of the computational procedure to be followed.

What kinds of problems are solved by algorithms?

Sorting is by no means the only computational problem for which algorithms have been developed. (You probably suspected as much when you saw the size of this book.) Practical applications of algorithms are ubiquitous and include the following examples:

• The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. Although the solutions to the various problems involved are beyond the scope of this book, many methods to solve these biological problems use ideas from several of the chapters in this book, thereby enabling scientists to accomplish tasks while using resources efficiently. The savings are in time, both human and machine, and in money, as more information can be extracted from laboratory techniques.

• The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel (techniques for solving such problems appear in
Chapter 24), and using a search engine to quickly find pages on which particular information resides (related techniques are in Chapters 11 and 32).

• Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures (covered in Chapter 31), which are based on numerical algorithms and number theory.

• Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming, which we shall study in Chapter 29.

Although some of the details of these examples are beyond the scope of this book, we do give underlying techniques that apply to these problems and problem areas. We also show how to solve many specific problems, including the following:

• We are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest? Here, we model the road map (which is itself a model of the actual roads) as a graph (which we will meet in Part VI and Appendix B), and we wish to find the shortest path from one vertex to another in the graph. We shall see how to solve this problem efficiently in Chapter 24.

• We are given two ordered sequences of symbols, X = ⟨x_1, x_2, ..., x_m⟩ and Y = ⟨y_1, y_2, ..., y_n⟩, and we wish to find a longest common subsequence of X and Y. A subsequence of X is just X with some (or possibly all or none) of its elements removed. For example, one subsequence of ⟨A, B, C, D, E, F, G⟩ would be ⟨B, C, E, G⟩. The length of a longest common subsequence of X and Y gives one measure of how similar these two sequences are. For example, if the two sequences are base pairs in DNA strands, then we might consider them similar if they have a long common subsequence. If X has m symbols and Y has n symbols, then X and Y have 2^m and 2^n possible subsequences,
respectively. Selecting all possible subsequences of X and Y and matching them up could take a prohibitively long time unless m and n are very small. We shall see in Chapter 15 how to use a general technique known as dynamic programming to solve this problem much more efficiently.

• We are given a mechanical design in terms of a library of parts, where each part may include instances of other parts, and we need to list the parts in order so that each part appears before any part that uses it. If the design comprises n parts, then there are n! possible orders, where n! denotes the factorial function. Because the factorial function grows faster than even an exponential function, we cannot feasibly generate each possible order and then verify that, within that order, each part appears before the parts using it (unless we have only a few parts). This problem is an instance of topological sorting, and we shall see in Chapter 22 how to solve this problem efficiently.

• We are given n points in the plane, and we wish to find the convex hull of these points. The convex hull is the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board. The convex hull would be represented by a tight rubber band that surrounds all the nails. Each nail around which the rubber band makes a turn is a vertex of the convex hull. (See Figure 33.6 on page 1029 for an example.) Any of the 2^n subsets of the points might be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull. Chapter 33 gives two good methods for finding the convex hull.

These lists are far from exhaustive (as you again have probably surmised from this book's heft), but exhibit two characteristics that are common to many interesting algorithmic problems:

1. They have many candidate solutions, the overwhelming majority of which do not solve the problem at hand. Finding one that does, or one that is "best," can present quite a challenge.

2. They have practical applications. Of the problems in the above list, finding the shortest path provides the easiest examples. A transportation firm, such as a trucking or railroad company, has a financial interest in finding shortest paths through a road or rail network because taking shorter paths results in lower labor and fuel costs. Or a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly. Or a person wishing to drive from New York to Boston may want to find driving directions from an appropriate Web site, or she may use her GPS while driving.
Not every problem solved by algorithms has an easily identified set of candidate solutions. For example, suppose we are given a set of numerical values representing samples of a signal, and we want to compute the discrete Fourier transform of these samples. The discrete Fourier transform converts the time domain to the frequency domain, producing a set of numerical coefficients, so that we can determine the strength of various frequencies in the sampled signal. In addition to lying at the heart of signal processing, discrete Fourier transforms have applications in data compression and multiplying large polynomials and integers. Chapter 30 gives an efficient algorithm, the fast Fourier transform (commonly called the FFT), for this problem, and the chapter also sketches out the design of a hardware circuit to compute the FFT.

Data structures

This book also contains several data structures. A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them.

Technique

Although you can use this book as a "cookbook" for algorithms, you may someday encounter a problem for which you cannot readily find a published algorithm (many of the exercises and problems in this book, for example). This book will teach you techniques of algorithm design and analysis so that you can develop algorithms on your own, show that they give the correct answer, and understand their efficiency. Different chapters address different aspects of algorithmic problem solving. Some chapters address specific problems, such as finding medians and order statistics in Chapter 9, computing minimum spanning trees in Chapter 23, and determining a maximum flow in a network in Chapter 26. Other chapters address techniques, such as divide-and-conquer in Chapter 4, dynamic programming in Chapter 15, and amortized analysis in Chapter 17.

Hard problems

Most of this book is about efficient algorithms. Our usual measure of efficiency is speed, i.e., how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. Chapter 34 studies an interesting subset of these problems, which are known as NP-complete.

Why are NP-complete problems interesting? First, although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proven
that an efficient algorithm for one cannot exist. In other words, no one knows whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them. This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do know of efficient algorithms. Computer scientists are intrigued by how a small change to the problem statement can cause a big change to the efficiency of the best known algorithm.

You should know about NP-complete problems because some of them arise surprisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If you can show that the problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution.

As a concrete example, consider a delivery company with a central depot. Each day, it loads up each delivery truck at the depot and sends it around to deliver goods to several addresses. At the end of the day, each truck must end up back at the depot so that it is ready to be loaded for the next day. To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by each truck. This problem is the well-known "traveling-salesman problem," and it is NP-complete. It has no known efficient algorithm. Under certain assumptions, however, we know of efficient algorithms that give an overall distance which is not too far above the smallest possible. Chapter 35 discusses such "approximation algorithms."

Parallelism

For many years, we could count on processor clock speeds increasing at a steady rate. Physical limitations present a fundamental roadblock to ever-increasing clock speeds, however: because power density increases superlinearly with clock speed, chips run the risk of melting once their clock speeds become high enough. In order to perform more computations per second, therefore, chips are being designed to contain not just one but several processing "cores." We can liken these multicore computers to several sequential computers on a single chip; in other words, they are a type of "parallel computer." In order to elicit the best performance from multicore computers, we need to design algorithms with parallelism in mind. Chapter 27 presents a model for "multithreaded" algorithms, which take advantage of multiple cores. This model has advantages from a theoretical standpoint, and it forms the basis of several successful computer programs, including a championship chess program.
Exercises

1.1-1
Give a real-world example that requires sorting or a real-world example that requires computing a convex hull.

1.1-2
Other than speed, what other measures of efficiency might one use in a real-world setting?

1.1-3
Select a data structure that you have seen previously, and discuss its strengths and limitations.

1.1-4
How are the shortest-path and traveling-salesman problems given above similar? How are they different?

1.1-5
Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough.

1.2 Algorithms as a technology

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is yes, if for no other reason than that you would still like to demonstrate that your solution method terminates and does so with the correct answer.

If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice (for example, your implementation should be well designed and documented), but you would most often use whichever method was the easiest to implement.

Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.
Efficiency

Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.

As an example, in Chapter 2, we will see two algorithms for sorting. The first, known as insertion sort, takes time roughly equal to c_1 n^2 to sort n items, where c_1 is a constant that does not depend on n. That is, it takes time roughly proportional to n^2. The second, merge sort, takes time roughly equal to c_2 n lg n, where lg n stands for log_2 n and c_2 is another constant that also does not depend on n. Insertion sort typically has a smaller constant factor than merge sort, so that c_1 < c_2. We shall see that the constant factors can have far less of an impact on the running time than the dependence on the input size n. Let's write insertion sort's running time as c_1 n · n and merge sort's running time as c_2 n · lg n. Then we see that where insertion sort has a factor of n in its running time, merge sort has a factor of lg n, which is much smaller. (For example, when n = 1000, lg n is approximately 10, and when n equals one million, lg n is approximately only 20.) Although insertion sort usually runs faster than merge sort for small input sizes, once the input size n becomes large enough, merge sort's advantage of lg n vs. n will more than compensate for the difference in constant factors. No matter how much smaller c_1 is than c_2, there will always be a crossover point beyond which merge sort is faster.

For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. They each must sort an array of 10 million numbers. (Although 10 million numbers might seem like a lot, if the numbers are eight-byte integers, then the input occupies about 80 megabytes, which fits in the memory of even an inexpensive laptop computer many times over.) Suppose that computer A executes 10 billion instructions per second (faster than any single sequential computer at the time of this writing) and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers. Suppose further that just an average programmer implements merge sort, using a high-level language with an inefficient compiler, with the resulting code taking 50 n lg n instructions. To sort 10 million numbers, computer A takes

    2 · (10^7)^2 instructions / (10^10 instructions/second) = 20,000 seconds (more than 5.5 hours),

while computer B takes

    50 · 10^7 · lg 10^7 instructions / (10^7 instructions/second) ≈ 1163 seconds (less than 20 minutes).

By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs more than 17 times faster than computer A! The advantage of merge sort is even more pronounced when we sort 100 million numbers: where insertion sort takes more than 23 days, merge sort takes under four hours. In general, as the problem size increases, so does the relative advantage of merge sort.
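These figures are easy to reproduce. Here is a minimal sketch in Python (our own illustration; the instruction counts and machine speeds are exactly those assumed in the text above):

    import math

    INSTR_PER_SEC_A = 10**10   # computer A: 10 billion instructions/second
    INSTR_PER_SEC_B = 10**7    # computer B: 10 million instructions/second

    def insertion_sort_seconds_on_A(n):
        return 2 * n**2 / INSTR_PER_SEC_A               # 2n^2 instructions on A

    def merge_sort_seconds_on_B(n):
        return 50 * n * math.log2(n) / INSTR_PER_SEC_B  # 50 n lg n instructions on B

    n = 10**7
    print(insertion_sort_seconds_on_A(n))             # 20000.0 seconds (more than 5.5 hours)
    print(merge_sort_seconds_on_B(n))                 # about 1163 seconds (less than 20 minutes)
    print(insertion_sort_seconds_on_A(10**8) / 86400) # more than 23 days
    print(merge_sort_seconds_on_B(10**8) / 3600)      # under four hours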
Algorithms and other technologies

The example above shows that we should consider algorithms, like computer hardware, as a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well.

You might wonder whether algorithms are truly that important on contemporary computers in light of other advanced technologies, such as

• advanced computer architectures and fabrication technologies,
• easy-to-use, intuitive, graphical user interfaces (GUIs),
• object-oriented systems,
• integrated Web technologies, and
• fast networking, both wired and wireless.

The answer is yes. Although some applications do not explicitly require algorithmic content at the application level (such as some simple, Web-based applications), many do. For example, consider a Web-based service that determines how to travel from one location to another. Its implementation would rely on fast hardware, a graphical user interface, wide-area networking, and also possibly on object orientation. However, it would also require algorithms for certain operations, such as finding routes (probably using a shortest-path algorithm), rendering maps, and interpolating addresses.

Moreover, even an application that does not require algorithmic content at the application level relies heavily upon algorithms. Does the application rely on fast hardware? The hardware design used algorithms. Does the application rely on graphical user interfaces? The design of any GUI relies on algorithms. Does the application rely on networking? Routing in networks relies heavily on algorithms. Was the application written in a language other than machine code? Then it was processed by a compiler, interpreter, or assembler, all of which make extensive use
of algorithms. Algorithms are at the core of most technologies used in contemporary computers.

Furthermore, with the ever-increasing capacities of computers, we use them to solve larger problems than ever before. As we saw in the above comparison between insertion sort and merge sort, it is at larger problem sizes that the differences in efficiency between algorithms become particularly prominent.

Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.

Exercises

1.2-1
Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved.

1.2-2
Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64 n lg n steps. For which values of n does insertion sort beat merge sort?

1.2-3
What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?

Problems

1-1  Comparison of running times
For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds.

              1 second   1 minute   1 hour   1 day   1 month   1 year   1 century
    lg n
    √n
    n
    n lg n
    n^2
    n^3
    2^n
    n!
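Filling in such a table is mostly a matter of inverting each f. One way to do so numerically (a sketch of our own, not part of the book's text, assuming only that each f is increasing) is exponential search for an upper bound followed by binary search:

    import math

    def largest_n(f, t_micros):
        """Largest integer n >= 1 with f(n) <= t_micros, assuming f is increasing."""
        if f(1) > t_micros:
            return 0
        hi = 1
        while f(2 * hi) <= t_micros:    # exponential search for an upper bound
            hi *= 2
        lo, hi = hi, 2 * hi             # now f(lo) <= t_micros < f(hi)
        while hi - lo > 1:              # binary search between the bounds
            mid = (lo + hi) // 2
            if f(mid) <= t_micros:
                lo = mid
            else:
                hi = mid
        return lo

    # For instance, with f(n) = n^2 and t = 1 second = 10^6 microseconds:
    print(largest_n(lambda n: n * n, 10**6))    # 1000
    print(largest_n(math.factorial, 10**6))     # 9, the largest n with n! <= 10^6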
Chapter notes

There are many excellent texts on the general topic of algorithms, including those by Aho, Hopcroft, and Ullman [5, 6]; Baase and Van Gelder; Brassard and Bratley; Dasgupta, Papadimitriou, and Vazirani; Goodrich and Tamassia; Hofri; Horowitz, Sahni, and Rajasekaran; Johnsonbaugh and Schaefer; Kingston; Kleinberg and Tardos; Knuth [209, 210, 211]; Kozen; Levitin; Manber; Mehlhorn [249, 250, 251]; Purdom and Brown; Reingold, Nievergelt, and Deo; Sedgewick; Sedgewick and Flajolet; Skiena; and Wilf. Some of the more practical aspects of algorithm design are discussed by Bentley [42, 43] and Gonnet. Surveys of the field of algorithms can also be found in the Handbook of Theoretical Computer Science, Volume A and the CRC Algorithms and Theory of Computation Handbook. Overviews of the algorithms used in computational biology can be found in textbooks by Gusfield, Pevzner, Setubal and Meidanis, and Waterman.
2 Getting Started

This chapter will familiarize you with the framework we shall use throughout the book to think about the design and analysis of algorithms. It is self-contained, but it does include several references to material that we introduce in Chapters 3 and 4. (It also contains several summations, which Appendix A shows how to solve.)

We begin by examining the insertion sort algorithm to solve the sorting problem introduced in Chapter 1. We define a "pseudocode" that should be familiar to you if you have done computer programming, and we use it to show how we shall specify our algorithms. Having specified the insertion sort algorithm, we then argue that it correctly sorts, and we analyze its running time. The analysis introduces a notation that focuses on how that time increases with the number of items to be sorted. Following our discussion of insertion sort, we introduce the divide-and-conquer approach to the design of algorithms and use it to develop an algorithm called merge sort. We end with an analysis of merge sort's running time.

2.1 Insertion sort

Our first algorithm, insertion sort, solves the sorting problem introduced in Chapter 1:

Input: A sequence of n numbers ⟨a_1, a_2, ..., a_n⟩.

Output: A permutation (reordering) ⟨a'_1, a'_2, ..., a'_n⟩ of the input sequence such that a'_1 ≤ a'_2 ≤ ... ≤ a'_n.

The numbers that we wish to sort are also known as the keys. Although conceptually we are sorting a sequence, the input comes to us in the form of an array with n elements.

In this book, we shall typically describe algorithms as programs written in a pseudocode that is similar in many respects to C, C++, Java, Python, or Pascal. If you have been introduced to any of these languages, you should have little trouble
2.1 Insertion sort 17 ♣♣ ♣ 7 ♣ ♣♣ 4 5♣ ♣ ♣♣♣ ♣ ♣ ♣ 10 ♣ ♣♣ ♣ ♣ ♣♣ ♣ 2 ♣ 7 ♣♣♣♣ ♣4 ♣ ♣ ♣ ♣2 ♣ ♣ ♣♣ ♣ 5 0 1Figure 2.1 Sorting a hand of cards using insertion sort.reading our algorithms. What separates pseudocode from “real” code is that inpseudocode, we employ whatever expressive method is most clear and concise tospecify a given algorithm. Sometimes, the clearest method is English, so do notbe surprised if you come across an English phrase or sentence embedded withina section of “real” code. Another difference between pseudocode and real codeis that pseudocode is not typically concerned with issues of software engineering.Issues of data abstraction, modularity, and error handling are often ignored in orderto convey the essence of the algorithm more concisely. We start with insertion sort, which is an efﬁcient algorithm for sorting a smallnumber of elements. Insertion sort works the way many people sort a hand ofplaying cards. We start with an empty left hand and the cards face down on thetable. We then remove one card at a time from the table and insert it into thecorrect position in the left hand. To ﬁnd the correct position for a card, we compareit with each of the cards already in the hand, from right to left, as illustrated inFigure 2.1. At all times, the cards held in the left hand are sorted, and these cardswere originally the top cards of the pile on the table. We present our pseudocode for insertion sort as a procedure called I NSERTION -S ORT, which takes as a parameter an array AŒ1 : : n containing a sequence oflength n that is to be sorted. (In the code, the number n of elements in A is denotedby A:length.) The algorithm sorts the input numbers in place: it rearranges thenumbers within the array A, with at most a constant number of them stored outsidethe array at any time. The input array A contains the sorted output sequence whenthe I NSERTION -S ORT procedure is ﬁnished.
[Figure 2.2 The operation of INSERTION-SORT on the array A = ⟨5, 2, 4, 6, 1, 3⟩. Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles. (a)–(e) The iterations of the for loop of lines 1–8. In each iteration, the black rectangle holds the key taken from A[j], which is compared with the values in shaded rectangles to its left in the test of line 5. Shaded arrows show array values moved one position to the right in line 6, and black arrows indicate where the key moves to in line 8. (f) The final sorted array.]

INSERTION-SORT(A)
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1..j−1].
4      i = j − 1
5      while i > 0 and A[i] > key
6          A[i+1] = A[i]
7          i = i − 1
8      A[i+1] = key
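For readers who want to experiment, the pseudocode translates almost line for line into Python. The sketch below is our rendering, not part of the text; Python lists are 0-based, so j runs over 1..len(A)−1 instead of 2..A.length.

    def insertion_sort(A):
        """Sort list A in place into nondecreasing order."""
        for j in range(1, len(A)):
            key = A[j]
            # Insert A[j] into the sorted sequence A[0..j-1].
            i = j - 1
            while i >= 0 and A[i] > key:
                A[i + 1] = A[i]   # shift larger elements one position right
                i = i - 1
            A[i + 1] = key

    A = [5, 2, 4, 6, 1, 3]
    insertion_sort(A)
    print(A)  # [1, 2, 3, 4, 5, 6]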
Loop invariants and the correctness of insertion sort

Figure 2.2 shows how this algorithm works for A = ⟨5, 2, 4, 6, 1, 3⟩. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1..j−1] constitutes the currently sorted hand, and the remaining subarray A[j+1..n] corresponds to the pile of cards still on the table. In fact, elements A[1..j−1] are the elements originally in positions 1 through j − 1, but now in sorted order. We state these properties of A[1..j−1] formally as a loop invariant:

At the start of each iteration of the for loop of lines 1–8, the subarray A[1..j−1] consists of the elements originally in A[1..j−1], but in sorted order.

We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:

Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

When the first two properties hold, the loop invariant is true prior to every iteration of the loop. (Of course, we are free to use established facts other than the loop invariant itself to prove that the loop invariant remains true before each iteration.) Note the similarity to mathematical induction, where to prove that a property holds, you prove a base case and an inductive step. Here, showing that the invariant holds before the first iteration corresponds to the base case, and showing that the invariant holds from iteration to iteration corresponds to the inductive step.

The third property is perhaps the most important one, since we are using the loop invariant to show correctness. Typically, we use the loop invariant along with the condition that caused the loop to terminate. The termination property differs from how we usually use mathematical induction, in which we apply the inductive step infinitely; here, we stop the "induction" when the loop terminates.

Let us see how these properties hold for insertion sort.

Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2.¹ The subarray A[1..j−1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted (trivially, of course), which shows that the loop invariant holds prior to the first iteration of the loop.

Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j−1], A[j−2], A[j−3], and so on by one position to the right until it finds the proper position for A[j] (lines 4–7), at which point it inserts the value of A[j] (line 8). The subarray A[1..j] then consists of the elements originally in A[1..j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant.

A more formal treatment of the second property would require us to state and show a loop invariant for the while loop of lines 5–7. At this point, however,

¹ When the loop is a for loop, the moment at which we check the loop invariant just prior to the first iteration is immediately after the initial assignment to the loop-counter variable and just before the first test in the loop header. In the case of INSERTION-SORT, this time is after assigning 2 to the variable j but before the first test of whether j ≤ A.length.
we prefer not to get bogged down in such formalism, and so we rely on our informal analysis to show that the second property holds for the outer loop.

Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. Observing that the subarray A[1..n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.

We shall use this method of loop invariants to show correctness later in this chapter and in other chapters as well.

Pseudocode conventions

We use the following conventions in our pseudocode.

Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2–8, and the body of the while loop that begins on line 5 contains lines 6–7 but not line 8. Our indentation style applies to if-else statements² as well. Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity.³

The looping constructs while, for, and repeat-until and the if-else conditional construct have interpretations similar to those in C, C++, Java, Python, and Pascal.⁴ In this book, the loop counter retains its value after exiting the loop, unlike some situations that arise in C++, Java, and Pascal. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound. We used this property in our correctness argument for insertion sort. The for loop header in line 1 is for j = 2 to A.length, and so when this loop terminates, j = A.length + 1 (or, equivalently, j = n + 1, since n = A.length). We use the keyword to when a for loop increments its loop

² In an if-else statement, we indent else at the same level as its matching if. Although we omit the keyword then, we occasionally refer to the portion executed when the test following if is true as a then clause. For multiway tests, we use elseif for tests after the first one.

³ Each pseudocode procedure in this book appears on one page so that you will not have to discern levels of indentation in code that is split across pages.

⁴ Most block-structured languages have equivalent constructs, though the exact syntax may differ. Python lacks repeat-until loops, and its for loops operate a little differently from the for loops in this book.
counter in each iteration, and we use the keyword downto when a for loop decrements its loop counter. When the loop counter changes by an amount greater than 1, the amount of change follows the optional keyword by.

The symbol "//" indicates that the remainder of the line is a comment.

A multiple assignment of the form i = j = e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j = e followed by the assignment i = j.

Variables (such as i, j, and key) are local to the given procedure. We shall not use global variables without explicit indication.

We access array elements by specifying the array name followed by the index in square brackets. For example, A[i] indicates the ith element of the array A. The notation ".." is used to indicate a range of values within an array. Thus, A[1..j] indicates the subarray of A consisting of the j elements A[1], A[2], …, A[j].

We typically organize compound data into objects, which are composed of attributes. We access a particular attribute using the syntax found in many object-oriented programming languages: the object name, followed by a dot, followed by the attribute name. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write A.length.

We treat a variable representing an array or object as a pointer to the data representing the array or object. For all attributes f of an object x, setting y = x causes y.f to equal x.f. Moreover, if we now set x.f = 3, then afterward not only does x.f equal 3, but y.f equals 3 as well. In other words, x and y point to the same object after the assignment y = x.

Our attribute notation can "cascade." For example, suppose that the attribute f is itself a pointer to some type of object that has an attribute g. Then the notation x.f.g is implicitly parenthesized as (x.f).g. In other words, if we had assigned y = x.f, then x.f.g is the same as y.g.

Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL.

We pass parameters to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure. When objects are passed, the pointer to the data representing the object is copied, but the object's attributes are not. For example, if x is a parameter of a called procedure, the assignment x = y within the called procedure is not visible to the calling procedure. The assignment x.f = 3, however, is visible. Similarly, arrays are passed by pointer, so that a pointer to the array is passed, rather than the entire array, and changes to individual array elements are visible to the calling procedure.
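These pointer and attribute semantics happen to match Python's object model exactly. As an aside (ours, not the book's), the following minimal check illustrates both the aliasing behavior and the parameter-passing rule just described:

    class Obj:
        pass

    x = Obj()
    x.f = 1
    y = x            # y and x now point to the same object
    x.f = 3
    print(y.f)       # 3: the change made through x is visible through y

    def assign_param(x):
        x = Obj()    # rebinding the parameter itself: invisible to the caller

    def mutate_param(x):
        x.f = 7      # mutating the pointed-to object: visible to the caller

    z = Obj(); z.f = 0
    assign_param(z); print(z.f)  # still 0
    mutate_param(z); print(z.f)  # 7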
A return statement immediately transfers control back to the point of call in the calling procedure. Most return statements also take a value to pass back to the caller. Our pseudocode differs from many programming languages in that we allow multiple values to be returned in a single return statement.

The boolean operators "and" and "or" are short circuiting. That is, when we evaluate the expression "x and y" we first evaluate x. If x evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression. Similarly, in the expression "x or y" we evaluate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as "x ≠ NIL and x.f = y" without worrying about what happens when we try to evaluate x.f when x is NIL.

The keyword error indicates that an error occurred because conditions were wrong for the procedure to have been called. The calling procedure is responsible for handling the error, and so we do not specify what action to take.

Exercises

2.1-1
Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.

2.1-2
Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

2.1-3
Consider the searching problem:

Input: A sequence of n numbers A = ⟨a₁, a₂, …, aₙ⟩ and a value v.

Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.

Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

2.1-4
Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in
an (n + 1)-element array C. State the problem formally and write pseudocode for adding the two integers.

2.2 Analyzing algorithms

Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, we can identify a most efficient one. Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process.

Before we can analyze an algorithm, we must have a model of the implementation technology that we will use, including a model for the resources of that technology and their costs. For most of this book, we shall assume a generic one-processor, random-access machine (RAM) model of computation as our implementation technology and understand that our algorithms will be implemented as computer programs. In the RAM model, instructions are executed one after another, with no concurrent operations.

Strictly speaking, we should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts? Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are designed. The RAM model contains instructions commonly found in real computers: arithmetic (such as add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time.

The data types in the RAM model are integer and floating point (for storing real numbers). Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time, which is clearly an unrealistic scenario.)
Real computers contain instructions not listed above, and such instructions represent a gray area in the RAM model. For example, is exponentiation a constant-time instruction? In the general case, no; it takes several instructions to compute xʸ when x and y are real numbers. In restricted situations, however, exponentiation is a constant-time operation. Many computers have a "shift left" instruction, which in constant time shifts the bits of an integer by k positions to the left. In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2, so that shifting the bits by k positions to the left is equivalent to multiplication by 2ᵏ. Therefore, such computers can compute 2ᵏ in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2ᵏ as a constant-time operation when k is a small enough positive integer.

In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory. Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, and so they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines.

Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

Analysis of insertion sort

The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers takes longer than sorting three numbers. Moreover, INSERTION-SORT can take different amounts of time to sort two input sequences of the same size depending on how nearly sorted they already are. In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms "running time" and "size of input" more carefully.
The best notion for input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input, for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algorithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study.

The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed. It is convenient to define the notion of step so that it is as machine-independent as possible. For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the ith line takes time cᵢ, where cᵢ is a constant. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers.⁵

In the following discussion, our expression for the running time of INSERTION-SORT will evolve from a messy formula that uses all the statement costs cᵢ to a much simpler notation that is more concise and more easily manipulated. This simpler notation will also make it easy to determine whether one algorithm is more efficient than another.

We start by presenting the INSERTION-SORT procedure with the time "cost" of each statement and the number of times each statement is executed. For each j = 2, 3, …, n, where n = A.length, we let tⱼ denote the number of times the while loop test in line 5 is executed for that value of j. When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time.

⁵ There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say "sort the points by x-coordinate," which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more. That is, we separate the process of calling the subroutine (passing parameters to it, etc.) from the process of executing the subroutine.
INSERTION-SORT(A)                                              cost   times
1  for j = 2 to A.length                                       c₁     n
2      key = A[j]                                              c₂     n − 1
3      // Insert A[j] into the sorted sequence A[1..j−1].      0      n − 1
4      i = j − 1                                               c₄     n − 1
5      while i > 0 and A[i] > key                              c₅     Σⱼ₌₂ⁿ tⱼ
6          A[i+1] = A[i]                                       c₆     Σⱼ₌₂ⁿ (tⱼ − 1)
7          i = i − 1                                           c₇     Σⱼ₌₂ⁿ (tⱼ − 1)
8      A[i+1] = key                                            c₈     n − 1

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes cᵢ steps to execute and executes n times will contribute cᵢn to the total running time.⁶ To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining

T(n) = c₁n + c₂(n − 1) + c₄(n − 1) + c₅ Σⱼ₌₂ⁿ tⱼ + c₆ Σⱼ₌₂ⁿ (tⱼ − 1) + c₇ Σⱼ₌₂ⁿ (tⱼ − 1) + c₈(n − 1).

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT, the best case occurs if the array is already sorted. For each j = 2, 3, …, n, we then find that A[i] ≤ key in line 5 when i has its initial value of j − 1. Thus tⱼ = 1 for j = 2, 3, …, n, and the best-case running time is

T(n) = c₁n + c₂(n − 1) + c₄(n − 1) + c₅(n − 1) + c₈(n − 1)
     = (c₁ + c₂ + c₄ + c₅ + c₈)n − (c₂ + c₄ + c₅ + c₈).

We can express this running time as an + b for constants a and b that depend on the statement costs cᵢ; it is thus a linear function of n.

⁶ This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily reference mn distinct words of memory.
If the array is in reverse sorted order (that is, in decreasing order), the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1..j−1], and so tⱼ = j for j = 2, 3, …, n. Noting that

Σⱼ₌₂ⁿ j = n(n + 1)/2 − 1

and

Σⱼ₌₂ⁿ (j − 1) = n(n − 1)/2

(see Appendix A for a review of how to solve these summations), we find that in the worst case, the running time of INSERTION-SORT is

T(n) = c₁n + c₂(n − 1) + c₄(n − 1) + c₅(n(n + 1)/2 − 1) + c₆(n(n − 1)/2) + c₇(n(n − 1)/2) + c₈(n − 1)
     = (c₅/2 + c₆/2 + c₇/2)n² + (c₁ + c₂ + c₄ + c₅/2 − c₆/2 − c₇/2 + c₈)n − (c₂ + c₄ + c₅ + c₈).

We can express this worst-case running time as an² + bn + c for constants a, b, and c that again depend on the statement costs cᵢ; it is thus a quadratic function of n.

Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting "randomized" algorithms whose behavior can vary even for a fixed input.
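These step counts are easy to check empirically. The sketch below (our illustration, not the book's) instruments the line-5 test and confirms that the total number of while-loop tests, Σⱼ₌₂ⁿ tⱼ, is n − 1 on sorted input and n(n + 1)/2 − 1 on reverse-sorted input:

    def insertion_sort_counting(A):
        """Sort A in place; return the total number of times the
        while-loop test (line 5) executes, i.e., the sum of the t_j."""
        tests = 0
        for j in range(1, len(A)):
            key = A[j]
            i = j - 1
            while True:
                tests += 1                       # one execution of the line-5 test
                if not (i >= 0 and A[i] > key):
                    break
                A[i + 1] = A[i]
                i = i - 1
            A[i + 1] = key
        return tests

    n = 100
    print(insertion_sort_counting(list(range(n))))         # best case: n - 1 = 99
    print(insertion_sort_counting(list(range(n, 0, -1))))  # worst case: n(n+1)/2 - 1 = 5049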
Worst-case and average-case analysis

In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.

The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.

For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some applications, searches for absent information may be frequent.

The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1..j−1] to insert element A[j]? On average, half the elements in A[1..j−1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1..j−1], and so tⱼ is about j/2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time.

In some particular cases, we shall be interested in the average-case running time of an algorithm; we shall see the technique of probabilistic analysis applied to various algorithms throughout this book. The scope of average-case analysis is limited, because it may not be apparent what constitutes an "average" input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis and yield an expected running time. We explore randomized algorithms more in Chapter 5 and in several other subsequent chapters.

Order of growth

We used some simplifying abstractions to ease our analysis of the INSERTION-SORT procedure. First, we ignored the actual cost of each statement, using the constants cᵢ to represent these costs. Then, we observed that even these constants give us more detail than we really need: we expressed the worst-case running time as an² + bn + c for some constants a, b, and c that depend on the statement costs cᵢ. We thus ignored not only the actual statement costs, but also the abstract costs cᵢ.

We shall now make one more simplifying abstraction: it is the rate of growth, or order of growth, of the running time that really interests us. We therefore consider only the leading term of a formula (e.g., an²), since the lower-order terms are relatively insignificant for large values of n. We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. For insertion sort, when we ignore the lower-order terms and the leading term's constant coefficient, we are left with the factor of n² from the leading term. We write that insertion sort has a worst-case running time of Θ(n²) (pronounced "theta of n-squared"). We shall use Θ-notation informally in this chapter, and we will define it precisely in Chapter 3.

We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower
order of growth. But for large enough inputs, a Θ(n²) algorithm, for example, will run more quickly in the worst case than a Θ(n³) algorithm.

Exercises

2.2-1
Express the function n³/1000 − 100n² − 100n + 3 in terms of Θ-notation.

2.2-2
Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n − 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n − 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.

2.2-3
Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

2.2-4
How can we modify almost any algorithm to have a good best-case running time?

2.3 Designing algorithms

We can choose from a wide range of algorithm design techniques. For insertion sort, we used an incremental approach: having sorted the subarray A[1..j−1], we inserted the single element A[j] into its proper place, yielding the sorted subarray A[1..j].

In this section, we examine an alternative design approach, known as "divide-and-conquer," which we shall explore in more detail in Chapter 4. We'll use divide-and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort. One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that we will see in Chapter 4.
2.3.1 The divide-and-conquer approach

Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related subproblems. These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.

The divide-and-conquer paradigm involves three steps at each level of the recursion:

Divide the problem into a number of subproblems that are smaller instances of the same problem.

Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

Combine the solutions to the subproblems into the solution for the original problem.

The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows.

Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.

Conquer: Sort the two subsequences recursively using merge sort.

Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion "bottoms out" when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.

The key operation of the merge sort algorithm is the merging of two sorted sequences in the "combine" step. We merge by calling an auxiliary procedure MERGE(A, p, q, r), where A is an array and p, q, and r are indices into the array such that p ≤ q < r. The procedure assumes that the subarrays A[p..q] and A[q+1..r] are in sorted order. It merges them to form a single sorted subarray that replaces the current subarray A[p..r].

Our MERGE procedure takes time Θ(n), where n = r − p + 1 is the total number of elements being merged, and it works as follows. Returning to our card-playing motif, suppose we have two piles of cards face up on a table. Each pile is sorted, with the smallest cards on top. We wish to merge the two piles into a single sorted output pile, which is to be face down on the table. Our basic step consists of choosing the smaller of the two cards on top of the face-up piles, removing it from its pile (which exposes a new top card), and placing this card face down onto
the output pile. We repeat this step until one input pile is empty, at which time we just take the remaining input pile and place it face down onto the output pile. Computationally, each basic step takes constant time, since we are comparing just the two top cards. Since we perform at most n basic steps, merging takes Θ(n) time.

The following pseudocode implements the above idea, but with an additional twist that avoids having to check whether either pile is empty in each basic step. We place on the bottom of each pile a sentinel card, which contains a special value that we use to simplify our code. Here, we use ∞ as the sentinel value, so that whenever a card with ∞ is exposed, it cannot be the smaller card unless both piles have their sentinel cards exposed. But once that happens, all the nonsentinel cards have already been placed onto the output pile. Since we know in advance that exactly r − p + 1 cards will be placed onto the output pile, we can stop once we have performed that many basic steps.

MERGE(A, p, q, r)
 1  n₁ = q − p + 1
 2  n₂ = r − q
 3  let L[1..n₁+1] and R[1..n₂+1] be new arrays
 4  for i = 1 to n₁
 5      L[i] = A[p + i − 1]
 6  for j = 1 to n₂
 7      R[j] = A[q + j]
 8  L[n₁+1] = ∞
 9  R[n₂+1] = ∞
10  i = 1
11  j = 1
12  for k = p to r
13      if L[i] ≤ R[j]
14          A[k] = L[i]
15          i = i + 1
16      else A[k] = R[j]
17           j = j + 1

In detail, the MERGE procedure works as follows. Line 1 computes the length n₁ of the subarray A[p..q], and line 2 computes the length n₂ of the subarray A[q+1..r]. We create arrays L and R ("left" and "right"), of lengths n₁ + 1 and n₂ + 1, respectively, in line 3; the extra position in each array will hold the sentinel. The for loop of lines 4–5 copies the subarray A[p..q] into L[1..n₁], and the for loop of lines 6–7 copies the subarray A[q+1..r] into R[1..n₂]. Lines 8–9 put the sentinels at the ends of the arrays L and R.
[Figure 2.3 The operation of lines 10–17 in the call MERGE(A, 9, 12, 16), when the subarray A[9..16] contains the sequence ⟨2, 4, 5, 7, 1, 2, 3, 6⟩. After copying and inserting sentinels, the array L contains ⟨2, 4, 5, 7, ∞⟩, and the array R contains ⟨1, 2, 3, 6, ∞⟩. Lightly shaded positions in A contain their final values, and lightly shaded positions in L and R contain values that have yet to be copied back into A. Taken together, the lightly shaded positions always comprise the values originally in A[9..16], along with the two sentinels. Heavily shaded positions in A contain values that will be copied over, and heavily shaded positions in L and R contain values that have already been copied back into A. (a)–(h) The arrays A, L, and R, and their respective indices k, i, and j prior to each iteration of the loop of lines 12–17.]

Lines 10–17, illustrated in Figure 2.3, perform the r − p + 1 basic steps by maintaining the following loop invariant:

At the start of each iteration of the for loop of lines 12–17, the subarray A[p..k−1] contains the k − p smallest elements of L[1..n₁+1] and R[1..n₂+1], in sorted order. Moreover, L[i] and R[j] are the smallest elements of their arrays that have not been copied back into A.

We must show that this loop invariant holds prior to the first iteration of the for loop of lines 12–17, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

Initialization: Prior to the first iteration of the loop, we have k = p, so that the subarray A[p..k−1] is empty. This empty subarray contains the k − p = 0 smallest elements of L and R, and since i = j = 1, both L[i] and R[j] are the smallest elements of their arrays that have not been copied back into A.
[Figure 2.3, continued (i) The arrays and indices at termination. At this point, the subarray in A[9..16] is sorted, and the two sentinels in L and R are the only two elements in these arrays that have not been copied into A.]

Maintenance: To see that each iteration maintains the loop invariant, let us first suppose that L[i] ≤ R[j]. Then L[i] is the smallest element not yet copied back into A. Because A[p..k−1] contains the k − p smallest elements, after line 14 copies L[i] into A[k], the subarray A[p..k] will contain the k − p + 1 smallest elements. Incrementing k (in the for loop update) and i (in line 15) reestablishes the loop invariant for the next iteration. If instead L[i] > R[j], then lines 16–17 perform the appropriate action to maintain the loop invariant.

Termination: At termination, k = r + 1. By the loop invariant, the subarray A[p..k−1], which is A[p..r], contains the k − p = r − p + 1 smallest elements of L[1..n₁+1] and R[1..n₂+1], in sorted order. The arrays L and R together contain n₁ + n₂ + 2 = r − p + 3 elements. All but the two largest have been copied back into A, and these two largest elements are the sentinels.
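As with insertion sort, the sentinel-based MERGE translates directly into Python. The sketch below is our rendering (0-based indices, with math.inf playing the role of the sentinel), not the book's code:

    from math import inf

    def merge(A, p, q, r):
        """Merge sorted subarrays A[p..q] and A[q+1..r] in place."""
        L = A[p:q + 1] + [inf]       # left pile, sentinel on the bottom
        R = A[q + 1:r + 1] + [inf]   # right pile, sentinel on the bottom
        i = j = 0
        for k in range(p, r + 1):    # exactly r - p + 1 basic steps
            if L[i] <= R[j]:
                A[k] = L[i]
                i += 1
            else:
                A[k] = R[j]
                j += 1

    A = [2, 4, 5, 7, 1, 2, 3, 6]
    merge(A, 0, 3, 7)
    print(A)  # [1, 2, 2, 3, 4, 5, 6, 7]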
To see that the MERGE procedure runs in Θ(n) time, where n = r − p + 1, observe that each of lines 1–3 and 8–11 takes constant time, the for loops of lines 4–7 take Θ(n₁ + n₂) = Θ(n) time,⁷ and there are n iterations of the for loop of lines 12–17, each of which takes constant time.

We can now use the MERGE procedure as a subroutine in the merge sort algorithm. The procedure MERGE-SORT(A, p, r) sorts the elements in the subarray A[p..r]. If p ≥ r, the subarray has at most one element and is therefore already sorted. Otherwise, the divide step simply computes an index q that partitions A[p..r] into two subarrays: A[p..q], containing ⌈n/2⌉ elements, and A[q+1..r], containing ⌊n/2⌋ elements.⁸

MERGE-SORT(A, p, r)
1  if p < r
2      q = ⌊(p + r)/2⌋
3      MERGE-SORT(A, p, q)
4      MERGE-SORT(A, q + 1, r)
5      MERGE(A, p, q, r)

To sort the entire sequence A = ⟨A[1], A[2], …, A[n]⟩, we make the initial call MERGE-SORT(A, 1, A.length), where once again A.length = n. Figure 2.4 illustrates the operation of the procedure bottom-up when n is a power of 2. The algorithm consists of merging pairs of 1-item sequences to form sorted sequences of length 2, merging pairs of sequences of length 2 to form sorted sequences of length 4, and so on, until two sequences of length n/2 are merged to form the final sorted sequence of length n.

⁷ We shall see in Chapter 3 how to formally interpret equations containing Θ-notation.

⁸ The expression ⌈x⌉ denotes the least integer greater than or equal to x, and ⌊x⌋ denotes the greatest integer less than or equal to x. These notations are defined in Chapter 3. The easiest way to verify that setting q to ⌊(p + r)/2⌋ yields subarrays A[p..q] and A[q+1..r] of sizes ⌈n/2⌉ and ⌊n/2⌋, respectively, is to examine the four cases that arise depending on whether each of p and r is odd or even.
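A Python rendering of MERGE-SORT follows (again ours, with 0-based inclusive indices, and assuming the merge function from the previous sketch is in scope):

    def merge_sort(A, p, r):
        """Sort A[p..r] (inclusive, 0-based), using the merge
        function defined in the earlier sketch."""
        if p < r:
            q = (p + r) // 2         # divide: split around the midpoint
            merge_sort(A, p, q)      # conquer: sort the left half
            merge_sort(A, q + 1, r)  # conquer: sort the right half
            merge(A, p, q, r)        # combine: merge the sorted halves

    A = [5, 2, 4, 7, 1, 3, 2, 6]
    merge_sort(A, 0, len(A) - 1)
    print(A)  # [1, 2, 2, 3, 4, 5, 6, 7]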
[Figure 2.4 The operation of merge sort on the array A = ⟨5, 2, 4, 7, 1, 3, 2, 6⟩. The lengths of the sorted sequences being merged increase as the algorithm progresses from bottom to top.]

2.3.2 Analyzing divide-and-conquer algorithms

When an algorithm contains a recursive call to itself, we can often describe its running time by a recurrence equation or recurrence, which describes the overall running time on a problem of size n in terms of the running time on smaller inputs. We can then use mathematical tools to solve the recurrence and provide bounds on the performance of the algorithm.

A recurrence for the running time of a divide-and-conquer algorithm falls out from the three steps of the basic paradigm. As before, we let T(n) be the running time on a problem of size n. If the problem size is small enough, say n ≤ c for some constant c, the straightforward solution takes constant time, which we write as Θ(1). Suppose that our division of the problem yields a subproblems, each of which is 1/b the size of the original. (For merge sort, both a and b are 2, but we shall see many divide-and-conquer algorithms in which a ≠ b.) It takes time T(n/b) to solve one subproblem of size n/b, and so it takes time aT(n/b) to solve a of them. If we take D(n) time to divide the problem into subproblems and C(n) time to combine the solutions to the subproblems into the solution to the original problem, we get the recurrence

T(n) = Θ(1)                       if n ≤ c,
       aT(n/b) + D(n) + C(n)      otherwise.

In Chapter 4, we shall see how to solve common recurrences of this form.

Analysis of merge sort

Although the pseudocode for MERGE-SORT works correctly when the number of elements is not even, our recurrence-based analysis is simplified if we assume that
the original problem size is a power of 2. Each divide step then yields two subsequences of size exactly n/2. In Chapter 4, we shall see that this assumption does not affect the order of growth of the solution to the recurrence.

We reason as follows to set up the recurrence for T(n), the worst-case running time of merge sort on n numbers. Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows.

Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D(n) = Θ(1).

Conquer: We recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.

Combine: We have already noted that the MERGE procedure on an n-element subarray takes time Θ(n), and so C(n) = Θ(n).

When we add the functions D(n) and C(n) for the merge sort analysis, we are adding a function that is Θ(n) and a function that is Θ(1). This sum is a linear function of n, that is, Θ(n). Adding it to the 2T(n/2) term from the "conquer" step gives the recurrence for the worst-case running time T(n) of merge sort:

T(n) = Θ(1)              if n = 1,
       2T(n/2) + Θ(n)    if n > 1.       (2.1)

In Chapter 4, we shall see the "master theorem," which we can use to show that T(n) is Θ(n lg n), where lg n stands for log₂ n. Because the logarithm function grows more slowly than any linear function, for large enough inputs, merge sort, with its Θ(n lg n) running time, outperforms insertion sort, whose running time is Θ(n²), in the worst case.

We do not need the master theorem to intuitively understand why the solution to the recurrence (2.1) is T(n) = Θ(n lg n). Let us rewrite recurrence (2.1) as

T(n) = c                 if n = 1,
       2T(n/2) + cn      if n > 1,       (2.2)

where the constant c represents the time required to solve problems of size 1 as well as the time per array element of the divide and combine steps.⁹

⁹ It is unlikely that the same constant exactly represents both the time to solve problems of size 1 and the time per array element of the divide and combine steps. We can get around this problem by letting c be the larger of these times and understanding that our recurrence gives an upper bound on the running time, or by letting c be the lesser of these times and understanding that our recurrence gives a lower bound on the running time. Both bounds are on the order of n lg n and, taken together, give a Θ(n lg n) running time.
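Before walking through the recursion tree, we can sanity-check the claimed solution numerically. The sketch below (ours) evaluates recurrence (2.2) directly and compares it against the closed form cn lg n + cn for powers of 2:

    from math import log2

    def T(n, c=1):
        """Evaluate recurrence (2.2) directly, for n a power of 2."""
        if n == 1:
            return c
        return 2 * T(n // 2, c) + c * n

    for k in range(1, 11):
        n = 2 ** k
        closed = n * log2(n) + n   # cn lg n + cn, with c = 1
        assert T(n) == closed      # matches the recursion tree's total cost

    print("recurrence (2.2) matches c*n*lg(n) + c*n for n = 2, 4, ..., 1024")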
Figure 2.5 shows how we can solve recurrence (2.2). For convenience, we assume that n is an exact power of 2. Part (a) of the figure shows T(n), which we expand in part (b) into an equivalent tree representing the recurrence. The cn term is the root (the cost incurred at the top level of recursion), and the two subtrees of the root are the two smaller recurrences T(n/2). Part (c) shows this process carried one step further by expanding T(n/2). The cost incurred at each of the two subnodes at the second level of recursion is cn/2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence, until the problem sizes get down to 1, each with a cost of c. Part (d) shows the resulting recursion tree.

Next, we add the costs across each level of the tree. The top level has total cost cn, the next level down has total cost c(n/2) + c(n/2) = cn, the level after that has total cost c(n/4) + c(n/4) + c(n/4) + c(n/4) = cn, and so on. In general, the level i below the top has 2ⁱ nodes, each contributing a cost of c(n/2ⁱ), so that the ith level below the top has total cost 2ⁱ · c(n/2ⁱ) = cn. The bottom level has n nodes, each contributing a cost of c, for a total cost of cn.

The total number of levels of the recursion tree in Figure 2.5 is lg n + 1, where n is the number of leaves, corresponding to the input size. An informal inductive argument justifies this claim. The base case occurs when n = 1, in which case the tree has only one level. Since lg 1 = 0, we have that lg n + 1 gives the correct number of levels. Now assume as an inductive hypothesis that the number of levels of a recursion tree with 2ⁱ leaves is lg 2ⁱ + 1 = i + 1 (since for any value of i, we have that lg 2ⁱ = i). Because we are assuming that the input size is a power of 2, the next input size to consider is 2ⁱ⁺¹. A tree with n = 2ⁱ⁺¹ leaves has one more level than a tree with 2ⁱ leaves, and so the total number of levels is (i + 1) + 1 = lg 2ⁱ⁺¹ + 1.

To compute the total cost represented by the recurrence (2.2), we simply add up the costs of all the levels. The recursion tree has lg n + 1 levels, each costing cn, for a total cost of cn(lg n + 1) = cn lg n + cn. Ignoring the low-order term and the constant c gives the desired result of Θ(n lg n).

Exercises

2.3-1
Using Figure 2.4 as a model, illustrate the operation of merge sort on the array A = ⟨3, 41, 52, 26, 38, 57, 9, 49⟩.

2.3-2
Rewrite the MERGE procedure so that it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A.
[Figure 2.5 How to construct a recursion tree for the recurrence T(n) = 2T(n/2) + cn. Part (a) shows T(n), which progressively expands in (b)–(d) to form the recursion tree. The fully expanded tree in part (d) has lg n + 1 levels (i.e., it has height lg n, as indicated), and each level contributes a total cost of cn. The total cost, therefore, is cn lg n + cn, which is Θ(n lg n).]
2.3-3
Use mathematical induction to show that when n is an exact power of 2, the solution of the recurrence

T(n) = 2              if n = 2,
       2T(n/2) + n    if n = 2ᵏ, for k > 1

is T(n) = n lg n.

2.3-4
We can express insertion sort as a recursive procedure as follows. In order to sort A[1..n], we recursively sort A[1..n−1] and then insert A[n] into the sorted array A[1..n−1]. Write a recurrence for the running time of this recursive version of insertion sort.

2.3-5
Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against v and eliminate half of the sequence from further consideration. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is Θ(lg n).

2.3-6
Observe that the while loop of lines 5–7 of the INSERTION-SORT procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray A[1..j−1]. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to Θ(n lg n)?

2.3-7 ★
Describe a Θ(n lg n)-time algorithm that, given a set S of n integers and another integer x, determines whether or not there exist two elements in S whose sum is exactly x.

Problems

2-1 Insertion sort on small arrays in merge sort
Although merge sort runs in Θ(n lg n) worst-case time and insertion sort runs in Θ(n²) worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when
subproblems become sufficiently small. Consider a modification to merge sort in which n/k sublists of length k are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined.

a. Show that insertion sort can sort the n/k sublists, each of length k, in Θ(nk) worst-case time.

b. Show how to merge the sublists in Θ(n lg(n/k)) worst-case time.

c. Given that the modified algorithm runs in Θ(nk + n lg(n/k)) worst-case time, what is the largest value of k as a function of n for which the modified algorithm has the same running time as standard merge sort, in terms of Θ-notation?

d. How should we choose k in practice?

2-2 Correctness of bubblesort
Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order.

BUBBLESORT(A)
1  for i = 1 to A.length − 1
2      for j = A.length downto i + 1
3          if A[j] < A[j − 1]
4              exchange A[j] with A[j − 1]

a. Let A′ denote the output of BUBBLESORT(A). To prove that BUBBLESORT is correct, we need to prove that it terminates and that

A′[1] ≤ A′[2] ≤ ⋯ ≤ A′[n],    (2.3)

where n = A.length. In order to show that BUBBLESORT actually sorts, what else do we need to prove? The next two parts will prove inequality (2.3).

b. State precisely a loop invariant for the for loop in lines 2–4, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter.

c. Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the for loop in lines 1–4 that will allow you to prove inequality (2.3). Your proof should use the structure of the loop invariant proof presented in this chapter.

d. What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort?
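For experimenting with this problem, the given pseudocode renders in Python as follows (our translation, with 0-based indices; the proofs asked for above are unaffected):

    def bubblesort(A):
        """BUBBLESORT translated to 0-based Python indices."""
        n = len(A)
        for i in range(n - 1):
            # Sweep from the end of the array back toward position i,
            # swapping adjacent out-of-order pairs.
            for j in range(n - 1, i, -1):
                if A[j] < A[j - 1]:
                    A[j], A[j - 1] = A[j - 1], A[j]

    A = [5, 2, 4, 6, 1, 3]
    bubblesort(A)
    print(A)  # [1, 2, 3, 4, 5, 6]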
2-3 Correctness of Horner's rule
The following code fragment implements Horner's rule for evaluating a polynomial

P(x) = Σₖ₌₀ⁿ aₖxᵏ
     = a₀ + x(a₁ + x(a₂ + ⋯ + x(aₙ₋₁ + x·aₙ) ⋯ )),

given the coefficients a₀, a₁, …, aₙ and a value for x:

1  y = 0
2  for i = n downto 0
3      y = aᵢ + x · y

a. In terms of Θ-notation, what is the running time of this code fragment for Horner's rule?

b. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner's rule?

c. Consider the following loop invariant:

At the start of each iteration of the for loop of lines 2–3,

y = Σₖ₌₀ⁿ⁻⁽ⁱ⁺¹⁾ aₖ₊ᵢ₊₁ xᵏ.

Interpret a summation with no terms as equaling 0. Following the structure of the loop invariant proof presented in this chapter, use this loop invariant to show that, at termination, y = Σₖ₌₀ⁿ aₖxᵏ.

d. Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients a₀, a₁, …, aₙ.
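The fragment is easy to run as written. Here is a direct Python transcription (ours), checked against term-by-term evaluation; the example coefficients are illustrative:

    def horner(a, x):
        """Evaluate P(x) = a[0] + a[1]*x + ... + a[n]*x**n by Horner's rule.

        a is the coefficient list (a[i] multiplies x**i); iterating
        over reversed(a) mirrors "for i = n downto 0" in the fragment.
        """
        y = 0
        for ai in reversed(a):
            y = ai + x * y
        return y

    a = [3, -1, 0, 2]    # P(x) = 3 - x + 2x^3
    print(horner(a, 2))  # 17
    print(sum(ak * 2**k for k, ak in enumerate(a)))  # 17, same value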
2-4 Inversions
Let A[1..n] be an array of n distinct numbers. If i < j and A[i] > A[j], then the pair (i, j) is called an inversion of A.

a. List the five inversions of the array ⟨2, 3, 8, 6, 1⟩.

b. What array with elements from the set {1, 2, …, n} has the most inversions? How many does it have?

c. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.

d. Give an algorithm that determines the number of inversions in any permutation on n elements in Θ(n lg n) worst-case time. (Hint: Modify merge sort.)

Chapter notes

In 1968, Knuth published the first of three volumes with the general title The Art of Computer Programming [209, 210, 211]. The first volume ushered in the modern study of computer algorithms with a focus on the analysis of running time, and the full series remains an engaging and worthwhile reference for many of the topics presented here. According to Knuth, the word "algorithm" is derived from the name "al-Khowârizmî," a ninth-century Persian mathematician.

Aho, Hopcroft, and Ullman advocated the asymptotic analysis of algorithms, using notations that Chapter 3 introduces, including Θ-notation, as a means of comparing relative performance. They also popularized the use of recurrence relations to describe the running times of recursive algorithms.

Knuth provides an encyclopedic treatment of many sorting algorithms. His comparison of sorting algorithms (page 381) includes exact step-counting analyses, like the one we performed here for insertion sort. Knuth's discussion of insertion sort encompasses several variations of the algorithm. The most important of these is Shell's sort, introduced by D. L. Shell, which uses insertion sort on periodic subsequences of the input to produce a faster sorting algorithm.

Merge sort is also described by Knuth. He mentions that a mechanical collator capable of merging two decks of punched cards in a single pass was invented in 1938. J. von Neumann, one of the pioneers of computer science, apparently wrote a program for merge sort on the EDVAC computer in 1945.

The early history of proving programs correct is described by Gries, who credits P. Naur with the first article in this field. Gries attributes loop invariants to R. W. Floyd. The textbook by Mitchell describes more recent progress in proving programs correct.
3 Growth of Functions

The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n²). Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 2, the extra precision is not usually worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself.

When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

This chapter gives several standard methods for simplifying the asymptotic analysis of algorithms. The next section begins by defining several types of "asymptotic notation," of which we have already seen an example in Θ-notation. We then present several notational conventions used throughout this book, and finally we review the behavior of functions that commonly arise in the analysis of algorithms.

3.1 Asymptotic notation

The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, …}. Such notations are convenient for describing the worst-case running-time function T(n), which usually is defined only on integer input sizes. We sometimes find it convenient, however, to abuse asymptotic notation in a
variety of ways. For example, we might extend the notation to the domain of real numbers or, alternatively, restrict it to a subset of the natural numbers. We should make sure, however, to understand the precise meaning of the notation so that when we abuse, we do not misuse it. This section defines the basic asymptotic notations and also introduces some common abuses.

Asymptotic notation, functions, and running times

We will use asymptotic notation primarily to describe the running times of algorithms, as when we wrote that insertion sort's worst-case running time is Θ(n²). Asymptotic notation actually applies to functions, however. Recall that we characterized insertion sort's worst-case running time as an² + bn + c, for some constants a, b, and c. By writing that insertion sort's running time is Θ(n²), we abstracted away some details of this function. Because asymptotic notation applies to functions, what we were writing as Θ(n²) was the function an² + bn + c, which in that case happened to characterize the worst-case running time of insertion sort.

In this book, the functions to which we apply asymptotic notation will usually characterize the running times of algorithms. But asymptotic notation can apply to functions that characterize some other aspect of algorithms (the amount of space they use, for example), or even to functions that have nothing whatsoever to do with algorithms.

Even when we use asymptotic notation to apply to the running time of an algorithm, we need to understand which running time we mean. Sometimes we are interested in the worst-case running time. Often, however, we wish to characterize the running time no matter what the input. In other words, we often wish to make a blanket statement that covers all inputs, not just the worst case. We shall see asymptotic notations that are well suited to characterizing running times no matter what the input.

Θ-notation

In Chapter 2, we found that the worst-case running time of insertion sort is T(n) = Θ(n²). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that
            0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ }.¹

¹ Within set notation, a colon means "such that."
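The definition is easy to probe numerically. The sketch below (ours; the sample function and constants are illustrative, not canonical) checks the sandwich condition c₁g(n) ≤ f(n) ≤ c₂g(n) over a range of n. A finite check cannot prove Θ-membership, which requires an argument for all n ≥ n₀, but it can quickly expose constants that do not work:

    def in_theta_sandwich(f, g, c1, c2, n0, n_max=10**6):
        """Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for sampled n in [n0, n_max)."""
        return all(0 <= c1 * g(n) <= f(n) <= c2 * g(n)
                   for n in range(n0, n_max, 997))

    f = lambda n: 3 * n * n + 10 * n + 5   # f(n) = 3n^2 + 10n + 5
    g = lambda n: n * n
    print(in_theta_sandwich(f, g, c1=3, c2=4, n0=11))  # True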
[Figure 3.1 here: three plots of f(n) against n, one each for (a) $f(n) = \Theta(g(n))$, (b) $f(n) = O(g(n))$, and (c) $f(n) = \Omega(g(n))$.]

Figure 3.1 Graphic examples of the Θ, O, and Ω notations. In each part, the value of $n_0$ shown is the minimum possible value; any greater value would also work. (a) Θ-notation bounds a function to within constant factors. We write $f(n) = \Theta(g(n))$ if there exist positive constants $n_0$, $c_1$, and $c_2$ such that at and to the right of $n_0$, the value of $f(n)$ always lies between $c_1 g(n)$ and $c_2 g(n)$ inclusive. (b) O-notation gives an upper bound for a function to within a constant factor. We write $f(n) = O(g(n))$ if there are positive constants $n_0$ and $c$ such that at and to the right of $n_0$, the value of $f(n)$ always lies on or below $cg(n)$. (c) Ω-notation gives a lower bound for a function to within a constant factor. We write $f(n) = \Omega(g(n))$ if there are positive constants $n_0$ and $c$ such that at and to the right of $n_0$, the value of $f(n)$ always lies on or above $cg(n)$.

A function $f(n)$ belongs to the set $\Theta(g(n))$ if there exist positive constants $c_1$ and $c_2$ such that it can be "sandwiched" between $c_1 g(n)$ and $c_2 g(n)$, for sufficiently large n. Because $\Theta(g(n))$ is a set, we could write "$f(n) \in \Theta(g(n))$" to indicate that $f(n)$ is a member of $\Theta(g(n))$. Instead, we will usually write "$f(n) = \Theta(g(n))$" to express the same notion. You might be confused because we abuse equality in this way, but we shall see later in this section that doing so has its advantages.

Figure 3.1(a) gives an intuitive picture of functions $f(n)$ and $g(n)$, where $f(n) = \Theta(g(n))$. For all values of n at and to the right of $n_0$, the value of $f(n)$ lies at or above $c_1 g(n)$ and at or below $c_2 g(n)$. In other words, for all $n \ge n_0$, the function $f(n)$ is equal to $g(n)$ to within a constant factor. We say that $g(n)$ is an asymptotically tight bound for $f(n)$.

The definition of $\Theta(g(n))$ requires that every member $f(n) \in \Theta(g(n))$ be asymptotically nonnegative, that is, that $f(n)$ be nonnegative whenever n is sufficiently large. (An asymptotically positive function is one that is positive for all sufficiently large n.) Consequently, the function $g(n)$ itself must be asymptotically nonnegative, or else the set $\Theta(g(n))$ is empty. We shall therefore assume that every function used within Θ-notation is asymptotically nonnegative. This assumption holds for the other asymptotic notations defined in this chapter as well.
In Chapter 2, we introduced an informal notion of Θ-notation that amounted to throwing away lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal definition to show that $\frac{1}{2}n^2 - 3n = \Theta(n^2)$. To do so, we must determine positive constants $c_1$, $c_2$, and $n_0$ such that

$$c_1 n^2 \le \frac{1}{2}n^2 - 3n \le c_2 n^2$$

for all $n \ge n_0$. Dividing by $n^2$ yields

$$c_1 \le \frac{1}{2} - \frac{3}{n} \le c_2 .$$

We can make the right-hand inequality hold for any value of $n \ge 1$ by choosing any constant $c_2 \ge 1/2$. Likewise, we can make the left-hand inequality hold for any value of $n \ge 7$ by choosing any constant $c_1 \le 1/14$. Thus, by choosing $c_1 = 1/14$, $c_2 = 1/2$, and $n_0 = 7$, we can verify that $\frac{1}{2}n^2 - 3n = \Theta(n^2)$. Certainly, other choices for the constants exist, but the important thing is that some choice exists. Note that these constants depend on the function $\frac{1}{2}n^2 - 3n$; a different function belonging to $\Theta(n^2)$ would usually require different constants.

We can also use the formal definition to verify that $6n^3 \ne \Theta(n^2)$. Suppose for the purpose of contradiction that $c_2$ and $n_0$ exist such that $6n^3 \le c_2 n^2$ for all $n \ge n_0$. But then dividing by $n^2$ yields $n \le c_2/6$, which cannot possibly hold for arbitrarily large n, since $c_2$ is constant.

Intuitively, the lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds because they are insignificant for large n. When n is large, even a tiny fraction of the highest-order term suffices to dominate the lower-order terms. Thus, setting $c_1$ to a value that is slightly smaller than the coefficient of the highest-order term and setting $c_2$ to a value that is slightly larger permits the inequalities in the definition of Θ-notation to be satisfied. The coefficient of the highest-order term can likewise be ignored, since it only changes $c_1$ and $c_2$ by a constant factor equal to the coefficient.

As an example, consider any quadratic function $f(n) = an^2 + bn + c$, where a, b, and c are constants and $a > 0$. Throwing away the lower-order terms and ignoring the constant yields $f(n) = \Theta(n^2)$. Formally, to show the same thing, we take the constants $c_1 = a/4$, $c_2 = 7a/4$, and $n_0 = 2 \cdot \max(|b|/a, \sqrt{|c|/a})$. You may verify that $0 \le c_1 n^2 \le an^2 + bn + c \le c_2 n^2$ for all $n \ge n_0$. In general, for any polynomial $p(n) = \sum_{i=0}^{d} a_i n^i$, where the $a_i$ are constants and $a_d > 0$, we have $p(n) = \Theta(n^d)$ (see Problem 3-1).

Since any constant is a degree-0 polynomial, we can express any constant function as $\Theta(n^0)$, or $\Theta(1)$. This latter notation is a minor abuse, however, because the
expression does not indicate what variable is tending to infinity.² We shall often use the notation $\Theta(1)$ to mean either a constant or a constant function with respect to some variable.

² The real problem is that our ordinary notation for functions does not distinguish functions from values. In λ-calculus, the parameters to a function are clearly specified: the function $n^2$ could be written as $\lambda n.\, n^2$, or even $\lambda r.\, r^2$. Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.

O-notation

The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function $g(n)$, we denote by $O(g(n))$ (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

$$O(g(n)) = \{f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le f(n) \le cg(n) \text{ for all } n \ge n_0\}.$$

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n at and to the right of $n_0$, the value of the function $f(n)$ is on or below $cg(n)$.

We write $f(n) = O(g(n))$ to indicate that a function $f(n)$ is a member of the set $O(g(n))$. Note that $f(n) = \Theta(g(n))$ implies $f(n) = O(g(n))$, since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have $\Theta(g(n)) \subseteq O(g(n))$. Thus, our proof that any quadratic function $an^2 + bn + c$, where $a > 0$, is in $\Theta(n^2)$ also shows that any such quadratic function is in $O(n^2)$. What may be more surprising is that when $a > 0$, any linear function $an + b$ is in $O(n^2)$, which is easily verified by taking $c = a + |b|$ and $n_0 = \max(1, -b/a)$.

If you have seen O-notation before, you might find it strange that we should write, for example, $n = O(n^2)$. In the literature, we sometimes find O-notation informally describing asymptotically tight bounds, that is, what we have defined using Θ-notation. In this book, however, when we write $f(n) = O(g(n))$, we are merely claiming that some constant multiple of $g(n)$ is an asymptotic upper bound on $f(n)$, with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds is standard in the algorithms literature.

Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 2 immediately yields an $O(n^2)$ upper bound on the worst-case running time: the cost of each iteration of the inner loop is bounded from above by $O(1)$ (constant), the indices i
and j are both at most n, and the inner loop is executed at most once for each of the $n^2$ pairs of values for i and j.

Since O-notation describes an upper bound, when we use it to bound the worst-case running time of an algorithm, we have a bound on the running time of the algorithm on every input—the blanket statement we discussed earlier. Thus, the $O(n^2)$ bound on worst-case running time of insertion sort also applies to its running time on every input. The $\Theta(n^2)$ bound on the worst-case running time of insertion sort, however, does not imply a $\Theta(n^2)$ bound on the running time of insertion sort on every input. For example, we saw in Chapter 2 that when the input is already sorted, insertion sort runs in $\Theta(n)$ time.

Technically, it is an abuse to say that the running time of insertion sort is $O(n^2)$, since for a given n, the actual running time varies, depending on the particular input of size n. When we say "the running time is $O(n^2)$," we mean that there is a function $f(n)$ that is $O(n^2)$ such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value $f(n)$. Equivalently, we mean that the worst-case running time is $O(n^2)$.

Ω-notation

Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound. For a given function $g(n)$, we denote by $\Omega(g(n))$ (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions

$$\Omega(g(n)) = \{f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le cg(n) \le f(n) \text{ for all } n \ge n_0\}.$$

Figure 3.1(c) shows the intuition behind Ω-notation. For all values n at or to the right of $n_0$, the value of $f(n)$ is on or above $cg(n)$.

From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 3.1-5).

Theorem 3.1
For any two functions $f(n)$ and $g(n)$, we have $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.

As an example of the application of this theorem, our proof that $an^2 + bn + c = \Theta(n^2)$ for any constants a, b, and c, where $a > 0$, immediately implies that $an^2 + bn + c = \Omega(n^2)$ and $an^2 + bn + c = O(n^2)$. In practice, rather than using Theorem 3.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.
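As an informal companion to Theorem 3.1 (my own illustration, not the book's), one can numerically inspect the ratio $f(n)/g(n)$: a ratio that settles between two positive constants is consistent with a Θ relationship, one tending to 0 with a little-o relationship, and one growing without bound with a little-omega relationship. This is a heuristic sanity check only, never a proof.

```python
def ratio_trend(f, g, ns=(10, 100, 1000, 10_000, 100_000)):
    """Print f(n)/g(n) at increasing n. A ratio settling between two
    positive constants suggests f = Theta(g); a ratio tending to 0
    suggests f = o(g); a growing ratio suggests f = omega(g).
    A heuristic check only, not a proof."""
    for n in ns:
        print(n, f(n) / g(n))

# Insertion sort's worst-case form an^2 + bn + c against n^2:
ratio_trend(lambda n: 3*n*n + 5*n + 7, lambda n: n*n)  # settles near 3
# A linear function against n^2:
ratio_trend(lambda n: 5*n + 7, lambda n: n*n)          # tends to 0
```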
When we say that the running time (no modifier) of an algorithm is $\Omega(g(n))$, we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times $g(n)$, for sufficiently large n. Equivalently, we are giving a lower bound on the best-case running time of an algorithm. For example, the best-case running time of insertion sort is $\Omega(n)$, which implies that the running time of insertion sort is $\Omega(n)$.

The running time of insertion sort therefore belongs to both $\Omega(n)$ and $O(n^2)$, since it falls anywhere between a linear function of n and a quadratic function of n. Moreover, these bounds are asymptotically as tight as possible: for instance, the running time of insertion sort is not $\Omega(n^2)$, since there exists an input for which insertion sort runs in $\Theta(n)$ time (e.g., when the input is already sorted). It is not contradictory, however, to say that the worst-case running time of insertion sort is $\Omega(n^2)$, since there exists an input that causes the algorithm to take $\Omega(n^2)$ time.

Asymptotic notation in equations and inequalities

We have already seen how asymptotic notation can be used within mathematical formulas. For example, in introducing O-notation, we wrote "$n = O(n^2)$." We might also write $2n^2 + 3n + 1 = 2n^2 + \Theta(n)$. How do we interpret such formulas?

When the asymptotic notation stands alone (that is, not within a larger formula) on the right-hand side of an equation (or inequality), as in $n = O(n^2)$, we have already defined the equal sign to mean set membership: $n \in O(n^2)$. In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula $2n^2 + 3n + 1 = 2n^2 + \Theta(n)$ means that $2n^2 + 3n + 1 = 2n^2 + f(n)$, where $f(n)$ is some function in the set $\Theta(n)$. In this case, we let $f(n) = 3n + 1$, which indeed is in $\Theta(n)$.

Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation. For example, in Chapter 2 we expressed the worst-case running time of merge sort as the recurrence

$$T(n) = 2T(n/2) + \Theta(n) .$$

If we are interested only in the asymptotic behavior of $T(n)$, there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term $\Theta(n)$.

The number of anonymous functions in an expression is understood to be equal to the number of times the asymptotic notation appears. For example, in the expression

$$\sum_{i=1}^{n} O(i) ,$$
there is only a single anonymous function (a function of i). This expression is thus not the same as $O(1) + O(2) + \cdots + O(n)$, which doesn't really have a clean interpretation.

In some cases, asymptotic notation appears on the left-hand side of an equation, as in

$$2n^2 + \Theta(n) = \Theta(n^2) .$$

We interpret such equations using the following rule: No matter how the anonymous functions are chosen on the left of the equal sign, there is a way to choose the anonymous functions on the right of the equal sign to make the equation valid. Thus, our example means that for any function $f(n) \in \Theta(n)$, there is some function $g(n) \in \Theta(n^2)$ such that $2n^2 + f(n) = g(n)$ for all n. In other words, the right-hand side of an equation provides a coarser level of detail than the left-hand side.

We can chain together a number of such relationships, as in

$$2n^2 + 3n + 1 = 2n^2 + \Theta(n) = \Theta(n^2) .$$

We can interpret each equation separately by the rules above. The first equation says that there is some function $f(n) \in \Theta(n)$ such that $2n^2 + 3n + 1 = 2n^2 + f(n)$ for all n. The second equation says that for any function $g(n) \in \Theta(n)$ (such as the $f(n)$ just mentioned), there is some function $h(n) \in \Theta(n^2)$ such that $2n^2 + g(n) = h(n)$ for all n. Note that this interpretation implies that $2n^2 + 3n + 1 = \Theta(n^2)$, which is what the chaining of equations intuitively gives us.

o-notation

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound $2n^2 = O(n^2)$ is asymptotically tight, but the bound $2n = O(n^2)$ is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define $o(g(n))$ ("little-oh of g of n") as the set

$$o(g(n)) = \{f(n) : \text{for any positive constant } c > 0, \text{ there exists a constant } n_0 > 0 \text{ such that } 0 \le f(n) < cg(n) \text{ for all } n \ge n_0\}.$$

For example, $2n = o(n^2)$, but $2n^2 \ne o(n^2)$.

The definitions of O-notation and o-notation are similar. The main difference is that in $f(n) = O(g(n))$, the bound $0 \le f(n) \le cg(n)$ holds for some constant $c > 0$, but in $f(n) = o(g(n))$, the bound $0 \le f(n) < cg(n)$ holds for all constants $c > 0$. Intuitively, in o-notation, the function $f(n)$ becomes insignificant relative to $g(n)$ as n approaches infinity; that is,
$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0 . \quad (3.1)$$

Some authors use this limit as a definition of the o-notation; the definition in this book also restricts the anonymous functions to be asymptotically nonnegative.

ω-notation

By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight. One way to define it is by

$f(n) \in \omega(g(n))$ if and only if $g(n) \in o(f(n))$.

Formally, however, we define $\omega(g(n))$ ("little-omega of g of n") as the set

$$\omega(g(n)) = \{f(n) : \text{for any positive constant } c > 0, \text{ there exists a constant } n_0 > 0 \text{ such that } 0 \le cg(n) < f(n) \text{ for all } n \ge n_0\}.$$

For example, $n^2/2 = \omega(n)$, but $n^2/2 \ne \omega(n^2)$. The relation $f(n) = \omega(g(n))$ implies that

$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty ,$$

if the limit exists. That is, $f(n)$ becomes arbitrarily large relative to $g(n)$ as n approaches infinity.

Comparing functions

Many of the relational properties of real numbers apply to asymptotic comparisons as well. For the following, assume that $f(n)$ and $g(n)$ are asymptotically positive.

Transitivity:
$f(n) = \Theta(g(n))$ and $g(n) = \Theta(h(n))$ imply $f(n) = \Theta(h(n))$;
$f(n) = O(g(n))$ and $g(n) = O(h(n))$ imply $f(n) = O(h(n))$;
$f(n) = \Omega(g(n))$ and $g(n) = \Omega(h(n))$ imply $f(n) = \Omega(h(n))$;
$f(n) = o(g(n))$ and $g(n) = o(h(n))$ imply $f(n) = o(h(n))$;
$f(n) = \omega(g(n))$ and $g(n) = \omega(h(n))$ imply $f(n) = \omega(h(n))$.

Reflexivity:
$f(n) = \Theta(f(n))$;
$f(n) = O(f(n))$;
$f(n) = \Omega(f(n))$.
Symmetry:
$f(n) = \Theta(g(n))$ if and only if $g(n) = \Theta(f(n))$.

Transpose symmetry:
$f(n) = O(g(n))$ if and only if $g(n) = \Omega(f(n))$;
$f(n) = o(g(n))$ if and only if $g(n) = \omega(f(n))$.

Because these properties hold for asymptotic notations, we can draw an analogy between the asymptotic comparison of two functions f and g and the comparison of two real numbers a and b:

$f(n) = O(g(n))$ is like $a \le b$;
$f(n) = \Omega(g(n))$ is like $a \ge b$;
$f(n) = \Theta(g(n))$ is like $a = b$;
$f(n) = o(g(n))$ is like $a < b$;
$f(n) = \omega(g(n))$ is like $a > b$.

We say that $f(n)$ is asymptotically smaller than $g(n)$ if $f(n) = o(g(n))$, and $f(n)$ is asymptotically larger than $g(n)$ if $f(n) = \omega(g(n))$.

One property of real numbers, however, does not carry over to asymptotic notation:

Trichotomy: For any two real numbers a and b, exactly one of the following must hold: $a < b$, $a = b$, or $a > b$.

Although any two real numbers can be compared, not all functions are asymptotically comparable. That is, for two functions $f(n)$ and $g(n)$, it may be the case that neither $f(n) = O(g(n))$ nor $f(n) = \Omega(g(n))$ holds. For example, we cannot compare the functions n and $n^{1+\sin n}$ using asymptotic notation, since the value of the exponent in $n^{1+\sin n}$ oscillates between 0 and 2, taking on all values in between.

Exercises

3.1-1
Let $f(n)$ and $g(n)$ be asymptotically nonnegative functions. Using the basic definition of Θ-notation, prove that $\max(f(n), g(n)) = \Theta(f(n) + g(n))$.

3.1-2
Show that for any real constants a and b, where $b > 0$,

$$(n + a)^b = \Theta(n^b) . \quad (3.2)$$
3.1-3
Explain why the statement, "The running time of algorithm A is at least $O(n^2)$," is meaningless.

3.1-4
Is $2^{n+1} = O(2^n)$? Is $2^{2n} = O(2^n)$?

3.1-5
Prove Theorem 3.1.

3.1-6
Prove that the running time of an algorithm is $\Theta(g(n))$ if and only if its worst-case running time is $O(g(n))$ and its best-case running time is $\Omega(g(n))$.

3.1-7
Prove that $o(g(n)) \cap \omega(g(n))$ is the empty set.

3.1-8
We can extend our notation to the case of two parameters n and m that can go to infinity independently at different rates. For a given function $g(n, m)$, we denote by $O(g(n, m))$ the set of functions

$$O(g(n, m)) = \{f(n, m) : \text{there exist positive constants } c, n_0, \text{ and } m_0 \text{ such that } 0 \le f(n, m) \le cg(n, m) \text{ for all } n \ge n_0 \text{ or } m \ge m_0\}.$$

Give corresponding definitions for $\Omega(g(n, m))$ and $\Theta(g(n, m))$.

3.2 Standard notations and common functions

This section reviews some standard mathematical functions and notations and explores the relationships among them. It also illustrates the use of the asymptotic notations.

Monotonicity

A function $f(n)$ is monotonically increasing if $m \le n$ implies $f(m) \le f(n)$. Similarly, it is monotonically decreasing if $m \le n$ implies $f(m) \ge f(n)$. A function $f(n)$ is strictly increasing if $m < n$ implies $f(m) < f(n)$ and strictly decreasing if $m < n$ implies $f(m) > f(n)$.
Floors and ceilings

For any real number x, we denote the greatest integer less than or equal to x by $\lfloor x \rfloor$ (read "the floor of x") and the least integer greater than or equal to x by $\lceil x \rceil$ (read "the ceiling of x"). For all real x,

$$x - 1 < \lfloor x \rfloor \le x \le \lceil x \rceil < x + 1 . \quad (3.3)$$

For any integer n,

$$\lceil n/2 \rceil + \lfloor n/2 \rfloor = n ,$$

and for any real number $x \ge 0$ and integers $a, b > 0$,

$$\left\lceil \frac{\lceil x/a \rceil}{b} \right\rceil = \left\lceil \frac{x}{ab} \right\rceil , \quad (3.4)$$

$$\left\lfloor \frac{\lfloor x/a \rfloor}{b} \right\rfloor = \left\lfloor \frac{x}{ab} \right\rfloor , \quad (3.5)$$

$$\left\lceil \frac{a}{b} \right\rceil \le \frac{a + (b - 1)}{b} , \quad (3.6)$$

$$\left\lfloor \frac{a}{b} \right\rfloor \ge \frac{a - (b - 1)}{b} . \quad (3.7)$$

The floor function $f(x) = \lfloor x \rfloor$ is monotonically increasing, as is the ceiling function $f(x) = \lceil x \rceil$.

Modular arithmetic

For any integer a and any positive integer n, the value $a \bmod n$ is the remainder (or residue) of the quotient $a/n$:

$$a \bmod n = a - n \lfloor a/n \rfloor . \quad (3.8)$$

It follows that

$$0 \le a \bmod n < n . \quad (3.9)$$

Given a well-defined notion of the remainder of one integer when divided by another, it is convenient to provide special notation to indicate equality of remainders. If $(a \bmod n) = (b \bmod n)$, we write $a \equiv b \pmod{n}$ and say that a is equivalent to b, modulo n. In other words, $a \equiv b \pmod{n}$ if a and b have the same remainder when divided by n. Equivalently, $a \equiv b \pmod{n}$ if and only if n is a divisor of $b - a$. We write $a \not\equiv b \pmod{n}$ if a is not equivalent to b, modulo n.
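As a side note (my own illustration, not the book's): for a positive modulus n, equation (3.8) matches the behavior of Python's % operator, even when a is negative. The following minimal sketch spot-checks this along with the identity $\lceil n/2 \rceil + \lfloor n/2 \rfloor = n$.

```python
import math

def mod(a, n):
    """Remainder as defined by equation (3.8): a mod n = a - n*floor(a/n).
    Exact for the modest integers used here."""
    return a - n * math.floor(a / n)

# Python's % agrees with (3.8) for positive n, even for negative a,
# so 0 <= a % n < n holds, as in inequality (3.9).
assert mod(-7, 3) == (-7) % 3 == 2
assert math.ceil(7 / 2) + math.floor(7 / 2) == 7  # ceil(n/2) + floor(n/2) = n

def congruent(a, b, n):
    """a is equivalent to b, modulo n: same remainder on division by n."""
    return a % n == b % n

assert congruent(13, -2, 5)  # both leave remainder 3
```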
Polynomials

Given a nonnegative integer d, a polynomial in n of degree d is a function $p(n)$ of the form

$$p(n) = \sum_{i=0}^{d} a_i n^i ,$$

where the constants $a_0, a_1, \ldots, a_d$ are the coefficients of the polynomial and $a_d \ne 0$. A polynomial is asymptotically positive if and only if $a_d > 0$. For an asymptotically positive polynomial $p(n)$ of degree d, we have $p(n) = \Theta(n^d)$. For any real constant $a \ge 0$, the function $n^a$ is monotonically increasing, and for any real constant $a \le 0$, the function $n^a$ is monotonically decreasing. We say that a function $f(n)$ is polynomially bounded if $f(n) = O(n^k)$ for some constant k.

Exponentials

For all real $a > 0$, m, and n, we have the following identities:

$$a^0 = 1 ,$$
$$a^1 = a ,$$
$$a^{-1} = 1/a ,$$
$$(a^m)^n = a^{mn} ,$$
$$(a^m)^n = (a^n)^m ,$$
$$a^m a^n = a^{m+n} .$$

For all n and $a \ge 1$, the function $a^n$ is monotonically increasing in n. When convenient, we shall assume $0^0 = 1$.

We can relate the rates of growth of polynomials and exponentials by the following fact. For all real constants a and b such that $a > 1$,

$$\lim_{n \to \infty} \frac{n^b}{a^n} = 0 , \quad (3.10)$$

from which we can conclude that

$$n^b = o(a^n) .$$

Thus, any exponential function with a base strictly greater than 1 grows faster than any polynomial function.

Using e to denote 2.71828..., the base of the natural logarithm function, we have for all real x,

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{i=0}^{\infty} \frac{x^i}{i!} , \quad (3.11)$$
where "!" denotes the factorial function defined later in this section. For all real x, we have the inequality

$$e^x \ge 1 + x , \quad (3.12)$$

where equality holds only when $x = 0$. When $|x| \le 1$, we have the approximation

$$1 + x \le e^x \le 1 + x + x^2 . \quad (3.13)$$

When $x \to 0$, the approximation of $e^x$ by $1 + x$ is quite good:

$$e^x = 1 + x + \Theta(x^2) .$$

(In this equation, the asymptotic notation is used to describe the limiting behavior as $x \to 0$ rather than as $x \to \infty$.) We have for all x,

$$\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n = e^x . \quad (3.14)$$

Logarithms

We shall use the following notations:

lg n = log₂ n (binary logarithm),
ln n = logₑ n (natural logarithm),
lgᵏ n = (lg n)ᵏ (exponentiation),
lg lg n = lg(lg n) (composition).

An important notational convention we shall adopt is that logarithm functions will apply only to the next term in the formula, so that lg n + k will mean (lg n) + k and not lg(n + k). If we hold $b > 1$ constant, then for $n > 0$, the function $\log_b n$ is strictly increasing.

For all real $a > 0$, $b > 0$, $c > 0$, and n,

$$a = b^{\log_b a} ,$$
$$\log_c(ab) = \log_c a + \log_c b ,$$
$$\log_b a^n = n \log_b a ,$$
$$\log_b a = \frac{\log_c a}{\log_c b} , \quad (3.15)$$
$$\log_b(1/a) = -\log_b a ,$$
$$\log_b a = \frac{1}{\log_a b} ,$$
$$a^{\log_b c} = c^{\log_b a} , \quad (3.16)$$

where, in each equation above, logarithm bases are not 1.
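As an illustration (my own, not the book's), the following Python snippet numerically spot-checks identities (3.15) and (3.16); agreement is only approximate because of floating-point rounding, hence the use of math.isclose.

```python
import math

# Arbitrary positive bases and arguments for the spot-check.
a, b, c = 5.0, 2.0, 9.0

# (3.15) change of base: log_b a = log_c a / log_c b
assert math.isclose(math.log(a, b), math.log(a, c) / math.log(b, c))

# (3.16): a^(log_b c) = c^(log_b a); both sides come out near 164.3 here.
assert math.isclose(a ** math.log(c, b), c ** math.log(a, b))
```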
By equation (3.15), changing the base of a logarithm from one constant to another changes the value of the logarithm by only a constant factor, and so we shall often use the notation "lg n" when we don't care about constant factors, such as in O-notation. Computer scientists find 2 to be the most natural base for logarithms because so many algorithms and data structures involve splitting a problem into two parts.

There is a simple series expansion for $\ln(1 + x)$ when $|x| < 1$:

$$\ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \cdots .$$

We also have the following inequalities for $x > -1$:

$$\frac{x}{1 + x} \le \ln(1 + x) \le x , \quad (3.17)$$

where equality holds only for $x = 0$.

We say that a function $f(n)$ is polylogarithmically bounded if $f(n) = O(\lg^k n)$ for some constant k. We can relate the growth of polynomials and polylogarithms by substituting lg n for n and $2^a$ for a in equation (3.10), yielding

$$\lim_{n \to \infty} \frac{\lg^b n}{(2^a)^{\lg n}} = \lim_{n \to \infty} \frac{\lg^b n}{n^a} = 0 .$$

From this limit, we can conclude that

$$\lg^b n = o(n^a)$$

for any constant $a > 0$. Thus, any positive polynomial function grows faster than any polylogarithmic function.

Factorials

The notation n! (read "n factorial") is defined for integers $n \ge 0$ as

$$n! = \begin{cases} 1 & \text{if } n = 0 , \\ n \cdot (n - 1)! & \text{if } n > 0 . \end{cases}$$

Thus, $n! = 1 \cdot 2 \cdot 3 \cdots n$.

A weak upper bound on the factorial function is $n! \le n^n$, since each of the n terms in the factorial product is at most n. Stirling's approximation,

$$n! = \sqrt{2\pi n} \left(\frac{n}{e}\right)^n \left(1 + \Theta\left(\frac{1}{n}\right)\right) , \quad (3.18)$$
where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower bound as well. As Exercise 3.2-3 asks you to prove,

$$n! = o(n^n) ,$$
$$n! = \omega(2^n) ,$$
$$\lg(n!) = \Theta(n \lg n) , \quad (3.19)$$

where Stirling's approximation is helpful in proving equation (3.19). The following equation also holds for all $n \ge 1$:

$$n! = \sqrt{2\pi n} \left(\frac{n}{e}\right)^n e^{\alpha_n} , \quad (3.20)$$

where

$$\frac{1}{12n + 1} < \alpha_n < \frac{1}{12n} . \quad (3.21)$$

Functional iteration

We use the notation $f^{(i)}(n)$ to denote the function $f(n)$ iteratively applied i times to an initial value of n. Formally, let $f(n)$ be a function over the reals. For nonnegative integers i, we recursively define

$$f^{(i)}(n) = \begin{cases} n & \text{if } i = 0 , \\ f(f^{(i-1)}(n)) & \text{if } i > 0 . \end{cases}$$

For example, if $f(n) = 2n$, then $f^{(i)}(n) = 2^i n$.

The iterated logarithm function

We use the notation lg* n (read "log star of n") to denote the iterated logarithm, defined as follows. Let $\lg^{(i)} n$ be as defined above, with $f(n) = \lg n$. Because the logarithm of a nonpositive number is undefined, $\lg^{(i)} n$ is defined only if $\lg^{(i-1)} n > 0$. Be sure to distinguish $\lg^{(i)} n$ (the logarithm function applied i times in succession, starting with argument n) from $\lg^i n$ (the logarithm of n raised to the ith power). Then we define the iterated logarithm function as

$$\lg^* n = \min\{i \ge 0 : \lg^{(i)} n \le 1\} .$$

The iterated logarithm is a very slowly growing function:

lg* 2 = 1,
lg* 4 = 2,
lg* 16 = 3,
lg* 65536 = 4,
lg* (2⁶⁵⁵³⁶) = 5.
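To make the definition concrete, here is a minimal Python sketch of lg* (my own illustration, not from the text). Floating-point rounding makes it exact only for modest arguments; in particular, it cannot evaluate lg*(2⁶⁵⁵³⁶) directly, since that argument does not fit in a machine word.

```python
import math

def lg_star(n):
    """Iterated logarithm lg* n = min { i >= 0 : lg^(i) n <= 1 },
    computed by repeatedly applying the base-2 logarithm."""
    i = 0
    while n > 1:
        n = math.log2(n)
        i += 1
    return i

assert [lg_star(x) for x in (2, 4, 16, 65536)] == [1, 2, 3, 4]
```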
Since the number of atoms in the observable universe is estimated to be about $10^{80}$, which is much less than $2^{65536}$, we rarely encounter an input size n such that lg* n > 5.

Fibonacci numbers

We define the Fibonacci numbers by the following recurrence:

$$F_0 = 0 ,$$
$$F_1 = 1 , \quad (3.22)$$
$$F_i = F_{i-1} + F_{i-2} \quad \text{for } i \ge 2 .$$

Thus, each Fibonacci number is the sum of the two previous ones, yielding the sequence

$$0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \ldots .$$

Fibonacci numbers are related to the golden ratio φ and to its conjugate φ̂, which are the two roots of the equation

$$x^2 = x + 1 \quad (3.23)$$

and are given by the following formulas (see Exercise 3.2-6):

$$\phi = \frac{1 + \sqrt{5}}{2} = 1.61803\ldots , \quad (3.24)$$

$$\hat{\phi} = \frac{1 - \sqrt{5}}{2} = -0.61803\ldots .$$

Specifically, we have

$$F_i = \frac{\phi^i - \hat{\phi}^i}{\sqrt{5}} ,$$

which we can prove by induction (Exercise 3.2-7). Since $|\hat{\phi}| < 1$, we have

$$\frac{|\hat{\phi}^i|}{\sqrt{5}} < \frac{1}{\sqrt{5}} < \frac{1}{2} ,$$

which implies that
$$F_i = \left\lfloor \frac{\phi^i}{\sqrt{5}} + \frac{1}{2} \right\rfloor , \quad (3.25)$$

which is to say that the ith Fibonacci number $F_i$ is equal to $\phi^i / \sqrt{5}$ rounded to the nearest integer. Thus, Fibonacci numbers grow exponentially. (A short numerical check of equation (3.25) appears after the exercises below.)

Exercises

3.2-1
Show that if $f(n)$ and $g(n)$ are monotonically increasing functions, then so are the functions $f(n) + g(n)$ and $f(g(n))$, and if $f(n)$ and $g(n)$ are in addition nonnegative, then $f(n) \cdot g(n)$ is monotonically increasing.

3.2-2
Prove equation (3.16).

3.2-3
Prove equation (3.19). Also prove that $n! = \omega(2^n)$ and $n! = o(n^n)$.

3.2-4 ⋆
Is the function $\lceil \lg n \rceil!$ polynomially bounded? Is the function $\lceil \lg \lg n \rceil!$ polynomially bounded?

3.2-5 ⋆
Which is asymptotically larger: $\lg(\lg^* n)$ or $\lg^*(\lg n)$?

3.2-6
Show that the golden ratio φ and its conjugate φ̂ both satisfy the equation $x^2 = x + 1$.

3.2-7
Prove by induction that the ith Fibonacci number satisfies the equality

$$F_i = \frac{\phi^i - \hat{\phi}^i}{\sqrt{5}} ,$$

where φ is the golden ratio and φ̂ is its conjugate.

3.2-8
Show that $k \ln k = \Theta(n)$ implies $k = \Theta(n / \ln n)$.
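As promised above, here is a quick numerical check of equation (3.25) against the defining recurrence (3.22). This sketch is my own, not the book's, and it is reliable only while $\phi^i$ fits comfortably in a double-precision float.

```python
import math

def fib_rounded(i):
    """F_i via equation (3.25): phi^i / sqrt(5) rounded to nearest integer."""
    phi = (1 + math.sqrt(5)) / 2
    return math.floor(phi**i / math.sqrt(5) + 1/2)

def fib_recurrence(i):
    """F_i via the defining recurrence (3.22), computed iteratively."""
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

assert all(fib_rounded(i) == fib_recurrence(i) for i in range(40))
```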
Problems

3-1 Asymptotic behavior of polynomials
Let

$$p(n) = \sum_{i=0}^{d} a_i n^i ,$$

where $a_d > 0$, be a degree-d polynomial in n, and let k be a constant. Use the definitions of the asymptotic notations to prove the following properties.

a. If $k \ge d$, then $p(n) = O(n^k)$.

b. If $k \le d$, then $p(n) = \Omega(n^k)$.

c. If $k = d$, then $p(n) = \Theta(n^k)$.

d. If $k > d$, then $p(n) = o(n^k)$.

e. If $k < d$, then $p(n) = \omega(n^k)$.

3-2 Relative asymptotic growths
Indicate, for each pair of expressions (A, B) in the table below, whether A is O, o, Ω, ω, or Θ of B. Assume that $k \ge 1$, $\epsilon > 0$, and $c > 1$ are constants. Your answer should be in the form of the table with "yes" or "no" written in each box.

      A                B               O    o    Ω    ω    Θ
a.    $\lg^k n$        $n^\epsilon$
b.    $n^k$            $c^n$
c.    $\sqrt{n}$       $n^{\sin n}$
d.    $2^n$            $2^{n/2}$
e.    $n^{\lg c}$      $c^{\lg n}$
f.    $\lg(n!)$        $\lg(n^n)$

3-3 Ordering by asymptotic growth rates
a. Rank the following functions by order of growth; that is, find an arrangement $g_1, g_2, \ldots, g_{30}$ of the functions satisfying $g_1 = \Omega(g_2)$, $g_2 = \Omega(g_3)$, ..., $g_{29} = \Omega(g_{30})$. Partition your list into equivalence classes such that functions $f(n)$ and $g(n)$ are in the same class if and only if $f(n) = \Theta(g(n))$.
$\lg(\lg^* n)$    $2^{\lg^* n}$    $(\sqrt{2})^{\lg n}$    $n^2$    $n!$    $(\lg n)!$
$(3/2)^n$    $n^3$    $\lg^2 n$    $\lg(n!)$    $2^{2^n}$    $n^{1/\lg n}$
$\ln \ln n$    $\lg^* n$    $n \cdot 2^n$    $n^{\lg \lg n}$    $\ln n$    $1$
$2^{\lg n}$    $(\lg n)^{\lg n}$    $e^n$    $4^{\lg n}$    $(n+1)!$    $\sqrt{\lg n}$
$\lg^*(\lg n)$    $2^{\sqrt{2 \lg n}}$    $n$    $2^n$    $n \lg n$    $2^{2^{n+1}}$

b. Give an example of a single nonnegative function $f(n)$ such that for all functions $g_i(n)$ in part (a), $f(n)$ is neither $O(g_i(n))$ nor $\Omega(g_i(n))$.

3-4 Asymptotic notation properties
Let $f(n)$ and $g(n)$ be asymptotically positive functions. Prove or disprove each of the following conjectures.

a. $f(n) = O(g(n))$ implies $g(n) = O(f(n))$.

b. $f(n) + g(n) = \Theta(\min(f(n), g(n)))$.

c. $f(n) = O(g(n))$ implies $\lg(f(n)) = O(\lg(g(n)))$, where $\lg(g(n)) \ge 1$ and $f(n) \ge 1$ for all sufficiently large n.

d. $f(n) = O(g(n))$ implies $2^{f(n)} = O(2^{g(n)})$.

e. $f(n) = O((f(n))^2)$.

f. $f(n) = O(g(n))$ implies $g(n) = \Omega(f(n))$.

g. $f(n) = \Theta(f(n/2))$.

h. $f(n) + o(f(n)) = \Theta(f(n))$.

3-5 Variations on O and Ω
Some authors define Ω in a slightly different way than we do; let's use $\overset{\infty}{\Omega}$ (read "omega infinity") for this alternative definition. We say that $f(n) = \overset{\infty}{\Omega}(g(n))$ if there exists a positive constant c such that $f(n) \ge cg(n) \ge 0$ for infinitely many integers n.

a. Show that for any two functions $f(n)$ and $g(n)$ that are asymptotically nonnegative, either $f(n) = O(g(n))$ or $f(n) = \overset{\infty}{\Omega}(g(n))$ or both, whereas this is not true if we use Ω in place of $\overset{\infty}{\Omega}$.
b. Describe the potential advantages and disadvantages of using $\overset{\infty}{\Omega}$ instead of Ω to characterize the running times of programs.

Some authors also define O in a slightly different manner; let's use O′ for the alternative definition. We say that $f(n) = O'(g(n))$ if and only if $|f(n)| = O(g(n))$.

c. What happens to each direction of the "if and only if" in Theorem 3.1 if we substitute O′ for O but still use Ω?

Some authors define Õ (read "soft-oh") to mean O with logarithmic factors ignored:

$$\tilde{O}(g(n)) = \{f(n) : \text{there exist positive constants } c, k, \text{ and } n_0 \text{ such that } 0 \le f(n) \le cg(n) \lg^k(n) \text{ for all } n \ge n_0\}.$$

d. Define $\tilde{\Omega}$ and $\tilde{\Theta}$ in a similar manner. Prove the corresponding analog to Theorem 3.1.

3-6 Iterated functions
We can apply the iteration operator * used in the lg* function to any monotonically increasing function $f(n)$ over the reals. For a given constant $c \in \mathbb{R}$, we define the iterated function $f_c^*$ by

$$f_c^*(n) = \min\{i \ge 0 : f^{(i)}(n) \le c\} ,$$

which need not be well defined in all cases. In other words, the quantity $f_c^*(n)$ is the number of iterated applications of the function f required to reduce its argument down to c or less.

For each of the following functions $f(n)$ and constants c, give as tight a bound as possible on $f_c^*(n)$.

      f(n)           c     $f_c^*(n)$
a.    $n - 1$        0
b.    $\lg n$        1
c.    $n/2$          1
d.    $n/2$          2
e.    $\sqrt{n}$     2
f.    $\sqrt{n}$     1
g.    $n^{1/3}$      2
h.    $n/\lg n$      2
Chapter notes

Knuth traces the origin of the O-notation to a number-theory text by P. Bachmann in 1892. The o-notation was invented by E. Landau in 1909 for his discussion of the distribution of prime numbers. The Ω and Θ notations were advocated by Knuth to correct the popular, but technically sloppy, practice in the literature of using O-notation for both upper and lower bounds. Many people continue to use the O-notation where the Θ-notation is more technically precise. Further discussion of the history and development of asymptotic notations appears in works by Knuth [209, 213] and Brassard and Bratley.

Not all authors define the asymptotic notations in the same way, although the various definitions agree in most common situations. Some of the alternative definitions encompass functions that are not asymptotically nonnegative, as long as their absolute values are appropriately bounded.

Equation (3.20) is due to Robbins. Other properties of elementary mathematical functions can be found in any good mathematical reference, such as Abramowitz and Stegun or Zwillinger, or in a calculus book, such as Apostol or Thomas et al. Knuth and Graham, Knuth, and Patashnik contain a wealth of material on discrete mathematics as used in computer science.
4 Divide-and-Conquer

In Section 2.3.1, we saw how merge sort serves as an example of the divide-and-conquer paradigm. Recall that in divide-and-conquer, we solve a problem recursively, applying three steps at each level of the recursion:

Divide the problem into a number of subproblems that are smaller instances of the same problem.

Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

Combine the solutions to the subproblems into the solution for the original problem.

When the subproblems are large enough to solve recursively, we call that the recursive case. Once the subproblems become small enough that we no longer recurse, we say that the recursion "bottoms out" and that we have gotten down to the base case. Sometimes, in addition to subproblems that are smaller instances of the same problem, we have to solve subproblems that are not quite the same as the original problem. We consider solving such subproblems as part of the combine step.

In this chapter, we shall see more algorithms based on divide-and-conquer. The first one solves the maximum-subarray problem: it takes as input an array of numbers, and it determines the contiguous subarray whose values have the greatest sum. Then we shall see two divide-and-conquer algorithms for multiplying n × n matrices. One runs in $\Theta(n^3)$ time, which is no better than the straightforward method of multiplying square matrices. But the other, Strassen's algorithm, runs in $O(n^{2.81})$ time, which beats the straightforward method asymptotically.

Recurrences

Recurrences go hand in hand with the divide-and-conquer paradigm, because they give us a natural way to characterize the running times of divide-and-conquer algorithms. A recurrence is an equation or inequality that describes a function in terms
of its value on smaller inputs. For example, in Section 2.3.2 we described the worst-case running time $T(n)$ of the MERGE-SORT procedure by the recurrence

$$T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 , \\ 2T(n/2) + \Theta(n) & \text{if } n > 1 , \end{cases} \quad (4.1)$$

whose solution we claimed to be $T(n) = \Theta(n \lg n)$.

Recurrences can take many forms. For example, a recursive algorithm might divide subproblems into unequal sizes, such as a 2/3-to-1/3 split. If the divide and combine steps take linear time, such an algorithm would give rise to the recurrence $T(n) = T(2n/3) + T(n/3) + \Theta(n)$.

Subproblems are not necessarily constrained to being a constant fraction of the original problem size. For example, a recursive version of linear search (see Exercise 2.1-3) would create just one subproblem containing only one element fewer than the original problem. Each recursive call would take constant time plus the time for the recursive calls it makes, yielding the recurrence $T(n) = T(n - 1) + \Theta(1)$.

This chapter offers three methods for solving recurrences—that is, for obtaining asymptotic "Θ" or "O" bounds on the solution:

In the substitution method, we guess a bound and then use mathematical induction to prove our guess correct.

The recursion-tree method converts the recurrence into a tree whose nodes represent the costs incurred at various levels of the recursion. We use techniques for bounding summations to solve the recurrence.

The master method provides bounds for recurrences of the form

$$T(n) = aT(n/b) + f(n) , \quad (4.2)$$

where $a \ge 1$, $b > 1$, and $f(n)$ is a given function. Such recurrences arise frequently. A recurrence of the form in equation (4.2) characterizes a divide-and-conquer algorithm that creates a subproblems, each of which is 1/b the size of the original problem, and in which the divide and combine steps together take $f(n)$ time.

To use the master method, you will need to memorize three cases, but once you do that, you will easily be able to determine asymptotic bounds for many simple recurrences. We will use the master method to determine the running times of the divide-and-conquer algorithms for the maximum-subarray problem and for matrix multiplication, as well as for other algorithms based on divide-and-conquer elsewhere in this book.
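As a brief preview (the three cases are stated and proved only in Section 4.5, so the case invoked below is an assumption about that later statement), applying the master method to the merge-sort recurrence would go like this:

$$T(n) = 2T(n/2) + \Theta(n): \quad a = 2, \; b = 2, \; f(n) = \Theta(n),$$
$$n^{\log_b a} = n^{\log_2 2} = n, \qquad f(n) = \Theta\!\left(n^{\log_b a}\right),$$

so the "balanced" case of the master theorem yields $T(n) = \Theta(n^{\log_b a} \lg n) = \Theta(n \lg n)$, matching the claimed solution of recurrence (4.1).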
Occasionally, we shall see recurrences that are not equalities but rather inequalities, such as $T(n) \le 2T(n/2) + \Theta(n)$. Because such a recurrence states only an upper bound on $T(n)$, we will couch its solution using O-notation rather than Θ-notation. Similarly, if the inequality were reversed to $T(n) \ge 2T(n/2) + \Theta(n)$, then because the recurrence gives only a lower bound on $T(n)$, we would use Ω-notation in its solution.

Technicalities in recurrences

In practice, we neglect certain technical details when we state and solve recurrences. For example, if we call MERGE-SORT on n elements when n is odd, we end up with subproblems of size $\lfloor n/2 \rfloor$ and $\lceil n/2 \rceil$. Neither size is actually n/2, because n/2 is not an integer when n is odd. Technically, the recurrence describing the worst-case running time of MERGE-SORT is really

$$T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 , \\ T(\lceil n/2 \rceil) + T(\lfloor n/2 \rfloor) + \Theta(n) & \text{if } n > 1 . \end{cases} \quad (4.3)$$

Boundary conditions represent another class of details that we typically ignore. Since the running time of an algorithm on a constant-sized input is a constant, the recurrences that arise from the running times of algorithms generally have $T(n) = \Theta(1)$ for sufficiently small n. Consequently, for convenience, we shall generally omit statements of the boundary conditions of recurrences and assume that $T(n)$ is constant for small n. For example, we normally state recurrence (4.1) as

$$T(n) = 2T(n/2) + \Theta(n) , \quad (4.4)$$

without explicitly giving values for small n. The reason is that although changing the value of $T(1)$ changes the exact solution to the recurrence, the solution typically doesn't change by more than a constant factor, and so the order of growth is unchanged.

When we state and solve recurrences, we often omit floors, ceilings, and boundary conditions. We forge ahead without these details and later determine whether or not they matter. They usually do not, but you should know when they do. Experience helps, and so do some theorems stating that these details do not affect the asymptotic bounds of many recurrences characterizing divide-and-conquer algorithms (see Theorem 4.1). In this chapter, however, we shall address some of these details and illustrate the fine points of recurrence solution methods.
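To see informally that the floors and ceilings in recurrence (4.3) do not change the order of growth, the following Python sketch (my own; the choices T(1) = 1 and an exact cost of n in place of the Θ(n) term are illustrative assumptions) evaluates the recurrence and compares it with n lg n. The printed ratio settles toward a constant, which is consistent with, though of course no proof of, the Θ(n lg n) solution.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Recurrence (4.3) with T(1) = 1 and the Theta(n) term taken as n."""
    if n == 1:
        return 1
    return T((n + 1) // 2) + T(n // 2) + n   # ceil(n/2) and floor(n/2)

for n in (10, 1000, 100_000, 1_000_000):
    print(n, T(n) / (n * math.log2(n)))      # ratio approaches a constant
```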
4.1 The maximum-subarray problem

Suppose that you have been offered the opportunity to invest in the Volatile Chemical Corporation. Like the chemicals the company produces, the stock price of the Volatile Chemical Corporation is rather volatile. You are allowed to buy one unit of stock only one time and then sell it at a later date, buying and selling after the close of trading for the day. To compensate for this restriction, you are allowed to learn what the price of the stock will be in the future. Your goal is to maximize your profit. Figure 4.1 shows the price of the stock over a 17-day period. You may buy the stock at any one time, starting after day 0, when the price is $100 per share. Of course, you would want to "buy low, sell high"—buy at the lowest possible price and later on sell at the highest possible price—to maximize your profit. Unfortunately, you might not be able to buy at the lowest price and then sell at the highest price within a given period. In Figure 4.1, the lowest price occurs after day 7, which occurs after the highest price, after day 1.

[Figure 4.1 here: a chart of the price against the day, with the vertical axis running from $60 to $120.]

Day      0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16
Price  100  113  110   85  105  102   86   63   81  101   94  106  101   79   94   90   97
Change       13   -3  -25   20   -3  -16  -23   18   20   -7   12   -5  -22   15   -4    7

Figure 4.1 Information about the price of stock in the Volatile Chemical Corporation after the close of trading over a period of 17 days. The horizontal axis of the chart indicates the day, and the vertical axis shows the price. The bottom row of the table gives the change in price from the previous day.

You might think that you can always maximize profit by either buying at the lowest price or selling at the highest price. For example, in Figure 4.1, we would maximize profit by buying at the lowest price, after day 7. If this strategy always worked, then it would be easy to determine how to maximize profit: find the highest and lowest prices, and then work left from the highest price to find the lowest prior price, work right from the lowest price to find the highest later price, and take the pair with the greater difference. Figure 4.2 shows a simple counterexample,
[Figure 4.2 here: a chart of the price against the day, with the vertical axis running from $6 to $11.]

Day      0   1   2   3   4
Price   10  11   7  10   6
Change       1  -4   3  -4

Figure 4.2 An example showing that the maximum profit does not always start at the lowest price or end at the highest price. Again, the horizontal axis indicates the day, and the vertical axis shows the price. Here, the maximum profit of $3 per share would be earned by buying after day 2 and selling after day 3. The price of $7 after day 2 is not the lowest price overall, and the price of $10 after day 3 is not the highest price overall.

demonstrating that the maximum profit sometimes comes neither by buying at the lowest price nor by selling at the highest price.

A brute-force solution

We can easily devise a brute-force solution to this problem: just try every possible pair of buy and sell dates in which the buy date precedes the sell date. A period of n days has $\binom{n}{2}$ such pairs of dates. Since $\binom{n}{2}$ is $\Theta(n^2)$, and the best we can hope for is to evaluate each pair of dates in constant time, this approach would take $\Omega(n^2)$ time. Can we do better?

A transformation

In order to design an algorithm with an $o(n^2)$ running time, we will look at the input in a slightly different way. We want to find a sequence of days over which the net change from the first day to the last is maximum. Instead of looking at the daily prices, let us instead consider the daily change in price, where the change on day i is the difference between the prices after day $i - 1$ and after day i. The table in Figure 4.1 shows these daily changes in the bottom row. If we treat this row as an array A, shown in Figure 4.3, we now want to find the nonempty, contiguous subarray of A whose values have the largest sum. We call this contiguous subarray the maximum subarray. For example, in the array of Figure 4.3, the maximum subarray of A[1..16] is A[8..11], with the sum 43. Thus, you would want to buy the stock just before day 8 (that is, after day 7) and sell it after day 11, earning a profit of $43 per share.

At first glance, this transformation does not help. We still need to check $\binom{n-1}{2} = \Theta(n^2)$ subarrays for a period of n days. Exercise 4.1-2 asks you to show
that although computing the cost of one subarray might take time proportional to the length of the subarray, when computing all $\Theta(n^2)$ subarray sums, we can organize the computation so that each subarray sum takes $O(1)$ time, given the values of previously computed subarray sums, so that the brute-force solution takes $\Theta(n^2)$ time.

Index:   1   2    3   4   5    6    7   8   9  10  11  12   13  14  15  16
A:      13  -3  -25  20  -3  -16  -23  18  20  -7  12  -5  -22  15  -4   7
                                      (maximum subarray: A[8..11])

Figure 4.3 The change in stock prices as a maximum-subarray problem. Here, the subarray A[8..11], with sum 43, has the greatest sum of any contiguous subarray of array A.

So let us seek a more efficient solution to the maximum-subarray problem. When doing so, we will usually speak of "a" maximum subarray rather than "the" maximum subarray, since there could be more than one subarray that achieves the maximum sum.

The maximum-subarray problem is interesting only when the array contains some negative numbers. If all the array entries were nonnegative, then the maximum-subarray problem would present no challenge, since the entire array would give the greatest sum.

A solution using divide-and-conquer

Let's think about how we might solve the maximum-subarray problem using the divide-and-conquer technique. Suppose we want to find a maximum subarray of the subarray A[low..high]. Divide-and-conquer suggests that we divide the subarray into two subarrays of as equal size as possible. That is, we find the midpoint, say mid, of the subarray, and consider the subarrays A[low..mid] and A[mid+1..high]. As Figure 4.4(a) shows, any contiguous subarray A[i..j] of A[low..high] must lie in exactly one of the following places:

entirely in the subarray A[low..mid], so that low ≤ i ≤ j ≤ mid,

entirely in the subarray A[mid+1..high], so that mid < i ≤ j ≤ high, or

crossing the midpoint, so that low ≤ i ≤ mid < j ≤ high.

Therefore, a maximum subarray of A[low..high] must lie in exactly one of these places. In fact, a maximum subarray of A[low..high] must have the greatest sum over all subarrays entirely in A[low..mid], entirely in A[mid+1..high], or crossing the midpoint. We can find maximum subarrays of A[low..mid] and A[mid+1..high] recursively, because these two subproblems are smaller instances of the problem of finding a maximum subarray. Thus, all that is left to do is find a
maximum subarray that crosses the midpoint, and take a subarray with the largest sum of the three.

[Figure 4.4 here: (a) the three possible locations of a subarray of A[low..high]—entirely in A[low..mid], entirely in A[mid+1..high], or crossing the midpoint; (b) a crossing subarray split into A[i..mid] and A[mid+1..j].]

Figure 4.4 (a) Possible locations of subarrays of A[low..high]: entirely in A[low..mid], entirely in A[mid+1..high], or crossing the midpoint mid. (b) Any subarray of A[low..high] crossing the midpoint comprises two subarrays A[i..mid] and A[mid+1..j], where low ≤ i ≤ mid and mid < j ≤ high.

We can easily find a maximum subarray crossing the midpoint in time linear in the size of the subarray A[low..high]. This problem is not a smaller instance of our original problem, because it has the added restriction that the subarray it chooses must cross the midpoint. As Figure 4.4(b) shows, any subarray crossing the midpoint is itself made of two subarrays A[i..mid] and A[mid+1..j], where low ≤ i ≤ mid and mid < j ≤ high. Therefore, we just need to find maximum subarrays of the form A[i..mid] and A[mid+1..j] and then combine them. The procedure FIND-MAX-CROSSING-SUBARRAY takes as input the array A and the indices low, mid, and high, and it returns a tuple containing the indices demarcating a maximum subarray that crosses the midpoint, along with the sum of the values in a maximum subarray.

FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
 1  left-sum = -∞
 2  sum = 0
 3  for i = mid downto low
 4      sum = sum + A[i]
 5      if sum > left-sum
 6          left-sum = sum
 7          max-left = i
 8  right-sum = -∞
 9  sum = 0
10  for j = mid + 1 to high
11      sum = sum + A[j]
12      if sum > right-sum
13          right-sum = sum
14          max-right = j
15  return (max-left, max-right, left-sum + right-sum)
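For readers who want running code, here is a direct Python transcription of the pseudocode above; it is a sketch of mine using 0-based, inclusive indices rather than the book's 1-based convention.

```python
def find_max_crossing_subarray(A, low, mid, high):
    """Return (max_left, max_right, total) for a maximum subarray of
    A[low..high] (0-based, inclusive) that must cross the midpoint."""
    left_sum = float("-inf")
    s = 0
    max_left = mid
    for i in range(mid, low - 1, -1):     # i = mid downto low
        s += A[i]
        if s > left_sum:
            left_sum = s
            max_left = i
    right_sum = float("-inf")
    s = 0
    max_right = mid + 1
    for j in range(mid + 1, high + 1):    # j = mid+1 to high
        s += A[j]
        if s > right_sum:
            right_sum = s
            max_right = j
    return (max_left, max_right, left_sum + right_sum)
```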
The FIND-MAX-CROSSING-SUBARRAY procedure works as follows. Lines 1–7 find a maximum subarray of the left half, A[low..mid]. Since this subarray must contain A[mid], the for loop of lines 3–7 starts the index i at mid and works down to low, so that every subarray it considers is of the form A[i..mid]. Lines 1–2 initialize the variables left-sum, which holds the greatest sum found so far, and sum, holding the sum of the entries in A[i..mid]. Whenever we find, in line 5, a subarray A[i..mid] with a sum of values greater than left-sum, we update left-sum to this subarray's sum in line 6, and in line 7 we update the variable max-left to record this index i. Lines 8–14 work analogously for the right half, A[mid+1..high]. Here, the for loop of lines 10–14 starts the index j at mid+1 and works up to high, so that every subarray it considers is of the form A[mid+1..j]. Finally, line 15 returns the indices max-left and max-right that demarcate a maximum subarray crossing the midpoint, along with the sum left-sum + right-sum of the values in the subarray A[max-left..max-right].

If the subarray A[low..high] contains n entries (so that n = high − low + 1), we claim that the call FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high) takes $\Theta(n)$ time. Since each iteration of each of the two for loops takes $\Theta(1)$ time, we just need to count up how many iterations there are altogether. The for loop of lines 3–7 makes mid − low + 1 iterations, and the for loop of lines 10–14 makes high − mid iterations, and so the total number of iterations is

(mid − low + 1) + (high − mid) = high − low + 1 = n.

With a linear-time FIND-MAX-CROSSING-SUBARRAY procedure in hand, we can write pseudocode for a divide-and-conquer algorithm to solve the maximum-subarray problem:

FIND-MAXIMUM-SUBARRAY(A, low, high)
 1  if high == low
 2      return (low, high, A[low])          // base case: only one element
 3  else mid = ⌊(low + high)/2⌋
 4      (left-low, left-high, left-sum) =
            FIND-MAXIMUM-SUBARRAY(A, low, mid)
 5      (right-low, right-high, right-sum) =
            FIND-MAXIMUM-SUBARRAY(A, mid + 1, high)
 6      (cross-low, cross-high, cross-sum) =
            FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
 7      if left-sum ≥ right-sum and left-sum ≥ cross-sum
 8          return (left-low, left-high, left-sum)
 9      elseif right-sum ≥ left-sum and right-sum ≥ cross-sum
10          return (right-low, right-high, right-sum)
11      else return (cross-low, cross-high, cross-sum)
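A matching Python transcription of FIND-MAXIMUM-SUBARRAY, again a sketch of mine with 0-based, inclusive indices, reusing find_max_crossing_subarray from above; choosing the answer with max mirrors the pseudocode's preference for the left subarray on ties.

```python
def find_maximum_subarray(A, low, high):
    """Return (start, end, total) for a maximum subarray of A[low..high]."""
    if high == low:
        return (low, high, A[low])        # base case: only one element
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    # Return whichever of the three candidates has the largest sum.
    return max(left, right, cross, key=lambda t: t[2])

# The change array from Figure 4.3; 0-based indices 7..10 correspond to
# the book's A[8..11], with sum 43.
A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(find_maximum_subarray(A, 0, len(A) - 1))   # prints (7, 10, 43)
```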
The initial call FIND-MAXIMUM-SUBARRAY(A, 1, A.length) will find a maximum subarray of A[1..n].

Similar to FIND-MAX-CROSSING-SUBARRAY, the recursive procedure FIND-MAXIMUM-SUBARRAY returns a tuple containing the indices that demarcate a maximum subarray, along with the sum of the values in a maximum subarray. Line 1 tests for the base case, where the subarray has just one element. A subarray with just one element has only one subarray—itself—and so line 2 returns a tuple with the starting and ending indices of just the one element, along with its value. Lines 3–11 handle the recursive case. Line 3 does the divide part, computing the index mid of the midpoint. Let's refer to the subarray A[low..mid] as the left subarray and to A[mid+1..high] as the right subarray. Because we know that the subarray A[low..high] contains at least two elements, each of the left and right subarrays must have at least one element. Lines 4 and 5 conquer by recursively finding maximum subarrays within the left and right subarrays, respectively. Lines 6–11 form the combine part. Line 6 finds a maximum subarray that crosses the midpoint. (Recall that because line 6 solves a subproblem that is not a smaller instance of the original problem, we consider it to be in the combine part.) Line 7 tests whether the left subarray contains a subarray with the maximum sum, and line 8 returns that maximum subarray. Otherwise, line 9 tests whether the right subarray contains a subarray with the maximum sum, and line 10 returns that maximum subarray. If neither the left nor right subarrays contain a subarray achieving the maximum sum, then a maximum subarray must cross the midpoint, and line 11 returns it.

Analyzing the divide-and-conquer algorithm

Next we set up a recurrence that describes the running time of the recursive FIND-MAXIMUM-SUBARRAY procedure. As we did when we analyzed merge sort in Section 2.3.2, we make the simplifying assumption that the original problem size is a power of 2, so that all subproblem sizes are integers. We denote by $T(n)$ the running time of FIND-MAXIMUM-SUBARRAY on a subarray of n elements. For starters, line 1 takes constant time. The base case, when $n = 1$, is easy: line 2 takes constant time, and so

$$T(1) = \Theta(1) . \quad (4.5)$$

The recursive case occurs when $n > 1$. Lines 1 and 3 take constant time. Each of the subproblems solved in lines 4 and 5 is on a subarray of n/2 elements (our assumption that the original problem size is a power of 2 ensures that n/2 is an integer), and so we spend $T(n/2)$ time solving each of them. Because we have to solve two subproblems—for the left subarray and for the right subarray—the contribution to the running time from lines 4 and 5 comes to $2T(n/2)$. As we have
already seen, the call to FIND-MAX-CROSSING-SUBARRAY in line 6 takes $\Theta(n)$ time. Lines 7–11 take only $\Theta(1)$ time. For the recursive case, therefore, we have

$$T(n) = \Theta(1) + 2T(n/2) + \Theta(n) + \Theta(1)$$
$$= 2T(n/2) + \Theta(n) . \quad (4.6)$$

Combining equations (4.5) and (4.6) gives us a recurrence for the running time $T(n)$ of FIND-MAXIMUM-SUBARRAY:

$$T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 , \\ 2T(n/2) + \Theta(n) & \text{if } n > 1 . \end{cases} \quad (4.7)$$

This recurrence is the same as recurrence (4.1) for merge sort. As we shall see from the master method in Section 4.5, this recurrence has the solution $T(n) = \Theta(n \lg n)$. You might also revisit the recursion tree in Figure 2.5 to understand why the solution should be $T(n) = \Theta(n \lg n)$.

Thus, we see that the divide-and-conquer method yields an algorithm that is asymptotically faster than the brute-force method. With merge sort and now the maximum-subarray problem, we begin to get an idea of how powerful the divide-and-conquer method can be. Sometimes it will yield the asymptotically fastest algorithm for a problem, and other times we can do even better. As Exercise 4.1-5 shows, there is in fact a linear-time algorithm for the maximum-subarray problem, and it does not use divide-and-conquer.

Exercises

4.1-1
What does FIND-MAXIMUM-SUBARRAY return when all elements of A are negative?

4.1-2
Write pseudocode for the brute-force method of solving the maximum-subarray problem. Your procedure should run in $\Theta(n^2)$ time.

4.1-3
Implement both the brute-force and recursive algorithms for the maximum-subarray problem on your own computer. What problem size $n_0$ gives the crossover point at which the recursive algorithm beats the brute-force algorithm? Then, change the base case of the recursive algorithm to use the brute-force algorithm whenever the problem size is less than $n_0$. Does that change the crossover point?

4.1-4
Suppose we change the definition of the maximum-subarray problem to allow the result to be an empty subarray, where the sum of the values of an empty subar-
ray is 0. How would you change any of the algorithms that do not allow empty subarrays to permit an empty subarray to be the result?

4.1-5
Use the following ideas to develop a nonrecursive, linear-time algorithm for the maximum-subarray problem. Start at the left end of the array, and progress toward the right, keeping track of the maximum subarray seen so far. Knowing a maximum subarray of A[1..j], extend the answer to find a maximum subarray ending at index j+1 by using the following observation: a maximum subarray of A[1..j+1] is either a maximum subarray of A[1..j] or a subarray A[i..j+1], for some 1 ≤ i ≤ j+1. Determine a maximum subarray of the form A[i..j+1] in constant time based on knowing a maximum subarray ending at index j.

4.2 Strassen's algorithm for matrix multiplication

If you have seen matrices before, then you probably know how to multiply them. (Otherwise, you should read Section D.1 in Appendix D.) If $A = (a_{ij})$ and $B = (b_{ij})$ are square n × n matrices, then in the product $C = A \cdot B$, we define the entry $c_{ij}$, for $i, j = 1, 2, \ldots, n$, by

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} . \quad (4.8)$$

We must compute $n^2$ matrix entries, and each is the sum of n values. The following procedure takes n × n matrices A and B and multiplies them, returning their n × n product C. We assume that each matrix has an attribute rows, giving the number of rows in the matrix.

SQUARE-MATRIX-MULTIPLY(A, B)
1  n = A.rows
2  let C be a new n × n matrix
3  for i = 1 to n
4      for j = 1 to n
5          c_ij = 0
6          for k = 1 to n
7              c_ij = c_ij + a_ik · b_kj
8  return C

The SQUARE-MATRIX-MULTIPLY procedure works as follows. The for loop of lines 3–7 computes the entries of each row i, and within a given row i, the
You might at first think that any matrix multiplication algorithm must take $\Omega(n^3)$ time, since the natural definition of matrix multiplication requires that many multiplications. You would be incorrect, however: we have a way to multiply matrices in $o(n^3)$ time. In this section, we shall see Strassen's remarkable recursive algorithm for multiplying $n \times n$ matrices. It runs in $\Theta(n^{\lg 7})$ time, which we shall show in Section 4.5. Since $\lg 7$ lies between $2.80$ and $2.81$, Strassen's algorithm runs in $O(n^{2.81})$ time, which is asymptotically better than the simple SQUARE-MATRIX-MULTIPLY procedure.

A simple divide-and-conquer algorithm

To keep things simple, when we use a divide-and-conquer algorithm to compute the matrix product $C = A \cdot B$, we assume that $n$ is an exact power of 2 in each of the $n \times n$ matrices. We make this assumption because in each divide step, we will divide $n \times n$ matrices into four $n/2 \times n/2$ matrices, and by assuming that $n$ is an exact power of 2, we are guaranteed that as long as $n \ge 2$, the dimension $n/2$ is an integer.

Suppose that we partition each of $A$, $B$, and $C$ into four $n/2 \times n/2$ matrices

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \quad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \quad C = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}, \tag{4.9}$$

so that we rewrite the equation $C = A \cdot B$ as

$$\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \cdot \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}. \tag{4.10}$$

Equation (4.10) corresponds to the four equations

$$\begin{aligned} C_{11} &= A_{11} \cdot B_{11} + A_{12} \cdot B_{21} \,, & (4.11) \\ C_{12} &= A_{11} \cdot B_{12} + A_{12} \cdot B_{22} \,, & (4.12) \\ C_{21} &= A_{21} \cdot B_{11} + A_{22} \cdot B_{21} \,, & (4.13) \\ C_{22} &= A_{21} \cdot B_{12} + A_{22} \cdot B_{22} \,. & (4.14) \end{aligned}$$

Each of these four equations specifies two multiplications of $n/2 \times n/2$ matrices and the addition of their $n/2 \times n/2$ products. We can use these equations to create a straightforward, recursive, divide-and-conquer algorithm:
SQUARE-MATRIX-MULTIPLY-RECURSIVE(A, B)
1   n = A.rows
2   let C be a new n × n matrix
3   if n == 1
4       c_11 = a_11 · b_11
5   else partition A, B, and C as in equations (4.9)
6       C_11 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_11, B_11)
               + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_12, B_21)
7       C_12 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_11, B_12)
               + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_12, B_22)
8       C_21 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_21, B_11)
               + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_22, B_21)
9       C_22 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_21, B_12)
               + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A_22, B_22)
10  return C

This pseudocode glosses over one subtle but important implementation detail. How do we partition the matrices in line 5? If we were to create 12 new $n/2 \times n/2$ matrices, we would spend $\Theta(n^2)$ time copying entries. In fact, we can partition the matrices without copying entries. The trick is to use index calculations. We identify a submatrix by a range of row indices and a range of column indices of the original matrix. We end up representing a submatrix a little differently from how we represent the original matrix, which is the subtlety we are glossing over. The advantage is that, since we can specify submatrices by index calculations, executing line 5 takes only $\Theta(1)$ time (although we shall see that it makes no difference asymptotically to the overall running time whether we copy or partition in place).

Now, we derive a recurrence to characterize the running time of SQUARE-MATRIX-MULTIPLY-RECURSIVE. Let $T(n)$ be the time to multiply two $n \times n$ matrices using this procedure. In the base case, when $n = 1$, we perform just the one scalar multiplication in line 4, and so

$$T(1) = \Theta(1) \,. \tag{4.15}$$

The recursive case occurs when $n > 1$. As discussed, partitioning the matrices in line 5 takes $\Theta(1)$ time, using index calculations. In lines 6–9, we recursively call SQUARE-MATRIX-MULTIPLY-RECURSIVE a total of eight times. Because each recursive call multiplies two $n/2 \times n/2$ matrices, thereby contributing $T(n/2)$ to the overall running time, the time taken by all eight recursive calls is $8T(n/2)$. We also must account for the four matrix additions in lines 6–9. Each of these matrices contains $n^2/4$ entries, and so each of the four matrix additions takes $\Theta(n^2)$ time. Since the number of matrix additions is a constant, the total time spent adding matrices in lines 6–9 is $\Theta(n^2)$. (Again, we use index calculations to place the results of the matrix additions into the correct positions of matrix $C$, with an overhead of $\Theta(1)$ time per entry.) The total time for the recursive case, therefore, is the sum of the partitioning time, the time for all the recursive calls, and the time to add the matrices resulting from the recursive calls:

$$T(n) = \Theta(1) + 8T(n/2) + \Theta(n^2) = 8T(n/2) + \Theta(n^2) \,. \tag{4.16}$$
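The following Python sketch (ours, not the book's) mirrors this procedure. For clarity it copies the quadrants rather than using index calculations; as noted above, the extra $\Theta(n^2)$ of copying per call does not change the asymptotic running time.

def smm_recursive(A, B):
    """Divide-and-conquer multiply of n x n matrices; n must be an exact power of 2."""
    n = len(A)
    if n == 1:                                   # base case: one scalar multiplication
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):                           # copy an h x h quadrant of M
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):                               # entrywise sum, Theta(n^2) work
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    C11 = add(smm_recursive(A11, B11), smm_recursive(A12, B21))   # equation (4.11)
    C12 = add(smm_recursive(A11, B12), smm_recursive(A12, B22))   # equation (4.12)
    C21 = add(smm_recursive(A21, B11), smm_recursive(A22, B21))   # equation (4.13)
    C22 = add(smm_recursive(A21, B12), smm_recursive(A22, B22))   # equation (4.14)
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]

The eight recursive calls and the four $\Theta(n^2)$ additions are exactly the terms of recurrence (4.16).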
Notice that if we implemented partitioning by copying matrices, which would cost $\Theta(n^2)$ time, the recurrence would not change, and hence the overall running time would increase by only a constant factor.

Combining equations (4.15) and (4.16) gives us the recurrence for the running time of SQUARE-MATRIX-MULTIPLY-RECURSIVE:

$$T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 \,, \\ 8T(n/2) + \Theta(n^2) & \text{if } n > 1 \,. \end{cases} \tag{4.17}$$

As we shall see from the master method in Section 4.5, recurrence (4.17) has the solution $T(n) = \Theta(n^3)$. Thus, this simple divide-and-conquer approach is no faster than the straightforward SQUARE-MATRIX-MULTIPLY procedure.

Before we continue on to examining Strassen's algorithm, let us review where the components of equation (4.16) came from. Partitioning each $n \times n$ matrix by index calculation takes $\Theta(1)$ time, but we have two matrices to partition. Although you could say that partitioning the two matrices takes $\Theta(2)$ time, the constant of 2 is subsumed by the $\Theta$-notation. Adding two matrices, each with, say, $k$ entries, takes $\Theta(k)$ time. Since the matrices we add each have $n^2/4$ entries, you could say that adding each pair takes $\Theta(n^2/4)$ time. Again, however, the $\Theta$-notation subsumes the constant factor of $1/4$, and we say that adding two matrices with $n^2/4$ entries each takes $\Theta(n^2)$ time. We have four such matrix additions, and once again, instead of saying that they take $\Theta(4n^2)$ time, we say that they take $\Theta(n^2)$ time. (Of course, you might observe that we could say that the four matrix additions take $\Theta(4n^2/4)$ time, and that $4n^2/4 = n^2$, but the point here is that $\Theta$-notation subsumes constant factors, whatever they are.) Thus, we end up with two terms of $\Theta(n^2)$, which we can combine into one.

When we account for the eight recursive calls, however, we cannot just subsume the constant factor of 8. In other words, we must say that together they take $8T(n/2)$ time, rather than just $T(n/2)$ time. You can get a feel for why by looking back at the recursion tree in Figure 2.5, for recurrence (2.1) (which is identical to recurrence (4.7)), with the recursive case $T(n) = 2T(n/2) + \Theta(n)$. The factor of 2 determined how many children each tree node had, which in turn determined how many terms contributed to the sum at each level of the tree.
If we were to ignore the factor of 8 in equation (4.16) or the factor of 2 in recurrence (4.1), the recursion tree would just be linear, rather than "bushy," and each level would contribute only one term to the sum. Bear in mind, therefore, that although asymptotic notation subsumes constant multiplicative factors, recursive notation such as $T(n/2)$ does not.

Strassen's method

The key to Strassen's method is to make the recursion tree slightly less bushy. That is, instead of performing eight recursive multiplications of $n/2 \times n/2$ matrices, it performs only seven. The cost of eliminating one matrix multiplication will be several new additions of $n/2 \times n/2$ matrices, but still only a constant number of additions. As before, the constant number of matrix additions will be subsumed by $\Theta$-notation when we set up the recurrence equation to characterize the running time.

Strassen's method is not at all obvious. (This might be the biggest understatement in this book.) It has four steps:

1. Divide the input matrices $A$ and $B$ and output matrix $C$ into $n/2 \times n/2$ submatrices, as in equation (4.9). This step takes $\Theta(1)$ time by index calculation, just as in SQUARE-MATRIX-MULTIPLY-RECURSIVE.

2. Create 10 matrices $S_1, S_2, \ldots, S_{10}$, each of which is $n/2 \times n/2$ and is the sum or difference of two matrices created in step 1. We can create all 10 matrices in $\Theta(n^2)$ time.

3. Using the submatrices created in step 1 and the 10 matrices created in step 2, recursively compute seven matrix products $P_1, P_2, \ldots, P_7$. Each matrix $P_i$ is $n/2 \times n/2$.

4. Compute the desired submatrices $C_{11}, C_{12}, C_{21}, C_{22}$ of the result matrix $C$ by adding and subtracting various combinations of the $P_i$ matrices. We can compute all four submatrices in $\Theta(n^2)$ time.

We shall see the details of steps 2–4 in a moment, but we already have enough information to set up a recurrence for the running time of Strassen's method. Let us assume that once the matrix size $n$ gets down to 1, we perform a simple scalar multiplication, just as in line 4 of SQUARE-MATRIX-MULTIPLY-RECURSIVE. When $n > 1$, steps 1, 2, and 4 take a total of $\Theta(n^2)$ time, and step 3 requires us to perform seven multiplications of $n/2 \times n/2$ matrices. Hence, we obtain the following recurrence for the running time $T(n)$ of Strassen's algorithm:

$$T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 \,, \\ 7T(n/2) + \Theta(n^2) & \text{if } n > 1 \,. \end{cases} \tag{4.18}$$
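To get a feel for what trading the eighth multiplication buys, the short sketch below (our own illustration; it counts only scalar multiplications and ignores the extra additions) compares the two recurrences:

def mults(n, calls):
    """Scalar multiplications on n x n inputs (n an exact power of 2) made by a
    scheme that performs `calls` recursive multiplications of half size."""
    return 1 if n == 1 else calls * mults(n // 2, calls)

for n in (16, 256, 1024):
    print(n, mults(n, 8), mults(n, 7))
# With 8 calls the count is n^3; with 7 it is n^(lg 7):
#   16        4096        2401
#   256   16777216     5764801
#   1024 1073741824  282475249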
We have traded off one matrix multiplication for a constant number of matrix additions. Once we understand recurrences and their solutions, we shall see that this tradeoff actually leads to a lower asymptotic running time. By the master method in Section 4.5, recurrence (4.18) has the solution $T(n) = \Theta(n^{\lg 7})$.

We now proceed to describe the details. In step 2, we create the following 10 matrices:

$$\begin{aligned}
S_1 &= B_{12} - B_{22} \,, \\
S_2 &= A_{11} + A_{12} \,, \\
S_3 &= A_{21} + A_{22} \,, \\
S_4 &= B_{21} - B_{11} \,, \\
S_5 &= A_{11} + A_{22} \,, \\
S_6 &= B_{11} + B_{22} \,, \\
S_7 &= A_{12} - A_{22} \,, \\
S_8 &= B_{21} + B_{22} \,, \\
S_9 &= A_{11} - A_{21} \,, \\
S_{10} &= B_{11} + B_{12} \,.
\end{aligned}$$

Since we must add or subtract $n/2 \times n/2$ matrices 10 times, this step does indeed take $\Theta(n^2)$ time.

In step 3, we recursively multiply $n/2 \times n/2$ matrices seven times to compute the following $n/2 \times n/2$ matrices, each of which is the sum or difference of products of $A$ and $B$ submatrices:

$$\begin{aligned}
P_1 &= A_{11} \cdot S_1 = A_{11} \cdot B_{12} - A_{11} \cdot B_{22} \,, \\
P_2 &= S_2 \cdot B_{22} = A_{11} \cdot B_{22} + A_{12} \cdot B_{22} \,, \\
P_3 &= S_3 \cdot B_{11} = A_{21} \cdot B_{11} + A_{22} \cdot B_{11} \,, \\
P_4 &= A_{22} \cdot S_4 = A_{22} \cdot B_{21} - A_{22} \cdot B_{11} \,, \\
P_5 &= S_5 \cdot S_6 = A_{11} \cdot B_{11} + A_{11} \cdot B_{22} + A_{22} \cdot B_{11} + A_{22} \cdot B_{22} \,, \\
P_6 &= S_7 \cdot S_8 = A_{12} \cdot B_{21} + A_{12} \cdot B_{22} - A_{22} \cdot B_{21} - A_{22} \cdot B_{22} \,, \\
P_7 &= S_9 \cdot S_{10} = A_{11} \cdot B_{11} + A_{11} \cdot B_{12} - A_{21} \cdot B_{11} - A_{21} \cdot B_{12} \,.
\end{aligned}$$

Note that the only multiplications we need to perform are those in the middle column of the above equations. The right-hand column just shows what these products equal in terms of the original submatrices created in step 1.

Step 4 adds and subtracts the $P_i$ matrices created in step 3 to construct the four $n/2 \times n/2$ submatrices of the product $C$. We start with

$$C_{11} = P_5 + P_4 - P_2 + P_6 \,.$$
Expanding out the right-hand side, with the expansion of each $P_i$ on its own line, we see that the terms cancel and $C_{11}$ equals

$$\begin{aligned}
&\phantom{{}-{}} A_{11} \cdot B_{11} + A_{11} \cdot B_{22} + A_{22} \cdot B_{11} + A_{22} \cdot B_{22} \\
&- A_{22} \cdot B_{11} + A_{22} \cdot B_{21} \\
&- A_{11} \cdot B_{22} - A_{12} \cdot B_{22} \\
&- A_{22} \cdot B_{22} - A_{22} \cdot B_{21} + A_{12} \cdot B_{22} + A_{12} \cdot B_{21} \\
&= A_{11} \cdot B_{11} + A_{12} \cdot B_{21} \,,
\end{aligned}$$

which corresponds to equation (4.11). Similarly, we set

$$C_{12} = P_1 + P_2 \,,$$

and so $C_{12}$ equals

$$A_{11} \cdot B_{12} - A_{11} \cdot B_{22} + A_{11} \cdot B_{22} + A_{12} \cdot B_{22} = A_{11} \cdot B_{12} + A_{12} \cdot B_{22} \,,$$

corresponding to equation (4.12). Setting

$$C_{21} = P_3 + P_4$$

makes $C_{21}$ equal

$$A_{21} \cdot B_{11} + A_{22} \cdot B_{11} + A_{22} \cdot B_{21} - A_{22} \cdot B_{11} = A_{21} \cdot B_{11} + A_{22} \cdot B_{21} \,,$$

corresponding to equation (4.13). Finally, we set

$$C_{22} = P_5 + P_1 - P_3 - P_7 \,,$$

so that $C_{22}$ equals

$$\begin{aligned}
&\phantom{{}-{}} A_{11} \cdot B_{11} + A_{11} \cdot B_{22} + A_{22} \cdot B_{11} + A_{22} \cdot B_{22} \\
&+ A_{11} \cdot B_{12} - A_{11} \cdot B_{22} \\
&- A_{21} \cdot B_{11} - A_{22} \cdot B_{11} \\
&- A_{11} \cdot B_{11} - A_{11} \cdot B_{12} + A_{21} \cdot B_{11} + A_{21} \cdot B_{12} \\
&= A_{22} \cdot B_{22} + A_{21} \cdot B_{12} \,,
\end{aligned}$$

which corresponds to equation (4.14).
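Putting steps 1–4 together gives the following Python sketch (ours, not the book's; it copies quadrants for readability instead of using index calculations, and it assumes $n$ is an exact power of 2):

def strassen(A, B):
    """Strassen's method for n x n matrices given as lists of lists."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    quad = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    add = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    sub = lambda X, Y: [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    # Step 2: the 10 sums and differences S_1 .. S_10.
    S1, S2, S3, S4, S5 = (sub(B12, B22), add(A11, A12), add(A21, A22),
                          sub(B21, B11), add(A11, A22))
    S6, S7, S8, S9, S10 = (add(B11, B22), sub(A12, A22), add(B21, B22),
                           sub(A11, A21), add(B11, B12))
    # Step 3: the 7 recursive products P_1 .. P_7.
    P1, P2, P3, P4 = (strassen(A11, S1), strassen(S2, B22),
                      strassen(S3, B11), strassen(A22, S4))
    P5, P6, P7 = strassen(S5, S6), strassen(S7, S8), strassen(S9, S10)
    # Step 4: combine, using the four formulas just derived.
    C11 = add(sub(add(P5, P4), P2), P6)      # P5 + P4 - P2 + P6
    C12 = add(P1, P2)                        # P1 + P2
    C21 = add(P3, P4)                        # P3 + P4
    C22 = sub(sub(add(P5, P1), P3), P7)      # P5 + P1 - P3 - P7
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]

On the matrices of Exercise 4.2-1, strassen([[1, 3], [7, 5]], [[6, 8], [4, 2]]) returns [[18, 14], [62, 66]], matching the ordinary product.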
Altogether, we add or subtract $n/2 \times n/2$ matrices eight times in step 4, and so this step indeed takes $\Theta(n^2)$ time.

Thus, we see that Strassen's algorithm, comprising steps 1–4, produces the correct matrix product and that recurrence (4.18) characterizes its running time. Since we shall see in Section 4.5 that this recurrence has the solution $T(n) = \Theta(n^{\lg 7})$, Strassen's method is asymptotically faster than the straightforward SQUARE-MATRIX-MULTIPLY procedure. The notes at the end of this chapter discuss some of the practical aspects of Strassen's algorithm.

Exercises

Note: Although Exercises 4.2-3, 4.2-4, and 4.2-5 are about variants on Strassen's algorithm, you should read Section 4.5 before trying to solve them.

4.2-1
Use Strassen's algorithm to compute the matrix product

$$\begin{pmatrix} 1 & 3 \\ 7 & 5 \end{pmatrix} \begin{pmatrix} 6 & 8 \\ 4 & 2 \end{pmatrix}.$$

Show your work.

4.2-2
Write pseudocode for Strassen's algorithm.

4.2-3
How would you modify Strassen's algorithm to multiply $n \times n$ matrices in which $n$ is not an exact power of 2? Show that the resulting algorithm runs in time $\Theta(n^{\lg 7})$.

4.2-4
What is the largest $k$ such that if you can multiply $3 \times 3$ matrices using $k$ multiplications (not assuming commutativity of multiplication), then you can multiply $n \times n$ matrices in time $o(n^{\lg 7})$? What would the running time of this algorithm be?

4.2-5
V. Pan has discovered a way of multiplying $68 \times 68$ matrices using 132,464 multiplications, a way of multiplying $70 \times 70$ matrices using 143,640 multiplications, and a way of multiplying $72 \times 72$ matrices using 155,424 multiplications. Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare to Strassen's algorithm?
4.2-6
How quickly can you multiply a $kn \times n$ matrix by an $n \times kn$ matrix, using Strassen's algorithm as a subroutine? Answer the same question with the order of the input matrices reversed.

4.2-7
Show how to multiply the complex numbers $a + bi$ and $c + di$ using only three multiplications of real numbers. The algorithm should take $a$, $b$, $c$, and $d$ as input and produce the real component $ac - bd$ and the imaginary component $ad + bc$ separately.

4.3 The substitution method for solving recurrences

Now that we have seen how recurrences characterize the running times of divide-and-conquer algorithms, we will learn how to solve recurrences. We start in this section with the "substitution" method.

The substitution method for solving recurrences comprises two steps:

1. Guess the form of the solution.

2. Use mathematical induction to find the constants and show that the solution works.

We substitute the guessed solution for the function when applying the inductive hypothesis to smaller values; hence the name "substitution method." This method is powerful, but we must be able to guess the form of the answer in order to apply it.

We can use the substitution method to establish either upper or lower bounds on a recurrence. As an example, let us determine an upper bound on the recurrence

$$T(n) = 2T(\lfloor n/2 \rfloor) + n \,, \tag{4.19}$$

which is similar to recurrences (4.3) and (4.4). We guess that the solution is $T(n) = O(n \lg n)$. The substitution method requires us to prove that $T(n) \le cn \lg n$ for an appropriate choice of the constant $c > 0$. We start by assuming that this bound holds for all positive $m < n$, in particular for $m = \lfloor n/2 \rfloor$, yielding $T(\lfloor n/2 \rfloor) \le c \lfloor n/2 \rfloor \lg(\lfloor n/2 \rfloor)$. Substituting into the recurrence yields

$$\begin{aligned}
T(n) &\le 2(c \lfloor n/2 \rfloor \lg(\lfloor n/2 \rfloor)) + n \\
&\le cn \lg(n/2) + n \\
&= cn \lg n - cn \lg 2 + n \\
&= cn \lg n - cn + n \\
&\le cn \lg n \,,
\end{aligned}$$

where the last step holds as long as $c \ge 1$.
Mathematical induction now requires us to show that our solution holds for the boundary conditions. Typically, we do so by showing that the boundary conditions are suitable as base cases for the inductive proof. For the recurrence (4.19), we must show that we can choose the constant $c$ large enough so that the bound $T(n) \le cn \lg n$ works for the boundary conditions as well. This requirement can sometimes lead to problems. Let us assume, for the sake of argument, that $T(1) = 1$ is the sole boundary condition of the recurrence. Then for $n = 1$, the bound $T(n) \le cn \lg n$ yields $T(1) \le c \cdot 1 \cdot \lg 1 = 0$, which is at odds with $T(1) = 1$. Consequently, the base case of our inductive proof fails to hold.

We can overcome this obstacle in proving an inductive hypothesis for a specific boundary condition with only a little more effort. In the recurrence (4.19), for example, we take advantage of asymptotic notation requiring us only to prove $T(n) \le cn \lg n$ for $n \ge n_0$, where $n_0$ is a constant that we get to choose. We keep the troublesome boundary condition $T(1) = 1$, but remove it from consideration in the inductive proof. We do so by first observing that for $n > 3$, the recurrence does not depend directly on $T(1)$. Thus, we can replace $T(1)$ by $T(2)$ and $T(3)$ as the base cases in the inductive proof, letting $n_0 = 2$. Note that we make a distinction between the base case of the recurrence ($n = 1$) and the base cases of the inductive proof ($n = 2$ and $n = 3$). With $T(1) = 1$, we derive from the recurrence that $T(2) = 4$ and $T(3) = 5$. Now we can complete the inductive proof that $T(n) \le cn \lg n$ for some constant $c \ge 1$ by choosing $c$ large enough so that $T(2) \le c \cdot 2 \lg 2$ and $T(3) \le c \cdot 3 \lg 3$. As it turns out, any choice of $c \ge 2$ suffices for the base cases of $n = 2$ and $n = 3$ to hold. For most of the recurrences we shall examine, it is straightforward to extend boundary conditions to make the inductive assumption work for small $n$, and we shall not always explicitly work out the details.
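As a quick numerical sanity check (our own, not part of the text), we can tabulate the recurrence with the boundary condition $T(1) = 1$ and confirm that the bound with $c = 2$ holds from $n = 2$ onward:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2 T(floor(n/2)) + n, with the boundary condition T(1) = 1."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# T(2) = 4 and T(3) = 5, as derived above; c = 2 covers these base cases
# and, by the inductive argument, every larger n as well.
assert all(T(n) <= 2 * n * math.log2(n) for n in range(2, 10 ** 4))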
Making a good guess

Unfortunately, there is no general way to guess the correct solutions to recurrences. Guessing a solution takes experience and, occasionally, creativity. Fortunately, though, you can use some heuristics to help you become a good guesser. You can also use recursion trees, which we shall see in Section 4.4, to generate good guesses.

If a recurrence is similar to one you have seen before, then guessing a similar solution is reasonable. As an example, consider the recurrence

$$T(n) = 2T(\lfloor n/2 \rfloor + 17) + n \,,$$

which looks difficult because of the added "17" in the argument to $T$ on the right-hand side. Intuitively, however, this additional term cannot substantially affect the solution to the recurrence. When $n$ is large, the difference between $\lfloor n/2 \rfloor$ and $\lfloor n/2 \rfloor + 17$ is not that large: both cut $n$ nearly evenly in half. Consequently, we make the guess that $T(n) = O(n \lg n)$, which you can verify as correct by using the substitution method (see Exercise 4.3-6).

Another way to make a good guess is to prove loose upper and lower bounds on the recurrence and then reduce the range of uncertainty. For example, we might start with a lower bound of $T(n) = \Omega(n)$ for the recurrence (4.19), since we have the term $n$ in the recurrence, and we can prove an initial upper bound of $T(n) = O(n^2)$. Then, we can gradually lower the upper bound and raise the lower bound until we converge on the correct, asymptotically tight solution of $T(n) = \Theta(n \lg n)$.

Subtleties

Sometimes you might correctly guess an asymptotic bound on the solution of a recurrence, but somehow the math fails to work out in the induction. The problem frequently turns out to be that the inductive assumption is not strong enough to prove the detailed bound. If you revise the guess by subtracting a lower-order term when you hit such a snag, the math often goes through. Consider the recurrence

$$T(n) = T(\lfloor n/2 \rfloor) + T(\lceil n/2 \rceil) + 1 \,.$$

We guess that the solution is $T(n) = O(n)$, and we try to show that $T(n) \le cn$ for an appropriate choice of the constant $c$. Substituting our guess in the recurrence, we obtain

$$T(n) \le c \lfloor n/2 \rfloor + c \lceil n/2 \rceil + 1 = cn + 1 \,,$$

which does not imply $T(n) \le cn$ for any choice of $c$. We might be tempted to try a larger guess, say $T(n) = O(n^2)$. Although we can make this larger guess work, our original guess of $T(n) = O(n)$ is correct. In order to show that it is correct, however, we must make a stronger inductive hypothesis.

Intuitively, our guess is nearly right: we are off only by the constant 1, a lower-order term. Nevertheless, mathematical induction does not work unless we prove the exact form of the inductive hypothesis. We overcome our difficulty by subtracting a lower-order term from our previous guess. Our new guess is $T(n) \le cn - d$, where $d \ge 0$ is a constant. We now have

$$T(n) \le (c \lfloor n/2 \rfloor - d) + (c \lceil n/2 \rceil - d) + 1 = cn - 2d + 1 \le cn - d \,,$$

as long as $d \ge 1$.
As before, we must choose the constant $c$ large enough to handle the boundary conditions.

You might find the idea of subtracting a lower-order term counterintuitive. After all, if the math does not work out, we should increase our guess, right? Not necessarily! When proving an upper bound by induction, it may actually be more difficult to prove that a weaker upper bound holds, because in order to prove the weaker bound, we must use the same weaker bound inductively in the proof. In our current example, when the recurrence has more than one recursive term, we get to subtract out the lower-order term of the proposed bound once per recursive term. In the above example, we subtracted out the constant $d$ twice, once for the $T(\lfloor n/2 \rfloor)$ term and once for the $T(\lceil n/2 \rceil)$ term. We ended up with the inequality $T(n) \le cn - 2d + 1$, and it was easy to find values of $d$ to make $cn - 2d + 1$ be less than or equal to $cn - d$.

Avoiding pitfalls

It is easy to err in the use of asymptotic notation. For example, in the recurrence (4.19) we can falsely "prove" $T(n) = O(n)$ by guessing $T(n) \le cn$ and then arguing

$$T(n) \le 2(c \lfloor n/2 \rfloor) + n \le cn + n = O(n) \,, \quad \text{wrong!!}$$

since $c$ is a constant. The error is that we have not proved the exact form of the inductive hypothesis, that is, that $T(n) \le cn$. We therefore will explicitly prove that $T(n) \le cn$ when we want to show that $T(n) = O(n)$.

Changing variables

Sometimes, a little algebraic manipulation can make an unknown recurrence similar to one you have seen before. As an example, consider the recurrence

$$T(n) = 2T(\lfloor \sqrt{n} \rfloor) + \lg n \,,$$

which looks difficult. We can simplify this recurrence, though, with a change of variables. For convenience, we shall not worry about rounding off values, such as $\sqrt{n}$, to be integers. Renaming $m = \lg n$ yields

$$T(2^m) = 2T(2^{m/2}) + m \,.$$

We can now rename $S(m) = T(2^m)$ to produce the new recurrence

$$S(m) = 2S(m/2) + m \,,$$
which is very much like recurrence (4.19). Indeed, this new recurrence has the same solution: $S(m) = O(m \lg m)$. Changing back from $S(m)$ to $T(n)$, we obtain

$$T(n) = T(2^m) = S(m) = O(m \lg m) = O(\lg n \lg \lg n) \,.$$

Exercises

4.3-1
Show that the solution of $T(n) = T(n-1) + n$ is $O(n^2)$.

4.3-2
Show that the solution of $T(n) = T(\lceil n/2 \rceil) + 1$ is $O(\lg n)$.

4.3-3
We saw that the solution of $T(n) = 2T(\lfloor n/2 \rfloor) + n$ is $O(n \lg n)$. Show that the solution of this recurrence is also $\Omega(n \lg n)$. Conclude that the solution is $\Theta(n \lg n)$.

4.3-4
Show that by making a different inductive hypothesis, we can overcome the difficulty with the boundary condition $T(1) = 1$ for recurrence (4.19) without adjusting the boundary conditions for the inductive proof.

4.3-5
Show that $\Theta(n \lg n)$ is the solution to the "exact" recurrence (4.3) for merge sort.

4.3-6
Show that the solution to $T(n) = 2T(\lfloor n/2 \rfloor + 17) + n$ is $O(n \lg n)$.

4.3-7
Using the master method in Section 4.5, you can show that the solution to the recurrence $T(n) = 4T(n/3) + n$ is $T(n) = \Theta(n^{\log_3 4})$. Show that a substitution proof with the assumption $T(n) \le cn^{\log_3 4}$ fails. Then show how to subtract off a lower-order term to make a substitution proof work.

4.3-8
Using the master method in Section 4.5, you can show that the solution to the recurrence $T(n) = 4T(n/2) + n$ is $T(n) = \Theta(n^2)$. Show that a substitution proof with the assumption $T(n) \le cn^2$ fails. Then show how to subtract off a lower-order term to make a substitution proof work.
4.3-9
Solve the recurrence $T(n) = 3T(\sqrt{n}) + \log n$ by making a change of variables. Your solution should be asymptotically tight. Do not worry about whether values are integral.

4.4 The recursion-tree method for solving recurrences

Although you can use the substitution method to provide a succinct proof that a solution to a recurrence is correct, you might have trouble coming up with a good guess. Drawing out a recursion tree, as we did in our analysis of the merge sort recurrence in Section 2.3.2, serves as a straightforward way to devise a good guess. In a recursion tree, each node represents the cost of a single subproblem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion.

A recursion tree is best used to generate a good guess, which you can then verify by the substitution method. When using a recursion tree to generate a good guess, you can often tolerate a small amount of "sloppiness," since you will be verifying your guess later on. If you are very careful when drawing out a recursion tree and summing the costs, however, you can use a recursion tree as a direct proof of a solution to a recurrence. In this section, we will use recursion trees to generate good guesses, and in Section 4.6, we will use recursion trees directly to prove the theorem that forms the basis of the master method.

For example, let us see how a recursion tree would provide a good guess for the recurrence $T(n) = 3T(\lfloor n/4 \rfloor) + \Theta(n^2)$. We start by focusing on finding an upper bound for the solution. Because we know that floors and ceilings usually do not matter when solving recurrences (here's an example of sloppiness that we can tolerate), we create a recursion tree for the recurrence $T(n) = 3T(n/4) + cn^2$, having written out the implied constant coefficient $c > 0$.

Figure 4.5 shows how we derive the recursion tree for $T(n) = 3T(n/4) + cn^2$. For convenience, we assume that $n$ is an exact power of 4 (another example of tolerable sloppiness) so that all subproblem sizes are integers. Part (a) of the figure shows $T(n)$, which we expand in part (b) into an equivalent tree representing the recurrence. The $cn^2$ term at the root represents the cost at the top level of recursion, and the three subtrees of the root represent the costs incurred by the subproblems of size $n/4$. Part (c) shows this process carried one step further by expanding each node with cost $T(n/4)$ from part (b). The cost for each of the three children of the root is $c(n/4)^2$. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence.
[Figure 4.5 (diagram omitted): Constructing a recursion tree for the recurrence $T(n) = 3T(n/4) + cn^2$. Part (a) shows $T(n)$, which progressively expands in (b)–(d) to form the recursion tree. The fully expanded tree in part (d) has height $\log_4 n$ (it has $\log_4 n + 1$ levels); the levels contribute costs $cn^2$, $\frac{3}{16}cn^2$, $\left(\frac{3}{16}\right)^2 cn^2, \ldots$, the $n^{\log_4 3}$ leaves together contribute $\Theta(n^{\log_4 3})$, and the total is $O(n^2)$.]
Because subproblem sizes decrease by a factor of 4 each time we go down one level, we eventually must reach a boundary condition. How far from the root do we reach one? The subproblem size for a node at depth $i$ is $n/4^i$. Thus, the subproblem size hits $n = 1$ when $n/4^i = 1$ or, equivalently, when $i = \log_4 n$. Thus, the tree has $\log_4 n + 1$ levels (at depths $0, 1, 2, \ldots, \log_4 n$).

Next we determine the cost at each level of the tree. Each level has three times more nodes than the level above, and so the number of nodes at depth $i$ is $3^i$. Because subproblem sizes reduce by a factor of 4 for each level we go down from the root, each node at depth $i$, for $i = 0, 1, 2, \ldots, \log_4 n - 1$, has a cost of $c(n/4^i)^2$. Multiplying, we see that the total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \log_4 n - 1$, is $3^i c(n/4^i)^2 = (3/16)^i cn^2$. The bottom level, at depth $\log_4 n$, has $3^{\log_4 n} = n^{\log_4 3}$ nodes, each contributing cost $T(1)$, for a total cost of $n^{\log_4 3}\, T(1)$, which is $\Theta(n^{\log_4 3})$, since we assume that $T(1)$ is a constant.

Now we add up the costs over all levels to determine the cost for the entire tree:

$$\begin{aligned}
T(n) &= cn^2 + \frac{3}{16} cn^2 + \left(\frac{3}{16}\right)^2 cn^2 + \cdots + \left(\frac{3}{16}\right)^{\log_4 n - 1} cn^2 + \Theta(n^{\log_4 3}) \\
&= \sum_{i=0}^{\log_4 n - 1} \left(\frac{3}{16}\right)^i cn^2 + \Theta(n^{\log_4 3}) \\
&= \frac{(3/16)^{\log_4 n} - 1}{(3/16) - 1}\, cn^2 + \Theta(n^{\log_4 3}) \qquad \text{(by equation (A.5))} \,.
\end{aligned}$$

This last formula looks somewhat messy until we realize that we can again take advantage of small amounts of sloppiness and use an infinite decreasing geometric series as an upper bound. Backing up one step and applying equation (A.6), we have

$$\begin{aligned}
T(n) &= \sum_{i=0}^{\log_4 n - 1} \left(\frac{3}{16}\right)^i cn^2 + \Theta(n^{\log_4 3}) \\
&< \sum_{i=0}^{\infty} \left(\frac{3}{16}\right)^i cn^2 + \Theta(n^{\log_4 3}) \\
&= \frac{1}{1 - (3/16)}\, cn^2 + \Theta(n^{\log_4 3}) \\
&= \frac{16}{13}\, cn^2 + \Theta(n^{\log_4 3}) \\
&= O(n^2) \,.
\end{aligned}$$

Thus, we have derived a guess of $T(n) = O(n^2)$ for our original recurrence $T(n) = 3T(\lfloor n/4 \rfloor) + \Theta(n^2)$. In this example, the coefficients of $cn^2$ form a decreasing geometric series and, by equation (A.6), the sum of these coefficients is bounded from above by the constant $16/13$. Since the root's contribution to the total cost is $cn^2$, the root contributes a constant fraction of the total cost. In other words, the cost of the root dominates the total cost of the tree.
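The geometric decay is easy to observe directly. The sketch below (ours, not part of the text) sums the per-level costs $3^i \cdot c(n/4^i)^2$ of this tree and compares the total against $cn^2$:

def tree_cost(n, c=1.0):
    """Total cost of the recursion tree for T(n) = 3T(n/4) + c n^2:
    each level contributes (number of nodes) * c * (subproblem size)^2."""
    total, size, nodes = 0.0, float(n), 1
    while size >= 1:
        total += nodes * c * size * size
        nodes *= 3              # three times more nodes on the next level
        size /= 4               # subproblem sizes shrink by a factor of 4
    return total

n = 4 ** 8
print(tree_cost(n) / n ** 2)    # about 1.2308, i.e., just under 16/13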
In fact, if $O(n^2)$ is indeed an upper bound for the recurrence (as we shall verify in a moment), then it must be a tight bound. Why? The first recursive call contributes a cost of $\Theta(n^2)$, and so $\Omega(n^2)$ must be a lower bound for the recurrence.

Now we can use the substitution method to verify that our guess was correct, that is, $T(n) = O(n^2)$ is an upper bound for the recurrence $T(n) = 3T(\lfloor n/4 \rfloor) + \Theta(n^2)$. We want to show that $T(n) \le dn^2$ for some constant $d > 0$. Using the same constant $c > 0$ as before, we have

$$\begin{aligned}
T(n) &\le 3T(\lfloor n/4 \rfloor) + cn^2 \\
&\le 3d \lfloor n/4 \rfloor^2 + cn^2 \\
&\le 3d(n/4)^2 + cn^2 \\
&= \frac{3}{16}\, dn^2 + cn^2 \\
&\le dn^2 \,,
\end{aligned}$$

where the last step holds as long as $d \ge (16/13)c$.

In another, more intricate, example, Figure 4.6 shows the recursion tree for

$$T(n) = T(n/3) + T(2n/3) + O(n) \,.$$

[Figure 4.6 (diagram omitted): A recursion tree for the recurrence $T(n) = T(n/3) + T(2n/3) + cn$. Each full level sums to $cn$, the longest root-to-leaf path has length $\log_{3/2} n$, and the total is $O(n \lg n)$.]

(Again, we omit floor and ceiling functions for simplicity.) As before, we let $c$ represent the constant factor in the $O(n)$ term. When we add the values across the levels of the recursion tree shown in the figure, we get a value of $cn$ for every level.
The longest simple path from the root to a leaf is $n \to (2/3)n \to (2/3)^2 n \to \cdots \to 1$. Since $(2/3)^k n = 1$ when $k = \log_{3/2} n$, the height of the tree is $\log_{3/2} n$.

Intuitively, we expect the solution to the recurrence to be at most the number of levels times the cost of each level, or $O(cn \log_{3/2} n) = O(n \lg n)$. Figure 4.6 shows only the top levels of the recursion tree, however, and not every level in the tree contributes a cost of $cn$. Consider the cost of the leaves. If this recursion tree were a complete binary tree of height $\log_{3/2} n$, there would be $2^{\log_{3/2} n} = n^{\log_{3/2} 2}$ leaves. Since the cost of each leaf is a constant, the total cost of all leaves would then be $\Theta(n^{\log_{3/2} 2})$ which, since $\log_{3/2} 2$ is a constant strictly greater than 1, is $\omega(n \lg n)$. This recursion tree is not a complete binary tree, however, and so it has fewer than $n^{\log_{3/2} 2}$ leaves. Moreover, as we go down from the root, more and more internal nodes are absent. Consequently, levels toward the bottom of the recursion tree contribute less than $cn$ to the total cost. We could work out an accurate accounting of all costs, but remember that we are just trying to come up with a guess to use in the substitution method. Let us tolerate the sloppiness and attempt to show that a guess of $O(n \lg n)$ for the upper bound is correct.

Indeed, we can use the substitution method to verify that $O(n \lg n)$ is an upper bound for the solution to the recurrence. We show that $T(n) \le dn \lg n$, where $d$ is a suitable positive constant. We have

$$\begin{aligned}
T(n) &\le T(n/3) + T(2n/3) + cn \\
&\le d(n/3) \lg(n/3) + d(2n/3) \lg(2n/3) + cn \\
&= (d(n/3) \lg n - d(n/3) \lg 3) + (d(2n/3) \lg n - d(2n/3) \lg(3/2)) + cn \\
&= dn \lg n - d((n/3) \lg 3 + (2n/3) \lg(3/2)) + cn \\
&= dn \lg n - d((n/3) \lg 3 + (2n/3) \lg 3 - (2n/3) \lg 2) + cn \\
&= dn \lg n - dn(\lg 3 - 2/3) + cn \\
&\le dn \lg n \,,
\end{aligned}$$

as long as $d \ge c/(\lg 3 - 2/3)$. Thus, we did not need to perform a more accurate accounting of costs in the recursion tree.

Exercises

4.4-1
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 3T(\lfloor n/2 \rfloor) + n$. Use the substitution method to verify your answer.

4.4-2
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = T(n/2) + n^2$. Use the substitution method to verify your answer.
4.4-3
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 4T(n/2 + 2) + n$. Use the substitution method to verify your answer.

4.4-4
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 2T(n-1) + 1$. Use the substitution method to verify your answer.

4.4-5
Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = T(n-1) + T(n/2) + n$. Use the substitution method to verify your answer.

4.4-6
Argue that the solution to the recurrence $T(n) = T(n/3) + T(2n/3) + cn$, where $c$ is a constant, is $\Omega(n \lg n)$ by appealing to a recursion tree.

4.4-7
Draw the recursion tree for $T(n) = 4T(\lfloor n/2 \rfloor) + cn$, where $c$ is a constant, and provide a tight asymptotic bound on its solution. Verify your bound by the substitution method.

4.4-8
Use a recursion tree to give an asymptotically tight solution to the recurrence $T(n) = T(n - a) + T(a) + cn$, where $a \ge 1$ and $c > 0$ are constants.

4.4-9
Use a recursion tree to give an asymptotically tight solution to the recurrence $T(n) = T(\alpha n) + T((1 - \alpha)n) + cn$, where $\alpha$ is a constant in the range $0 < \alpha < 1$ and $c > 0$ is also a constant.

4.5 The master method for solving recurrences

The master method provides a "cookbook" method for solving recurrences of the form

$$T(n) = aT(n/b) + f(n) \,, \tag{4.20}$$

where $a \ge 1$ and $b > 1$ are constants and $f(n)$ is an asymptotically positive function. To use the master method, you will need to memorize three cases, but then you will be able to solve many recurrences quite easily, often without pencil and paper.
The recurrence (4.20) describes the running time of an algorithm that divides a problem of size $n$ into $a$ subproblems, each of size $n/b$, where $a$ and $b$ are positive constants. The $a$ subproblems are solved recursively, each in time $T(n/b)$. The function $f(n)$ encompasses the cost of dividing the problem and combining the results of the subproblems. For example, the recurrence arising from Strassen's algorithm has $a = 7$, $b = 2$, and $f(n) = \Theta(n^2)$.

As a matter of technical correctness, the recurrence is not actually well defined, because $n/b$ might not be an integer. Replacing each of the $a$ terms $T(n/b)$ with either $T(\lfloor n/b \rfloor)$ or $T(\lceil n/b \rceil)$ will not affect the asymptotic behavior of the recurrence, however. (We will prove this assertion in the next section.) We normally find it convenient, therefore, to omit the floor and ceiling functions when writing divide-and-conquer recurrences of this form.

The master theorem

The master method depends on the following theorem.

Theorem 4.1 (Master theorem)
Let $a \ge 1$ and $b > 1$ be constants, let $f(n)$ be a function, and let $T(n)$ be defined on the nonnegative integers by the recurrence

$$T(n) = aT(n/b) + f(n) \,,$$

where we interpret $n/b$ to mean either $\lfloor n/b \rfloor$ or $\lceil n/b \rceil$. Then $T(n)$ has the following asymptotic bounds:

1. If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.

2. If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a} \lg n)$.

3. If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $af(n/b) \le cf(n)$ for some constant $c < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$.

Before applying the master theorem to some examples, let's spend a moment trying to understand what it says. In each of the three cases, we compare the function $f(n)$ with the function $n^{\log_b a}$. Intuitively, the larger of the two functions determines the solution to the recurrence. If, as in case 1, the function $n^{\log_b a}$ is the larger, then the solution is $T(n) = \Theta(n^{\log_b a})$. If, as in case 3, the function $f(n)$ is the larger, then the solution is $T(n) = \Theta(f(n))$. If, as in case 2, the two functions are the same size, we multiply by a logarithmic factor, and the solution is $T(n) = \Theta(n^{\log_b a} \lg n) = \Theta(f(n) \lg n)$.

Beyond this intuition, you need to be aware of some technicalities. In the first case, not only must $f(n)$ be smaller than $n^{\log_b a}$, it must be polynomially smaller.
That is, $f(n)$ must be asymptotically smaller than $n^{\log_b a}$ by a factor of $n^{\epsilon}$ for some constant $\epsilon > 0$. In the third case, not only must $f(n)$ be larger than $n^{\log_b a}$, it also must be polynomially larger and in addition satisfy the "regularity" condition that $af(n/b) \le cf(n)$. This condition is satisfied by most of the polynomially bounded functions that we shall encounter.

Note that the three cases do not cover all the possibilities for $f(n)$. There is a gap between cases 1 and 2 when $f(n)$ is smaller than $n^{\log_b a}$ but not polynomially smaller. Similarly, there is a gap between cases 2 and 3 when $f(n)$ is larger than $n^{\log_b a}$ but not polynomially larger. If the function $f(n)$ falls into one of these gaps, or if the regularity condition in case 3 fails to hold, you cannot use the master method to solve the recurrence.

Using the master method

To use the master method, we simply determine which case (if any) of the master theorem applies and write down the answer. As a first example, consider

$$T(n) = 9T(n/3) + n \,.$$

For this recurrence, we have $a = 9$, $b = 3$, $f(n) = n$, and thus we have that $n^{\log_b a} = n^{\log_3 9} = \Theta(n^2)$. Since $f(n) = O(n^{\log_3 9 - \epsilon})$, where $\epsilon = 1$, we can apply case 1 of the master theorem and conclude that the solution is $T(n) = \Theta(n^2)$.

Now consider

$$T(n) = T(2n/3) + 1 \,,$$

in which $a = 1$, $b = 3/2$, $f(n) = 1$, and $n^{\log_b a} = n^{\log_{3/2} 1} = n^0 = 1$. Case 2 applies, since $f(n) = \Theta(n^{\log_b a}) = \Theta(1)$, and thus the solution to the recurrence is $T(n) = \Theta(\lg n)$.

For the recurrence

$$T(n) = 3T(n/4) + n \lg n \,,$$

we have $a = 3$, $b = 4$, $f(n) = n \lg n$, and $n^{\log_b a} = n^{\log_4 3} = O(n^{0.793})$. Since $f(n) = \Omega(n^{\log_4 3 + \epsilon})$, where $\epsilon \approx 0.2$, case 3 applies if we can show that the regularity condition holds for $f(n)$. For sufficiently large $n$, we have that $af(n/b) = 3(n/4) \lg(n/4) \le (3/4)n \lg n = cf(n)$ for $c = 3/4$. Consequently, by case 3, the solution to the recurrence is $T(n) = \Theta(n \lg n)$.

The master method does not apply to the recurrence

$$T(n) = 2T(n/2) + n \lg n \,,$$

even though it appears to have the proper form: $a = 2$, $b = 2$, $f(n) = n \lg n$, and $n^{\log_b a} = n$. You might mistakenly think that case 3 should apply, since $f(n) = n \lg n$ is asymptotically larger than $n^{\log_b a} = n$. The problem is that it is not polynomially larger. The ratio $f(n)/n^{\log_b a} = (n \lg n)/n = \lg n$ is asymptotically less than $n^{\epsilon}$ for any positive constant $\epsilon$. Consequently, the recurrence falls into the gap between case 2 and case 3. (See Exercise 4.6-2 for a solution.)
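For driving functions of the simple polynomial form $f(n) = \Theta(n^k)$, the case analysis is mechanical enough to script. The sketch below is our own; such an $f$ always satisfies the case-3 regularity condition, since $af(n/b) = (a/b^k)f(n)$ with $a/b^k < 1$ whenever $k > \log_b a$:

import math

def master(a, b, k):
    """Asymptotic solution of T(n) = a T(n/b) + Theta(n^k) by the master theorem."""
    e = math.log(a, b)                     # the watershed exponent log_b a
    if k < e:
        return "Theta(n^%.3f)" % e         # case 1: the leaves dominate
    if k == e:                             # (float equality: crude, fine for a sketch)
        return "Theta(n^%.3f lg n)" % e    # case 2: every level contributes equally
    return "Theta(n^%g)" % k               # case 3: the root dominates

print(master(9, 3, 1))    # Theta(n^2.000)
print(master(8, 2, 2))    # recurrence (4.17): Theta(n^3.000)
print(master(7, 2, 2))    # recurrence (4.18): Theta(n^2.807)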
Let's use the master method to solve the recurrences we saw in Sections 4.1 and 4.2. Recurrence (4.7),

$$T(n) = 2T(n/2) + \Theta(n) \,,$$

characterizes the running times of the divide-and-conquer algorithm for both the maximum-subarray problem and merge sort. (As is our practice, we omit stating the base case in the recurrence.) Here, we have $a = 2$, $b = 2$, $f(n) = \Theta(n)$, and thus we have that $n^{\log_b a} = n^{\log_2 2} = n$. Case 2 applies, since $f(n) = \Theta(n)$, and so we have the solution $T(n) = \Theta(n \lg n)$.

Recurrence (4.17),

$$T(n) = 8T(n/2) + \Theta(n^2) \,,$$

describes the running time of the first divide-and-conquer algorithm that we saw for matrix multiplication. Now we have $a = 8$, $b = 2$, and $f(n) = \Theta(n^2)$, and so $n^{\log_b a} = n^{\log_2 8} = n^3$. Since $n^3$ is polynomially larger than $f(n)$ (that is, $f(n) = O(n^{3 - \epsilon})$ for $\epsilon = 1$), case 1 applies, and $T(n) = \Theta(n^3)$.

Finally, consider recurrence (4.18),

$$T(n) = 7T(n/2) + \Theta(n^2) \,,$$

which describes the running time of Strassen's algorithm. Here, we have $a = 7$, $b = 2$, $f(n) = \Theta(n^2)$, and thus $n^{\log_b a} = n^{\log_2 7}$. Rewriting $\log_2 7$ as $\lg 7$ and recalling that $2.80 < \lg 7 < 2.81$, we see that $f(n) = O(n^{\lg 7 - \epsilon})$ for $\epsilon = 0.8$. Again, case 1 applies, and we have the solution $T(n) = \Theta(n^{\lg 7})$.

Exercises

4.5-1
Use the master method to give tight asymptotic bounds for the following recurrences.

a. $T(n) = 2T(n/4) + 1$.

b. $T(n) = 2T(n/4) + \sqrt{n}$.

c. $T(n) = 2T(n/4) + n$.

d. $T(n) = 2T(n/4) + n^2$.
4.5-2
Professor Caesar wishes to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size $n/4 \times n/4$, and the divide and combine steps together will take $\Theta(n^2)$ time. He needs to determine how many subproblems his algorithm has to create in order to beat Strassen's algorithm. If his algorithm creates $a$ subproblems, then the recurrence for the running time $T(n)$ becomes $T(n) = aT(n/4) + \Theta(n^2)$. What is the largest integer value of $a$ for which Professor Caesar's algorithm would be asymptotically faster than Strassen's algorithm?

4.5-3
Use the master method to show that the solution to the binary-search recurrence $T(n) = T(n/2) + \Theta(1)$ is $T(n) = \Theta(\lg n)$. (See Exercise 2.3-5 for a description of binary search.)

4.5-4
Can the master method be applied to the recurrence $T(n) = 4T(n/2) + n^2 \lg n$? Why or why not? Give an asymptotic upper bound for this recurrence.

4.5-5 ⋆
Consider the regularity condition $af(n/b) \le cf(n)$ for some constant $c < 1$, which is part of case 3 of the master theorem. Give an example of constants $a \ge 1$ and $b > 1$ and a function $f(n)$ that satisfies all the conditions in case 3 of the master theorem except the regularity condition.

⋆ 4.6 Proof of the master theorem

This section contains a proof of the master theorem (Theorem 4.1). You do not need to understand the proof in order to apply the master theorem.

The proof appears in two parts. The first part analyzes the master recurrence (4.20), under the simplifying assumption that $T(n)$ is defined only on exact powers of $b > 1$, that is, for $n = 1, b, b^2, \ldots$. This part gives all the intuition needed to understand why the master theorem is true. The second part shows how to extend the analysis to all positive integers $n$; it applies mathematical technique to the problem of handling floors and ceilings.

In this section, we shall sometimes abuse our asymptotic notation slightly by using it to describe the behavior of functions that are defined only over exact powers of $b$. Recall that the definitions of asymptotic notations require that bounds be proved for all sufficiently large numbers, not just those that are powers of $b$.
Since we could make new asymptotic notations that apply only to the set $\{b^i : i = 0, 1, 2, \ldots\}$, instead of to the nonnegative numbers, this abuse is minor. Nevertheless, we must always be on guard when we use asymptotic notation over a limited domain lest we draw improper conclusions. For example, proving that $T(n) = O(n)$ when $n$ is an exact power of 2 does not guarantee that $T(n) = O(n)$. The function $T(n)$ could be defined as

$$T(n) = \begin{cases} n & \text{if } n = 1, 2, 4, 8, \ldots \,, \\ n^2 & \text{otherwise} \,, \end{cases}$$

in which case the best upper bound that applies to all values of $n$ is $T(n) = O(n^2)$. Because of this sort of drastic consequence, we shall never use asymptotic notation over a limited domain without making it absolutely clear from the context that we are doing so.

4.6.1 The proof for exact powers

The first part of the proof of the master theorem analyzes the recurrence (4.20)

$$T(n) = aT(n/b) + f(n) \,,$$

for the master method, under the assumption that $n$ is an exact power of $b > 1$, where $b$ need not be an integer. We break the analysis into three lemmas. The first reduces the problem of solving the master recurrence to the problem of evaluating an expression that contains a summation. The second determines bounds on this summation. The third lemma puts the first two together to prove a version of the master theorem for the case in which $n$ is an exact power of $b$.

Lemma 4.2
Let $a \ge 1$ and $b > 1$ be constants, and let $f(n)$ be a nonnegative function defined on exact powers of $b$. Define $T(n)$ on exact powers of $b$ by the recurrence

$$T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 \,, \\ aT(n/b) + f(n) & \text{if } n = b^i \,, \end{cases}$$

where $i$ is a positive integer. Then

$$T(n) = \Theta(n^{\log_b a}) + \sum_{j=0}^{\log_b n - 1} a^j f(n/b^j) \,. \tag{4.21}$$

Proof We use the recursion tree in Figure 4.7. The root of the tree has cost $f(n)$, and it has $a$ children, each with cost $f(n/b)$. (It is convenient to think of $a$ as being an integer, especially when visualizing the recursion tree, but the mathematics does not require it.)
[Figure 4.7 (diagram omitted): The recursion tree generated by $T(n) = aT(n/b) + f(n)$. The tree is a complete $a$-ary tree with $n^{\log_b a}$ leaves and height $\log_b n$. The cost of the nodes at depth $j$ is $a^j f(n/b^j)$, the leaves together cost $\Theta(n^{\log_b a})$, and the sum over all depths is given in equation (4.21).]

Each of these children has $a$ children, making $a^2$ nodes at depth 2, and each of the $a$ children has cost $f(n/b^2)$. In general, there are $a^j$ nodes at depth $j$, and each has cost $f(n/b^j)$. The cost of each leaf is $T(1) = \Theta(1)$, and each leaf is at depth $\log_b n$, since $n/b^{\log_b n} = 1$. There are $a^{\log_b n} = n^{\log_b a}$ leaves in the tree.

We can obtain equation (4.21) by summing the costs of the nodes at each depth in the tree, as shown in the figure. The cost for all internal nodes at depth $j$ is $a^j f(n/b^j)$, and so the total cost of all internal nodes is

$$\sum_{j=0}^{\log_b n - 1} a^j f(n/b^j) \,.$$

In the underlying divide-and-conquer algorithm, this sum represents the costs of dividing problems into subproblems and then recombining the subproblems.
The cost of all the leaves, which is the cost of doing all $n^{\log_b a}$ subproblems of size 1, is $\Theta(n^{\log_b a})$.

In terms of the recursion tree, the three cases of the master theorem correspond to cases in which the total cost of the tree is (1) dominated by the costs in the leaves, (2) evenly distributed among the levels of the tree, or (3) dominated by the cost of the root.

The summation in equation (4.21) describes the cost of the dividing and combining steps in the underlying divide-and-conquer algorithm. The next lemma provides asymptotic bounds on the summation's growth.

Lemma 4.3
Let $a \ge 1$ and $b > 1$ be constants, and let $f(n)$ be a nonnegative function defined on exact powers of $b$. A function $g(n)$ defined over exact powers of $b$ by

$$g(n) = \sum_{j=0}^{\log_b n - 1} a^j f(n/b^j) \tag{4.22}$$

has the following asymptotic bounds for exact powers of $b$:

1. If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $g(n) = O(n^{\log_b a})$.

2. If $f(n) = \Theta(n^{\log_b a})$, then $g(n) = \Theta(n^{\log_b a} \lg n)$.

3. If $af(n/b) \le cf(n)$ for some constant $c < 1$ and for all sufficiently large $n$, then $g(n) = \Theta(f(n))$.

Proof For case 1, we have $f(n) = O(n^{\log_b a - \epsilon})$, which implies that $f(n/b^j) = O((n/b^j)^{\log_b a - \epsilon})$. Substituting into equation (4.22) yields

$$g(n) = O\left( \sum_{j=0}^{\log_b n - 1} a^j \left(\frac{n}{b^j}\right)^{\log_b a - \epsilon} \right) \,. \tag{4.23}$$

We bound the summation within the $O$-notation by factoring out terms and simplifying, which leaves an increasing geometric series:

$$\begin{aligned}
\sum_{j=0}^{\log_b n - 1} a^j \left(\frac{n}{b^j}\right)^{\log_b a - \epsilon}
&= n^{\log_b a - \epsilon} \sum_{j=0}^{\log_b n - 1} \left(\frac{ab^{\epsilon}}{b^{\log_b a}}\right)^j \\
&= n^{\log_b a - \epsilon} \sum_{j=0}^{\log_b n - 1} (b^{\epsilon})^j \\
&= n^{\log_b a - \epsilon} \left(\frac{b^{\epsilon \log_b n} - 1}{b^{\epsilon} - 1}\right)
\end{aligned}$$
$$= n^{\log_b a - \epsilon} \left(\frac{n^{\epsilon} - 1}{b^{\epsilon} - 1}\right) \,.$$

Since $b$ and $\epsilon$ are constants, we can rewrite the last expression as $n^{\log_b a - \epsilon} \cdot O(n^{\epsilon}) = O(n^{\log_b a})$. Substituting this expression for the summation in equation (4.23) yields

$$g(n) = O(n^{\log_b a}) \,,$$

thereby proving case 1.

Because case 2 assumes that $f(n) = \Theta(n^{\log_b a})$, we have that $f(n/b^j) = \Theta((n/b^j)^{\log_b a})$. Substituting into equation (4.22) yields

$$g(n) = \Theta\left( \sum_{j=0}^{\log_b n - 1} a^j \left(\frac{n}{b^j}\right)^{\log_b a} \right) \,. \tag{4.24}$$

We bound the summation within the $\Theta$-notation as in case 1, but this time we do not obtain a geometric series. Instead, we discover that every term of the summation is the same:

$$\begin{aligned}
\sum_{j=0}^{\log_b n - 1} a^j \left(\frac{n}{b^j}\right)^{\log_b a}
&= n^{\log_b a} \sum_{j=0}^{\log_b n - 1} \left(\frac{a}{b^{\log_b a}}\right)^j \\
&= n^{\log_b a} \sum_{j=0}^{\log_b n - 1} 1 \\
&= n^{\log_b a} \log_b n \,.
\end{aligned}$$

Substituting this expression for the summation in equation (4.24) yields

$$g(n) = \Theta(n^{\log_b a} \log_b n) = \Theta(n^{\log_b a} \lg n) \,,$$

proving case 2.

We prove case 3 similarly. Since $f(n)$ appears in the definition (4.22) of $g(n)$ and all terms of $g(n)$ are nonnegative, we can conclude that $g(n) = \Omega(f(n))$ for exact powers of $b$. We assume in the statement of the lemma that $af(n/b) \le cf(n)$ for some constant $c < 1$ and all sufficiently large $n$. We rewrite this assumption as $f(n/b) \le (c/a)f(n)$ and iterate $j$ times, yielding $f(n/b^j) \le (c/a)^j f(n)$ or, equivalently, $a^j f(n/b^j) \le c^j f(n)$, where we assume that the values we iterate on are sufficiently large. Since the last, and smallest, such value is $n/b^{j-1}$, it is enough to assume that $n/b^{j-1}$ is sufficiently large.

Substituting into equation (4.22) and simplifying yields a geometric series, but unlike the series in case 1, this one has decreasing terms. We use an $O(1)$ term to capture the terms that are not covered by our assumption that $n$ is sufficiently large:
$$\begin{aligned}
g(n) &= \sum_{j=0}^{\log_b n - 1} a^j f(n/b^j) \\
&\le \sum_{j=0}^{\log_b n - 1} c^j f(n) + O(1) \\
&\le f(n) \sum_{j=0}^{\infty} c^j + O(1) \\
&= f(n) \left(\frac{1}{1 - c}\right) + O(1) \\
&= O(f(n)) \,,
\end{aligned}$$

since $c$ is a constant. Thus, we can conclude that $g(n) = \Theta(f(n))$ for exact powers of $b$. With case 3 proved, the proof of the lemma is complete.

We can now prove a version of the master theorem for the case in which $n$ is an exact power of $b$.

Lemma 4.4
Let $a \ge 1$ and $b > 1$ be constants, and let $f(n)$ be a nonnegative function defined on exact powers of $b$. Define $T(n)$ on exact powers of $b$ by the recurrence

$$T(n) = \begin{cases} \Theta(1) & \text{if } n = 1 \,, \\ aT(n/b) + f(n) & \text{if } n = b^i \,, \end{cases}$$

where $i$ is a positive integer. Then $T(n)$ has the following asymptotic bounds for exact powers of $b$:

1. If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.

2. If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a} \lg n)$.

3. If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $af(n/b) \le cf(n)$ for some constant $c < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$.

Proof We use the bounds in Lemma 4.3 to evaluate the summation (4.21) from Lemma 4.2. For case 1, we have

$$T(n) = \Theta(n^{\log_b a}) + O(n^{\log_b a}) = \Theta(n^{\log_b a}) \,,$$
and for case 2,

$$T(n) = \Theta(n^{\log_b a}) + \Theta(n^{\log_b a} \lg n) = \Theta(n^{\log_b a} \lg n) \,.$$

For case 3,

$$T(n) = \Theta(n^{\log_b a}) + \Theta(f(n)) = \Theta(f(n)) \,,$$

because $f(n) = \Omega(n^{\log_b a + \epsilon})$.

4.6.2 Floors and ceilings

To complete the proof of the master theorem, we must now extend our analysis to the situation in which floors and ceilings appear in the master recurrence, so that the recurrence is defined for all integers, not for just exact powers of $b$. Obtaining a lower bound on

$$T(n) = aT(\lceil n/b \rceil) + f(n) \tag{4.25}$$

and an upper bound on

$$T(n) = aT(\lfloor n/b \rfloor) + f(n) \tag{4.26}$$

is routine, since we can push through the bound $\lceil n/b \rceil \ge n/b$ in the first case to yield the desired result, and we can push through the bound $\lfloor n/b \rfloor \le n/b$ in the second case. We use much the same technique to lower-bound the recurrence (4.26) as to upper-bound the recurrence (4.25), and so we shall present only this latter bound.

We modify the recursion tree of Figure 4.7 to produce the recursion tree in Figure 4.8. As we go down in the recursion tree, we obtain a sequence of recursive invocations on the arguments

$$n, \; \lceil n/b \rceil, \; \lceil \lceil n/b \rceil / b \rceil, \; \lceil \lceil \lceil n/b \rceil / b \rceil / b \rceil, \; \ldots$$

Let us denote the $j$th element in the sequence by $n_j$, where

$$n_j = \begin{cases} n & \text{if } j = 0 \,, \\ \lceil n_{j-1}/b \rceil & \text{if } j > 0 \,. \end{cases} \tag{4.27}$$
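Before bounding $n_j$ analytically, it may help to watch the sequence shrink; this small sketch (ours, not part of the proof) prints it:

import math

def n_sequence(n, b):
    """The recursive arguments n_j = ceil(n_{j-1} / b) of equation (4.27),
    down to depth floor(log_b n)."""
    seq = [n]
    for _ in range(math.floor(math.log(n, b))):
        seq.append(math.ceil(seq[-1] / b))
    return seq

print(n_sequence(10 ** 6, 3))
# [1000000, 333334, 111112, 37038, 12346, 4116, 1372, 458, 153, 51, 17, 6, 2]
# By depth floor(log_b n) the argument is at most b + b/(b-1), a constant.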
[Figure 4.8 (diagram omitted): The recursion tree generated by $T(n) = aT(\lceil n/b \rceil) + f(n)$. The recursive argument $n_j$ is given by equation (4.27); depth $j$ contains $a^j$ nodes, each of cost $f(n_j)$, the height is $\lfloor \log_b n \rfloor$, and the total is $\Theta(n^{\log_b a}) + \sum_{j=0}^{\lfloor \log_b n \rfloor - 1} a^j f(n_j)$.]

Our first goal is to determine the depth $k$ such that $n_k$ is a constant. Using the inequality $\lceil x \rceil \le x + 1$, we obtain

$$\begin{aligned}
n_0 &\le n \,, \\
n_1 &\le \frac{n}{b} + 1 \,, \\
n_2 &\le \frac{n}{b^2} + \frac{1}{b} + 1 \,, \\
n_3 &\le \frac{n}{b^3} + \frac{1}{b^2} + \frac{1}{b} + 1 \,, \\
&\;\;\vdots
\end{aligned}$$

In general, we have
$$\begin{aligned}
n_j &\le \frac{n}{b^j} + \sum_{i=0}^{j-1} \frac{1}{b^i} \\
&< \frac{n}{b^j} + \sum_{i=0}^{\infty} \frac{1}{b^i} \\
&= \frac{n}{b^j} + \frac{b}{b-1} \,.
\end{aligned}$$

Letting $j = \lfloor \log_b n \rfloor$, we obtain

$$\begin{aligned}
n_{\lfloor \log_b n \rfloor} &< \frac{n}{b^{\lfloor \log_b n \rfloor}} + \frac{b}{b-1} \\
&< \frac{n}{b^{\log_b n - 1}} + \frac{b}{b-1} \\
&= \frac{n}{n/b} + \frac{b}{b-1} \\
&= b + \frac{b}{b-1} \\
&= O(1) \,,
\end{aligned}$$

and thus we see that at depth $\lfloor \log_b n \rfloor$, the problem size is at most a constant.

From Figure 4.8, we see that

$$T(n) = \Theta(n^{\log_b a}) + \sum_{j=0}^{\lfloor \log_b n \rfloor - 1} a^j f(n_j) \,, \tag{4.28}$$

which is much the same as equation (4.21), except that $n$ is an arbitrary integer and not restricted to be an exact power of $b$.

We can now evaluate the summation

$$g(n) = \sum_{j=0}^{\lfloor \log_b n \rfloor - 1} a^j f(n_j) \tag{4.29}$$

from equation (4.28) in a manner analogous to the proof of Lemma 4.3. Beginning with case 3, if $af(\lceil n/b \rceil) \le cf(n)$ for $n > b + b/(b-1)$, where $c < 1$ is a constant, then it follows that $a^j f(n_j) \le c^j f(n)$. Therefore, we can evaluate the sum in equation (4.29) just as in Lemma 4.3. For case 2, we have $f(n) = \Theta(n^{\log_b a})$. If we can show that $f(n_j) = O(n^{\log_b a}/a^j) = O((n/b^j)^{\log_b a})$, then the proof for case 2 of Lemma 4.3 will go through. Observe that $j \le \lfloor \log_b n \rfloor$ implies $b^j/n \le 1$. The bound $f(n) = O(n^{\log_b a})$ implies that there exists a constant $c > 0$ such that for all sufficiently large $n_j$,
$$\begin{aligned}
f(n_j) &\le c \left(\frac{n}{b^j} + \frac{b}{b-1}\right)^{\log_b a} \\
&= c \left(\frac{n}{b^j}\left(1 + \frac{b^j}{n} \cdot \frac{b}{b-1}\right)\right)^{\log_b a} \\
&= c \left(\frac{n^{\log_b a}}{a^j}\right) \left(1 + \frac{b^j}{n} \cdot \frac{b}{b-1}\right)^{\log_b a} \\
&\le c \left(\frac{n^{\log_b a}}{a^j}\right) \left(1 + \frac{b}{b-1}\right)^{\log_b a} \\
&= O\left(\frac{n^{\log_b a}}{a^j}\right) \,,
\end{aligned}$$

since $c(1 + b/(b-1))^{\log_b a}$ is a constant. Thus, we have proved case 2. The proof of case 1 is almost identical. The key is to prove the bound $f(n_j) = O(n^{\log_b a - \epsilon})$, which is similar to the corresponding proof of case 2, though the algebra is more intricate.

We have now proved the upper bounds in the master theorem for all integers $n$. The proof of the lower bounds is similar.

Exercises

4.6-1 ⋆
Give a simple and exact expression for $n_j$ in equation (4.27) for the case in which $b$ is a positive integer instead of an arbitrary real number.

4.6-2 ⋆
Show that if $f(n) = \Theta(n^{\log_b a} \lg^k n)$, where $k \ge 0$, then the master recurrence has solution $T(n) = \Theta(n^{\log_b a} \lg^{k+1} n)$. For simplicity, confine your analysis to exact powers of $b$.

4.6-3 ⋆
Show that case 3 of the master theorem is overstated, in the sense that the regularity condition $af(n/b) \le cf(n)$ for some constant $c < 1$ implies that there exists a constant $\epsilon > 0$ such that $f(n) = \Omega(n^{\log_b a + \epsilon})$.
Problems

4-1 Recurrence examples
Give asymptotic upper and lower bounds for $T(n)$ in each of the following recurrences. Assume that $T(n)$ is constant for $n \le 2$. Make your bounds as tight as possible, and justify your answers.

a. $T(n) = 2T(n/2) + n^4$.

b. $T(n) = T(7n/10) + n$.

c. $T(n) = 16T(n/4) + n^2$.

d. $T(n) = 7T(n/3) + n^2$.

e. $T(n) = 7T(n/2) + n^2$.

f. $T(n) = 2T(n/4) + \sqrt{n}$.

g. $T(n) = T(n - 2) + n^2$.

4-2 Parameter-passing costs
Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an $N$-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:

1. An array is passed by pointer. Time $= \Theta(1)$.

2. An array is passed by copying. Time $= \Theta(N)$, where $N$ is the size of the array.

3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time $= \Theta(q - p + 1)$ if the subarray $A[p..q]$ is passed.

a. Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let $N$ be the size of the original problem and $n$ be the size of a subproblem.

b. Redo part (a) for the MERGE-SORT algorithm from Section 2.3.1.
4-3 More recurrence examples
Give asymptotic upper and lower bounds for $T(n)$ in each of the following recurrences. Assume that $T(n)$ is constant for sufficiently small $n$. Make your bounds as tight as possible, and justify your answers.

a. $T(n) = 4T(n/3) + n \lg n$.
b. $T(n) = 3T(n/3) + n/\lg n$.
c. $T(n) = 4T(n/2) + n^2 \sqrt{n}$.
d. $T(n) = 3T(n/3 - 2) + n/2$.
e. $T(n) = 2T(n/2) + n/\lg n$.
f. $T(n) = T(n/2) + T(n/4) + T(n/8) + n$.
g. $T(n) = T(n-1) + 1/n$.
h. $T(n) = T(n-1) + \lg n$.
i. $T(n) = T(n-2) + 1/\lg n$.
j. $T(n) = \sqrt{n}\,T(\sqrt{n}) + n$.

4-4 Fibonacci numbers
This problem develops properties of the Fibonacci numbers, which are defined by recurrence (3.22). We shall use the technique of generating functions to solve the Fibonacci recurrence. Define the generating function (or formal power series) $F$ as

$F(z) = \sum_{i=0}^{\infty} F_i z^i = 0 + z + z^2 + 2z^3 + 3z^4 + 5z^5 + 8z^6 + 13z^7 + 21z^8 + \cdots ,$

where $F_i$ is the $i$th Fibonacci number.

a. Show that $F(z) = z + zF(z) + z^2 F(z)$.
b. Show that

$F(z) = \frac{z}{1 - z - z^2} = \frac{z}{(1 - \phi z)(1 - \hat{\phi} z)} = \frac{1}{\sqrt{5}} \left( \frac{1}{1 - \phi z} - \frac{1}{1 - \hat{\phi} z} \right) ,$

where

$\phi = \frac{1 + \sqrt{5}}{2} = 1.61803\ldots$

and

$\hat{\phi} = \frac{1 - \sqrt{5}}{2} = -0.61803\ldots .$

c. Show that

$F(z) = \sum_{i=0}^{\infty} \frac{1}{\sqrt{5}} (\phi^i - \hat{\phi}^i) z^i .$

d. Use part (c) to prove that $F_i = \phi^i / \sqrt{5}$ for $i > 0$, rounded to the nearest integer. (Hint: Observe that $|\hat{\phi}| < 1$.)

4-5 Chip testing
Professor Diogenes has n supposedly identical integrated-circuit chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether the other chip is good or bad, but the professor cannot trust the answer of a bad chip. Thus, the four possible outcomes of a test are as follows:

Chip A says    Chip B says    Conclusion
B is good      A is good      both are good, or both are bad
B is good      A is bad       at least one is bad
B is bad       A is good      at least one is bad
B is bad       A is bad       at least one is bad

a. Show that if more than $n/2$ chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.
b. Consider the problem of finding a single good chip from among n chips, assuming that more than $n/2$ of the chips are good. Show that $\lfloor n/2 \rfloor$ pairwise tests are sufficient to reduce the problem to one of nearly half the size.

c. Show that the good chips can be identified with $\Theta(n)$ pairwise tests, assuming that more than $n/2$ of the chips are good. Give and solve the recurrence that describes the number of tests.

4-6 Monge arrays
An $m \times n$ array $A$ of real numbers is a Monge array if for all $i$, $j$, $k$, and $l$ such that $1 \le i < k \le m$ and $1 \le j < l \le n$, we have

$A[i, j] + A[k, l] \le A[i, l] + A[k, j] .$

In other words, whenever we pick two rows and two columns of a Monge array and consider the four elements at the intersections of the rows and the columns, the sum of the upper-left and lower-right elements is less than or equal to the sum of the lower-left and upper-right elements. For example, the following array is Monge:

10 17 13 28 23
17 22 16 29 23
24 28 22 34 24
11 13  6 17  7
45 44 32 37 23
36 33 19 21  6
75 66 51 53 34

a. Prove that an array is Monge if and only if for all $i = 1, 2, \ldots, m-1$ and $j = 1, 2, \ldots, n-1$, we have

$A[i, j] + A[i+1, j+1] \le A[i, j+1] + A[i+1, j] .$

(Hint: For the "if" part, use induction separately on rows and columns.)

b. The following array is not Monge. Change one element in order to make it Monge. (Hint: Use part (a).)

37 23 22 32
21  6  7 10
53 34 30 31
32 13  9  6
43 21 15  8
c. Let $f(i)$ be the index of the column containing the leftmost minimum element of row $i$. Prove that $f(1) \le f(2) \le \cdots \le f(m)$ for any $m \times n$ Monge array.

d. Here is a description of a divide-and-conquer algorithm that computes the leftmost minimum element in each row of an $m \times n$ Monge array $A$:

Construct a submatrix $A'$ of $A$ consisting of the even-numbered rows of $A$. Recursively determine the leftmost minimum for each row of $A'$. Then compute the leftmost minimum in the odd-numbered rows of $A$.

Explain how to compute the leftmost minimum in the odd-numbered rows of $A$ (given that the leftmost minimum of the even-numbered rows is known) in $O(m + n)$ time.

e. Write the recurrence describing the running time of the algorithm described in part (d). Show that its solution is $O(m + n \log m)$.

Chapter notes

Divide-and-conquer as a technique for designing algorithms dates back to at least 1962 in an article by Karatsuba and Ofman. It might have been used well before then, however; according to Heideman, Johnson, and Burrus, C. F. Gauss devised the first fast Fourier transform algorithm in 1805, and Gauss's formulation breaks the problem into smaller subproblems whose solutions are combined. The maximum-subarray problem in Section 4.1 is a minor variation on a problem studied by Bentley [43, Chapter 7].

Strassen's algorithm caused much excitement when it was published in 1969. Before then, few imagined the possibility of an algorithm asymptotically faster than the basic SQUARE-MATRIX-MULTIPLY procedure. The asymptotic upper bound for matrix multiplication has been improved since then. The most asymptotically efficient algorithm for multiplying $n \times n$ matrices to date, due to Coppersmith and Winograd, has a running time of $O(n^{2.376})$. The best lower bound known is just the obvious $\Omega(n^2)$ bound (obvious because we must fill in $n^2$ elements of the product matrix).

From a practical point of view, Strassen's algorithm is often not the method of choice for matrix multiplication, for four reasons:

1. The constant factor hidden in the $\Theta(n^{\lg 7})$ running time of Strassen's algorithm is larger than the constant factor in the $\Theta(n^3)$-time SQUARE-MATRIX-MULTIPLY procedure.
2. When the matrices are sparse, methods tailored for sparse matrices are faster.
3. Strassen's algorithm is not quite as numerically stable as SQUARE-MATRIX-MULTIPLY. In other words, because of the limited precision of computer arithmetic on noninteger values, larger errors accumulate in Strassen's algorithm than in SQUARE-MATRIX-MULTIPLY.
4. The submatrices formed at the levels of recursion consume space.

The latter two reasons were mitigated around 1990. Higham demonstrated that the difference in numerical stability had been overemphasized; although Strassen's algorithm is too numerically unstable for some applications, it is within acceptable limits for others. Bailey, Lee, and Simon discuss techniques for reducing the memory requirements for Strassen's algorithm.

In practice, fast matrix-multiplication implementations for dense matrices use Strassen's algorithm for matrix sizes above a "crossover point," and they switch to a simpler method once the subproblem size reduces to below the crossover point. The exact value of the crossover point is highly system dependent. Analyses that count operations but ignore effects from caches and pipelining have produced crossover points as low as $n = 8$ (by Higham) or $n = 12$ (by Huss-Lederman et al.). D'Alberto and Nicolau developed an adaptive scheme, which determines the crossover point by benchmarking when their software package is installed. They found crossover points on various systems ranging from $n = 400$ to $n = 2150$, and they could not find a crossover point on a couple of systems.

Recurrences were studied as early as 1202 by L. Fibonacci, for whom the Fibonacci numbers are named. A. De Moivre introduced the method of generating functions (see Problem 4-4) for solving recurrences. The master method is adapted from Bentley, Haken, and Saxe, which provides the extended method justified by Exercise 4.6-2. Knuth and Liu show how to solve linear recurrences using the method of generating functions. Purdom and Brown and Graham, Knuth, and Patashnik contain extended discussions of recurrence solving.

Several researchers, including Akra and Bazzi, Roura, Verma, and Yap, have given methods for solving more general divide-and-conquer recurrences than are solved by the master method. We describe the result of Akra and Bazzi here, as modified by Leighton. The Akra-Bazzi method works for recurrences of the form

$T(x) = \begin{cases} \Theta(1) & \text{if } 1 \le x \le x_0 , \\ \sum_{i=1}^{k} a_i T(b_i x) + f(x) & \text{if } x > x_0 , \end{cases}$   (4.30)

where

- $x \ge 1$ is a real number,
- $x_0$ is a constant such that $x_0 \ge 1/b_i$ and $x_0 \ge 1/(1 - b_i)$ for $i = 1, 2, \ldots, k$,
- $a_i$ is a positive constant for $i = 1, 2, \ldots, k$,
- $b_i$ is a constant in the range $0 < b_i < 1$ for $i = 1, 2, \ldots, k$,
- $k \ge 1$ is an integer constant, and
- $f(x)$ is a nonnegative function that satisfies the polynomial-growth condition: there exist positive constants $c_1$ and $c_2$ such that for all $x \ge 1$, for $i = 1, 2, \ldots, k$, and for all $u$ such that $b_i x \le u \le x$, we have $c_1 f(x) \le f(u) \le c_2 f(x)$. (If $|f'(x)|$ is upper-bounded by some polynomial in $x$, then $f(x)$ satisfies the polynomial-growth condition. For example, $f(x) = x^\alpha \lg^\beta x$ satisfies this condition for any real constants $\alpha$ and $\beta$.)

Although the master method does not apply to a recurrence such as $T(n) = T(\lfloor n/3 \rfloor) + T(\lfloor 2n/3 \rfloor) + O(n)$, the Akra-Bazzi method does. To solve the recurrence (4.30), we first find the unique real number $p$ such that $\sum_{i=1}^{k} a_i b_i^p = 1$. (Such a $p$ always exists.) The solution to the recurrence is then

$T(x) = \Theta\!\left( x^p \left( 1 + \int_1^x \frac{f(u)}{u^{p+1}} \, du \right) \right) .$

The Akra-Bazzi method can be somewhat difficult to use, but it serves in solving recurrences that model division of the problem into substantially unequally sized subproblems. The master method is simpler to use, but it applies only when subproblem sizes are equal.
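To make the Akra-Bazzi recipe concrete, here is a small sketch that finds $p$ numerically; the bisection helper is our own, and the example is the unbalanced recurrence mentioned above, for which $p = 1$ and the integral contributes a $\ln x$ factor, giving $T(n) = \Theta(n \lg n)$.

    def akra_bazzi_p(terms, lo=-10.0, hi=10.0):
        """Find p with sum(a * b**p for (a, b) in terms) = 1 by bisection.
        Each pair has a > 0 and 0 < b < 1, so the sum is decreasing in p."""
        g = lambda p: sum(a * b**p for a, b in terms) - 1.0
        for _ in range(200):
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
        return (lo + hi) / 2.0

    # T(n) = T(n/3) + T(2n/3) + n: (1/3)^p + (2/3)^p = 1 forces p = 1,
    # and the integral of u/u^2 from 1 to x is ln x, so T(n) = Theta(n lg n).
    print(round(akra_bazzi_p([(1, 1/3), (1, 2/3)]), 6))   # ~1.0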
5 Probabilistic Analysis and Randomized Algorithms

This chapter introduces probabilistic analysis and randomized algorithms. If you are unfamiliar with the basics of probability theory, you should read Appendix C, which reviews this material. We shall revisit probabilistic analysis and randomized algorithms several times throughout this book.

5.1 The hiring problem

Suppose that you need to hire a new office assistant. Your previous attempts at hiring have been unsuccessful, and you decide to use an employment agency. The employment agency sends you one candidate each day. You interview that person and then decide either to hire that person or not. You must pay the employment agency a small fee to interview an applicant. To actually hire an applicant is more costly, however, since you must fire your current office assistant and pay a substantial hiring fee to the employment agency. You are committed to having, at all times, the best possible person for the job. Therefore, you decide that, after interviewing each applicant, if that applicant is better qualified than the current office assistant, you will fire the current office assistant and hire the new applicant. You are willing to pay the resulting price of this strategy, but you wish to estimate what that price will be.

The procedure HIRE-ASSISTANT, given below, expresses this strategy for hiring in pseudocode. It assumes that the candidates for the office assistant job are numbered 1 through n. The procedure assumes that you are able to, after interviewing candidate i, determine whether candidate i is the best candidate you have seen so far. To initialize, the procedure creates a dummy candidate, numbered 0, who is less qualified than each of the other candidates.
HIRE-ASSISTANT(n)
1  best = 0        // candidate 0 is a least-qualified dummy candidate
2  for i = 1 to n
3      interview candidate i
4      if candidate i is better than candidate best
5          best = i
6          hire candidate i

The cost model for this problem differs from the model described in Chapter 2. We focus not on the running time of HIRE-ASSISTANT, but instead on the costs incurred by interviewing and hiring. On the surface, analyzing the cost of this algorithm may seem very different from analyzing the running time of, say, merge sort. The analytical techniques used, however, are identical whether we are analyzing cost or running time. In either case, we are counting the number of times certain basic operations are executed.

Interviewing has a low cost, say $c_i$, whereas hiring is expensive, costing $c_h$. Letting $m$ be the number of people hired, the total cost associated with this algorithm is $O(c_i n + c_h m)$. No matter how many people we hire, we always interview $n$ candidates and thus always incur the cost $c_i n$ associated with interviewing. We therefore concentrate on analyzing $c_h m$, the hiring cost. This quantity varies with each run of the algorithm.

This scenario serves as a model for a common computational paradigm. We often need to find the maximum or minimum value in a sequence by examining each element of the sequence and maintaining a current "winner." The hiring problem models how often we update our notion of which element is currently winning.

Worst-case analysis

In the worst case, we actually hire every candidate that we interview. This situation occurs if the candidates come in strictly increasing order of quality, in which case we hire $n$ times, for a total hiring cost of $O(c_h n)$.

Of course, the candidates do not always come in increasing order of quality. In fact, we have no idea about the order in which they arrive, nor do we have any control over this order. Therefore, it is natural to ask what we expect to happen in a typical or average case.
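A direct transcription of HIRE-ASSISTANT into runnable code may help here; the cost constants below are arbitrary illustrative values, not figures from the text, and a candidate's quality is represented by a numeric rank.

    # C_INTERVIEW and C_HIRE are arbitrary illustrative costs (c_i and c_h).
    C_INTERVIEW, C_HIRE = 1, 10

    def hire_assistant(ranks):
        """Scan candidates in the given order and hire whenever one beats
        the best seen so far; return the total cost c_i*n + c_h*m."""
        best = 0                  # rank 0 plays the dummy candidate
        hires = 0
        for rank in ranks:
            if rank > best:       # "candidate i is better than best"
                best = rank
                hires += 1
        return C_INTERVIEW * len(ranks) + C_HIRE * hires

    print(hire_assistant([1, 2, 3, 4, 5]))   # worst case: hires every time
    print(hire_assistant([5, 4, 3, 2, 1]))   # best case: hires exactly once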
Probabilistic analysis

Probabilistic analysis is the use of probability in the analysis of problems. Most commonly, we use probabilistic analysis to analyze the running time of an algorithm. Sometimes we use it to analyze other quantities, such as the hiring cost in procedure HIRE-ASSISTANT. In order to perform a probabilistic analysis, we must use knowledge of, or make assumptions about, the distribution of the inputs. Then we analyze our algorithm, computing an average-case running time, where we take the average over the distribution of the possible inputs. Thus we are, in effect, averaging the running time over all possible inputs. When reporting such a running time, we will refer to it as the average-case running time.

We must be very careful in deciding on the distribution of inputs. For some problems, we may reasonably assume something about the set of all possible inputs, and then we can use probabilistic analysis as a technique for designing an efficient algorithm and as a means for gaining insight into a problem. For other problems, we cannot describe a reasonable input distribution, and in these cases we cannot use probabilistic analysis.

For the hiring problem, we can assume that the applicants come in a random order. What does that mean for this problem? We assume that we can compare any two candidates and decide which one is better qualified; that is, there is a total order on the candidates. (See Appendix B for the definition of a total order.) Thus, we can rank each candidate with a unique number from 1 through $n$, using $rank(i)$ to denote the rank of applicant $i$, and adopt the convention that a higher rank corresponds to a better qualified applicant. The ordered list $\langle rank(1), rank(2), \ldots, rank(n) \rangle$ is a permutation of the list $\langle 1, 2, \ldots, n \rangle$. Saying that the applicants come in a random order is equivalent to saying that this list of ranks is equally likely to be any one of the $n!$ permutations of the numbers 1 through $n$. Alternatively, we say that the ranks form a uniform random permutation; that is, each of the possible $n!$ permutations appears with equal probability.

Section 5.2 contains a probabilistic analysis of the hiring problem.

Randomized algorithms

In order to use probabilistic analysis, we need to know something about the distribution of the inputs. In many cases, we know very little about the input distribution. Even if we do know something about the distribution, we may not be able to model this knowledge computationally. Yet we often can use probability and randomness as a tool for algorithm design and analysis, by making the behavior of part of the algorithm random.

In the hiring problem, it may seem as if the candidates are being presented to us in a random order, but we have no way of knowing whether or not they really are. Thus, in order to develop a randomized algorithm for the hiring problem, we must have greater control over the order in which we interview the candidates. We will, therefore, change the model slightly. We say that the employment agency has $n$ candidates, and they send us a list of the candidates in advance. On each day, we choose, randomly, which candidate to interview.
Although we know nothing about the candidates (besides their names), we have made a significant change. Instead of relying on a guess that the candidates come to us in a random order, we have instead gained control of the process and enforced a random order.

More generally, we call an algorithm randomized if its behavior is determined not only by its input but also by values produced by a random-number generator. We shall assume that we have at our disposal a random-number generator RANDOM. A call to RANDOM(a, b) returns an integer between $a$ and $b$, inclusive, with each such integer being equally likely. For example, RANDOM(0, 1) produces 0 with probability 1/2, and it produces 1 with probability 1/2. A call to RANDOM(3, 7) returns either 3, 4, 5, 6, or 7, each with probability 1/5. Each integer returned by RANDOM is independent of the integers returned on previous calls. You may imagine RANDOM as rolling a $(b - a + 1)$-sided die to obtain its output. (In practice, most programming environments offer a pseudorandom-number generator: a deterministic algorithm returning numbers that "look" statistically random.)

When analyzing the running time of a randomized algorithm, we take the expectation of the running time over the distribution of values returned by the random number generator. We distinguish these algorithms from those in which the input is random by referring to the running time of a randomized algorithm as an expected running time. In general, we discuss the average-case running time when the probability distribution is over the inputs to the algorithm, and we discuss the expected running time when the algorithm itself makes random choices.

Exercises

5.1-1
Show that the assumption that we are always able to determine which candidate is best, in line 4 of procedure HIRE-ASSISTANT, implies that we know a total order on the ranks of the candidates.

5.1-2 ?
Describe an implementation of the procedure RANDOM(a, b) that only makes calls to RANDOM(0, 1). What is the expected running time of your procedure, as a function of $a$ and $b$?

5.1-3 ?
Suppose that you want to output 0 with probability 1/2 and 1 with probability 1/2. At your disposal is a procedure BIASED-RANDOM, that outputs either 0 or 1. It outputs 1 with some probability $p$ and 0 with probability $1 - p$, where $0 < p < 1$, but you do not know what $p$ is. Give an algorithm that uses BIASED-RANDOM as a subroutine, and returns an unbiased answer, returning 0 with probability 1/2 and 1 with probability 1/2. What is the expected running time of your algorithm as a function of $p$?
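As a sketch of the RANDOM contract (one standard construction in the spirit of Exercise 5.1-2, not the book's answer), fair bits can be combined and out-of-range values rejected; fair_bit below stands in for RANDOM(0, 1).

    import random

    def fair_bit():
        """Stand-in for RANDOM(0, 1): a fair random bit."""
        return random.randint(0, 1)

    def random_ab(a, b):
        """Build RANDOM(a, b) from fair bits: draw enough bits to cover
        the range and reject values that fall outside it."""
        span = b - a + 1
        bits = max(1, (span - 1).bit_length())
        while True:
            x = 0
            for _ in range(bits):
                x = (x << 1) | fair_bit()
            if x < span:          # accept; each residue equally likely
                return a + x

    counts = {v: 0 for v in range(3, 8)}
    for _ in range(100_000):
        counts[random_ab(3, 7)] += 1
    print(counts)                 # each of 3..7 appears near 20,000 times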
5.2 Indicator random variables

In order to analyze many algorithms, including the hiring problem, we use indicator random variables. Indicator random variables provide a convenient method for converting between probabilities and expectations. Suppose we are given a sample space $S$ and an event $A$. Then the indicator random variable $I\{A\}$ associated with event $A$ is defined as

$I\{A\} = \begin{cases} 1 & \text{if } A \text{ occurs} , \\ 0 & \text{if } A \text{ does not occur} . \end{cases}$   (5.1)

As a simple example, let us determine the expected number of heads that we obtain when flipping a fair coin. Our sample space is $S = \{H, T\}$, with $\Pr\{H\} = \Pr\{T\} = 1/2$. We can then define an indicator random variable $X_H$, associated with the coin coming up heads, which is the event $H$. This variable counts the number of heads obtained in this flip, and it is 1 if the coin comes up heads and 0 otherwise. We write

$X_H = I\{H\} = \begin{cases} 1 & \text{if } H \text{ occurs} , \\ 0 & \text{if } T \text{ occurs} . \end{cases}$

The expected number of heads obtained in one flip of the coin is simply the expected value of our indicator variable $X_H$:

$E[X_H] = E[I\{H\}] = 1 \cdot \Pr\{H\} + 0 \cdot \Pr\{T\} = 1 \cdot (1/2) + 0 \cdot (1/2) = 1/2 .$

Thus the expected number of heads obtained by one flip of a fair coin is 1/2. As the following lemma shows, the expected value of an indicator random variable associated with an event $A$ is equal to the probability that $A$ occurs.

Lemma 5.1
Given a sample space $S$ and an event $A$ in the sample space $S$, let $X_A = I\{A\}$. Then $E[X_A] = \Pr\{A\}$.
Proof  By the definition of an indicator random variable from equation (5.1) and the definition of expected value, we have

$E[X_A] = E[I\{A\}] = 1 \cdot \Pr\{A\} + 0 \cdot \Pr\{\bar{A}\} = \Pr\{A\} ,$

where $\bar{A}$ denotes $S - A$, the complement of $A$.

Although indicator random variables may seem cumbersome for an application such as counting the expected number of heads on a flip of a single coin, they are useful for analyzing situations in which we perform repeated random trials. For example, indicator random variables give us a simple way to arrive at the result of equation (C.37). In this equation, we compute the number of heads in $n$ coin flips by considering separately the probability of obtaining 0 heads, 1 head, 2 heads, etc. The simpler method proposed in equation (C.38) instead uses indicator random variables implicitly. Making this argument more explicit, we let $X_i$ be the indicator random variable associated with the event in which the $i$th flip comes up heads: $X_i = I\{\text{the } i\text{th flip results in the event } H\}$. Let $X$ be the random variable denoting the total number of heads in the $n$ coin flips, so that

$X = \sum_{i=1}^{n} X_i .$

We wish to compute the expected number of heads, and so we take the expectation of both sides of the above equation to obtain

$E[X] = E\left[ \sum_{i=1}^{n} X_i \right] .$

The above equation gives the expectation of the sum of $n$ indicator random variables. By Lemma 5.1, we can easily compute the expectation of each of the random variables. By equation (C.21), linearity of expectation, it is easy to compute the expectation of the sum: it equals the sum of the expectations of the $n$ random variables. Linearity of expectation makes the use of indicator random variables a powerful analytical technique; it applies even when there is dependence among the random variables. We now can easily compute the expected number of heads:
$E[X] = E\left[ \sum_{i=1}^{n} X_i \right] = \sum_{i=1}^{n} E[X_i] = \sum_{i=1}^{n} 1/2 = n/2 .$

Thus, compared to the method used in equation (C.37), indicator random variables greatly simplify the calculation. We shall use indicator random variables throughout this book.

Analysis of the hiring problem using indicator random variables

Returning to the hiring problem, we now wish to compute the expected number of times that we hire a new office assistant. In order to use a probabilistic analysis, we assume that the candidates arrive in a random order, as discussed in the previous section. (We shall see in Section 5.3 how to remove this assumption.) Let $X$ be the random variable whose value equals the number of times we hire a new office assistant. We could then apply the definition of expected value from equation (C.20) to obtain

$E[X] = \sum_{x=1}^{n} x \Pr\{X = x\} ,$

but this calculation would be cumbersome. We shall instead use indicator random variables to greatly simplify the calculation.

To use indicator random variables, instead of computing $E[X]$ by defining one variable associated with the number of times we hire a new office assistant, we define $n$ variables related to whether or not each particular candidate is hired. In particular, we let $X_i$ be the indicator random variable associated with the event in which the $i$th candidate is hired. Thus,

$X_i = I\{\text{candidate } i \text{ is hired}\} = \begin{cases} 1 & \text{if candidate } i \text{ is hired} , \\ 0 & \text{if candidate } i \text{ is not hired} , \end{cases}$

and

$X = X_1 + X_2 + \cdots + X_n .$   (5.2)
By Lemma 5.1, we have that

$E[X_i] = \Pr\{\text{candidate } i \text{ is hired}\} ,$

and we must therefore compute the probability that lines 5–6 of HIRE-ASSISTANT are executed.

Candidate $i$ is hired, in line 6, exactly when candidate $i$ is better than each of candidates 1 through $i - 1$. Because we have assumed that the candidates arrive in a random order, the first $i$ candidates have appeared in a random order. Any one of these first $i$ candidates is equally likely to be the best-qualified so far. Candidate $i$ has a probability of $1/i$ of being better qualified than candidates 1 through $i - 1$ and thus a probability of $1/i$ of being hired. By Lemma 5.1, we conclude that

$E[X_i] = 1/i .$   (5.3)

Now we can compute $E[X]$:

$E[X] = E\left[ \sum_{i=1}^{n} X_i \right]$   (by equation (5.2))   (5.4)
$\quad = \sum_{i=1}^{n} E[X_i]$   (by linearity of expectation)
$\quad = \sum_{i=1}^{n} 1/i$   (by equation (5.3))
$\quad = \ln n + O(1)$   (by equation (A.7)).   (5.5)

Even though we interview $n$ people, we actually hire only approximately $\ln n$ of them, on average. We summarize this result in the following lemma.

Lemma 5.2
Assuming that the candidates are presented in a random order, algorithm HIRE-ASSISTANT has an average-case total hiring cost of $O(c_h \ln n)$.

Proof  The bound follows immediately from our definition of the hiring cost and equation (5.5), which shows that the expected number of hires is approximately $\ln n$.

The average-case hiring cost is a significant improvement over the worst-case hiring cost of $O(c_h n)$.
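A short Monte Carlo experiment (our own sketch, not part of the text) makes equation (5.5) tangible: averaging the hire count over random permutations tracks the harmonic number $H_n = \sum_{i=1}^n 1/i$, which is $\ln n + O(1)$.

    import math
    import random

    def hires(ranks):
        """Count how many times HIRE-ASSISTANT hires on this input order."""
        best, m = 0, 0
        for r in ranks:
            if r > best:
                best, m = r, m + 1
        return m

    n, trials = 100, 10_000
    ranks = list(range(1, n + 1))
    total = 0
    for _ in range(trials):
        random.shuffle(ranks)
        total += hires(ranks)
    print(total / trials)                         # ~5.19 for n = 100
    print(sum(1 / i for i in range(1, n + 1)))    # H_100 = 5.187...
    print(math.log(n))                            # ln 100 = 4.605...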
Exercises

5.2-1
In HIRE-ASSISTANT, assuming that the candidates are presented in a random order, what is the probability that you hire exactly one time? What is the probability that you hire exactly $n$ times?

5.2-2
In HIRE-ASSISTANT, assuming that the candidates are presented in a random order, what is the probability that you hire exactly twice?

5.2-3
Use indicator random variables to compute the expected value of the sum of $n$ dice.

5.2-4
Use indicator random variables to solve the following problem, which is known as the hat-check problem. Each of $n$ customers gives a hat to a hat-check person at a restaurant. The hat-check person gives the hats back to the customers in a random order. What is the expected number of customers who get back their own hat?

5.2-5
Let $A[1 \mathinner{..} n]$ be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i, j)$ is called an inversion of $A$. (See Problem 2-4 for more on inversions.) Suppose that the elements of $A$ form a uniform random permutation of $\langle 1, 2, \ldots, n \rangle$. Use indicator random variables to compute the expected number of inversions.

5.3 Randomized algorithms

In the previous section, we showed how knowing a distribution on the inputs can help us to analyze the average-case behavior of an algorithm. Many times, we do not have such knowledge, thus precluding an average-case analysis. As mentioned in Section 5.1, we may be able to use a randomized algorithm.

For a problem such as the hiring problem, in which it is helpful to assume that all permutations of the input are equally likely, a probabilistic analysis can guide the development of a randomized algorithm. Instead of assuming a distribution of inputs, we impose a distribution. In particular, before running the algorithm, we randomly permute the candidates in order to enforce the property that every permutation is equally likely. Although we have modified the algorithm, we still expect to hire a new office assistant approximately $\ln n$ times.
But now we expect this to be the case for any input, rather than for inputs drawn from a particular distribution.

Let us further explore the distinction between probabilistic analysis and randomized algorithms. In Section 5.2, we claimed that, assuming that the candidates arrive in a random order, the expected number of times we hire a new office assistant is about $\ln n$. Note that the algorithm here is deterministic; for any particular input, the number of times a new office assistant is hired is always the same. Furthermore, the number of times we hire a new office assistant differs for different inputs, and it depends on the ranks of the various candidates. Since this number depends only on the ranks of the candidates, we can represent a particular input by listing, in order, the ranks of the candidates, i.e., $\langle rank(1), rank(2), \ldots, rank(n) \rangle$. Given the rank list $A_1 = \langle 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 \rangle$, a new office assistant is always hired 10 times, since each successive candidate is better than the previous one, and lines 5–6 are executed in each iteration. Given the list of ranks $A_2 = \langle 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 \rangle$, a new office assistant is hired only once, in the first iteration. Given a list of ranks $A_3 = \langle 5, 2, 1, 8, 4, 7, 10, 9, 3, 6 \rangle$, a new office assistant is hired three times, upon interviewing the candidates with ranks 5, 8, and 10. Recalling that the cost of our algorithm depends on how many times we hire a new office assistant, we see that there are expensive inputs such as $A_1$, inexpensive inputs such as $A_2$, and moderately expensive inputs such as $A_3$.

Consider, on the other hand, the randomized algorithm that first permutes the candidates and then determines the best candidate. In this case, we randomize in the algorithm, not in the input distribution. Given a particular input, say $A_3$ above, we cannot say how many times the maximum is updated, because this quantity differs with each run of the algorithm. The first time we run the algorithm on $A_3$, it may produce the permutation $A_1$ and perform 10 updates; but the second time we run the algorithm, we may produce the permutation $A_2$ and perform only one update. The third time we run it, we may perform some other number of updates. Each time we run the algorithm, the execution depends on the random choices made and is likely to differ from the previous execution of the algorithm. For this algorithm and many other randomized algorithms, no particular input elicits its worst-case behavior. Even your worst enemy cannot produce a bad input array, since the random permutation makes the input order irrelevant. The randomized algorithm performs badly only if the random-number generator produces an "unlucky" permutation.

For the hiring problem, the only change needed in the code is to randomly permute the array.
RANDOMIZED-HIRE-ASSISTANT(n)
1  randomly permute the list of candidates
2  best = 0        // candidate 0 is a least-qualified dummy candidate
3  for i = 1 to n
4      interview candidate i
5      if candidate i is better than candidate best
6          best = i
7          hire candidate i

With this simple change, we have created a randomized algorithm whose performance matches that obtained by assuming that the candidates were presented in a random order.

Lemma 5.3
The expected hiring cost of the procedure RANDOMIZED-HIRE-ASSISTANT is $O(c_h \ln n)$.

Proof  After permuting the input array, we have achieved a situation identical to that of the probabilistic analysis of HIRE-ASSISTANT.

Comparing Lemmas 5.2 and 5.3 highlights the difference between probabilistic analysis and randomized algorithms. In Lemma 5.2, we make an assumption about the input. In Lemma 5.3, we make no such assumption, although randomizing the input takes some additional time. To remain consistent with our terminology, we couched Lemma 5.2 in terms of the average-case hiring cost and Lemma 5.3 in terms of the expected hiring cost. In the remainder of this section, we discuss some issues involved in randomly permuting inputs.

Randomly permuting arrays

Many randomized algorithms randomize the input by permuting the given input array. (There are other ways to use randomization.) Here, we shall discuss two methods for doing so. We assume that we are given an array $A$ which, without loss of generality, contains the elements 1 through $n$. Our goal is to produce a random permutation of the array.

One common method is to assign each element $A[i]$ of the array a random priority $P[i]$, and then sort the elements of $A$ according to these priorities. For example, if our initial array is $A = \langle 1, 2, 3, 4 \rangle$ and we choose random priorities $P = \langle 36, 3, 62, 19 \rangle$, we would produce an array $B = \langle 2, 4, 1, 3 \rangle$, since the second priority is the smallest, followed by the fourth, then the first, and finally the third. We call this procedure PERMUTE-BY-SORTING:
PERMUTE-BY-SORTING(A)
1  n = A.length
2  let P[1..n] be a new array
3  for i = 1 to n
4      P[i] = RANDOM(1, n³)
5  sort A, using P as sort keys

Line 4 chooses a random number between 1 and $n^3$. We use a range of 1 to $n^3$ to make it likely that all the priorities in $P$ are unique. (Exercise 5.3-5 asks you to prove that the probability that all entries are unique is at least $1 - 1/n$, and Exercise 5.3-6 asks how to implement the algorithm even if two or more priorities are identical.) Let us assume that all the priorities are unique.

The time-consuming step in this procedure is the sorting in line 5. As we shall see in Chapter 8, if we use a comparison sort, sorting takes $\Omega(n \lg n)$ time. We can achieve this lower bound, since we have seen that merge sort takes $\Theta(n \lg n)$ time. (We shall see other comparison sorts that take $\Theta(n \lg n)$ time in Part II. Exercise 8.3-4 asks you to solve the very similar problem of sorting numbers in the range 0 to $n^3 - 1$ in $O(n)$ time.) After sorting, if $P[i]$ is the $j$th smallest priority, then $A[i]$ lies in position $j$ of the output. In this manner we obtain a permutation. It remains to prove that the procedure produces a uniform random permutation, that is, that the procedure is equally likely to produce every permutation of the numbers 1 through $n$.

Lemma 5.4
Procedure PERMUTE-BY-SORTING produces a uniform random permutation of the input, assuming that all priorities are distinct.

Proof  We start by considering the particular permutation in which each element $A[i]$ receives the $i$th smallest priority. We shall show that this permutation occurs with probability exactly $1/n!$. For $i = 1, 2, \ldots, n$, let $E_i$ be the event that element $A[i]$ receives the $i$th smallest priority. Then we wish to compute the probability that for all $i$, event $E_i$ occurs, which is

$\Pr\{E_1 \cap E_2 \cap E_3 \cap \cdots \cap E_{n-1} \cap E_n\} .$

Using Exercise C.2-5, this probability is equal to

$\Pr\{E_1\} \cdot \Pr\{E_2 \mid E_1\} \cdot \Pr\{E_3 \mid E_2 \cap E_1\} \cdot \Pr\{E_4 \mid E_3 \cap E_2 \cap E_1\} \cdots \Pr\{E_i \mid E_{i-1} \cap E_{i-2} \cap \cdots \cap E_1\} \cdots \Pr\{E_n \mid E_{n-1} \cap \cdots \cap E_1\} .$

We have that $\Pr\{E_1\} = 1/n$ because it is the probability that one priority chosen randomly out of a set of $n$ is the smallest priority.
Next, we observe that $\Pr\{E_2 \mid E_1\} = 1/(n-1)$ because given that element $A[1]$ has the smallest priority, each of the remaining $n - 1$ elements has an equal chance of having the second smallest priority. In general, for $i = 2, 3, \ldots, n$, we have that $\Pr\{E_i \mid E_{i-1} \cap E_{i-2} \cap \cdots \cap E_1\} = 1/(n - i + 1)$, since, given that elements $A[1]$ through $A[i-1]$ have the $i - 1$ smallest priorities (in order), each of the remaining $n - (i - 1)$ elements has an equal chance of having the $i$th smallest priority. Thus, we have

$\Pr\{E_1 \cap E_2 \cap E_3 \cap \cdots \cap E_{n-1} \cap E_n\} = \left( \frac{1}{n} \right) \left( \frac{1}{n-1} \right) \cdots \left( \frac{1}{2} \right) \left( \frac{1}{1} \right) = \frac{1}{n!} ,$

and we have shown that the probability of obtaining the identity permutation is $1/n!$.

We can extend this proof to work for any permutation of priorities. Consider any fixed permutation $\sigma = \langle \sigma(1), \sigma(2), \ldots, \sigma(n) \rangle$ of the set $\{1, 2, \ldots, n\}$. Let us denote by $r_i$ the rank of the priority assigned to element $A[i]$, where the element with the $j$th smallest priority has rank $j$. If we define $E_i$ as the event in which element $A[i]$ receives the $\sigma(i)$th smallest priority, or $r_i = \sigma(i)$, the same proof still applies. Therefore, if we calculate the probability of obtaining any particular permutation, the calculation is identical to the one above, so that the probability of obtaining this permutation is also $1/n!$.

You might think that to prove that a permutation is a uniform random permutation, it suffices to show that, for each element $A[i]$, the probability that the element winds up in position $j$ is $1/n$. Exercise 5.3-4 shows that this weaker condition is, in fact, insufficient.

A better method for generating a random permutation is to permute the given array in place. The procedure RANDOMIZE-IN-PLACE does so in $O(n)$ time. In its $i$th iteration, it chooses the element $A[i]$ randomly from among elements $A[i]$ through $A[n]$. Subsequent to the $i$th iteration, $A[i]$ is never altered.

RANDOMIZE-IN-PLACE(A)
1  n = A.length
2  for i = 1 to n
3      swap A[i] with A[RANDOM(i, n)]

We shall use a loop invariant to show that procedure RANDOMIZE-IN-PLACE produces a uniform random permutation. A k-permutation on a set of $n$ elements is a sequence containing $k$ of the $n$ elements, with no repetitions. (See Appendix C.) There are $n!/(n-k)!$ such possible $k$-permutations.
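The two permutation procedures translate directly into runnable sketches; random.randint below stands in for RANDOM, and 0-based indexing replaces the book's 1-based arrays.

    import random

    def permute_by_sorting(a):
        """Sketch of PERMUTE-BY-SORTING: draw priorities in 1..n^3 and
        sort by them; assumes distinct priorities, as in Lemma 5.4."""
        n = len(a)
        p = [random.randint(1, n**3) for _ in range(n)]
        return [x for _, x in sorted(zip(p, a))]

    def randomize_in_place(a):
        """Sketch of RANDOMIZE-IN-PLACE: in iteration i, swap a[i] with
        a uniformly chosen element of a[i..n-1]."""
        n = len(a)
        for i in range(n):
            j = random.randint(i, n - 1)   # plays the role of RANDOM(i, n)
            a[i], a[j] = a[j], a[i]
        return a

    print(permute_by_sorting([1, 2, 3, 4]))
    print(randomize_in_place([1, 2, 3, 4]))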
Lemma 5.5
Procedure RANDOMIZE-IN-PLACE computes a uniform random permutation.

Proof  We use the following loop invariant:

Just prior to the $i$th iteration of the for loop of lines 2–3, for each possible $(i-1)$-permutation of the $n$ elements, the subarray $A[1 \mathinner{..} i-1]$ contains this $(i-1)$-permutation with probability $(n-i+1)!/n!$.

We need to show that this invariant is true prior to the first loop iteration, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

Initialization: Consider the situation just before the first loop iteration, so that $i = 1$. The loop invariant says that for each possible 0-permutation, the subarray $A[1 \mathinner{..} 0]$ contains this 0-permutation with probability $(n-i+1)!/n! = n!/n! = 1$. The subarray $A[1 \mathinner{..} 0]$ is an empty subarray, and a 0-permutation has no elements. Thus, $A[1 \mathinner{..} 0]$ contains any 0-permutation with probability 1, and the loop invariant holds prior to the first iteration.

Maintenance: We assume that just before the $i$th iteration, each possible $(i-1)$-permutation appears in the subarray $A[1 \mathinner{..} i-1]$ with probability $(n-i+1)!/n!$, and we shall show that after the $i$th iteration, each possible $i$-permutation appears in the subarray $A[1 \mathinner{..} i]$ with probability $(n-i)!/n!$. Incrementing $i$ for the next iteration then maintains the loop invariant.

Let us examine the $i$th iteration. Consider a particular $i$-permutation, and denote the elements in it by $\langle x_1, x_2, \ldots, x_i \rangle$. This permutation consists of an $(i-1)$-permutation $\langle x_1, \ldots, x_{i-1} \rangle$ followed by the value $x_i$ that the algorithm places in $A[i]$. Let $E_1$ denote the event in which the first $i - 1$ iterations have created the particular $(i-1)$-permutation $\langle x_1, \ldots, x_{i-1} \rangle$ in $A[1 \mathinner{..} i-1]$. By the loop invariant, $\Pr\{E_1\} = (n-i+1)!/n!$. Let $E_2$ be the event that the $i$th iteration puts $x_i$ in position $A[i]$. The $i$-permutation $\langle x_1, \ldots, x_i \rangle$ appears in $A[1 \mathinner{..} i]$ precisely when both $E_1$ and $E_2$ occur, and so we wish to compute $\Pr\{E_2 \cap E_1\}$. Using equation (C.14), we have

$\Pr\{E_2 \cap E_1\} = \Pr\{E_2 \mid E_1\} \Pr\{E_1\} .$

The probability $\Pr\{E_2 \mid E_1\}$ equals $1/(n-i+1)$ because in line 3 the algorithm chooses $x_i$ randomly from the $n - i + 1$ values in positions $A[i \mathinner{..} n]$. Thus, we have
$\Pr\{E_2 \cap E_1\} = \Pr\{E_2 \mid E_1\} \Pr\{E_1\} = \frac{1}{n-i+1} \cdot \frac{(n-i+1)!}{n!} = \frac{(n-i)!}{n!} .$

Termination: At termination, $i = n + 1$, and we have that the subarray $A[1 \mathinner{..} n]$ is a given $n$-permutation with probability $(n-(n+1)+1)!/n! = 0!/n! = 1/n!$.

Thus, RANDOMIZE-IN-PLACE produces a uniform random permutation.

A randomized algorithm is often the simplest and most efficient way to solve a problem. We shall use randomized algorithms occasionally throughout this book.

Exercises

5.3-1
Professor Marceau objects to the loop invariant used in the proof of Lemma 5.5. He questions whether it is true prior to the first iteration. He reasons that we could just as easily declare that an empty subarray contains no 0-permutations. Therefore, the probability that an empty subarray contains a 0-permutation should be 0, thus invalidating the loop invariant prior to the first iteration. Rewrite the procedure RANDOMIZE-IN-PLACE so that its associated loop invariant applies to a nonempty subarray prior to the first iteration, and modify the proof of Lemma 5.5 for your procedure.

5.3-2
Professor Kelp decides to write a procedure that produces at random any permutation besides the identity permutation. He proposes the following procedure:

PERMUTE-WITHOUT-IDENTITY(A)
1  n = A.length
2  for i = 1 to n - 1
3      swap A[i] with A[RANDOM(i + 1, n)]

Does this code do what Professor Kelp intends?

5.3-3
Suppose that instead of swapping element $A[i]$ with a random element from the subarray $A[i \mathinner{..} n]$, we swapped it with a random element from anywhere in the array:
PERMUTE-WITH-ALL(A)
1  n = A.length
2  for i = 1 to n
3      swap A[i] with A[RANDOM(1, n)]

Does this code produce a uniform random permutation? Why or why not?

5.3-4
Professor Armstrong suggests the following procedure for generating a uniform random permutation:

PERMUTE-BY-CYCLIC(A)
1  n = A.length
2  let B[1..n] be a new array
3  offset = RANDOM(1, n)
4  for i = 1 to n
5      dest = i + offset
6      if dest > n
7          dest = dest - n
8      B[dest] = A[i]
9  return B

Show that each element $A[i]$ has a $1/n$ probability of winding up in any particular position in $B$. Then show that Professor Armstrong is mistaken by showing that the resulting permutation is not uniformly random.

5.3-5 ?
Prove that in the array $P$ in procedure PERMUTE-BY-SORTING, the probability that all elements are unique is at least $1 - 1/n$.

5.3-6
Explain how to implement the algorithm PERMUTE-BY-SORTING to handle the case in which two or more priorities are identical. That is, your algorithm should produce a uniform random permutation, even if two or more priorities are identical.

5.3-7
Suppose we want to create a random sample of the set $\{1, 2, 3, \ldots, n\}$, that is, an $m$-element subset $S$, where $0 \le m \le n$, such that each $m$-subset is equally likely to be created. One way would be to set $A[i] = i$ for $i = 1, 2, 3, \ldots, n$, call RANDOMIZE-IN-PLACE(A), and then take just the first $m$ array elements. This method would make $n$ calls to the RANDOM procedure. If $n$ is much larger than $m$, we can create a random sample with fewer calls to RANDOM. Show that the following recursive procedure returns a random $m$-subset $S$ of $\{1, 2, 3, \ldots, n\}$, in which each $m$-subset is equally likely, while making only $m$ calls to RANDOM:
RANDOM-SAMPLE(m, n)
1  if m == 0
2      return ∅
3  else S = RANDOM-SAMPLE(m - 1, n - 1)
4      i = RANDOM(1, n)
5      if i ∈ S
6          S = S ∪ {n}
7      else S = S ∪ {i}
8  return S
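A runnable sketch of RANDOM-SAMPLE may help in experimenting with the exercise; random.randint stands in for RANDOM, and the recursion mirrors the pseudocode line for line.

    import random

    def random_sample(m, n):
        """Sketch of RANDOM-SAMPLE: an m-subset of {1, ..., n} built with
        exactly m random draws, one per level of recursion."""
        if m == 0:
            return set()
        s = random_sample(m - 1, n - 1)   # sample m-1 items from {1,...,n-1}
        i = random.randint(1, n)          # plays the role of RANDOM(1, n)
        s.add(n if i in s else i)         # keep i, or n if i is already taken
        return s

    print(sorted(random_sample(3, 10)))   # e.g., [2, 6, 10]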
? 5.4 Probabilistic analysis and further uses of indicator random variables

This advanced section further illustrates probabilistic analysis by way of four examples. The first determines the probability that in a room of $k$ people, two of them share the same birthday. The second example examines what happens when we randomly toss balls into bins. The third investigates "streaks" of consecutive heads when we flip coins. The final example analyzes a variant of the hiring problem in which you have to make decisions without actually interviewing all the candidates.

5.4.1 The birthday paradox

Our first example is the birthday paradox. How many people must there be in a room before there is a 50% chance that two of them were born on the same day of the year? The answer is surprisingly few. The paradox is that it is in fact far fewer than the number of days in a year, or even half the number of days in a year, as we shall see.

To answer this question, we index the people in the room with the integers $1, 2, \ldots, k$, where $k$ is the number of people in the room. We ignore the issue of leap years and assume that all years have $n = 365$ days. For $i = 1, 2, \ldots, k$, let $b_i$ be the day of the year on which person $i$'s birthday falls, where $1 \le b_i \le n$. We also assume that birthdays are uniformly distributed across the $n$ days of the year, so that $\Pr\{b_i = r\} = 1/n$ for $i = 1, 2, \ldots, k$ and $r = 1, 2, \ldots, n$.

The probability that two given people, say $i$ and $j$, have matching birthdays depends on whether the random selection of birthdays is independent. We assume from now on that birthdays are independent, so that the probability that $i$'s birthday and $j$'s birthday both fall on day $r$ is

$\Pr\{b_i = r \text{ and } b_j = r\} = \Pr\{b_i = r\} \Pr\{b_j = r\} = 1/n^2 .$

Thus, the probability that they both fall on the same day is

$\Pr\{b_i = b_j\} = \sum_{r=1}^{n} \Pr\{b_i = r \text{ and } b_j = r\} = \sum_{r=1}^{n} (1/n^2) = 1/n .$   (5.6)

More intuitively, once $b_i$ is chosen, the probability that $b_j$ is chosen to be the same day is $1/n$. Thus, the probability that $i$ and $j$ have the same birthday is the same as the probability that the birthday of one of them falls on a given day. Notice, however, that this coincidence depends on the assumption that the birthdays are independent.

We can analyze the probability of at least 2 out of $k$ people having matching birthdays by looking at the complementary event. The probability that at least two of the birthdays match is 1 minus the probability that all the birthdays are different. The event that $k$ people have distinct birthdays is

$B_k = \bigcap_{i=1}^{k} A_i ,$

where $A_i$ is the event that person $i$'s birthday is different from person $j$'s for all $j < i$. Since we can write $B_k = A_k \cap B_{k-1}$, we obtain from equation (C.16) the recurrence

$\Pr\{B_k\} = \Pr\{B_{k-1}\} \Pr\{A_k \mid B_{k-1}\} ,$   (5.7)

where we take $\Pr\{B_1\} = \Pr\{A_1\} = 1$ as an initial condition. In other words, the probability that $b_1, b_2, \ldots, b_k$ are distinct birthdays is the probability that $b_1, b_2, \ldots, b_{k-1}$ are distinct birthdays times the probability that $b_k \ne b_i$ for $i = 1, 2, \ldots, k-1$, given that $b_1, b_2, \ldots, b_{k-1}$ are distinct.

If $b_1, b_2, \ldots, b_{k-1}$ are distinct, the conditional probability that $b_k \ne b_i$ for $i = 1, 2, \ldots, k-1$ is $\Pr\{A_k \mid B_{k-1}\} = (n-k+1)/n$, since out of the $n$ days, $n - (k-1)$ days are not taken. We iteratively apply the recurrence (5.7) to obtain
$\Pr\{B_k\} = \Pr\{B_{k-1}\} \Pr\{A_k \mid B_{k-1}\}$
$\quad = \Pr\{B_{k-2}\} \Pr\{A_{k-1} \mid B_{k-2}\} \Pr\{A_k \mid B_{k-1}\}$
$\quad \vdots$
$\quad = \Pr\{B_1\} \Pr\{A_2 \mid B_1\} \Pr\{A_3 \mid B_2\} \cdots \Pr\{A_k \mid B_{k-1}\}$
$\quad = 1 \cdot \left( \frac{n-1}{n} \right) \left( \frac{n-2}{n} \right) \cdots \left( \frac{n-k+1}{n} \right)$
$\quad = 1 \cdot \left( 1 - \frac{1}{n} \right) \left( 1 - \frac{2}{n} \right) \cdots \left( 1 - \frac{k-1}{n} \right) .$

Inequality (3.12), $1 + x \le e^x$, gives us

$\Pr\{B_k\} \le e^{-1/n} e^{-2/n} \cdots e^{-(k-1)/n} = e^{-\sum_{i=1}^{k-1} i/n} = e^{-k(k-1)/2n} \le 1/2$

when $-k(k-1)/2n \le \ln(1/2)$. The probability that all $k$ birthdays are distinct is at most $1/2$ when $k(k-1) \ge 2n \ln 2$ or, solving the quadratic equation, when $k \ge (1 + \sqrt{1 + (8 \ln 2)n})/2$. For $n = 365$, we must have $k \ge 23$. Thus, if at least 23 people are in a room, the probability is at least $1/2$ that at least two people have the same birthday. On Mars, a year is 669 Martian days long; it therefore takes 31 Martians to get the same effect.
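The product above is easy to evaluate exactly, which confirms the threshold of 23; this short check is our own sketch, not part of the text.

    import math

    def prob_all_distinct(k, n=365):
        """Exact Pr{B_k}: the probability that k birthdays are all distinct."""
        p = 1.0
        for i in range(1, k):
            p *= (n - i) / n
        return p

    # Smallest k with a >= 1/2 chance of a shared birthday.
    k = 1
    while prob_all_distinct(k) > 0.5:
        k += 1
    print(k)                              # 23
    print(1 - prob_all_distinct(23))      # ~0.507
    # The quadratic bound from the text also lands on 23:
    print(math.ceil((1 + math.sqrt(1 + 8 * math.log(2) * 365)) / 2))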
An analysis using indicator random variables

We can use indicator random variables to provide a simpler but approximate analysis of the birthday paradox. For each pair $(i, j)$ of the $k$ people in the room, we define the indicator random variable $X_{ij}$, for $1 \le i < j \le k$, by

$X_{ij} = I\{\text{person } i \text{ and person } j \text{ have the same birthday}\} = \begin{cases} 1 & \text{if person } i \text{ and person } j \text{ have the same birthday} , \\ 0 & \text{otherwise} . \end{cases}$

By equation (5.6), the probability that two people have matching birthdays is $1/n$, and thus by Lemma 5.1, we have

$E[X_{ij}] = \Pr\{\text{person } i \text{ and person } j \text{ have the same birthday}\} = 1/n .$

Letting $X$ be the random variable that counts the number of pairs of individuals having the same birthday, we have

$X = \sum_{i=1}^{k} \sum_{j=i+1}^{k} X_{ij} .$

Taking expectations of both sides and applying linearity of expectation, we obtain

$E[X] = E\left[ \sum_{i=1}^{k} \sum_{j=i+1}^{k} X_{ij} \right] = \sum_{i=1}^{k} \sum_{j=i+1}^{k} E[X_{ij}] = \binom{k}{2} \frac{1}{n} = \frac{k(k-1)}{2n} .$

When $k(k-1) \ge 2n$, therefore, the expected number of pairs of people with the same birthday is at least 1. Thus, if we have at least $\sqrt{2n} + 1$ individuals in a room, we can expect at least two to have the same birthday. For $n = 365$, if $k = 28$, the expected number of pairs with the same birthday is $(28 \cdot 27)/(2 \cdot 365) \approx 1.0356$. Thus, with at least 28 people, we expect to find at least one matching pair of birthdays. On Mars, where a year is 669 Martian days long, we need at least 38 Martians.

The first analysis, which used only probabilities, determined the number of people required for the probability to exceed $1/2$ that a matching pair of birthdays exists, and the second analysis, which used indicator random variables, determined the number such that the expected number of matching birthdays is 1. Although the exact numbers of people differ for the two situations, they are the same asymptotically: $\Theta(\sqrt{n})$.
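A quick simulation (our sketch) shows the expected pair count $k(k-1)/2n$ emerging from random draws.

    import random

    def matching_pairs(k, n=365):
        """Draw k independent birthdays and count the pairs that collide."""
        days = [random.randint(1, n) for _ in range(k)]
        return sum(days[i] == days[j]
                   for i in range(k) for j in range(i + 1, k))

    k, n, trials = 28, 365, 20_000
    avg = sum(matching_pairs(k, n) for _ in range(trials)) / trials
    print(avg)                      # ~1.04, matching E[X] = k(k-1)/2n
    print(k * (k - 1) / (2 * n))    # 1.0356...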
5.4.2 Balls and bins

Consider a process in which we randomly toss identical balls into $b$ bins, numbered $1, 2, \ldots, b$. The tosses are independent, and on each toss the ball is equally likely to end up in any bin. The probability that a tossed ball lands in any given bin is $1/b$. Thus, the ball-tossing process is a sequence of Bernoulli trials (see Appendix C.4) with a probability $1/b$ of success, where success means that the ball falls in the given bin. This model is particularly useful for analyzing hashing (see Chapter 11), and we can answer a variety of interesting questions about the ball-tossing process. (Problem C-1 asks additional questions about balls and bins.)

How many balls fall in a given bin? The number of balls that fall in a given bin follows the binomial distribution $b(k; n, 1/b)$. If we toss $n$ balls, equation (C.37) tells us that the expected number of balls that fall in the given bin is $n/b$.

How many balls must we toss, on the average, until a given bin contains a ball? The number of tosses until the given bin receives a ball follows the geometric distribution with probability $1/b$ and, by equation (C.32), the expected number of tosses until success is $1/(1/b) = b$.

How many balls must we toss until every bin contains at least one ball? Let us call a toss in which a ball falls into an empty bin a "hit." We want to know the expected number $n$ of tosses required to get $b$ hits.

Using the hits, we can partition the $n$ tosses into stages. The $i$th stage consists of the tosses after the $(i-1)$st hit until the $i$th hit. The first stage consists of the first toss, since we are guaranteed to have a hit when all bins are empty. For each toss during the $i$th stage, $i - 1$ bins contain balls and $b - i + 1$ bins are empty. Thus, for each toss in the $i$th stage, the probability of obtaining a hit is $(b-i+1)/b$.

Let $n_i$ denote the number of tosses in the $i$th stage. Thus, the number of tosses required to get $b$ hits is $n = \sum_{i=1}^{b} n_i$. Each random variable $n_i$ has a geometric distribution with probability of success $(b-i+1)/b$ and thus, by equation (C.32), we have

$E[n_i] = \frac{b}{b-i+1} .$

By linearity of expectation, we have

$E[n] = E\left[ \sum_{i=1}^{b} n_i \right] = \sum_{i=1}^{b} E[n_i] = \sum_{i=1}^{b} \frac{b}{b-i+1} = b \sum_{i=1}^{b} \frac{1}{i} = b(\ln b + O(1))$   (by equation (A.7)).

It therefore takes approximately $b \ln b$ tosses before we can expect that every bin has a ball. This problem is also known as the coupon collector's problem, which says that a person trying to collect each of $b$ different coupons expects to acquire approximately $b \ln b$ randomly obtained coupons in order to succeed.
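The coupon-collector bound is easy to watch in action; in this sketch (our own), the empirical average matches $b H_b$, the exact mean, which is $b \ln b + O(b)$.

    import math
    import random

    def tosses_until_full(b):
        """Toss balls into b bins until every bin holds at least one ball."""
        filled, tosses = set(), 0
        while len(filled) < b:
            filled.add(random.randrange(b))
            tosses += 1
        return tosses

    b, trials = 100, 2_000
    avg = sum(tosses_until_full(b) for _ in range(trials)) / trials
    print(avg)                 # ~519 for b = 100
    print(b * math.log(b))     # b ln b = 460.5...; the exact mean b*H_b = 518.7...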
5.4.3 Streaks

Suppose you flip a fair coin $n$ times. What is the longest streak of consecutive heads that you expect to see? The answer is $\Theta(\lg n)$, as the following analysis shows.

We first prove that the expected length of the longest streak of heads is $O(\lg n)$. The probability that each coin flip is a head is $1/2$. Let $A_{ik}$ be the event that a streak of heads of length at least $k$ begins with the $i$th coin flip or, more precisely, the event that the $k$ consecutive coin flips $i, i+1, \ldots, i+k-1$ yield only heads, where $1 \le k \le n$ and $1 \le i \le n-k+1$. Since coin flips are mutually independent, for any given event $A_{ik}$, the probability that all $k$ flips are heads is

$\Pr\{A_{ik}\} = 1/2^k .$   (5.8)

For $k = 2 \lceil \lg n \rceil$,

$\Pr\{A_{i, 2\lceil \lg n \rceil}\} = 1/2^{2\lceil \lg n \rceil} \le 1/2^{2 \lg n} = 1/n^2 ,$

and thus the probability that a streak of heads of length at least $2 \lceil \lg n \rceil$ begins in position $i$ is quite small. There are at most $n - 2 \lceil \lg n \rceil + 1$ positions where such a streak can begin. The probability that a streak of heads of length at least $2 \lceil \lg n \rceil$ begins anywhere is therefore

$\Pr\left\{ \bigcup_{i=1}^{n - 2\lceil \lg n \rceil + 1} A_{i, 2\lceil \lg n \rceil} \right\} \le \sum_{i=1}^{n - 2\lceil \lg n \rceil + 1} 1/n^2 < \sum_{i=1}^{n} 1/n^2 = 1/n ,$   (5.9)

since by Boole's inequality (C.19), the probability of a union of events is at most the sum of the probabilities of the individual events. (Note that Boole's inequality holds even for events such as these that are not independent.)

We now use inequality (5.9) to bound the length of the longest streak. For $j = 0, 1, 2, \ldots, n$, let $L_j$ be the event that the longest streak of heads has length exactly $j$, and let $L$ be the length of the longest streak. By the definition of expected value, we have

$E[L] = \sum_{j=0}^{n} j \Pr\{L_j\} .$   (5.10)
We could try to evaluate this sum using upper bounds on each $\Pr\{L_j\}$ similar to those computed in inequality (5.9). Unfortunately, this method would yield weak bounds. We can use some intuition gained by the above analysis to obtain a good bound, however. Informally, we observe that for no individual term in the summation in equation (5.10) are both the factors $j$ and $\Pr\{L_j\}$ large. Why? When $j \ge 2 \lceil \lg n \rceil$, then $\Pr\{L_j\}$ is very small, and when $j < 2 \lceil \lg n \rceil$, then $j$ is fairly small. More formally, we note that the events $L_j$ for $j = 0, 1, \ldots, n$ are disjoint, and so the probability that a streak of heads of length at least $2 \lceil \lg n \rceil$ begins anywhere is $\sum_{j=2\lceil \lg n \rceil}^{n} \Pr\{L_j\}$. By inequality (5.9), we have $\sum_{j=2\lceil \lg n \rceil}^{n} \Pr\{L_j\} < 1/n$. Also, noting that $\sum_{j=0}^{n} \Pr\{L_j\} = 1$, we have that $\sum_{j=0}^{2\lceil \lg n \rceil - 1} \Pr\{L_j\} \le 1$. Thus, we obtain

$E[L] = \sum_{j=0}^{n} j \Pr\{L_j\}$
$\quad = \sum_{j=0}^{2\lceil \lg n \rceil - 1} j \Pr\{L_j\} + \sum_{j=2\lceil \lg n \rceil}^{n} j \Pr\{L_j\}$
$\quad < \sum_{j=0}^{2\lceil \lg n \rceil - 1} (2 \lceil \lg n \rceil) \Pr\{L_j\} + \sum_{j=2\lceil \lg n \rceil}^{n} n \Pr\{L_j\}$
$\quad = 2 \lceil \lg n \rceil \sum_{j=0}^{2\lceil \lg n \rceil - 1} \Pr\{L_j\} + n \sum_{j=2\lceil \lg n \rceil}^{n} \Pr\{L_j\}$
$\quad < 2 \lceil \lg n \rceil \cdot 1 + n \cdot (1/n)$
$\quad = O(\lg n) .$

The probability that a streak of heads exceeds $r \lceil \lg n \rceil$ flips diminishes quickly with $r$. For $r \ge 1$, the probability that a streak of at least $r \lceil \lg n \rceil$ heads starts in position $i$ is

$\Pr\{A_{i, r\lceil \lg n \rceil}\} = 1/2^{r\lceil \lg n \rceil} \le 1/n^r .$

Thus, the probability is at most $n/n^r = 1/n^{r-1}$ that the longest streak is at least $r \lceil \lg n \rceil$, or equivalently, the probability is at least $1 - 1/n^{r-1}$ that the longest streak has length less than $r \lceil \lg n \rceil$.

As an example, for $n = 1000$ coin flips, the probability of having a streak of at least $2 \lceil \lg n \rceil = 20$ heads is at most $1/n = 1/1000$. The chance of having a streak longer than $3 \lceil \lg n \rceil = 30$ heads is at most $1/n^2 = 1/1{,}000{,}000$.

We now prove a complementary lower bound: the expected length of the longest streak of heads in $n$ coin flips is $\Omega(\lg n)$. To prove this bound, we look for streaks
of length $s$ by partitioning the $n$ flips into approximately $n/s$ groups of $s$ flips each. If we choose $s = \lfloor (\lg n)/2 \rfloor$, we can show that it is likely that at least one of these groups comes up all heads, and hence it is likely that the longest streak has length at least $s = \Omega(\lg n)$. We then show that the longest streak has expected length $\Omega(\lg n)$.

We partition the $n$ coin flips into at least $\lfloor n / \lfloor (\lg n)/2 \rfloor \rfloor$ groups of $\lfloor (\lg n)/2 \rfloor$ consecutive flips, and we bound the probability that no group comes up all heads. By equation (5.8), the probability that the group starting in position $i$ comes up all heads is

$\Pr\{A_{i, \lfloor (\lg n)/2 \rfloor}\} = 1/2^{\lfloor (\lg n)/2 \rfloor} \ge 1/\sqrt{n} .$

The probability that a streak of heads of length at least $\lfloor (\lg n)/2 \rfloor$ does not begin in position $i$ is therefore at most $1 - 1/\sqrt{n}$. Since the $\lfloor n / \lfloor (\lg n)/2 \rfloor \rfloor$ groups are formed from mutually exclusive, independent coin flips, the probability that every one of these groups fails to be a streak of length $\lfloor (\lg n)/2 \rfloor$ is at most

$\left( 1 - 1/\sqrt{n} \right)^{\lfloor n / \lfloor (\lg n)/2 \rfloor \rfloor} \le \left( 1 - 1/\sqrt{n} \right)^{n / \lfloor (\lg n)/2 \rfloor - 1}$
$\quad \le \left( 1 - 1/\sqrt{n} \right)^{2n/\lg n - 1}$
$\quad \le e^{-(2n/\lg n - 1)/\sqrt{n}}$
$\quad = O(e^{-\lg n})$
$\quad = O(1/n) .$

For this argument, we used inequality (3.12), $1 + x \le e^x$, and the fact, which you might want to verify, that $(2n/\lg n - 1)/\sqrt{n} \ge \lg n$ for sufficiently large $n$.

Thus, the probability that the longest streak exceeds $\lfloor (\lg n)/2 \rfloor$ is

$\sum_{j=\lfloor (\lg n)/2 \rfloor + 1}^{n} \Pr\{L_j\} \ge 1 - O(1/n) .$   (5.11)

We can now calculate a lower bound on the expected length of the longest streak, beginning with equation (5.10) and proceeding in a manner similar to our analysis of the upper bound:
$E[L] = \sum_{j=0}^{n} j \Pr\{L_j\}$
$\quad = \sum_{j=0}^{\lfloor (\lg n)/2 \rfloor} j \Pr\{L_j\} + \sum_{j=\lfloor (\lg n)/2 \rfloor + 1}^{n} j \Pr\{L_j\}$
$\quad \ge \sum_{j=0}^{\lfloor (\lg n)/2 \rfloor} 0 \cdot \Pr\{L_j\} + \sum_{j=\lfloor (\lg n)/2 \rfloor + 1}^{n} \lfloor (\lg n)/2 \rfloor \Pr\{L_j\}$
$\quad = 0 + \lfloor (\lg n)/2 \rfloor \sum_{j=\lfloor (\lg n)/2 \rfloor + 1}^{n} \Pr\{L_j\}$
$\quad \ge \lfloor (\lg n)/2 \rfloor \cdot (1 - O(1/n))$   (by inequality (5.11))
$\quad = \Omega(\lg n) .$

As with the birthday paradox, we can obtain a simpler but approximate analysis using indicator random variables. We let $X_{ik} = I\{A_{ik}\}$ be the indicator random variable associated with a streak of heads of length at least $k$ beginning with the $i$th coin flip. To count the total number of such streaks, we define

$X = \sum_{i=1}^{n-k+1} X_{ik} .$

Taking expectations and using linearity of expectation, we have

$E[X] = E\left[ \sum_{i=1}^{n-k+1} X_{ik} \right] = \sum_{i=1}^{n-k+1} E[X_{ik}] = \sum_{i=1}^{n-k+1} \Pr\{A_{ik}\} = \sum_{i=1}^{n-k+1} 1/2^k = \frac{n-k+1}{2^k} .$

By plugging in various values for $k$, we can calculate the expected number of streaks of length $k$. If this number is large (much greater than 1), then we expect many streaks of length $k$ to occur and the probability that one occurs is high. If this number is small (much less than 1), then we expect few streaks of length $k$ to occur and the probability that one occurs is low. If $k = c \lg n$, for some positive constant $c$, we obtain

$E[X] = \frac{n - c \lg n + 1}{2^{c \lg n}} = \frac{n - c \lg n + 1}{n^c} = \frac{1}{n^{c-1}} - \frac{c \lg n - 1}{n^c} = \Theta(1/n^{c-1}) .$

If $c$ is large, the expected number of streaks of length $c \lg n$ is small, and we conclude that they are unlikely to occur. On the other hand, if $c = 1/2$, then we obtain $E[X] = \Theta(1/n^{1/2 - 1}) = \Theta(n^{1/2})$, and we expect that there are a large number of streaks of length $(1/2) \lg n$. Therefore, one streak of such a length is likely to occur. From these rough estimates alone, we can conclude that the expected length of the longest streak is $\Theta(\lg n)$.
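A small simulation (our own sketch) makes the $\Theta(\lg n)$ behavior visible: the average longest run of heads grows by roughly a constant each time $n$ increases tenfold, tracking $\lg n$.

    import math
    import random

    def longest_streak(n):
        """Flip a fair coin n times and return the longest run of heads."""
        best = run = 0
        for _ in range(n):
            run = run + 1 if random.random() < 0.5 else 0
            best = max(best, run)
        return best

    for n in (1_000, 10_000, 100_000):
        trials = 200
        avg = sum(longest_streak(n) for _ in range(trials)) / trials
        print(n, round(avg, 2), round(math.log2(n), 2))   # avg tracks lg n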
5.4.4 The on-line hiring problem

As a final example, we consider a variant of the hiring problem. Suppose now that we do not wish to interview all the candidates in order to find the best one. We also do not wish to hire and fire as we find better and better applicants. Instead, we are willing to settle for a candidate who is close to the best, in exchange for hiring exactly once. We must obey one company requirement: after each interview we must either immediately offer the position to the applicant or immediately reject the applicant. What is the trade-off between minimizing the amount of interviewing and maximizing the quality of the candidate hired?

We can model this problem in the following way. After meeting an applicant, we are able to give each one a score; let score(i) denote the score we give to the $i$th applicant, and assume that no two applicants receive the same score. After we have seen $j$ applicants, we know which of the $j$ has the highest score, but we do not know whether any of the remaining $n - j$ applicants will receive a higher score. We decide to adopt the strategy of selecting a positive integer $k < n$, interviewing and then rejecting the first $k$ applicants, and hiring the first applicant thereafter who has a higher score than all preceding applicants. If it turns out that the best-qualified applicant was among the first $k$ interviewed, then we hire the $n$th applicant. We formalize this strategy in the procedure ON-LINE-MAXIMUM(k, n), which returns the index of the candidate we wish to hire.
ON-LINE-MAXIMUM(k, n)
1  bestscore = -∞
2  for i = 1 to k
3      if score(i) > bestscore
4          bestscore = score(i)
5  for i = k + 1 to n
6      if score(i) > bestscore
7          return i
8  return n

We wish to determine, for each possible value of $k$, the probability that we hire the most qualified applicant. We then choose the best possible $k$, and implement the strategy with that value. For the moment, assume that $k$ is fixed. Let $M(j) = \max_{1 \le i \le j} \{\text{score}(i)\}$ denote the maximum score among applicants 1 through $j$. Let $S$ be the event that we succeed in choosing the best-qualified applicant, and let $S_i$ be the event that we succeed when the best-qualified applicant is the $i$th one interviewed. Since the various $S_i$ are disjoint, we have that $\Pr\{S\} = \sum_{i=1}^{n} \Pr\{S_i\}$. Noting that we never succeed when the best-qualified applicant is one of the first $k$, we have that $\Pr\{S_i\} = 0$ for $i = 1, 2, \ldots, k$. Thus, we obtain

$$\Pr\{S\} = \sum_{i=k+1}^{n} \Pr\{S_i\} \; . \tag{5.12}$$

We now compute $\Pr\{S_i\}$. In order to succeed when the best-qualified applicant is the $i$th one, two things must happen. First, the best-qualified applicant must be in position $i$, an event which we denote by $B_i$. Second, the algorithm must not select any of the applicants in positions $k+1$ through $i-1$, which happens only if, for each $j$ such that $k+1 \le j \le i-1$, we find that score(j) < bestscore in line 6. (Because scores are unique, we can ignore the possibility of score(j) = bestscore.) In other words, all of the values score(k+1) through score(i-1) must be less than $M(k)$; if any are greater than $M(k)$, we instead return the index of the first one that is greater. We use $O_i$ to denote the event that none of the applicants in positions $k+1$ through $i-1$ are chosen. Fortunately, the two events $B_i$ and $O_i$ are independent. The event $O_i$ depends only on the relative ordering of the values in positions 1 through $i-1$, whereas $B_i$ depends only on whether the value in position $i$ is greater than the values in all other positions. The ordering of the values in positions 1 through $i-1$ does not affect whether the value in position $i$ is greater than all of them, and the value in position $i$ does not affect the ordering of the values in positions 1 through $i-1$. Thus we can apply equation (C.15) to obtain
$$\Pr\{S_i\} = \Pr\{B_i \cap O_i\} = \Pr\{B_i\} \Pr\{O_i\} \; .$$

The probability $\Pr\{B_i\}$ is clearly $1/n$, since the maximum is equally likely to be in any one of the $n$ positions. For event $O_i$ to occur, the maximum value in positions 1 through $i-1$, which is equally likely to be in any of these $i-1$ positions, must be in one of the first $k$ positions. Consequently, $\Pr\{O_i\} = k/(i-1)$ and $\Pr\{S_i\} = k/(n(i-1))$. Using equation (5.12), we have

$$
\begin{aligned}
\Pr\{S\} &= \sum_{i=k+1}^{n} \Pr\{S_i\} \\
&= \sum_{i=k+1}^{n} \frac{k}{n(i-1)} \\
&= \frac{k}{n} \sum_{i=k+1}^{n} \frac{1}{i-1} \\
&= \frac{k}{n} \sum_{i=k}^{n-1} \frac{1}{i} \; .
\end{aligned}
$$

We approximate by integrals to bound this summation from above and below. By the inequalities (A.12), we have

$$\int_{k}^{n} \frac{1}{x}\,dx \le \sum_{i=k}^{n-1} \frac{1}{i} \le \int_{k-1}^{n-1} \frac{1}{x}\,dx \; .$$

Evaluating these definite integrals gives us the bounds

$$\frac{k}{n}(\ln n - \ln k) \le \Pr\{S\} \le \frac{k}{n}(\ln(n-1) - \ln(k-1)) \; ,$$

which provide a rather tight bound for $\Pr\{S\}$. Because we wish to maximize our probability of success, let us focus on choosing the value of $k$ that maximizes the lower bound on $\Pr\{S\}$. (Besides, the lower-bound expression is easier to maximize than the upper-bound expression.) Differentiating the expression $(k/n)(\ln n - \ln k)$ with respect to $k$, we obtain

$$\frac{1}{n}(\ln n - \ln k - 1) \; .$$

Setting this derivative equal to 0, we see that we maximize the lower bound on the probability when $\ln k = \ln n - 1 = \ln(n/e)$ or, equivalently, when $k = n/e$. Thus, if we implement our strategy with $k = n/e$, we succeed in hiring our best-qualified applicant with probability at least $1/e$.
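A quick simulation bears this out. The function `online_maximum` below is our own 0-based Python transcription of ON-LINE-MAXIMUM, and the distinct random scores are an assumption made for the demonstration; this is a sketch, not part of the text:

```python
import math
import random

def online_maximum(scores, k):
    """0-based sketch of ON-LINE-MAXIMUM: reject the first k applicants,
    then hire the first one who beats all of them (or the last applicant)."""
    best = max(scores[:k]) if k > 0 else float("-inf")
    for i in range(k, len(scores)):
        if scores[i] > best:
            return i
    return len(scores) - 1

n, trials = 100, 10_000
k = round(n / math.e)
wins = 0
for _ in range(trials):
    scores = random.sample(range(10 * n), n)  # distinct scores, random order
    if online_maximum(scores, k) == scores.index(max(scores)):
        wins += 1
print(f"success rate with k = n/e = {k}: {wins / trials:.3f}; 1/e = {1 / math.e:.3f}")
```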
Exercises

5.4-1
How many people must there be in a room before the probability that someone has the same birthday as you do is at least 1/2? How many people must there be before the probability that at least two people have a birthday on July 4 is greater than 1/2?

5.4-2
Suppose that we toss balls into $b$ bins until some bin contains two balls. Each toss is independent, and each ball is equally likely to end up in any bin. What is the expected number of ball tosses?

5.4-3 ?
For the analysis of the birthday paradox, is it important that the birthdays be mutually independent, or is pairwise independence sufficient? Justify your answer.

5.4-4 ?
How many people should be invited to a party in order to make it likely that there are three people with the same birthday?

5.4-5 ?
What is the probability that a $k$-string over a set of size $n$ forms a $k$-permutation? How does this question relate to the birthday paradox?

5.4-6 ?
Suppose that $n$ balls are tossed into $n$ bins, where each toss is independent and the ball is equally likely to end up in any bin. What is the expected number of empty bins? What is the expected number of bins with exactly one ball?

5.4-7 ?
Sharpen the lower bound on streak length by showing that in $n$ flips of a fair coin, the probability is less than $1/n$ that no streak longer than $\lg n - 2\lg\lg n$ consecutive heads occurs.
Problems

5-1 Probabilistic counting
With a $b$-bit counter, we can ordinarily only count up to $2^b - 1$. With R. Morris's probabilistic counting, we can count up to a much larger value at the expense of some loss of precision.

We let a counter value of $i$ represent a count of $n_i$ for $i = 0, 1, \ldots, 2^b - 1$, where the $n_i$ form an increasing sequence of nonnegative values. We assume that the initial value of the counter is 0, representing a count of $n_0 = 0$. The INCREMENT operation works on a counter containing the value $i$ in a probabilistic manner. If $i = 2^b - 1$, then the operation reports an overflow error. Otherwise, the INCREMENT operation increases the counter by 1 with probability $1/(n_{i+1} - n_i)$, and it leaves the counter unchanged with probability $1 - 1/(n_{i+1} - n_i)$.

If we select $n_i = i$ for all $i \ge 0$, then the counter is an ordinary one. More interesting situations arise if we select, say, $n_i = 2^{i-1}$ for $i > 0$ or $n_i = F_i$ (the $i$th Fibonacci number; see Section 3.2).

For this problem, assume that $n_{2^b - 1}$ is large enough that the probability of an overflow error is negligible.

a. Show that the expected value represented by the counter after $n$ INCREMENT operations have been performed is exactly $n$.

b. The analysis of the variance of the count represented by the counter depends on the sequence of the $n_i$. Let us consider a simple case: $n_i = 100i$ for all $i \ge 0$. Estimate the variance in the value represented by the register after $n$ INCREMENT operations have been performed.
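To make the INCREMENT mechanism concrete (this sketch is ours, not part of the problem, and the function name `increment` and the demonstration with $n_i = 100i$ are our own choices):

```python
import random

def increment(i, n_seq):
    """One probabilistic INCREMENT step (a sketch).

    n_seq[i] is the count n_i represented by counter value i; the counter
    advances from i to i + 1 with probability 1 / (n_seq[i+1] - n_seq[i]).
    """
    if i == len(n_seq) - 1:
        raise OverflowError("counter overflow")
    if random.random() < 1 / (n_seq[i + 1] - n_seq[i]):
        return i + 1
    return i

# Demonstration with n_i = 100*i, the sequence of part (b): after many
# INCREMENT operations, the represented value should be close to the true count.
n_seq = [100 * i for i in range(1000)]
c = 0
for _ in range(50_000):
    c = increment(c, n_seq)
print("represented count:", n_seq[c], "(true count: 50000)")
```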
5-2 Searching an unsorted array
This problem examines three algorithms for searching for a value $x$ in an unsorted array $A$ consisting of $n$ elements.

Consider the following randomized strategy: pick a random index $i$ into $A$. If $A[i] = x$, then we terminate; otherwise, we continue the search by picking a new random index into $A$. We continue picking random indices into $A$ until we find an index $j$ such that $A[j] = x$ or until we have checked every element of $A$. Note that we pick from the whole set of indices each time, so that we may examine a given element more than once.

a. Write pseudocode for a procedure RANDOM-SEARCH to implement the strategy above. Be sure that your algorithm terminates when all indices into $A$ have been picked.

b. Suppose that there is exactly one index $i$ such that $A[i] = x$. What is the expected number of indices into $A$ that we must pick before we find $x$ and RANDOM-SEARCH terminates?

c. Generalizing your solution to part (b), suppose that there are $k \ge 1$ indices $i$ such that $A[i] = x$. What is the expected number of indices into $A$ that we must pick before we find $x$ and RANDOM-SEARCH terminates? Your answer should be a function of $n$ and $k$.

d. Suppose that there are no indices $i$ such that $A[i] = x$. What is the expected number of indices into $A$ that we must pick before we have checked all elements of $A$ and RANDOM-SEARCH terminates?

Now consider a deterministic linear search algorithm, which we refer to as DETERMINISTIC-SEARCH. Specifically, the algorithm searches $A$ for $x$ in order, considering $A[1], A[2], A[3], \ldots, A[n]$ until either it finds $A[i] = x$ or it reaches the end of the array. Assume that all possible permutations of the input array are equally likely.

e. Suppose that there is exactly one index $i$ such that $A[i] = x$. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?

f. Generalizing your solution to part (e), suppose that there are $k \ge 1$ indices $i$ such that $A[i] = x$. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH? Your answer should be a function of $n$ and $k$.

g. Suppose that there are no indices $i$ such that $A[i] = x$. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?

Finally, consider a randomized algorithm SCRAMBLE-SEARCH that works by first randomly permuting the input array and then running the deterministic linear search given above on the resulting permuted array.

h. Letting $k$ be the number of indices $i$ such that $A[i] = x$, give the worst-case and expected running times of SCRAMBLE-SEARCH for the cases in which $k = 0$ and $k = 1$. Generalize your solution to handle the case in which $k \ge 1$.

i. Which of the three searching algorithms would you use? Explain your answer.
Chapter notes

The texts by Bollobás, Hofri, and Spencer contain a wealth of advanced probabilistic techniques. The advantages of randomized algorithms are discussed and surveyed by Karp and Rabin. The textbook by Motwani and Raghavan gives an extensive treatment of randomized algorithms.

Several variants of the hiring problem have been widely studied. These problems are more commonly referred to as "secretary problems." An example of work in this area is the paper by Ajtai, Megiddo, and Waarts.
II Sorting and Order Statistics
Introduction

This part presents several algorithms that solve the following sorting problem:

Input: A sequence of $n$ numbers $\langle a_1, a_2, \ldots, a_n \rangle$.

Output: A permutation (reordering) $\langle a_1', a_2', \ldots, a_n' \rangle$ of the input sequence such that $a_1' \le a_2' \le \cdots \le a_n'$.

The input sequence is usually an $n$-element array, although it may be represented in some other fashion, such as a linked list.

The structure of the data

In practice, the numbers to be sorted are rarely isolated values. Each is usually part of a collection of data called a record. Each record contains a key, which is the value to be sorted. The remainder of the record consists of satellite data, which are usually carried around with the key. In practice, when a sorting algorithm permutes the keys, it must permute the satellite data as well. If each record includes a large amount of satellite data, we often permute an array of pointers to the records rather than the records themselves in order to minimize data movement, as the short example below illustrates.
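A minimal Python illustration of the two options (our own sketch, not from the text; the record layout is invented for the example):

```python
# Each record is a key plus bulky satellite data (layout invented for
# illustration).
records = [
    {"key": 31, "satellite": "customer #31's statement data"},
    {"key": 7,  "satellite": "customer #7's statement data"},
    {"key": 19, "satellite": "customer #19's statement data"},
]

# Option 1: permute the records themselves.
by_record = sorted(records, key=lambda r: r["key"])

# Option 2: permute only an array of indices ("pointers") into the records,
# which moves far less data when the satellite data is large.
order = sorted(range(len(records)), key=lambda i: records[i]["key"])
print([records[i]["key"] for i in order])  # [7, 19, 31]
```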
In a sense, it is these implementation details that distinguish an algorithm from a full-blown program. A sorting algorithm describes the method by which we determine the sorted order, regardless of whether we are sorting individual numbers or large records containing many bytes of satellite data. Thus, when focusing on the problem of sorting, we typically assume that the input consists only of numbers. Translating an algorithm for sorting numbers into a program for sorting records is conceptually straightforward, although in a given engineering situation other subtleties may make the actual programming task a challenge.

Why sorting?

Many computer scientists consider sorting to be the most fundamental problem in the study of algorithms. There are several reasons:

- Sometimes an application inherently needs to sort information. For example, in order to prepare customer statements, banks need to sort checks by check number.

- Algorithms often use sorting as a key subroutine. For example, a program that renders graphical objects which are layered on top of each other might have to sort the objects according to an "above" relation so that it can draw these objects from bottom to top. We shall see numerous algorithms in this text that use sorting as a subroutine.

- We can draw from among a wide variety of sorting algorithms, and they employ a rich set of techniques. In fact, many important techniques used throughout algorithm design appear in the body of sorting algorithms that have been developed over the years. In this way, sorting is also a problem of historical interest.

- We can prove a nontrivial lower bound for sorting (as we shall do in Chapter 8). Our best upper bounds match the lower bound asymptotically, and so we know that our sorting algorithms are asymptotically optimal. Moreover, we can use the lower bound for sorting to prove lower bounds for certain other problems.

- Many engineering issues come to the fore when implementing sorting algorithms. The fastest sorting program for a particular situation may depend on many factors, such as prior knowledge about the keys and satellite data, the memory hierarchy (caches and virtual memory) of the host computer, and the software environment. Many of these issues are best dealt with at the algorithmic level, rather than by "tweaking" the code.

Sorting algorithms

We introduced two algorithms that sort $n$ real numbers in Chapter 2. Insertion sort takes $\Theta(n^2)$ time in the worst case. Because its inner loops are tight, however, it is a fast in-place sorting algorithm for small input sizes. (Recall that a sorting algorithm sorts in place if only a constant number of elements of the input array are ever stored outside the array.) Merge sort has a better asymptotic running time, $\Theta(n \lg n)$, but the MERGE procedure it uses does not operate in place.
In this part, we shall introduce two more algorithms that sort arbitrary real numbers. Heapsort, presented in Chapter 6, sorts $n$ numbers in place in $O(n \lg n)$ time. It uses an important data structure, called a heap, with which we can also implement a priority queue.

Quicksort, in Chapter 7, also sorts $n$ numbers in place, but its worst-case running time is $\Theta(n^2)$. Its expected running time is $\Theta(n \lg n)$, however, and it generally outperforms heapsort in practice. Like insertion sort, quicksort has tight code, and so the hidden constant factor in its running time is small. It is a popular algorithm for sorting large input arrays.

Insertion sort, merge sort, heapsort, and quicksort are all comparison sorts: they determine the sorted order of an input array by comparing elements. Chapter 8 begins by introducing the decision-tree model in order to study the performance limitations of comparison sorts. Using this model, we prove a lower bound of $\Omega(n \lg n)$ on the worst-case running time of any comparison sort on $n$ inputs, thus showing that heapsort and merge sort are asymptotically optimal comparison sorts.

Chapter 8 then goes on to show that we can beat this lower bound of $\Omega(n \lg n)$ if we can gather information about the sorted order of the input by means other than comparing elements. The counting sort algorithm, for example, assumes that the input numbers are in the set $\{0, 1, \ldots, k\}$. By using array indexing as a tool for determining relative order, counting sort can sort $n$ numbers in $\Theta(k + n)$ time. Thus, when $k = O(n)$, counting sort runs in time that is linear in the size of the input array. A related algorithm, radix sort, can be used to extend the range of counting sort. If there are $n$ integers to sort, each integer has $d$ digits, and each digit can take on up to $k$ possible values, then radix sort can sort the numbers in $\Theta(d(n + k))$ time. When $d$ is a constant and $k$ is $O(n)$, radix sort runs in linear time. A third algorithm, bucket sort, requires knowledge of the probabilistic distribution of numbers in the input array. It can sort $n$ real numbers uniformly distributed in the half-open interval $[0, 1)$ in average-case $O(n)$ time.

The following table summarizes the running times of the sorting algorithms from Chapters 2 and 6–8. As usual, $n$ denotes the number of items to sort. For counting sort, the items to sort are integers in the set $\{0, 1, \ldots, k\}$. For radix sort, each item is a $d$-digit number, where each digit takes on $k$ possible values. For bucket sort, we assume that the keys are real numbers uniformly distributed in the half-open interval $[0, 1)$. The rightmost column gives the average-case or expected running time, indicating which it gives when it differs from the worst-case running time. We omit the average-case running time of heapsort because we do not analyze it in this book.
Algorithm        Worst-case running time    Average-case/expected running time
Insertion sort   Θ(n^2)                     Θ(n^2)
Merge sort       Θ(n lg n)                  Θ(n lg n)
Heapsort         O(n lg n)                  —
Quicksort        Θ(n^2)                     Θ(n lg n) (expected)
Counting sort    Θ(k + n)                   Θ(k + n)
Radix sort       Θ(d(n + k))                Θ(d(n + k))
Bucket sort      Θ(n^2)                     Θ(n) (average-case)

Order statistics

The $i$th order statistic of a set of $n$ numbers is the $i$th smallest number in the set. We can, of course, select the $i$th order statistic by sorting the input and indexing the $i$th element of the output. With no assumptions about the input distribution, this method runs in $\Omega(n \lg n)$ time, as the lower bound proved in Chapter 8 shows.

In Chapter 9, we show that we can find the $i$th smallest element in $O(n)$ time, even when the elements are arbitrary real numbers. We present a randomized algorithm with tight pseudocode that runs in $\Theta(n^2)$ time in the worst case, but whose expected running time is $O(n)$. We also give a more complicated algorithm that runs in $O(n)$ worst-case time.

Background

Although most of this part does not rely on difficult mathematics, some sections do require mathematical sophistication. In particular, analyses of quicksort, bucket sort, and the order-statistic algorithm use probability, which is reviewed in Appendix C, and the material on probabilistic analysis and randomized algorithms in Chapter 5. The analysis of the worst-case linear-time algorithm for order statistics involves somewhat more sophisticated mathematics than the other worst-case analyses in this part.
6 Heapsort

In this chapter, we introduce another sorting algorithm: heapsort. Like merge sort, but unlike insertion sort, heapsort's running time is $O(n \lg n)$. Like insertion sort, but unlike merge sort, heapsort sorts in place: only a constant number of array elements are stored outside the input array at any time. Thus, heapsort combines the better attributes of the two sorting algorithms we have already discussed.

Heapsort also introduces another algorithm design technique: using a data structure, in this case one we call a "heap," to manage information. Not only is the heap data structure useful for heapsort, but it also makes an efficient priority queue. The heap data structure will reappear in algorithms in later chapters.

The term "heap" was originally coined in the context of heapsort, but it has since come to refer to "garbage-collected storage," such as the programming languages Java and Lisp provide. Our heap data structure is not garbage-collected storage, and whenever we refer to heaps in this book, we shall mean a data structure rather than an aspect of garbage collection.

6.1 Heaps

The (binary) heap data structure is an array object that we can view as a nearly complete binary tree (see Section B.5.3), as shown in Figure 6.1. Each node of the tree corresponds to an element of the array. The tree is completely filled on all levels except possibly the lowest, which is filled from the left up to a point. An array A that represents a heap is an object with two attributes: A.length, which (as usual) gives the number of elements in the array, and A.heap-size, which represents how many elements in the heap are stored within array A. That is, although A[1..A.length] may contain numbers, only the elements in A[1..A.heap-size], where 0 ≤ A.heap-size ≤ A.length, are valid elements of the heap. The root of the tree is A[1], and given the index i of a node, we can easily compute the indices of its parent, left child, and right child:
[Figure 6.1: A max-heap viewed as (a) a binary tree and (b) an array. The number within the circle at each node in the tree is the value stored at that node. The number above a node is the corresponding index in the array. Above and below the array are lines showing parent-child relationships; parents are always to the left of their children. The tree has height three; the node at index 4 (with value 8) has height one.]

PARENT(i)
1  return ⌊i/2⌋

LEFT(i)
1  return 2i

RIGHT(i)
1  return 2i + 1

On most computers, the LEFT procedure can compute 2i in one instruction by simply shifting the binary representation of i left by one bit position. Similarly, the RIGHT procedure can quickly compute 2i + 1 by shifting the binary representation of i left by one bit position and then adding in a 1 as the low-order bit. The PARENT procedure can compute ⌊i/2⌋ by shifting i right one bit position. Good implementations of heapsort often implement these procedures as "macros" or "inline" procedures.
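These index computations are one-liners in most languages. A minimal Python sketch (ours, not the text's), keeping the book's 1-based indexing:

```python
def parent(i):
    return i // 2      # equivalently i >> 1

def left(i):
    return 2 * i       # equivalently i << 1

def right(i):
    return 2 * i + 1   # equivalently (i << 1) | 1

# In the heap of Figure 6.1, the node at index 2 (value 14) has
# children at indices 4 and 5 and parent at index 1.
assert (parent(2), left(2), right(2)) == (1, 4, 5)
```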
There are two kinds of binary heaps: max-heaps and min-heaps. In both kinds, the values in the nodes satisfy a heap property, the specifics of which depend on the kind of heap. In a max-heap, the max-heap property is that for every node i other than the root,

A[PARENT(i)] ≥ A[i] ,

that is, the value of a node is at most the value of its parent. Thus, the largest element in a max-heap is stored at the root, and the subtree rooted at a node contains values no larger than that contained at the node itself. A min-heap is organized in the opposite way; the min-heap property is that for every node i other than the root,

A[PARENT(i)] ≤ A[i] .

The smallest element in a min-heap is at the root.

For the heapsort algorithm, we use max-heaps. Min-heaps commonly implement priority queues, which we discuss in Section 6.5. We shall be precise in specifying whether we need a max-heap or a min-heap for any particular application, and when properties apply to either max-heaps or min-heaps, we just use the term "heap."

Viewing a heap as a tree, we define the height of a node in a heap to be the number of edges on the longest simple downward path from the node to a leaf, and we define the height of the heap to be the height of its root. Since a heap of $n$ elements is based on a complete binary tree, its height is $\Theta(\lg n)$ (see Exercise 6.1-2). We shall see that the basic operations on heaps run in time at most proportional to the height of the tree and thus take $O(\lg n)$ time. The remainder of this chapter presents some basic procedures and shows how they are used in a sorting algorithm and a priority-queue data structure.

- The MAX-HEAPIFY procedure, which runs in $O(\lg n)$ time, is the key to maintaining the max-heap property.
- The BUILD-MAX-HEAP procedure, which runs in linear time, produces a max-heap from an unordered input array.
- The HEAPSORT procedure, which runs in $O(n \lg n)$ time, sorts an array in place.
- The MAX-HEAP-INSERT, HEAP-EXTRACT-MAX, HEAP-INCREASE-KEY, and HEAP-MAXIMUM procedures, which run in $O(\lg n)$ time, allow the heap data structure to implement a priority queue.

Exercises

6.1-1
What are the minimum and maximum numbers of elements in a heap of height h?

6.1-2
Show that an n-element heap has height ⌊lg n⌋.

6.1-3
Show that in any subtree of a max-heap, the root of the subtree contains the largest value occurring anywhere in that subtree.
6.1-4
Where in a max-heap might the smallest element reside, assuming that all elements are distinct?

6.1-5
Is an array that is in sorted order a min-heap?

6.1-6
Is the array with values ⟨23, 17, 14, 6, 13, 10, 1, 5, 7, 12⟩ a max-heap?

6.1-7
Show that, with the array representation for storing an n-element heap, the leaves are the nodes indexed by ⌊n/2⌋ + 1, ⌊n/2⌋ + 2, ..., n.

6.2 Maintaining the heap property

In order to maintain the max-heap property, we call the procedure MAX-HEAPIFY. Its inputs are an array A and an index i into the array. When it is called, MAX-HEAPIFY assumes that the binary trees rooted at LEFT(i) and RIGHT(i) are max-heaps, but that A[i] might be smaller than its children, thus violating the max-heap property. MAX-HEAPIFY lets the value at A[i] "float down" in the max-heap so that the subtree rooted at index i obeys the max-heap property.

MAX-HEAPIFY(A, i)
1   l = LEFT(i)
2   r = RIGHT(i)
3   if l ≤ A.heap-size and A[l] > A[i]
4       largest = l
5   else largest = i
6   if r ≤ A.heap-size and A[r] > A[largest]
7       largest = r
8   if largest ≠ i
9       exchange A[i] with A[largest]
10      MAX-HEAPIFY(A, largest)
[Figure 6.2: The action of MAX-HEAPIFY(A, 2), where A.heap-size = 10. (a) The initial configuration, with A[2] at node i = 2 violating the max-heap property since it is not larger than both children. The max-heap property is restored for node 2 in (b) by exchanging A[2] with A[4], which destroys the max-heap property for node 4. The recursive call MAX-HEAPIFY(A, 4) now has i = 4. After swapping A[4] with A[9], as shown in (c), node 4 is fixed up, and the recursive call MAX-HEAPIFY(A, 9) yields no further change to the data structure.]

Figure 6.2 illustrates the action of MAX-HEAPIFY. At each step, the largest of the elements A[i], A[LEFT(i)], and A[RIGHT(i)] is determined, and its index is stored in largest. If A[i] is largest, then the subtree rooted at node i is already a max-heap and the procedure terminates. Otherwise, one of the two children has the largest element, and A[i] is swapped with A[largest], which causes node i and its children to satisfy the max-heap property. The node indexed by largest, however, now has the original value A[i], and thus the subtree rooted at largest might violate the max-heap property. Consequently, we call MAX-HEAPIFY recursively on that subtree.

The running time of MAX-HEAPIFY on a subtree of size $n$ rooted at a given node i is the $\Theta(1)$ time to fix up the relationships among the elements A[i], A[LEFT(i)], and A[RIGHT(i)], plus the time to run MAX-HEAPIFY on a subtree rooted at one of the children of node i (assuming that the recursive call occurs). The children's subtrees each have size at most $2n/3$ (the worst case occurs when the bottom level of the tree is exactly half full), and therefore we can describe the running time of MAX-HEAPIFY by the recurrence

$$T(n) \le T(2n/3) + \Theta(1) \; .$$

The solution to this recurrence, by case 2 of the master theorem (Theorem 4.1), is $T(n) = O(\lg n)$. Alternatively, we can characterize the running time of MAX-HEAPIFY on a node of height $h$ as $O(h)$.
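For concreteness, here is a direct transcription of MAX-HEAPIFY into Python (our own sketch; it keeps the book's 1-based indexing by leaving A[0] unused, and passes heap_size explicitly rather than storing it as an attribute of A):

```python
def max_heapify(A, i, heap_size):
    """Float A[i] down until the subtree rooted at i is a max-heap.

    Assumes the subtrees rooted at LEFT(i) = 2i and RIGHT(i) = 2i + 1
    already satisfy the max-heap property.
    """
    l, r = 2 * i, 2 * i + 1
    largest = l if l <= heap_size and A[l] > A[i] else i
    if r <= heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]   # exchange A[i] with A[largest]
        max_heapify(A, largest, heap_size)

# The example of Figure 6.2: A[2] = 4 violates the max-heap property.
A = [None, 16, 4, 10, 14, 7, 9, 3, 2, 8, 1]   # A[0] is unused padding
max_heapify(A, 2, heap_size=10)
print(A[1:])   # [16, 14, 10, 8, 7, 9, 3, 2, 4, 1], the max-heap of Figure 6.1
```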
Exercises

6.2-1
Using Figure 6.2 as a model, illustrate the operation of MAX-HEAPIFY(A, 3) on the array A = ⟨27, 17, 3, 16, 13, 10, 1, 5, 7, 12, 4, 8, 9, 0⟩.

6.2-2
Starting with the procedure MAX-HEAPIFY, write pseudocode for the procedure MIN-HEAPIFY(A, i), which performs the corresponding manipulation on a min-heap. How does the running time of MIN-HEAPIFY compare to that of MAX-HEAPIFY?

6.2-3
What is the effect of calling MAX-HEAPIFY(A, i) when the element A[i] is larger than its children?

6.2-4
What is the effect of calling MAX-HEAPIFY(A, i) for i > A.heap-size/2?