Your SlideShare is downloading. ×

Elements of information theory.2nd

3,453

Published on

Published in: Technology
1 Comment
6 Likes
Statistics
Notes
No Downloads
Views
Total Views
3,453
On Slideshare
0
From Embeds
0
Number of Embeds
0
Actions
Shares
0
Downloads
268
Comments
1
Likes
6
Embeds 0
No embeds

Report content
Flagged as inappropriate Flag as inappropriate
Flag as inappropriate

Select your reason for flagging this presentation as inappropriate.

Cancel
No notes for slide

Transcript

  • 1. ELEMENTS OFINFORMATION THEORYSecond EditionTHOMAS M. COVERJOY A. THOMASA JOHN WILEY & SONS, INC., PUBLICATION
  • 2. ELEMENTS OFINFORMATION THEORY
  • 3. ELEMENTS OFINFORMATION THEORYSecond EditionTHOMAS M. COVERJOY A. THOMASA JOHN WILEY & SONS, INC., PUBLICATION
  • 4. Copyright  2006 by John Wiley & Sons, Inc. All rights reserved.Published by John Wiley & Sons, Inc., Hoboken, New Jersey.Published simultaneously in Canada.No part of this publication may be reproduced, stored in a retrieval system, or transmitted in anyform or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise,except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, withouteither the prior written permission of the Publisher, or authorization through payment of theappropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers,Requests to the Publisher for permission should be addressed to the Permissions Department,John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008,Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best effortsin preparing this book, they make no representations or warranties with respect to the accuracy orcompleteness of the contents of this book and specifically disclaim any implied warranties ofmerchantability or fitness for a particular purpose. No warranty may be created or extended by salesrepresentatives or written sales materials. The advice and strategies contained herein may not besuitable for your situation. You should consult with a professional where appropriate. Neither thepublisher nor author shall be liable for any loss of profit or any other commercial damages, includingbut not limited to special, incidental, consequential, or other damages.For general information on our other products and services or for technical support, please contact ourCustomer Care Department within the United States at (800) 762-2974, outside the United States at(317) 572-3993 or fax (317) 572-4002.Wiley also publishes its books in a variety of electronic formats. Some content that appears in printmay not be available in electronic formats. For more information about Wiley products, visit our webLibrary of Congress Cataloging-in-Publication Data:Cover, T. M., 1938–Elements of information theory/by Thomas M. Cover, Joy A. Thomas.–2nd ed.p. cm.“A Wiley-Interscience publication.”Includes bibliographical references and index.ISBN-13 978-0-471-24195-9ISBN-10 0-471-24195-41. Information theory. I. Thomas, Joy A. II. Title.Q360.C68 2005003 .54–dc222005047799Printed in the United States of America.10 9 8 7 6 5 4 3 2 1site at www.wiley.com.or online at http://www.wiley.com/go/permission.MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com.
  • 5. CONTENTSContents vPreface to the Second Edition xvPreface to the First Edition xviiAcknowledgments for the Second Edition xxiAcknowledgments for the First Edition xxiii1 Introduction and Preview 11.1 Preview of the Book 52 Entropy, Relative Entropy, and Mutual Information 132.1 Entropy 132.2 Joint Entropy and Conditional Entropy 162.3 Relative Entropy and Mutual Information 192.4 Relationship Between Entropy and MutualInformation 202.5 Chain Rules for Entropy, Relative Entropy,and Mutual Information 222.6 Jensen’s Inequality and Its Consequences 252.7 Log Sum Inequality and Its Applications 302.8 Data-Processing Inequality 342.9 Sufficient Statistics 352.10 Fano’s Inequality 37Summary 41Problems 43Historical Notes 54v
  • 6. vi CONTENTS3 Asymptotic Equipartition Property 573.1 Asymptotic Equipartition Property Theorem 583.2 Consequences of the AEP: Data Compression 603.3 High-Probability Sets and the Typical Set 62Summary 64Problems 64Historical Notes 694 Entropy Rates of a Stochastic Process 714.1 Markov Chains 714.2 Entropy Rate 744.3 Example: Entropy Rate of a Random Walk on aWeighted Graph 784.4 Second Law of Thermodynamics 814.5 Functions of Markov Chains 84Summary 87Problems 88Historical Notes 1005 Data Compression 1035.1 Examples of Codes 1035.2 Kraft Inequality 1075.3 Optimal Codes 1105.4 Bounds on the Optimal Code Length 1125.5 Kraft Inequality for Uniquely DecodableCodes 1155.6 Huffman Codes 1185.7 Some Comments on Huffman Codes 1205.8 Optimality of Huffman Codes 1235.9 Shannon–Fano–Elias Coding 1275.10 Competitive Optimality of the ShannonCode 1305.11 Generation of Discrete Distributions fromFair Coins 134Summary 141Problems 142Historical Notes 157
  • 7. CONTENTS vii6 Gambling and Data Compression 1596.1 The Horse Race 1596.2 Gambling and Side Information 1646.3 Dependent Horse Races and Entropy Rate 1666.4 The Entropy of English 1686.5 Data Compression and Gambling 1716.6 Gambling Estimate of the Entropy of English 173Summary 175Problems 176Historical Notes 1827 Channel Capacity 1837.1 Examples of Channel Capacity 1847.1.1 Noiseless Binary Channel 1847.1.2 Noisy Channel with NonoverlappingOutputs 1857.1.3 Noisy Typewriter 1867.1.4 Binary Symmetric Channel 1877.1.5 Binary Erasure Channel 1887.2 Symmetric Channels 1897.3 Properties of Channel Capacity 1917.4 Preview of the Channel Coding Theorem 1917.5 Definitions 1927.6 Jointly Typical Sequences 1957.7 Channel Coding Theorem 1997.8 Zero-Error Codes 2057.9 Fano’s Inequality and the Converse to the CodingTheorem 2067.10 Equality in the Converse to the Channel CodingTheorem 2087.11 Hamming Codes 2107.12 Feedback Capacity 2167.13 Source–Channel Separation Theorem 218Summary 222Problems 223Historical Notes 240
  • 8. viii CONTENTS8 Differential Entropy 2438.1 Definitions 2438.2 AEP for Continuous Random Variables 2458.3 Relation of Differential Entropy to DiscreteEntropy 2478.4 Joint and Conditional Differential Entropy 2498.5 Relative Entropy and Mutual Information 2508.6 Properties of Differential Entropy, Relative Entropy,and Mutual Information 252Summary 256Problems 256Historical Notes 2599 Gaussian Channel 2619.1 Gaussian Channel: Definitions 2639.2 Converse to the Coding Theorem for GaussianChannels 2689.3 Bandlimited Channels 2709.4 Parallel Gaussian Channels 2749.5 Channels with Colored Gaussian Noise 2779.6 Gaussian Channels with Feedback 280Summary 289Problems 290Historical Notes 29910 Rate Distortion Theory 30110.1 Quantization 30110.2 Definitions 30310.3 Calculation of the Rate Distortion Function 30710.3.1 Binary Source 30710.3.2 Gaussian Source 31010.3.3 Simultaneous Description of IndependentGaussian Random Variables 31210.4 Converse to the Rate Distortion Theorem 31510.5 Achievability of the Rate Distortion Function 31810.6 Strongly Typical Sequences and Rate Distortion 32510.7 Characterization of the Rate Distortion Function 329
  • 9. CONTENTS ix10.8 Computation of Channel Capacity and the RateDistortion Function 332Summary 335Problems 336Historical Notes 34511 Information Theory and Statistics 34711.1 Method of Types 34711.2 Law of Large Numbers 35511.3 Universal Source Coding 35711.4 Large Deviation Theory 36011.5 Examples of Sanov’s Theorem 36411.6 Conditional Limit Theorem 36611.7 Hypothesis Testing 37511.8 Chernoff–Stein Lemma 38011.9 Chernoff Information 38411.10 Fisher Information and the Cram´er–RaoInequality 392Summary 397Problems 399Historical Notes 40812 Maximum Entropy 40912.1 Maximum Entropy Distributions 40912.2 Examples 41112.3 Anomalous Maximum Entropy Problem 41312.4 Spectrum Estimation 41512.5 Entropy Rates of a Gaussian Process 41612.6 Burg’s Maximum Entropy Theorem 417Summary 420Problems 421Historical Notes 42513 Universal Source Coding 42713.1 Universal Codes and Channel Capacity 42813.2 Universal Coding for Binary Sequences 43313.3 Arithmetic Coding 436
  • 10. x CONTENTS13.4 Lempel–Ziv Coding 44013.4.1 Sliding Window Lempel–ZivAlgorithm 44113.4.2 Tree-Structured Lempel–ZivAlgorithms 44213.5 Optimality of Lempel–Ziv Algorithms 44313.5.1 Sliding Window Lempel–ZivAlgorithms 44313.5.2 Optimality of Tree-Structured Lempel–ZivCompression 448Summary 456Problems 457Historical Notes 46114 Kolmogorov Complexity 46314.1 Models of Computation 46414.2 Kolmogorov Complexity: Definitionsand Examples 46614.3 Kolmogorov Complexity and Entropy 47314.4 Kolmogorov Complexity of Integers 47514.5 Algorithmically Random and IncompressibleSequences 47614.6 Universal Probability 48014.7 Kolmogorov complexity 48214.8 48414.9 Universal Gambling 48714.10 Occam’s Razor 48814.11 Kolmogorov Complexity and UniversalProbability 49014.12 Kolmogorov Sufficient Statistic 49614.13 Minimum Description Length Principle 500Summary 501Problems 503Historical Notes 50715 Network Information Theory 50915.1 Gaussian Multiple-User Channels 513
  • 11. CONTENTS xi15.1.1 Single-User Gaussian Channel 51315.1.2 Gaussian Multiple-Access Channelwith m Users 51415.1.3 Gaussian Broadcast Channel 51515.1.4 Gaussian Relay Channel 51615.1.5 Gaussian Interference Channel 51815.1.6 Gaussian Two-Way Channel 51915.2 Jointly Typical Sequences 52015.3 Multiple-Access Channel 52415.3.1 Achievability of the Capacity Region for theMultiple-Access Channel 53015.3.2 Comments on the Capacity Region for theMultiple-Access Channel 53215.3.3 Convexity of the Capacity Region of theMultiple-Access Channel 53415.3.4 Converse for the Multiple-AccessChannel 53815.3.5 m-User Multiple-Access Channels 54315.3.6 Gaussian Multiple-Access Channels 54415.4 Encoding of Correlated Sources 54915.4.1 Achievability of the Slepian–WolfTheorem 55115.4.2 Converse for the Slepian–WolfTheorem 55515.4.3 Slepian–Wolf Theorem for ManySources 55615.4.4 Interpretation of Slepian–WolfCoding 55715.5 Duality Between Slepian–Wolf Encoding andMultiple-Access Channels 55815.6 Broadcast Channel 56015.6.1 Definitions for a Broadcast Channel 56315.6.2 Degraded Broadcast Channels 56415.6.3 Capacity Region for the Degraded BroadcastChannel 56515.7 Relay Channel 57115.8 Source Coding with Side Information 57515.9 Rate Distortion with Side Information 580
  • 12. xii CONTENTS15.10 General Multiterminal Networks 587Summary 594Problems 596Historical Notes 60916 Information Theory and Portfolio Theory 61316.1 The Stock Market: Some Definitions 61316.2 Kuhn–Tucker Characterization of the Log-OptimalPortfolio 61716.3 Asymptotic Optimality of the Log-OptimalPortfolio 61916.4 Side Information and the Growth Rate 62116.5 Investment in Stationary Markets 62316.6 Competitive Optimality of the Log-OptimalPortfolio 62716.7 Universal Portfolios 62916.7.1 Finite-Horizon Universal Portfolios 63116.7.2 Horizon-Free Universal Portfolios 63816.8 Shannon–McMillan–Breiman Theorem(General AEP) 644Summary 650Problems 652Historical Notes 65517 Inequalities in Information Theory 65717.1 Basic Inequalities of Information Theory 65717.2 Differential Entropy 66017.3 Bounds on Entropy and Relative Entropy 66317.4 Inequalities for Types 66517.5 Combinatorial Bounds on Entropy 66617.6 Entropy Rates of Subsets 66717.7 Entropy and Fisher Information 67117.8 Entropy Power Inequality and Brunn–MinkowskiInequality 67417.9 Inequalities for Determinants 679
  • 13. CONTENTS xiii17.10 Inequalities for Ratios of Determinants 683Summary 686Problems 686Historical Notes 687Bibliography 689List of Symbols 723Index 727
  • 14. PREFACE TO THESECOND EDITIONIn the years since the publication of the first edition, there were manyaspects of the book that we wished to improve, to rearrange, or to expand,but the constraints of reprinting would not allow us to make those changesbetween printings. In the new edition, we now get a chance to make someof these changes, to add problems, and to discuss some topics that we hadomitted from the first edition.The key changes include a reorganization of the chapters to makethe book easier to teach, and the addition of more than two hundrednew problems. We have added material on universal portfolios, universalsource coding, Gaussian feedback capacity, network information theory,and developed the duality of data compression and channel capacity. Anew chapter has been added and many proofs have been simplified. Wehave also updated the references and historical notes.The material in this book can be taught in a two-quarter sequence. Thefirst quarter might cover Chapters 1 to 9, which includes the asymptoticequipartition property, data compression, and channel capacity, culminat-ing in the capacity of the Gaussian channel. The second quarter couldcover the remaining chapters, including rate distortion, the method oftypes, Kolmogorov complexity, network information theory, universalsource coding, and portfolio theory. If only one semester is available, wewould add rate distortion and a single lecture each on Kolmogorov com-plexity and network information theory to the first semester. A web site,http://www.elementsofinformationtheory.com, provides links to additionalmaterial and solutions to selected problems.In the years since the first edition of the book, information theorycelebrated its 50th birthday (the 50th anniversary of Shannon’s originalpaper that started the field), and ideas from information theory have beenapplied to many problems of science and technology, including bioin-formatics, web search, wireless communication, video compression, andxv
  • 15. xvi PREFACE TO THE SECOND EDITIONothers. The list of applications is endless, but it is the elegance of thefundamental mathematics that is still the key attraction of this area. Wehope that this book will give some insight into why we believe that thisis one of the most interesting areas at the intersection of mathematics,physics, statistics, and engineering.Tom CoverJoy ThomasPalo Alto, CaliforniaJanuary 2006
  • 16. PREFACE TO THEFIRST EDITIONThis is intended to be a simple and accessible book on information theory.As Einstein said, “Everything should be made as simple as possible, but nosimpler.” Although we have not verified the quote (first found in a fortunecookie), this point of view drives our development throughout the book.There are a few key ideas and techniques that, when mastered, make thesubject appear simple and provide great intuition on new questions.This book has arisen from over ten years of lectures in a two-quartersequence of a senior and first-year graduate-level course in informationtheory, and is intended as an introduction to information theory for stu-dents of communication theory, computer science, and statistics.There are two points to be made about the simplicities inherent in infor-mation theory. First, certain quantities like entropy and mutual informationarise as the answers to fundamental questions. For example, entropy isthe minimum descriptive complexity of a random variable, and mutualinformation is the communication rate in the presence of noise. Also,as we shall point out, mutual information corresponds to the increase inthe doubling rate of wealth given side information. Second, the answersto information theoretic questions have a natural algebraic structure. Forexample, there is a chain rule for entropies, and entropy and mutual infor-mation are related. Thus the answers to problems in data compressionand communication admit extensive interpretation. We all know the feel-ing that follows when one investigates a problem, goes through a largeamount of algebra, and finally investigates the answer to find that theentire problem is illuminated not by the analysis but by the inspection ofthe answer. Perhaps the outstanding examples of this in physics are New-ton’s laws and Schr¨odinger’s wave equation. Who could have foreseen theawesome philosophical interpretations of Schr¨odinger’s wave equation?In the text we often investigate properties of the answer before we lookat the question. For example, in Chapter 2, we define entropy, relativeentropy, and mutual information and study the relationships and a fewxvii
  • 17. xviii PREFACE TO THE FIRST EDITIONinterpretations of them, showing how the answers fit together in variousways. Along the way we speculate on the meaning of the second law ofthermodynamics. Does entropy always increase? The answer is yes andno. This is the sort of result that should please experts in the area butmight be overlooked as standard by the novice.In fact, that brings up a point that often occurs in teaching. It is funto find new proofs or slightly new results that no one else knows. Whenone presents these ideas along with the established material in class, theresponse is “sure, sure, sure.” But the excitement of teaching the materialis greatly enhanced. Thus we have derived great pleasure from investigat-ing a number of new ideas in this textbook.Examples of some of the new material in this text include the chapteron the relationship of information theory to gambling, the work on the uni-versality of the second law of thermodynamics in the context of Markovchains, the joint typicality proofs of the channel capacity theorem, thecompetitive optimality of Huffman codes, and the proof of Burg’s theoremon maximum entropy spectral density estimation. Also, the chapter onKolmogorov complexity has no counterpart in other information theorytexts. We have also taken delight in relating Fisher information, mutualinformation, the central limit theorem, and the Brunn–Minkowski andentropy power inequalities. To our surprise, many of the classical resultson determinant inequalities are most easily proved using information the-oretic inequalities.Even though the field of information theory has grown considerablysince Shannon’s original paper, we have strived to emphasize its coher-ence. While it is clear that Shannon was motivated by problems in commu-nication theory when he developed information theory, we treat informa-tion theory as a field of its own with applications to communication theoryand statistics. We were drawn to the field of information theory frombackgrounds in communication theory, probability theory, and statistics,because of the apparent impossibility of capturing the intangible conceptof information.Since most of the results in the book are given as theorems and proofs,we expect the elegance of the results to speak for themselves. In manycases we actually describe the properties of the solutions before the prob-lems. Again, the properties are interesting in themselves and provide anatural rhythm for the proofs that follow.One innovation in the presentation is our use of long chains of inequal-ities with no intervening text followed immediately by the explanations.By the time the reader comes to many of these proofs, we expect that heor she will be able to follow most of these steps without any explanationand will be able to pick out the needed explanations. These chains of
  • 18. PREFACE TO THE FIRST EDITION xixinequalities serve as pop quizzes in which the reader can be reassuredof having the knowledge needed to prove some important theorems. Thenatural flow of these proofs is so compelling that it prompted us to floutone of the cardinal rules of technical writing; and the absence of verbiagemakes the logical necessity of the ideas evident and the key ideas per-spicuous. We hope that by the end of the book the reader will share ourappreciation of the elegance, simplicity, and naturalness of informationtheory.Throughout the book we use the method of weakly typical sequences,which has its origins in Shannon’s original 1948 work but was formallydeveloped in the early 1970s. The key idea here is the asymptotic equipar-tition property, which can be roughly paraphrased as “Almost everythingis almost equally probable.”Chapter 2 includes the basic algebraic relationships of entropy, relativeentropy, and mutual information. The asymptotic equipartition property(AEP) is given central prominence in Chapter 3. This leads us to dis-cuss the entropy rates of stochastic processes and data compression inChapters 4 and 5. A gambling sojourn is taken in Chapter 6, where theduality of data compression and the growth rate of wealth is developed.The sensational success of Kolmogorov complexity as an intellectualfoundation for information theory is explored in Chapter 14. Here wereplace the goal of finding a description that is good on the average withthe goal of finding the universally shortest description. There is indeeda universal notion of the descriptive complexity of an object. Here alsothe wonderful number is investigated. This number, which is the binaryexpansion of the probability that a Turing machine will halt, reveals manyof the secrets of mathematics.Channel capacity is established in Chapter 7. The necessary materialon differential entropy is developed in Chapter 8, laying the groundworkfor the extension of previous capacity theorems to continuous noise chan-nels. The capacity of the fundamental Gaussian channel is investigated inChapter 9.The relationship between information theory and statistics, first studiedby Kullback in the early 1950s and relatively neglected since, is developedin Chapter 11. Rate distortion theory requires a little more backgroundthan its noiseless data compression counterpart, which accounts for itsplacement as late as Chapter 10 in the text.The huge subject of network information theory, which is the studyof the simultaneously achievable flows of information in the presence ofnoise and interference, is developed in Chapter 15. Many new ideas comeinto play in network information theory. The primary new ingredients areinterference and feedback. Chapter 16 considers the stock market, which is
  • 19. xx PREFACE TO THE FIRST EDITIONthe generalization of the gambling processes considered in Chapter 6, andshows again the close correspondence of information theory and gambling.Chapter 17, on inequalities in information theory, gives us a chance torecapitulate the interesting inequalities strewn throughout the book, putthem in a new framework, and then add some interesting new inequalitieson the entropy rates of randomly drawn subsets. The beautiful relationshipof the Brunn–Minkowski inequality for volumes of set sums, the entropypower inequality for the effective variance of the sum of independentrandom variables, and the Fisher information inequalities are made explicithere.We have made an attempt to keep the theory at a consistent level.The mathematical level is a reasonably high one, probably the senior orfirst-year graduate level, with a background of at least one good semestercourse in probability and a solid background in mathematics. We have,however, been able to avoid the use of measure theory. Measure theorycomes up only briefly in the proof of the AEP for ergodic processes inChapter 16. This fits in with our belief that the fundamentals of infor-mation theory are orthogonal to the techniques required to bring them totheir full generalization.The essential vitamins are contained in Chapters 2, 3, 4, 5, 7, 8, 9,11, 10, and 15. This subset of chapters can be read without essentialreference to the others and makes a good core of understanding. In ouropinion, Chapter 14 on Kolmogorov complexity is also essential for a deepunderstanding of information theory. The rest, ranging from gambling toinequalities, is part of the terrain illuminated by this coherent and beautifulsubject.Every course has its first lecture, in which a sneak preview and overviewof ideas is presented. Chapter 1 plays this role.Tom CoverJoy ThomasPalo Alto, CaliforniaJune 1990
  • 20. ACKNOWLEDGMENTSFOR THE SECOND EDITIONSince the appearance of the first edition, we have been fortunate to receivefeedback, suggestions, and corrections from a large number of readers. Itwould be impossible to thank everyone who has helped us in our efforts,but we would like to list some of them. In particular, we would liketo thank all the faculty who taught courses based on this book and thestudents who took those courses; it is through them that we learned tolook at the same material from a different perspective.In particular, we would like to thank Andrew Barron, Alon Orlitsky,T. S. Han, Raymond Yeung, Nam Phamdo, Franz Willems, and MartyCohn for their comments and suggestions. Over the years, students atStanford have provided ideas and inspirations for the changes—theseinclude George Gemelos, Navid Hassanpour, Young-Han Kim, CharlesMathis, Styrmir Sigurjonsson, Jon Yard, Michael Baer, Mung Chiang,Suhas Diggavi, Elza Erkip, Paul Fahn, Garud Iyengar, David Julian, Yian-nis Kontoyiannis, Amos Lapidoth, Erik Ordentlich, Sandeep Pombra, JimRoche, Arak Sutivong, Joshua Sweetkind-Singer, and Assaf Zeevi. DeniseMurphy provided much support and help during the preparation of thesecond edition.Joy Thomas would like to acknowledge the support of colleaguesat IBM and Stratify who provided valuable comments and suggestions.Particular thanks are due Peter Franaszek, C. S. Chang, Randy Nelson,Ramesh Gopinath, Pandurang Nayak, John Lamping, Vineet Gupta, andRamana Venkata. In particular, many hours of dicussion with BrandonRoy helped refine some of the arguments in the book. Above all, Joywould like to acknowledge that the second edition would not have beenpossible without the support and encouragement of his wife, Priya, whomakes all things worthwhile.Tom Cover would like to thank his students and his wife, Karen.xxi
  • 21. ACKNOWLEDGMENTSFOR THE FIRST EDITIONWe wish to thank everyone who helped make this book what it is. Inparticular, Aaron Wyner, Toby Berger, Masoud Salehi, Alon Orlitsky,Jim Mazo and Andrew Barron have made detailed comments on variousdrafts of the book which guided us in our final choice of content. Wewould like to thank Bob Gallager for an initial reading of the manuscriptand his encouragement to publish it. Aaron Wyner donated his new proofwith Ziv on the convergence of the Lempel-Ziv algorithm. We wouldalso like to thank Normam Abramson, Ed van der Meulen, Jack Salz andRaymond Yeung for their suggested revisions.Certain key visitors and research associates contributed as well, includ-ing Amir Dembo, Paul Algoet, Hirosuke Yamamoto, Ben Kawabata, M.Shimizu and Yoichiro Watanabe. We benefited from the advice of JohnGill when he used this text in his class. Abbas El Gamal made invaluablecontributions, and helped begin this book years ago when we plannedto write a research monograph on multiple user information theory. Wewould also like to thank the Ph.D. students in information theory as thisbook was being written: Laura Ekroot, Will Equitz, Don Kimber, MitchellTrott, Andrew Nobel, Jim Roche, Erik Ordentlich, Elza Erkip and Vitto-rio Castelli. Also Mitchell Oslick, Chien-Wen Tseng and Michael Morrellwere among the most active students in contributing questions and sug-gestions to the text. Marc Goldberg and Anil Kaul helped us producesome of the figures. Finally we would like to thank Kirsten Goodell andKathy Adams for their support and help in some of the aspects of thepreparation of the manuscript.Joy Thomas would also like to thank Peter Franaszek, Steve Lavenberg,Fred Jelinek, David Nahamoo and Lalit Bahl for their encouragment andsupport during the final stages of production of this book.xxiii
  • 22. CHAPTER 1INTRODUCTION AND PREVIEWInformation theory answers two fundamental questions in communicationtheory: What is the ultimate data compression (answer: the entropy H),and what is the ultimate transmission rate of communication (answer: thechannel capacity C). For this reason some consider information theoryto be a subset of communication theory. We argue that it is much more.Indeed, it has fundamental contributions to make in statistical physics(thermodynamics), computer science (Kolmogorov complexity or algo-rithmic complexity), statistical inference (Occam’s Razor: “The simplestexplanation is best”), and to probability and statistics (error exponents foroptimal hypothesis testing and estimation).This “first lecture” chapter goes backward and forward through infor-mation theory and its naturally related ideas. The full definitions and studyof the subject begin in Chapter 2. Figure 1.1 illustrates the relationshipof information theory to other fields. As the figure suggests, informationtheory intersects physics (statistical mechanics), mathematics (probabilitytheory), electrical engineering (communication theory), and computer sci-ence (algorithmic complexity). We now describe the areas of intersectionin greater detail.Electrical Engineering (Communication Theory). In the early 1940sit was thought to be impossible to send information at a positive ratewith negligible probability of error. Shannon surprised the communica-tion theory community by proving that the probability of error could bemade nearly zero for all communication rates below channel capacity.The capacity can be computed simply from the noise characteristics ofthe channel. Shannon further argued that random processes such as musicand speech have an irreducible complexity below which the signal cannotbe compressed. This he named the entropy, in deference to the paralleluse of this word in thermodynamics, and argued that if the entropy of theElements of Information Theory, Second Edition, By Thomas M. Cover and Joy A. ThomasCopyright  2006 John Wiley & Sons, Inc.1
  • 23. 2 INTRODUCTION AND PREVIEWPhysicsAEPThermodynamicsQuantumInformationTheoryMathematicsInequalitiesStatisticsHypothesisTestingFisherInformationComputerScienceKolmogorovComplexityProbabilityTheoryLimitTheoremsLargeDeviationsCommunicationTheoryLimits ofCommunicationTheoryPortfolio TheoryKelly GamblingEconomicsInformationTheoryFIGURE 1.1. Relationship of information theory to other fields.Data compressionlimitData transmissionlimitmin l(X; X )^max l(X; Y )FIGURE 1.2. Information theory as the extreme points of communication theory.source is less than the capacity of the channel, asymptotically error-freecommunication can be achieved.Information theory today represents the extreme points of the set ofall possible communication schemes, as shown in the fanciful Figure 1.2.The data compression minimum I (X; ˆX) lies at one extreme of the set ofcommunication ideas. All data compression schemes require description
  • 24. INTRODUCTION AND PREVIEW 3rates at least equal to this minimum. At the other extreme is the datatransmission maximum I (X; Y), known as the channel capacity. Thus,all modulation schemes and data compression schemes lie between theselimits.Information theory also suggests means of achieving these ultimatelimits of communication. However, these theoretically optimal communi-cation schemes, beautiful as they are, may turn out to be computationallyimpractical. It is only because of the computational feasibility of sim-ple modulation and demodulation schemes that we use them rather thanthe random coding and nearest-neighbor decoding rule suggested by Shan-non’s proof of the channel capacity theorem. Progress in integrated circuitsand code design has enabled us to reap some of the gains suggested byShannon’s theory. Computational practicality was finally achieved by theadvent of turbo codes. A good example of an application of the ideas ofinformation theory is the use of error-correcting codes on compact discsand DVDs.Recent work on the communication aspects of information theory hasconcentrated on network information theory: the theory of the simultane-ous rates of communication from many senders to many receivers in thepresence of interference and noise. Some of the trade-offs of rates betweensenders and receivers are unexpected, and all have a certain mathematicalsimplicity. A unifying theory, however, remains to be found.Computer Science (Kolmogorov Complexity). Kolmogorov,Chaitin, and Solomonoff put forth the idea that the complexity of a stringof data can be defined by the length of the shortest binary computerprogram for computing the string. Thus, the complexity is the minimaldescription length. This definition of complexity turns out to be universal,that is, computer independent, and is of fundamental importance. Thus,Kolmogorov complexity lays the foundation for the theory of descriptivecomplexity. Gratifyingly, the Kolmogorov complexity K is approximatelyequal to the Shannon entropy H if the sequence is drawn at random froma distribution that has entropy H. So the tie-in between information theoryand Kolmogorov complexity is perfect. Indeed, we consider Kolmogorovcomplexity to be more fundamental than Shannon entropy. It is the ulti-mate data compression and leads to a logically consistent procedure forinference.There is a pleasing complementary relationship between algorithmiccomplexity and computational complexity. One can think about computa-tional complexity (time complexity) and Kolmogorov complexity (pro-gram length or descriptive complexity) as two axes corresponding to
  • 25. 4 INTRODUCTION AND PREVIEWprogram running time and program length. Kolmogorov complexity fo-cuses on minimizing along the second axis, and computational complexityfocuses on minimizing along the first axis. Little work has been done onthe simultaneous minimization of the two.Physics (Thermodynamics). Statistical mechanics is the birthplace ofentropy and the second law of thermodynamics. Entropy always increases.Among other things, the second law allows one to dismiss any claims toperpetual motion machines. We discuss the second law briefly in Chapter 4.Mathematics (Probability Theory and Statistics). The fundamentalquantities of information theory—entropy, relative entropy, and mutualinformation—are defined as functionals of probability distributions. Inturn, they characterize the behavior of long sequences of random variablesand allow us to estimate the probabilities of rare events (large deviationtheory) and to find the best error exponent in hypothesis tests.Philosophy of Science (Occam’s Razor). William of Occam said“Causes shall not be multiplied beyond necessity,” or to paraphrase it,“The simplest explanation is best.” Solomonoff and Chaitin argued per-suasively that one gets a universally good prediction procedure if one takesa weighted combination of all programs that explain the data and observeswhat they print next. Moreover, this inference will work in many problemsnot handled by statistics. For example, this procedure will eventually pre-dict the subsequent digits of π. When this procedure is applied to coin flipsthat come up heads with probability 0.7, this too will be inferred. Whenapplied to the stock market, the procedure should essentially find all the“laws” of the stock market and extrapolate them optimally. In principle,such a procedure would have found Newton’s laws of physics. Of course,such inference is highly impractical, because weeding out all computerprograms that fail to generate existing data will take impossibly long. Wewould predict what happens tomorrow a hundred years from now.Economics (Investment). Repeated investment in a stationary stockmarket results in an exponential growth of wealth. The growth rate ofthe wealth is a dual of the entropy rate of the stock market. The paral-lels between the theory of optimal investment in the stock market andinformation theory are striking. We develop the theory of investment toexplore this duality.Computation vs. Communication. As we build larger computersout of smaller components, we encounter both a computation limit anda communication limit. Computation is communication limited and com-munication is computation limited. These become intertwined, and thus
  • 26. 1.1 PREVIEW OF THE BOOK 5all of the developments in communication theory via information theoryshould have a direct impact on the theory of computation.1.1 PREVIEW OF THE BOOKThe initial questions treated by information theory lay in the areas ofdata compression and transmission. The answers are quantities such asentropy and mutual information, which are functions of the probabilitydistributions that underlie the process of communication. A few definitionswill aid the initial discussion. We repeat these definitions in Chapter 2.The entropy of a random variable X with a probability mass functionp(x) is defined byH(X) = −xp(x) log2 p(x). (1.1)We use logarithms to base 2. The entropy will then be measured in bits.The entropy is a measure of the average uncertainty in the random vari-able. It is the number of bits on average required to describe the randomvariable.Example 1.1.1 Consider a random variable that has a uniform distribu-tion over 32 outcomes. To identify an outcome, we need a label that takeson 32 different values. Thus, 5-bit strings suffice as labels.The entropy of this random variable isH(X) = −32i=1p(i) log p(i) = −32i=1132log132= log 32 = 5 bits,(1.2)which agrees with the number of bits needed to describe X. In this case,all the outcomes have representations of the same length.Now consider an example with nonuniform distribution.Example 1.1.2 Suppose that we have a horse race with eight horsestaking part. Assume that the probabilities of winning for the eight horsesare 12, 14, 18, 116, 164, 164, 164, 164 . We can calculate the entropy of the horserace asH(X) = −12log12−14log14−18log18−116log116− 4164log164= 2 bits. (1.3)
  • 27. 6 INTRODUCTION AND PREVIEWSuppose that we wish to send a message indicating which horse wonthe race. One alternative is to send the index of the winning horse. Thisdescription requires 3 bits for any of the horses. But the win probabilitiesare not uniform. It therefore makes sense to use shorter descriptions for themore probable horses and longer descriptions for the less probable ones,so that we achieve a lower average description length. For example, wecould use the following set of bit strings to represent the eight horses: 0,10, 110, 1110, 111100, 111101, 111110, 111111. The average descriptionlength in this case is 2 bits, as opposed to 3 bits for the uniform code.Notice that the average description length in this case is equal to theentropy. In Chapter 5 we show that the entropy of a random variable isa lower bound on the average number of bits required to represent therandom variable and also on the average number of questions needed toidentify the variable in a game of “20 questions.” We also show how toconstruct representations that have an average length within 1 bit of theentropy.The concept of entropy in information theory is related to the concept ofentropy in statistical mechanics. If we draw a sequence of n independentand identically distributed (i.i.d.) random variables, we will show that theprobability of a “typical” sequence is about 2−nH(X)and that there areabout 2nH(X)such typical sequences. This property [known as the asymp-totic equipartition property (AEP)] is the basis of many of the proofs ininformation theory. We later present other problems for which entropyarises as a natural answer (e.g., the number of fair coin flips needed togenerate a random variable).The notion of descriptive complexity of a random variable can beextended to define the descriptive complexity of a single string. The Kol-mogorov complexity of a binary string is defined as the length of theshortest computer program that prints out the string. It will turn out thatif the string is indeed random, the Kolmogorov complexity is close tothe entropy. Kolmogorov complexity is a natural framework in whichto consider problems of statistical inference and modeling and leads toa clearer understanding of Occam’s Razor: “The simplest explanation isbest.” We describe some simple properties of Kolmogorov complexity inChapter 1.Entropy is the uncertainty of a single random variable. We can defineconditional entropy H(X|Y), which is the entropy of a random variableconditional on the knowledge of another random variable. The reductionin uncertainty due to another random variable is called the mutual infor-mation. For two random variables X and Y this reduction is the mutual
  • 28. 1.1 PREVIEW OF THE BOOK 7informationI (X; Y) = H(X) − H(X|Y) =x,yp(x, y) logp(x, y)p(x)p(y). (1.4)The mutual information I (X; Y) is a measure of the dependence betweenthe two random variables. It is symmetric in X and Y and always non-negative and is equal to zero if and only if X and Y are independent.A communication channel is a system in which the output dependsprobabilistically on its input. It is characterized by a probability transitionmatrix p(y|x) that determines the conditional distribution of the outputgiven the input. For a communication channel with input X and outputY, we can define the capacity C byC = maxp(x)I (X; Y). (1.5)Later we show that the capacity is the maximum rate at which we can sendinformation over the channel and recover the information at the outputwith a vanishingly low probability of error. We illustrate this with a fewexamples.Example 1.1.3 (Noiseless binary channel) For this channel, the binaryinput is reproduced exactly at the output. This channel is illustrated inFigure 1.3. Here, any transmitted bit is received without error. Hence,in each transmission, we can send 1 bit reliably to the receiver, and thecapacity is 1 bit. We can also calculate the information capacity C =max I (X; Y) = 1 bit.Example 1.1.4 (Noisy four-symbol channel) Consider the channelshown in Figure 1.4. In this channel, each input letter is received either asthe same letter with probability 12 or as the next letter with probability 12.If we use all four input symbols, inspection of the output would not revealwith certainty which input symbol was sent. If, on the other hand, we use1010FIGURE 1.3. Noiseless binary channel. C = 1 bit.
  • 29. 8 INTRODUCTION AND PREVIEW12341234FIGURE 1.4. Noisy channel.only two of the inputs (1 and 3, say), we can tell immediately from theoutput which input symbol was sent. This channel then acts like the noise-less channel of Example 1.1.3, and we can send 1 bit per transmissionover this channel with no errors. We can calculate the channel capacityC = max I (X; Y) in this case, and it is equal to 1 bit per transmission,in agreement with the analysis above.In general, communication channels do not have the simple structure ofthis example, so we cannot always identify a subset of the inputs to sendinformation without error. But if we consider a sequence of transmissions,all channels look like this example and we can then identify a subset of theinput sequences (the codewords) that can be used to transmit informationover the channel in such a way that the sets of possible output sequencesassociated with each of the codewords are approximately disjoint. We canthen look at the output sequence and identify the input sequence with avanishingly low probability of error.Example 1.1.5 (Binary symmetric channel) This is the basic exampleof a noisy communication system. The channel is illustrated in Figure 1.5.1 − p1 − p0 01 1ppFIGURE 1.5. Binary symmetric channel.
  • 30. 1.1 PREVIEW OF THE BOOK 9The channel has a binary input, and its output is equal to the input withprobability 1 − p. With probability p, on the other hand, a 0 is receivedas a 1, and vice versa. In this case, the capacity of the channel can be cal-culated to be C = 1 + p log p + (1 − p) log(1 − p) bits per transmission.However, it is no longer obvious how one can achieve this capacity. If weuse the channel many times, however, the channel begins to look like thenoisy four-symbol channel of Example 1.1.4, and we can send informa-tion at a rate C bits per transmission with an arbitrarily low probabilityof error.The ultimate limit on the rate of communication of information overa channel is given by the channel capacity. The channel coding theoremshows that this limit can be achieved by using codes with a long blocklength. In practical communication systems, there are limitations on thecomplexity of the codes that we can use, and therefore we may not beable to achieve capacity.Mutual information turns out to be a special case of a more generalquantity called relative entropy D(p||q), which is a measure of the “dis-tance” between two probability mass functions p and q. It is definedasD(p||q) =xp(x) logp(x)q(x). (1.6)Although relative entropy is not a true metric, it has some of the propertiesof a metric. In particular, it is always nonnegative and is zero if and onlyif p = q. Relative entropy arises as the exponent in the probability oferror in a hypothesis test between distributions p and q. Relative entropycan be used to define a geometry for probability distributions that allowsus to interpret many of the results of large deviation theory.There are a number of parallels between information theory and thetheory of investment in a stock market. A stock market is defined by arandom vector X whose elements are nonnegative numbers equal to theratio of the price of a stock at the end of a day to the price at the beginningof the day. For a stock market with distribution F(x), we can define thedoubling rate W asW = maxb:bi≥0, bi=1log btx dF(x). (1.7)The doubling rate is the maximum asymptotic exponent in the growthof wealth. The doubling rate has a number of properties that parallel theproperties of entropy. We explore some of these properties in Chapter 16.
  • 31. 10 INTRODUCTION AND PREVIEWThe quantities H, I, C, D, K, W arise naturally in the following areas:• Data compression. The entropy H of a random variable is a lowerbound on the average length of the shortest description of the randomvariable. We can construct descriptions with average length within 1bit of the entropy. If we relax the constraint of recovering the sourceperfectly, we can then ask what communication rates are required todescribe the source up to distortion D? And what channel capacitiesare sufficient to enable the transmission of this source over the chan-nel and its reconstruction with distortion less than or equal to D?This is the subject of rate distortion theory.When we try to formalize the notion of the shortest descriptionfor nonrandom objects, we are led to the definition of Kolmogorovcomplexity K. Later, we show that Kolmogorov complexity is uni-versal and satisfies many of the intuitive requirements for the theoryof shortest descriptions.• Data transmission. We consider the problem of transmitting infor-mation so that the receiver can decode the message with a small prob-ability of error. Essentially, we wish to find codewords (sequencesof input symbols to a channel) that are mutually far apart in thesense that their noisy versions (available at the output of the channel)are distinguishable. This is equivalent to sphere packing in high-dimensional space. For any set of codewords it is possible to calculatethe probability that the receiver will make an error (i.e., make anincorrect decision as to which codeword was sent). However, in mostcases, this calculation is tedious.Using a randomly generated code, Shannon showed that one cansend information at any rate below the capacity C of the channelwith an arbitrarily low probability of error. The idea of a randomlygenerated code is very unusual. It provides the basis for a simpleanalysis of a very difficult problem. One of the key ideas in the proofis the concept of typical sequences. The capacity C is the logarithmof the number of distinguishable input signals.• Network information theory. Each of the topics mentioned previouslyinvolves a single source or a single channel. What if one wishes to com-press each of many sources and then put the compressed descriptionstogether into a joint reconstruction of the sources? This problem issolved by the Slepian–Wolf theorem. Or what if one has many senderssending information independently to a common receiver? What is thechannel capacity of this channel? This is the multiple-access channelsolved by Liao and Ahlswede. Or what if one has one sender and many
  • 32. 1.1 PREVIEW OF THE BOOK 11receivers and wishes to communicate (perhaps different) informationsimultaneously to each of the receivers? This is the broadcast channel.Finally, what if one has an arbitrary number of senders and receivers inan environment of interference and noise. What is the capacity regionof achievable rates from the various senders to the receivers? This isthe general network information theory problem. All of the precedingproblems fall into the general area of multiple-user or network informa-tion theory. Although hopes for a comprehensive theory for networksmay be beyond current research techniques, there is still some hope thatall the answers involve only elaborate forms of mutual information andrelative entropy.• Ergodic theory. The asymptotic equipartition theorem states that mostsample n-sequences of an ergodic process have probability about 2−nHand that there are about 2nHsuch typical sequences.• Hypothesis testing. The relative entropy D arises as the exponent inthe probability of error in a hypothesis test between two distributions.It is a natural measure of distance between distributions.• Statistical mechanics. The entropy H arises in statistical mechanicsas a measure of uncertainty or disorganization in a physical system.Roughly speaking, the entropy is the logarithm of the number ofways in which the physical system can be configured. The second lawof thermodynamics says that the entropy of a closed system cannotdecrease. Later we provide some interpretations of the second law.• Quantum mechanics. Here, von Neumann entropy S = tr(ρ ln ρ) =i λi log λi plays the role of classical Shannon–Boltzmann entropyH = − i pi log pi. Quantum mechanical versions of data compres-sion and channel capacity can then be found.• Inference. We can use the notion of Kolmogorov complexity K tofind the shortest description of the data and use that as a model topredict what comes next. A model that maximizes the uncertainty orentropy yields the maximum entropy approach to inference.• Gambling and investment. The optimal exponent in the growth rateof wealth is given by the doubling rate W. For a horse race withuniform odds, the sum of the doubling rate W and the entropy H isconstant. The increase in the doubling rate due to side information isequal to the mutual information I between a horse race and the sideinformation. Similar results hold for investment in the stock market.• Probability theory. The asymptotic equipartition property (AEP)shows that most sequences are typical in that they have a sam-ple entropy close to H. So attention can be restricted to theseapproximately 2nHtypical sequences. In large deviation theory, the
  • 33. 12 INTRODUCTION AND PREVIEWprobability of a set is approximately 2−nD, where D is the relativeentropy distance between the closest element in the set and the truedistribution.• Complexity theory. The Kolmogorov complexity K is a measure ofthe descriptive complexity of an object. It is related to, but differentfrom, computational complexity, which measures the time or spacerequired for a computation.Information-theoretic quantities such as entropy and relative entropyarise again and again as the answers to the fundamental questions incommunication and statistics. Before studying these questions, we shallstudy some of the properties of the answers. We begin in Chapter 2 withthe definitions and basic properties of entropy, relative entropy, and mutualinformation.
  • 34. CHAPTER 2ENTROPY, RELATIVE ENTROPY,AND MUTUAL INFORMATIONIn this chapter we introduce most of the basic definitions required forsubsequent development of the theory. It is irresistible to play with theirrelationships and interpretations, taking faith in their later utility. Afterdefining entropy and mutual information, we establish chain rules, thenonnegativity of mutual information, the data-processing inequality, andillustrate these definitions by examining sufficient statistics and Fano’sinequality.The concept of information is too broad to be captured completely bya single definition. However, for any probability distribution, we define aquantity called the entropy, which has many properties that agree with theintuitive notion of what a measure of information should be. This notion isextended to define mutual information, which is a measure of the amountof information one random variable contains about another. Entropy thenbecomes the self-information of a random variable. Mutual information isa special case of a more general quantity called relative entropy, which isa measure of the distance between two probability distributions. All thesequantities are closely related and share a number of simple properties,some of which we derive in this chapter.In later chapters we show how these quantities arise as natural answersto a number of questions in communication, statistics, complexity, andgambling. That will be the ultimate test of the value of these definitions.2.1 ENTROPYWe first introduce the concept of entropy, which is a measure of theuncertainty of a random variable. Let X be a discrete random variablewith alphabet X and probability mass function p(x) = Pr{X = x}, x ∈ X.Elements of Information Theory, Second Edition, By Thomas M. Cover and Joy A. ThomasCopyright  2006 John Wiley & Sons, Inc.13
  • 35. 14 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONWe denote the probability mass function by p(x) rather than pX(x), forconvenience. Thus, p(x) and p(y) refer to two different random variablesand are in fact different probability mass functions, pX(x) and pY (y),respectively.Definition The entropy H(X) of a discrete random variable X isdefined byH(X) = −x∈Xp(x) log p(x). (2.1)We also write H(p) for the above quantity. The log is to the base 2and entropy is expressed in bits. For example, the entropy of a fair cointoss is 1 bit. We will use the convention that 0 log 0 = 0, which is easilyjustified by continuity since x log x → 0 as x → 0. Adding terms of zeroprobability does not change the entropy.If the base of the logarithm is b, we denote the entropy as Hb(X). Ifthe base of the logarithm is e, the entropy is measured in nats. Unlessotherwise specified, we will take all logarithms to base 2, and hence allthe entropies will be measured in bits. Note that entropy is a functionalof the distribution of X. It does not depend on the actual values taken bythe random variable X, but only on the probabilities.We denote expectation by E. Thus, if X ∼ p(x), the expected value ofthe random variable g(X) is writtenEpg(X) =x∈Xg(x)p(x), (2.2)or more simply as Eg(X) when the probability mass function is under-stood from the context. We shall take a peculiar interest in the eerilyself-referential expectation of g(X) under p(x) when g(X) = log 1p(X).Remark The entropy of X can also be interpreted as the expected valueof the random variable log 1p(X) , where X is drawn according to probabilitymass function p(x). Thus,H(X) = Ep log1p(X). (2.3)This definition of entropy is related to the definition of entropy in ther-modynamics; some of the connections are explored later. It is possibleto derive the definition of entropy axiomatically by defining certain prop-erties that the entropy of a random variable must satisfy. This approachis illustrated in Problem 2.46. We do not use the axiomatic approach to
  • 36. 2.1 ENTROPY 15justify the definition of entropy; instead, we show that it arises as theanswer to a number of natural questions, such as “What is the averagelength of the shortest description of the random variable?” First, we derivesome immediate consequences of the definition.Lemma 2.1.1 H(X) ≥ 0.Proof: 0 ≤ p(x) ≤ 1 implies that log 1p(x) ≥ 0.Lemma 2.1.2 Hb(X) = (logb a)Ha(X).Proof: logb p = logb a loga p.The second property of entropy enables us to change the base of thelogarithm in the definition. Entropy can be changed from one base toanother by multiplying by the appropriate factor.Example 2.1.1 LetX =1 with probability p,0 with probability 1 − p.(2.4)ThenH(X) = −p log p − (1 − p) log(1 − p)def== H(p). (2.5)In particular, H(X) = 1 bit when p = 12. The graph of the function H(p)is shown in Figure 2.1. The figure illustrates some of the basic propertiesof entropy: It is a concave function of the distribution and equals 0 whenp = 0 or 1. This makes sense, because when p = 0 or 1, the variableis not random and there is no uncertainty. Similarly, the uncertainty ismaximum when p = 12, which also corresponds to the maximum value ofthe entropy.Example 2.1.2 LetX =a with probability12,b with probability14,c with probability18,d with probability18.(2.6)The entropy of X isH(X) = −12log12−14log14−18log18−18log18=74bits. (2.7)
  • 37. 16 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATION00.10.20.30.40.50.60.70.80.910 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1pH(p)FIGURE 2.1. H(p) vs. p.Suppose that we wish to determine the value of X with the minimumnumber of binary questions. An efficient first question is “Is X = a?”This splits the probability in half. If the answer to the first question isno, the second question can be “Is X = b?” The third question can be“Is X = c?” The resulting expected number of binary questions requiredis 1.75. This turns out to be the minimum expected number of binaryquestions required to determine the value of X. In Chapter 5 we show thatthe minimum expected number of binary questions required to determineX lies between H(X) and H(X) + 1.2.2 JOINT ENTROPY AND CONDITIONAL ENTROPYWe defined the entropy of a single random variable in Section 2.1. Wenow extend the definition to a pair of random variables. There is nothingreally new in this definition because (X, Y) can be considered to be asingle vector-valued random variable.Definition The joint entropy H(X, Y) of a pair of discrete randomvariables (X, Y) with a joint distribution p(x, y) is defined asH(X, Y) = −x∈X y∈Yp(x, y) log p(x, y), (2.8)
  • 38. which can also be expressed as
$$H(X, Y) = -E \log p(X, Y). \quad (2.9)$$
We also define the conditional entropy of a random variable given another as the expected value of the entropies of the conditional distributions, averaged over the conditioning random variable.
Definition: If $(X, Y) \sim p(x, y)$, the conditional entropy $H(Y|X)$ is defined as
$$H(Y|X) = \sum_{x \in \mathcal{X}} p(x) H(Y|X = x) \quad (2.10)$$
$$= -\sum_{x \in \mathcal{X}} p(x) \sum_{y \in \mathcal{Y}} p(y|x) \log p(y|x) \quad (2.11)$$
$$= -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(y|x) \quad (2.12)$$
$$= -E \log p(Y|X). \quad (2.13)$$
The naturalness of the definition of joint entropy and conditional entropy is exhibited by the fact that the entropy of a pair of random variables is the entropy of one plus the conditional entropy of the other. This is proved in the following theorem.
Theorem 2.2.1 (Chain rule):
$$H(X, Y) = H(X) + H(Y|X). \quad (2.14)$$
Proof:
$$H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(x, y) \quad (2.15)$$
$$= -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log \bigl( p(x) p(y|x) \bigr) \quad (2.16)$$
$$= -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(x) - \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(y|x) \quad (2.17)$$
$$= -\sum_{x \in \mathcal{X}} p(x) \log p(x) - \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(y|x) \quad (2.18)$$
$$= H(X) + H(Y|X). \quad (2.19)$$
  • 39. Equivalently, we can write
$$\log p(X, Y) = \log p(X) + \log p(Y|X) \quad (2.20)$$
and take the expectation of both sides of the equation to obtain the theorem.
Corollary:
$$H(X, Y|Z) = H(X|Z) + H(Y|X, Z). \quad (2.21)$$
Proof: The proof follows along the same lines as the theorem.
Example 2.2.1: Let $(X, Y)$ have the following joint distribution $p(x, y)$, with $x$ indexing the columns and $y$ the rows:

        x=1    x=2    x=3    x=4
  y=1   1/8    1/16   1/32   1/32
  y=2   1/16   1/8    1/32   1/32
  y=3   1/16   1/16   1/16   1/16
  y=4   1/4    0      0      0

The marginal distribution of $X$ is $(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \tfrac{1}{8})$ and the marginal distribution of $Y$ is $(\tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4})$, and hence $H(X) = \tfrac{7}{4}$ bits and $H(Y) = 2$ bits. Also,
$$H(X|Y) = \sum_{i=1}^{4} p(Y = i) H(X|Y = i) \quad (2.22)$$
$$= \tfrac{1}{4} H(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \tfrac{1}{8}) + \tfrac{1}{4} H(\tfrac{1}{4}, \tfrac{1}{2}, \tfrac{1}{8}, \tfrac{1}{8}) + \tfrac{1}{4} H(\tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}) + \tfrac{1}{4} H(1, 0, 0, 0) \quad (2.23)$$
$$= \tfrac{1}{4} \times \tfrac{7}{4} + \tfrac{1}{4} \times \tfrac{7}{4} + \tfrac{1}{4} \times 2 + \tfrac{1}{4} \times 0 \quad (2.24)$$
$$= \tfrac{11}{8} \text{ bits}. \quad (2.25)$$
Similarly, $H(Y|X) = \tfrac{13}{8}$ bits and $H(X, Y) = \tfrac{27}{8}$ bits.
Remark: Note that $H(Y|X) \neq H(X|Y)$. However, $H(X) - H(X|Y) = H(Y) - H(Y|X)$, a property that we exploit later.
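The numbers in Example 2.2.1 can be verified directly from the table. The sketch below is ours (the helper H is not from the text); it hard-codes the joint distribution above, computes the marginals and the joint and conditional entropies, and confirms the chain rule of Theorem 2.2.1.

```python
from math import log2

# Joint distribution of Example 2.2.1: rows indexed by y = 1..4, columns by x = 1..4.
p = [[1/8,  1/16, 1/32, 1/32],
     [1/16, 1/8,  1/32, 1/32],
     [1/16, 1/16, 1/16, 1/16],
     [1/4,  0,    0,    0]]

H = lambda dist: -sum(t * log2(t) for t in dist if t > 0)

px = [sum(row[i] for row in p) for i in range(4)]   # marginal of X: (1/2, 1/4, 1/8, 1/8)
py = [sum(row) for row in p]                        # marginal of Y: (1/4, 1/4, 1/4, 1/4)

HX, HY = H(px), H(py)                               # 1.75 bits, 2.0 bits
HXY = H([t for row in p for t in row])              # joint entropy H(X, Y)
H_X_given_Y = sum(py[j] * H([t / py[j] for t in p[j]]) for j in range(4))

print(HX, HY, HXY)                            # 1.75  2.0  3.375 (= 27/8)
print(H_X_given_Y)                            # 1.375 = 11/8 bits, as in (2.25)
print(HXY - HX)                               # 1.625 = 13/8 bits = H(Y|X) by the chain rule
print(abs(HXY - (HY + H_X_given_Y)) < 1e-12)  # True: H(X,Y) = H(Y) + H(X|Y)
```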
  • 40. 2.3 RELATIVE ENTROPY AND MUTUAL INFORMATION
The entropy of a random variable is a measure of the uncertainty of the random variable; it is a measure of the amount of information required on the average to describe the random variable. In this section we introduce two related concepts: relative entropy and mutual information.
The relative entropy is a measure of the distance between two distributions. In statistics, it arises as an expected logarithm of the likelihood ratio. The relative entropy $D(p\|q)$ is a measure of the inefficiency of assuming that the distribution is $q$ when the true distribution is $p$. For example, if we knew the true distribution $p$ of the random variable, we could construct a code with average description length $H(p)$. If, instead, we used the code for a distribution $q$, we would need $H(p) + D(p\|q)$ bits on the average to describe the random variable.
Definition: The relative entropy or Kullback–Leibler distance between two probability mass functions $p(x)$ and $q(x)$ is defined as
$$D(p\|q) = \sum_{x \in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)} \quad (2.26)$$
$$= E_p \log \frac{p(X)}{q(X)}. \quad (2.27)$$
In the above definition, we use the convention that $0 \log \frac{0}{0} = 0$ and the convention (based on continuity arguments) that $0 \log \frac{0}{q} = 0$ and $p \log \frac{p}{0} = \infty$. Thus, if there is any symbol $x \in \mathcal{X}$ such that $p(x) > 0$ and $q(x) = 0$, then $D(p\|q) = \infty$.
We will soon show that relative entropy is always nonnegative and is zero if and only if $p = q$. However, it is not a true distance between distributions since it is not symmetric and does not satisfy the triangle inequality. Nonetheless, it is often useful to think of relative entropy as a "distance" between distributions.
We now introduce mutual information, which is a measure of the amount of information that one random variable contains about another random variable. It is the reduction in the uncertainty of one random variable due to the knowledge of the other.
Definition: Consider two random variables $X$ and $Y$ with a joint probability mass function $p(x, y)$ and marginal probability mass functions $p(x)$ and $p(y)$. The mutual information $I(X; Y)$ is the relative entropy between
  • 41. the joint distribution and the product distribution $p(x)p(y)$:
$$I(X; Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log \frac{p(x, y)}{p(x)p(y)} \quad (2.28)$$
$$= D(p(x, y)\|p(x)p(y)) \quad (2.29)$$
$$= E_{p(x,y)} \log \frac{p(X, Y)}{p(X)p(Y)}. \quad (2.30)$$
In Chapter 8 we generalize this definition to continuous random variables, and in (8.54) to general random variables that could be a mixture of discrete and continuous random variables.
Example 2.3.1: Let $\mathcal{X} = \{0, 1\}$ and consider two distributions $p$ and $q$ on $\mathcal{X}$. Let $p(0) = 1 - r$, $p(1) = r$, and let $q(0) = 1 - s$, $q(1) = s$. Then
$$D(p\|q) = (1 - r) \log \frac{1 - r}{1 - s} + r \log \frac{r}{s} \quad (2.31)$$
and
$$D(q\|p) = (1 - s) \log \frac{1 - s}{1 - r} + s \log \frac{s}{r}. \quad (2.32)$$
If $r = s$, then $D(p\|q) = D(q\|p) = 0$. If $r = \tfrac{1}{2}$, $s = \tfrac{1}{4}$, we can calculate
$$D(p\|q) = \tfrac{1}{2} \log \frac{1/2}{3/4} + \tfrac{1}{2} \log \frac{1/2}{1/4} = 1 - \tfrac{1}{2} \log 3 = 0.2075 \text{ bit}, \quad (2.33)$$
whereas
$$D(q\|p) = \tfrac{3}{4} \log \frac{3/4}{1/2} + \tfrac{1}{4} \log \frac{1/4}{1/2} = \tfrac{3}{4} \log 3 - 1 = 0.1887 \text{ bit}. \quad (2.34)$$
Note that $D(p\|q) \neq D(q\|p)$ in general.
2.4 RELATIONSHIP BETWEEN ENTROPY AND MUTUAL INFORMATION
We can rewrite the definition of mutual information $I(X; Y)$ as
$$I(X; Y) = \sum_{x,y} p(x, y) \log \frac{p(x, y)}{p(x)p(y)} \quad (2.35)$$
  • 42. $$= \sum_{x,y} p(x, y) \log \frac{p(x|y)}{p(x)} \quad (2.36)$$
$$= -\sum_{x,y} p(x, y) \log p(x) + \sum_{x,y} p(x, y) \log p(x|y) \quad (2.37)$$
$$= -\sum_{x} p(x) \log p(x) - \left( -\sum_{x,y} p(x, y) \log p(x|y) \right) \quad (2.38)$$
$$= H(X) - H(X|Y). \quad (2.39)$$
Thus, the mutual information $I(X; Y)$ is the reduction in the uncertainty of $X$ due to the knowledge of $Y$.
By symmetry, it also follows that
$$I(X; Y) = H(Y) - H(Y|X). \quad (2.40)$$
Thus, $X$ says as much about $Y$ as $Y$ says about $X$.
Since $H(X, Y) = H(X) + H(Y|X)$, as shown in Section 2.2, we have
$$I(X; Y) = H(X) + H(Y) - H(X, Y). \quad (2.41)$$
Finally, we note that
$$I(X; X) = H(X) - H(X|X) = H(X). \quad (2.42)$$
Thus, the mutual information of a random variable with itself is the entropy of the random variable. This is the reason that entropy is sometimes referred to as self-information.
Collecting these results, we have the following theorem.
Theorem 2.4.1 (Mutual information and entropy):
$$I(X; Y) = H(X) - H(X|Y) \quad (2.43)$$
$$I(X; Y) = H(Y) - H(Y|X) \quad (2.44)$$
$$I(X; Y) = H(X) + H(Y) - H(X, Y) \quad (2.45)$$
$$I(X; Y) = I(Y; X) \quad (2.46)$$
$$I(X; X) = H(X). \quad (2.47)$$
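Both the asymmetry in Example 2.3.1 and the identities of Theorem 2.4.1 lend themselves to a quick numerical check. The sketch below is ours; it reuses the joint distribution of Example 2.2.1 and a small helper D for relative entropy, neither of which comes from the text. The value 0.375 bit it produces is the one quoted for this distribution in Example 2.4.1 below.

```python
from math import log2

def D(p, q):
    """Relative entropy D(p||q) in bits; assumes q(x) > 0 wherever p(x) > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example 2.3.1 with r = 1/2 and s = 1/4.
p, q = [1/2, 1/2], [3/4, 1/4]
print(D(p, q), D(q, p))         # 0.2075... and 0.1887...: not symmetric

# Theorem 2.4.1 checked on the joint distribution of Example 2.2.1.
joint = [[1/8, 1/16, 1/32, 1/32],
         [1/16, 1/8, 1/32, 1/32],
         [1/16, 1/16, 1/16, 1/16],
         [1/4, 0, 0, 0]]
H = lambda dist: -sum(t * log2(t) for t in dist if t > 0)
px = [sum(row[i] for row in joint) for i in range(4)]
py = [sum(row) for row in joint]
HX, HY, HXY = H(px), H(py), H([t for row in joint for t in row])

# I(X;Y) as the relative entropy between the joint and the product of the marginals.
I = sum(joint[y][x] * log2(joint[y][x] / (px[x] * py[y]))
        for y in range(4) for x in range(4) if joint[y][x] > 0)
print(I)              # 0.375 bit
print(HX + HY - HXY)  # 0.375 bit, matching (2.45)
print(HX - (HXY - HY))  # H(X) - H(X|Y), matching (2.43)
```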
  • 43. 22 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONH(X,Y)H(Y |X)H(X|Y)H(Y)I(X;Y)H(X)FIGURE 2.2. Relationship between entropy and mutual information.The relationship between H(X), H(Y), H(X, Y), H(X|Y), H(Y|X),and I (X; Y) is expressed in a Venn diagram (Figure 2.2). Notice thatthe mutual information I (X; Y) corresponds to the intersection of theinformation in X with the information in Y.Example 2.4.1 For the joint distribution of Example 2.2.1, it is easy tocalculate the mutual information I (X; Y) = H(X) − H(X|Y) = H(Y) −H(Y|X) = 0.375 bit.2.5 CHAIN RULES FOR ENTROPY, RELATIVE ENTROPY,AND MUTUAL INFORMATIONWe now show that the entropy of a collection of random variables is thesum of the conditional entropies.Theorem 2.5.1 (Chain rule for entropy) Let X1, X2, . . . , Xn be drawnaccording to p(x1, x2, . . . , xn). ThenH(X1, X2, . . . , Xn) =ni=1H(Xi|Xi−1, . . . , X1). (2.48)Proof: By repeated application of the two-variable expansion rule forentropies, we haveH(X1, X2) = H(X1) + H(X2|X1), (2.49)H(X1, X2, X3) = H(X1) + H(X2, X3|X1) (2.50)
  • 44. 2.5 CHAIN RULES FOR ENTROPY, RELATIVE ENTROPY, MUTUAL INFORMATION 23= H(X1) + H(X2|X1) + H(X3|X2, X1), (2.51)...H(X1, X2, . . . , Xn) = H(X1) + H(X2|X1) + · · · + H(Xn|Xn−1, . . . , X1)(2.52)=ni=1H(Xi|Xi−1, . . . , X1). (2.53)Alternative Proof: We write p(x1, . . . , xn) = ni=1 p(xi|xi−1, . . . , x1)and evaluateH(X1, X2, . . . , Xn)= −x1,x2,...,xnp(x1, x2, . . . , xn) log p(x1, x2, . . . , xn) (2.54)= −x1,x2,...,xnp(x1, x2, . . . , xn) logni=1p(xi|xi−1, . . . , x1) (2.55)= −x1,x2,...,xnni=1p(x1, x2, . . . , xn) log p(xi|xi−1, . . . , x1) (2.56)= −ni=1 x1,x2,...,xnp(x1, x2, . . . , xn) log p(xi|xi−1, . . . , x1) (2.57)= −ni=1 x1,x2,...,xip(x1, x2, . . . , xi) log p(xi|xi−1, . . . , x1) (2.58)=ni=1H(Xi|Xi−1, . . . , X1). (2.59)We now define the conditional mutual information as the reduction inthe uncertainty of X due to knowledge of Y when Z is given.Definition The conditional mutual information of random variables Xand Y given Z is defined byI (X; Y|Z) = H(X|Z) − H(X|Y, Z) (2.60)= Ep(x,y,z) logp(X, Y|Z)p(X|Z)p(Y|Z). (2.61)Mutual information also satisfies a chain rule.
  • 45. 24 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONTheorem 2.5.2 (Chain rule for information)I (X1, X2, . . . , Xn; Y) =ni=1I (Xi; Y|Xi−1, Xi−2, . . . , X1). (2.62)ProofI (X1, X2, . . . , Xn; Y)= H(X1, X2, . . . , Xn) − H(X1, X2, . . . , Xn|Y) (2.63)=ni=1H(Xi|Xi−1, . . . , X1) −ni=1H(Xi|Xi−1, . . . , X1, Y)=ni=1I (Xi; Y|X1, X2, . . . , Xi−1). (2.64)We define a conditional version of the relative entropy.Definition For joint probability mass functions p(x, y) and q(x, y), theconditional relative entropy D(p(y|x)||q(y|x)) is the average of the rela-tive entropies between the conditional probability mass functions p(y|x)and q(y|x) averaged over the probability mass function p(x). More pre-cisely,D(p(y|x)||q(y|x)) =xp(x)yp(y|x) logp(y|x)q(y|x)(2.65)= Ep(x,y) logp(Y|X)q(Y|X). (2.66)The notation for conditional relative entropy is not explicit since it omitsmention of the distribution p(x) of the conditioning random variable.However, it is normally understood from the context.The relative entropy between two joint distributions on a pair of ran-dom variables can be expanded as the sum of a relative entropy and aconditional relative entropy. The chain rule for relative entropy is used inSection 4.4 to prove a version of the second law of thermodynamics.Theorem 2.5.3 (Chain rule for relative entropy)D(p(x, y)||q(x, y)) = D(p(x)||q(x)) + D(p(y|x)||q(y|x)). (2.67)
  • 46. 2.6 JENSEN’S INEQUALITY AND ITS CONSEQUENCES 25ProofD(p(x, y)||q(x, y))=x yp(x, y) logp(x, y)q(x, y)(2.68)=x yp(x, y) logp(x)p(y|x)q(x)q(y|x)(2.69)=x yp(x, y) logp(x)q(x)+x yp(x, y) logp(y|x)q(y|x)(2.70)= D(p(x)||q(x)) + D(p(y|x)||q(y|x)). (2.71)2.6 JENSEN’S INEQUALITY AND ITS CONSEQUENCESIn this section we prove some simple properties of the quantities definedearlier. We begin with the properties of convex functions.Definition A function f (x) is said to be convex over an interval (a, b)if for every x1, x2 ∈ (a, b) and 0 ≤ λ ≤ 1,f (λx1 + (1 − λ)x2) ≤ λf (x1) + (1 − λ)f (x2). (2.72)A function f is said to be strictly convex if equality holds only if λ = 0or λ = 1.Definition A function f is concave if −f is convex. A function isconvex if it always lies below any chord. A function is concave if italways lies above any chord.Examples of convex functions include x2, |x|, ex, x log x (for x ≥0), and so on. Examples of concave functions include log x and√x forx ≥ 0. Figure 2.3 shows some examples of convex and concave functions.Note that linear functions ax + b are both convex and concave. Convexityunderlies many of the basic properties of information-theoretic quantitiessuch as entropy and mutual information. Before we prove some of theseproperties, we derive some simple results for convex functions.Theorem 2.6.1 If the function f has a second derivative that is non-negative (positive) over an interval, the function is convex (strictly convex)over that interval.
  • 47. 26 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATION(a)(b)FIGURE 2.3. Examples of (a) convex and (b) concave functions.Proof: We use the Taylor series expansion of the function around x0:f (x) = f (x0) + f (x0)(x − x0) +f (x∗)2(x − x0)2, (2.73)where x∗ lies between x0 and x. By hypothesis, f (x∗) ≥ 0, and thusthe last term is nonnegative for all x.We let x0 = λx1 + (1 − λ)x2 and take x = x1, to obtainf (x1) ≥ f (x0) + f (x0)((1 − λ)(x1 − x2)). (2.74)Similarly, taking x = x2, we obtainf (x2) ≥ f (x0) + f (x0)(λ(x2 − x1)). (2.75)Multiplying (2.74) by λ and (2.75) by 1 − λ and adding, we obtain (2.72).The proof for strict convexity proceeds along the same lines.Theorem 2.6.1 allows us immediately to verify the strict convexity ofx2, ex, and x log x for x ≥ 0, and the strict concavity of log x and√x forx ≥ 0.Let E denote expectation. Thus, EX = x∈X p(x)x in the discretecase and EX = xf (x) dx in the continuous case.
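The convexity claims above can be spot-checked numerically: a convex function never rises above its chords. The sketch below is ours and simply tests the defining inequality (2.72) for f(x) = x log x at randomly sampled points and mixing weights.

```python
import math
import random

def f(x):
    """f(x) = x log x (base 2), extended by continuity with f(0) = 0."""
    return 0.0 if x == 0 else x * math.log2(x)

random.seed(0)
for _ in range(10000):
    x1, x2 = random.uniform(0, 10), random.uniform(0, 10)
    lam = random.random()
    lhs = f(lam * x1 + (1 - lam) * x2)      # the function at the mixture point
    rhs = lam * f(x1) + (1 - lam) * f(x2)   # the chord at the mixture point
    assert lhs <= rhs + 1e-9                # convexity: curve lies below the chord
print("x log x is convex at all sampled points")
```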
  • 48. 2.6 JENSEN’S INEQUALITY AND ITS CONSEQUENCES 27The next inequality is one of the most widely used in mathematics andone that underlies many of the basic results in information theory.Theorem 2.6.2 (Jensen’s inequality) If f is a convex function andX is a random variable,Ef (X) ≥ f (EX). (2.76)Moreover, if f is strictly convex, the equality in (2.76) implies thatX = EX with probability 1 (i.e., X is a constant).Proof: We prove this for discrete distributions by induction on the num-ber of mass points. The proof of conditions for equality when f is strictlyconvex is left to the reader.For a two-mass-point distribution, the inequality becomesp1f (x1) + p2f (x2) ≥ f (p1x1 + p2x2), (2.77)which follows directly from the definition of convex functions. Supposethat the theorem is true for distributions with k − 1 mass points. Thenwriting pi = pi/(1 − pk) for i = 1, 2, . . . , k − 1, we haveki=1pif (xi) = pkf (xk) + (1 − pk)k−1i=1pif (xi) (2.78)≥ pkf (xk) + (1 − pk)fk−1i=1pixi (2.79)≥ f pkxk + (1 − pk)k−1i=1pixi (2.80)= fki=1pixi , (2.81)where the first inequality follows from the induction hypothesis and thesecond follows from the definition of convexity.The proof can be extended to continuous distributions by continuityarguments.We now use these results to prove some of the properties of entropy andrelative entropy. The following theorem is of fundamental importance.
  • 49. 28 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONTheorem 2.6.3 (Information inequality) Let p(x), q(x), x ∈ X, betwo probability mass functions. ThenD(p||q) ≥ 0 (2.82)with equality if and only if p(x) = q(x) for all x.Proof: Let A = {x : p(x) > 0} be the support set of p(x). Then−D(p||q) = −x∈Ap(x) logp(x)q(x)(2.83)=x∈Ap(x) logq(x)p(x)(2.84)≤ logx∈Ap(x)q(x)p(x)(2.85)= logx∈Aq(x) (2.86)≤ logx∈Xq(x) (2.87)= log 1 (2.88)= 0, (2.89)where (2.85) follows from Jensen’s inequality. Since log t is a strictlyconcave function of t, we have equality in (2.85) if and only if q(x)/p(x)is constant everywhere [i.e., q(x) = cp(x) for all x]. Thus, x∈A q(x) =c x∈A p(x) = c. We have equality in (2.87) only if x∈A q(x) = x∈Xq(x) = 1, which implies that c = 1. Hence, we have D(p||q) = 0 if andonly if p(x) = q(x) for all x.Corollary (Nonnegativity of mutual information) For any two randomvariables, X, Y,I (X; Y) ≥ 0, (2.90)with equality if and only if X and Y are independent.Proof: I (X; Y) = D(p(x, y)||p(x)p(y)) ≥ 0, with equality if and onlyif p(x, y) = p(x)p(y) (i.e., X and Y are independent).
  • 50. 2.6 JENSEN’S INEQUALITY AND ITS CONSEQUENCES 29CorollaryD(p(y|x)||q(y|x)) ≥ 0, (2.91)with equality if and only if p(y|x) = q(y|x) for all y and x such thatp(x) > 0.CorollaryI (X; Y|Z) ≥ 0, (2.92)with equality if and only if X and Y are conditionally independent given Z.We now show that the uniform distribution over the range X is themaximum entropy distribution over this range. It follows that any randomvariable with this range has an entropy no greater than log |X|.Theorem 2.6.4 H(X) ≤ log |X|, where |X| denotes the number of ele-ments in the range of X, with equality if and only X has a uniform distri-bution over X.Proof: Let u(x) = 1|X| be the uniform probability mass function over X,and let p(x) be the probability mass function for X. ThenD(p u) = p(x) logp(x)u(x)= log |X| − H(X). (2.93)Hence by the nonnegativity of relative entropy,0 ≤ D(p u) = log |X| − H(X). (2.94)Theorem 2.6.5 (Conditioning reduces entropy)(Information can’t hurt)H(X|Y) ≤ H(X) (2.95)with equality if and only if X and Y are independent.Proof: 0 ≤ I (X; Y) = H(X) − H(X|Y).Intuitively, the theorem says that knowing another random variable Ycan only reduce the uncertainty in X. Note that this is true only on theaverage. Specifically, H(X|Y = y) may be greater than or less than orequal to H(X), but on the average H(X|Y) = y p(y)H(X|Y = y) ≤H(X). For example, in a court case, specific new evidence might increaseuncertainty, but on the average evidence decreases uncertainty.
  • 51. 30 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONExample 2.6.1 Let (X, Y) have the following joint distribution:Then H(X) = H(18, 78) = 0.544 bit, H(X|Y = 1) = 0 bits, andH(X|Y = 2) = 1 bit. We calculate H(X|Y) = 34H(X|Y = 1) + 14H(X|Y = 2) = 0.25 bit. Thus, the uncertainty in X is increased if Y = 2is observed and decreased if Y = 1 is observed, but uncertainty decreaseson the average.Theorem 2.6.6 (Independence bound on entropy) LetX1, X2, . . . , Xn be drawn according to p(x1, x2, . . . , xn). ThenH(X1, X2, . . . , Xn) ≤ni=1H(Xi) (2.96)with equality if and only if the Xi are independent.Proof: By the chain rule for entropies,H(X1, X2, . . . , Xn) =ni=1H(Xi|Xi−1, . . . , X1) (2.97)≤ni=1H(Xi), (2.98)where the inequality follows directly from Theorem 2.6.5. We have equal-ity if and only if Xi is independent of Xi−1, . . . , X1 for all i (i.e., if andonly if the Xi’s are independent).2.7 LOG SUM INEQUALITY AND ITS APPLICATIONSWe now prove a simple consequence of the concavity of the logarithm,which will be used to prove some concavity results for the entropy.
  • 52. 2.7 LOG SUM INEQUALITY AND ITS APPLICATIONS 31Theorem 2.7.1 (Log sum inequality) For nonnegative numbers,a1, a2, . . . , an and b1, b2, . . . , bn,ni=1ai logaibi≥ni=1ai logni=1 aini=1 bi(2.99)with equality if and only if aibi= const.We again use the convention that 0 log 0 = 0, a log a0 = ∞ if a > 0 and0 log 00 = 0. These follow easily from continuity.Proof: Assume without loss of generality that ai > 0 and bi > 0. Thefunction f (t) = t log t is strictly convex, since f (t) = 1t log e > 0 for allpositive t. Hence by Jensen’s inequality, we haveαif (ti) ≥ f αiti (2.100)for αi ≥ 0, i αi = 1. Setting αi = binj=1 bjand ti = aibi, we obtainaibjlogaibi≥aibjlogaibj, (2.101)which is the log sum inequality.We now use the log sum inequality to prove various convexity results.We begin by reproving Theorem 2.6.3, which states that D(p||q) ≥ 0 withequality if and only if p(x) = q(x). By the log sum inequality,D(p||q) = p(x) logp(x)q(x)(2.102)≥ p(x) log p(x) q(x) (2.103)= 1 log11= 0 (2.104)with equality if and only if p(x)q(x) = c. Since both p and q are probabilitymass functions, c = 1, and hence we have D(p||q) = 0 if and only ifp(x) = q(x) for all x.
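The log sum inequality is also easy to probe numerically. The sketch below is ours; it draws random positive a_i and b_i, checks (2.99), and then shows the equality case in which a_i/b_i is constant.

```python
import math
import random

def log_sum_lhs(a, b):
    return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b))

def log_sum_rhs(a, b):
    return sum(a) * math.log2(sum(a) / sum(b))

random.seed(1)
for _ in range(1000):
    n = random.randint(2, 10)
    a = [random.uniform(0.1, 5) for _ in range(n)]
    b = [random.uniform(0.1, 5) for _ in range(n)]
    assert log_sum_lhs(a, b) >= log_sum_rhs(a, b) - 1e-9

# Equality when a_i / b_i is constant:
a, b = [1, 2, 3], [2, 4, 6]
print(log_sum_lhs(a, b), log_sum_rhs(a, b))   # both sides equal -6.0
```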
  • 53. 32 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONTheorem 2.7.2 (Convexity of relative entropy) D(p||q) is convex inthe pair (p, q); that is, if (p1, q1) and (p2, q2) are two pairs of probabilitymass functions, thenD(λp1 + (1 − λ)p2||λq1 + (1 − λ)q2) ≤ λD(p1||q1) + (1 − λ)D(p2||q2)(2.105)for all 0 ≤ λ ≤ 1.Proof: We apply the log sum inequality to a term on the left-hand sideof (2.105):(λp1(x) + (1 − λ)p2(x)) logλp1(x) + (1 − λ)p2(x)λq1(x) + (1 − λ)q2(x)≤ λp1(x) logλp1(x)λq1(x)+ (1 − λ)p2(x) log(1 − λ)p2(x)(1 − λ)q2(x). (2.106)Summing this over all x, we obtain the desired property.Theorem 2.7.3 (Concavity of entropy) H(p) is a concave functionof p.ProofH(p) = log |X| − D(p||u), (2.107)where u is the uniform distribution on |X| outcomes. The concavity of Hthen follows directly from the convexity of D.Alternative Proof: Let X1 be a random variable with distribution p1,taking on values in a set A. Let X2 be another random variable withdistribution p2 on the same set. Letθ =1 with probability λ,2 with probability 1 − λ.(2.108)Let Z = Xθ . Then the distribution of Z is λp1 + (1 − λ)p2. Now sinceconditioning reduces entropy, we haveH(Z) ≥ H(Z|θ), (2.109)or equivalently,H(λp1 + (1 − λ)p2) ≥ λH(p1) + (1 − λ)H(p2), (2.110)which proves the concavity of the entropy as a function of the distribution.
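Concavity of the entropy can be observed directly: the entropy of a mixture of two distributions is at least the corresponding mixture of their entropies. A minimal sketch (ours, using randomly generated distributions):

```python
import math
import random

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def random_dist(n):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

random.seed(2)
for _ in range(1000):
    n = random.randint(2, 8)
    p1, p2 = random_dist(n), random_dist(n)
    lam = random.random()
    mix = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    # H(lam*p1 + (1-lam)*p2) >= lam*H(p1) + (1-lam)*H(p2), as in (2.110)
    assert entropy(mix) >= lam * entropy(p1) + (1 - lam) * entropy(p2) - 1e-9
print("entropy is concave on all sampled distributions")
```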
  • 54. 2.7 LOG SUM INEQUALITY AND ITS APPLICATIONS 33One of the consequences of the concavity of entropy is that mixing twogases of equal entropy results in a gas with higher entropy.Theorem 2.7.4 Let (X, Y) ∼ p(x, y) = p(x)p(y|x). The mutual infor-mation I (X; Y) is a concave function of p(x) for fixed p(y|x) and a convexfunction of p(y|x) for fixed p(x).Proof: To prove the first part, we expand the mutual informationI (X; Y) = H(Y) − H(Y|X) = H(Y) −xp(x)H(Y|X = x). (2.111)If p(y|x) is fixed, then p(y) is a linear function of p(x). Hence H(Y),which is a concave function of p(y), is a concave function of p(x). Thesecond term is a linear function of p(x). Hence, the difference is a concavefunction of p(x).To prove the second part, we fix p(x) and consider two different con-ditional distributions p1(y|x) and p2(y|x). The corresponding joint dis-tributions are p1(x, y) = p(x)p1(y|x) and p2(x, y) = p(x)p2(y|x), andtheir respective marginals are p(x), p1(y) and p(x), p2(y). Consider aconditional distributionpλ(y|x) = λp1(y|x) + (1 − λ)p2(y|x), (2.112)which is a mixture of p1(y|x) and p2(y|x) where 0 ≤ λ ≤ 1. The cor-responding joint distribution is also a mixture of the corresponding jointdistributions,pλ(x, y) = λp1(x, y) + (1 − λ)p2(x, y), (2.113)and the distribution of Y is also a mixture,pλ(y) = λp1(y) + (1 − λ)p2(y). (2.114)Hence if we let qλ(x, y) = p(x)pλ(y) be the product of the marginaldistributions, we haveqλ(x, y) = λq1(x, y) + (1 − λ)q2(x, y). (2.115)Since the mutual information is the relative entropy between the jointdistribution and the product of the marginals,I (X; Y) = D(pλ(x, y)||qλ(x, y)), (2.116)and relative entropy D(p||q) is a convex function of (p, q), it follows thatthe mutual information is a convex function of the conditional distribution.
  • 55. 34 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATION2.8 DATA-PROCESSING INEQUALITYThe data-processing inequality can be used to show that no clever manip-ulation of the data can improve the inferences that can be made fromthe data.Definition Random variables X, Y, Z are said to form a Markov chainin that order (denoted by X → Y → Z) if the conditional distribution ofZ depends only on Y and is conditionally independent of X. Specifically,X, Y, and Z form a Markov chain X → Y → Z if the joint probabilitymass function can be written asp(x, y, z) = p(x)p(y|x)p(z|y). (2.117)Some simple consequences are as follows:• X → Y → Z if and only if X and Z are conditionally independentgiven Y. Markovity implies conditional independence becausep(x, z|y) =p(x, y, z)p(y)=p(x, y)p(z|y)p(y)= p(x|y)p(z|y). (2.118)This is the characterization of Markov chains that can be extendedto define Markov fields, which are n-dimensional random processesin which the interior and exterior are independent given the valueson the boundary.• X → Y → Z implies that Z → Y → X. Thus, the condition is some-times written X ↔ Y ↔ Z.• If Z = f (Y), then X → Y → Z.We can now prove an important and useful theorem demonstrating thatno processing of Y, deterministic or random, can increase the informationthat Y contains about X.Theorem 2.8.1 (Data-processing inequality) If X → Y → Z, thenI (X; Y) ≥ I (X; Z).Proof: By the chain rule, we can expand mutual information in twodifferent ways:I (X; Y, Z) = I (X; Z) + I (X; Y|Z) (2.119)= I (X; Y) + I (X; Z|Y). (2.120)
  • 56. 2.9 SUFFICIENT STATISTICS 35Since X and Z are conditionally independent given Y, we haveI (X; Z|Y) = 0. Since I (X; Y|Z) ≥ 0, we haveI (X; Y) ≥ I (X; Z). (2.121)We have equality if and only if I (X; Y|Z) = 0 (i.e., X → Z → Y formsa Markov chain). Similarly, one can prove that I (Y; Z) ≥ I (X; Z).Corollary In particular, if Z = g(Y), we have I (X; Y) ≥ I (X; g(Y)).Proof: X → Y → g(Y) forms a Markov chain.Thus functions of the data Y cannot increase the information about X.Corollary If X → Y → Z, then I (X; Y|Z) ≤ I (X; Y).Proof: We note in (2.119) and (2.120) that I (X; Z|Y) = 0, byMarkovity, and I (X; Z) ≥ 0. Thus,I (X; Y|Z) ≤ I (X; Y). (2.122)Thus, the dependence of X and Y is decreased (or remains unchanged)by the observation of a “downstream” random variable Z. Note that it isalso possible that I (X; Y|Z) > I (X; Y) when X, Y, and Z do not form aMarkov chain. For example, let X and Y be independent fair binary ran-dom variables, and let Z = X + Y. Then I (X; Y) = 0, but I (X; Y|Z) =H(X|Z) − H(X|Y, Z) = H(X|Z) = P (Z = 1)H(X|Z = 1) = 12 bit.2.9 SUFFICIENT STATISTICSThis section is a sidelight showing the power of the data-processinginequality in clarifying an important idea in statistics. Suppose that wehave a family of probability mass functions {fθ (x)} indexed by θ, and letX be a sample from a distribution in this family. Let T (X) be any statistic(function of the sample) like the sample mean or sample variance. Thenθ → X → T (X), and by the data-processing inequality, we haveI (θ; T (X)) ≤ I (θ; X) (2.123)for any distribution on θ. However, if equality holds, no informationis lost.A statistic T (X) is called sufficient for θ if it contains all the infor-mation in X about θ.
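The inequality (2.123), and the fact that equality means no information is lost, can be seen concretely in the coin-flip setting of Example 1 below. The sketch is ours; the two-point prior on θ and the choice n = 3 are arbitrary. It compares I(θ; X1, X2, X3) with I(θ; ΣXi), which agree because the sum is sufficient, and with I(θ; X1), which is strictly smaller, in accordance with the data-processing inequality.

```python
from math import log2
from itertools import product

def mutual_info(joint):
    """I(U;V) in bits from a joint distribution given as a dict {(u, v): prob}."""
    pu, pv = {}, {}
    for (u, v), pr in joint.items():
        pu[u] = pu.get(u, 0) + pr
        pv[v] = pv.get(v, 0) + pr
    return sum(pr * log2(pr / (pu[u] * pv[v])) for (u, v), pr in joint.items() if pr > 0)

# theta takes two values with a uniform prior (an arbitrary illustrative choice);
# X1, X2, X3 are i.i.d. Bernoulli(theta) coin flips.
prior = {0.3: 0.5, 0.7: 0.5}
n = 3

def joint_with(stat):
    """Joint distribution of (theta, stat(x)) over all 2^n binary outcomes x."""
    joint = {}
    for theta, pt in prior.items():
        for x in product((0, 1), repeat=n):
            px = pt
            for xi in x:
                px *= theta if xi == 1 else 1 - theta
            key = (theta, stat(x))
            joint[key] = joint.get(key, 0) + px
    return joint

print(mutual_info(joint_with(lambda x: x)))     # I(theta; X1, X2, X3)
print(mutual_info(joint_with(sum)))             # I(theta; sum Xi): equal, so sufficient
print(mutual_info(joint_with(lambda x: x[0])))  # I(theta; X1): strictly smaller
```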
  • 57. 36 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONDefinition A function T (X) is said to be a sufficient statistic relative tothe family {fθ (x)} if X is independent of θ given T (X) for any distributionon θ[i.e., θ → T (X) → X forms a Markov chain].This is the same as the condition for equality in the data-processinginequality,I (θ; X) = I (θ; T (X)) (2.124)for all distributions on θ. Hence sufficient statistics preserve mutual infor-mation and conversely.Here are some examples of sufficient statistics:1. Let X1, X2, . . . , Xn, Xi ∈ {0, 1}, be an independent and identicallydistributed (i.i.d.) sequence of coin tosses of a coin with unknownparameter θ = Pr(Xi = 1). Given n, the number of 1’s is a sufficientstatistic for θ. Here T (X1, X2, . . . , Xn) = ni=1 Xi. In fact, we canshow that given T , all sequences having that many 1’s are equallylikely and independent of the parameter θ. Specifically,Pr (X1, X2, . . . , Xn) = (x1, x2, . . . , xn)ni=1Xi = k=1(nk)if xi = k,0 otherwise.(2.125)Thus, θ → Xi → (X1, X2, . . . , Xn) forms a Markov chain, andT is a sufficient statistic for θ.The next two examples involve probability densities instead ofprobability mass functions, but the theory still applies. We defineentropy and mutual information for continuous random variables inChapter 8.2. If X is normally distributed with mean θ and variance 1; that is, iffθ (x) =1√2πe−(x−θ)2/2= N(θ, 1), (2.126)and X1, X2, . . . , Xn are drawn independently according to this distri-bution, a sufficient statistic for θ is the sample mean Xn = 1nni=1 Xi.It can be verified that the conditional distribution of X1, X2, . . . , Xn,conditioned on Xn and n does not depend on θ.
  • 58. 2.10 FANO’S INEQUALITY 373. If fθ = Uniform(θ, θ + 1), a sufficient statistic for θ isT (X1, X2, . . . , Xn)= (max{X1, X2, . . . , Xn}, min{X1, X2, . . . , Xn}). (2.127)The proof of this is slightly more complicated, but again one canshow that the distribution of the data is independent of the parametergiven the statistic T .The minimal sufficient statistic is a sufficient statistic that is a functionof all other sufficient statistics.Definition A statistic T (X) is a minimal sufficient statistic relative to{fθ (x)} if it is a function of every other sufficient statistic U. Interpretingthis in terms of the data-processing inequality, this implies thatθ → T (X) → U(X) → X. (2.128)Hence, a minimal sufficient statistic maximally compresses the infor-mation about θ in the sample. Other sufficient statistics may containadditional irrelevant information. For example, for a normal distributionwith mean θ, the pair of functions giving the mean of all odd samples andthe mean of all even samples is a sufficient statistic, but not a minimalsufficient statistic. In the preceding examples, the sufficient statistics arealso minimal.2.10 FANO’S INEQUALITYSuppose that we know a random variable Y and we wish to guess the valueof a correlated random variable X. Fano’s inequality relates the probabil-ity of error in guessing the random variable X to its conditional entropyH(X|Y). It will be crucial in proving the converse to Shannon’s channelcapacity theorem in Chapter 7. From Problem 2.5 we know that the con-ditional entropy of a random variable X given another random variableY is zero if and only if X is a function of Y. Hence we can estimate Xfrom Y with zero probability of error if and only if H(X|Y) = 0.Extending this argument, we expect to be able to estimate X with alow probability of error only if the conditional entropy H(X|Y) is small.Fano’s inequality quantifies this idea. Suppose that we wish to estimate arandom variable X with a distribution p(x). We observe a random variableY that is related to X by the conditional distribution p(y|x). From Y, we
  • 59. 38 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONcalculate a function g(Y) = ˆX, where ˆX is an estimate of X and takes onvalues in ˆX. We will not restrict the alphabet ˆX to be equal to X, and wewill also allow the function g(Y) to be random. We wish to bound theprobability that ˆX = X. We observe that X → Y → ˆX forms a Markovchain. Define the probability of errorPe = Pr ˆX = X . (2.129)Theorem 2.10.1 (Fano’s Inequality) For any estimator ˆX such thatX → Y → ˆX, with Pe = Pr(X = ˆX), we haveH(Pe) + Pe log |X| ≥ H(X| ˆX) ≥ H(X|Y). (2.130)This inequality can be weakened to1 + Pe log |X| ≥ H(X|Y) (2.131)orPe ≥H(X|Y) − 1log |X|. (2.132)Remark Note from (2.130) that Pe = 0 implies that H(X|Y) = 0, asintuition suggests.Proof: We first ignore the role of Y and prove the first inequality in(2.130). We will then use the data-processing inequality to prove the moretraditional form of Fano’s inequality, given by the second inequality in(2.130). Define an error random variable,E =1 if ˆX = X,0 if ˆX = X.(2.133)Then, using the chain rule for entropies to expand H(E, X| ˆX) in twodifferent ways, we haveH(E, X| ˆX) = H(X| ˆX) + H(E|X, ˆX)=0(2.134)= H(E| ˆX)≤H(Pe)+ H(X|E, ˆX)≤Pe log |X|. (2.135)Since conditioning reduces entropy, H(E| ˆX) ≤ H(E) = H(Pe). Nowsince E is a function of X and ˆX, the conditional entropy H(E|X, ˆX) is
  • 60. 2.10 FANO’S INEQUALITY 39equal to 0. Also, since E is a binary-valued random variable, H(E) =H(Pe). The remaining term, H(X|E, ˆX), can be bounded as follows:H(X|E, ˆX) = Pr(E = 0)H(X| ˆX, E = 0) + Pr(E = 1)H(X| ˆX, E = 1)≤ (1 − Pe)0 + Pe log |X|, (2.136)since given E = 0, X = ˆX, and given E = 1, we can upper bound theconditional entropy by the log of the number of possible outcomes. Com-bining these results, we obtainH(Pe) + Pe log |X| ≥ H(X| ˆX). (2.137)By the data-processing inequality, we have I (X; ˆX) ≤ I (X; Y) sinceX → Y → ˆX is a Markov chain, and therefore H(X| ˆX) ≥ H(X|Y). Thus,we haveH(Pe) + Pe log |X| ≥ H(X| ˆX) ≥ H(X|Y). (2.138)Corollary For any two random variables X and Y, let p = Pr(X = Y).H(p) + p log |X| ≥ H(X|Y). (2.139)Proof: Let ˆX = Y in Fano’s inequality.For any two random variables X and Y, if the estimator g(Y) takesvalues in the set X, we can strengthen the inequality slightly by replacinglog |X| with log(|X| − 1).Corollary Let Pe = Pr(X = ˆX), and let ˆX : Y → X; thenH(Pe) + Pe log(|X| − 1) ≥ H(X|Y). (2.140)Proof: The proof of the theorem goes through without change, exceptthatH(X|E, ˆX) = Pr(E = 0)H(X| ˆX, E = 0) + Pr(E = 1)H(X| ˆX, E = 1)(2.141)≤ (1 − Pe)0 + Pe log(|X| − 1), (2.142)since given E = 0, X = ˆX, and given E = 1, the range of possible Xoutcomes is |X| − 1, we can upper bound the conditional entropy by thelog(|X| − 1), the logarithm of the number of possible outcomes. Substi-tuting this provides us with the stronger inequality.
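Fano's inequality is easy to evaluate on a small example. The sketch below is ours; the joint distribution (X uniform on three values, Y a symmetric noisy observation of X) is an arbitrary illustrative choice, not one of the book's examples. It computes H(X|Y), the error probability of the optimal estimator, and both sides of the sharpened bound (2.140); for this particular channel the posterior has exactly the extremal form (1 - Pe, Pe/(|X|-1), Pe/(|X|-1)), so the bound is met with equality.

```python
from math import log2

# X uniform on {0, 1, 2}; Y a noisy observation: p(y|x) = 0.7 if y == x else 0.15.
states = range(3)
pxy = {(x, y): (1/3) * (0.7 if y == x else 0.15) for x in states for y in states}

H = lambda dist: -sum(t * log2(t) for t in dist if t > 0)

# Conditional entropy H(X|Y) and the error of the optimal (MAP) estimator x_hat(y).
py = {y: sum(pxy[(x, y)] for x in states) for y in states}
H_X_given_Y = sum(py[y] * H([pxy[(x, y)] / py[y] for x in states]) for y in states)
Pe = 1 - sum(max(pxy[(x, y)] for x in states) for y in states)

print(H_X_given_Y)                         # about 1.181 bits
print(Pe)                                  # 0.3
print(H([Pe, 1 - Pe]) + Pe * log2(3 - 1))  # H(Pe) + Pe log(|X|-1): tight here
print((H_X_given_Y - 1) / log2(3))         # weaker bound (2.132): Pe >= 0.114...
```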
  • 61. 40 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONRemark Suppose that there is no knowledge of Y. Thus, X must beguessed without any information. Let X ∈ {1, 2, . . . , m} and p1 ≥ p2 ≥· · · ≥ pm. Then the best guess of X is ˆX = 1 and the resulting probabilityof error is Pe = 1 − p1. Fano’s inequality becomesH(Pe) + Pe log(m − 1) ≥ H(X). (2.143)The probability mass function(p1, p2, . . . , pm) = 1 − Pe,Pem − 1, . . . ,Pem − 1(2.144)achieves this bound with equality. Thus, Fano’s inequality is sharp.While we are at it, let us introduce a new inequality relating probabilityof error and entropy. Let X and X by two independent identically dis-tributed random variables with entropy H(X). The probability at X = Xis given byPr(X = X ) =xp2(x). (2.145)We have the following inequality:Lemma 2.10.1 If X and X are i.i.d. with entropy H(X),Pr(X = X ) ≥ 2−H(X), (2.146)with equality if and only if X has a uniform distribution.Proof: Suppose that X ∼ p(x). By Jensen’s inequality, we have2E log p(X)≤ E2log p(X), (2.147)which implies that2−H(X)= 2 p(x) log p(x)≤ p(x)2log p(x)= p2(x). (2.148)Corollary Let X, X be independent with X ∼ p(x), X ∼ r(x), x, x ∈X. ThenPr(X = X ) ≥ 2−H(p)−D(p||r), (2.149)Pr(X = X ) ≥ 2−H(r)−D(r||p). (2.150)
  • 62. SUMMARY 41Proof: We have2−H(p)−D(p||r)= 2p(x) log p(x)+ p(x) log r(x)p(x) (2.151)= 2 p(x) log r(x)(2.152)≤ p(x)2log r(x)(2.153)= p(x)r(x) (2.154)= Pr(X = X ), (2.155)where the inequality follows from Jensen’s inequality and the convexityof the function f (y) = 2y.The following telegraphic summary omits qualifying conditions.SUMMARYDefinition The entropy H(X) of a discrete random variable X isdefined byH(X) = −x∈Xp(x) log p(x). (2.156)Properties of H1. H(X) ≥ 0.2. Hb(X) = (logb a)Ha(X).3. (Conditioning reduces entropy) For any two random variables, Xand Y, we haveH(X|Y) ≤ H(X) (2.157)with equality if and only if X and Y are independent.4. H(X1, X2, . . . , Xn) ≤ ni=1 H(Xi), with equality if and only if theXi are independent.5. H(X) ≤ log | X |, with equality if and only if X is distributed uni-formly over X.6. H(p) is concave in p.
  • 63. 42 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONDefinition The relative entropy D(p q) of the probability massfunction p with respect to the probability mass function q is defined byD(p q) =xp(x) logp(x)q(x). (2.158)Definition The mutual information between two random variables Xand Y is defined asI (X; Y) =x∈X y∈Yp(x, y) logp(x, y)p(x)p(y). (2.159)Alternative expressionsH(X) = Ep log1p(X), (2.160)H(X, Y) = Ep log1p(X, Y), (2.161)H(X|Y) = Ep log1p(X|Y), (2.162)I (X; Y) = Ep logp(X, Y)p(X)p(Y), (2.163)D(p||q) = Ep logp(X)q(X). (2.164)Properties of D and I1. I (X; Y) = H(X) − H(X|Y) = H(Y) − H(Y|X) = H(X) +H(Y) − H(X, Y).2. D(p q) ≥ 0 with equality if and only if p(x) = q(x), for all x ∈X.3. I (X; Y) = D(p(x, y)||p(x)p(y)) ≥ 0, with equality if and only ifp(x, y) = p(x)p(y) (i.e., X and Y are independent).4. If | X |= m, and u is the uniform distribution over X, then D(pu) = log m − H(p).5. D(p||q) is convex in the pair (p, q).Chain rulesEntropy: H(X1, X2, . . . , Xn) = ni=1 H(Xi|Xi−1, . . . , X1).Mutual information:I (X1, X2, . . . , Xn; Y) = ni=1 I (Xi; Y|X1, X2, . . . , Xi−1).
  • 64. PROBLEMS 43Relative entropy:D(p(x, y)||q(x, y)) = D(p(x)||q(x)) + D(p(y|x)||q(y|x)).Jensen’s inequality. If f is a convex function, then Ef (X) ≥ f (EX).Log sum inequality. For n positive numbers, a1, a2, . . . , an andb1, b2, . . . , bn,ni=1ai logaibi≥ni=1ai logni=1 aini=1 bi(2.165)with equality if and only if aibi= constant.Data-processing inequality. If X → Y → Z forms a Markov chain,I (X; Y) ≥ I (X; Z).Sufficient statistic. T (X) is sufficient relative to {fθ (x)} if and onlyif I (θ; X) = I (θ; T (X)) for all distributions on θ.Fano’s inequality. Let Pe = Pr{ ˆX(Y) = X}. ThenH(Pe) + Pe log |X| ≥ H(X|Y). (2.166)Inequality. If X and X are independent and identically distributed,thenPr(X = X ) ≥ 2−H(X), (2.167)PROBLEMS2.1 Coin flips. A fair coin is flipped until the first head occurs. LetX denote the number of flips required.(a) Find the entropy H(X) in bits. The following expressions maybe useful:∞n=0rn=11 − r,∞n=0nrn=r(1 − r)2.(b) A random variable X is drawn according to this distribution.Find an “efficient” sequence of yes–no questions of the form,
  • 65. 44 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATION“Is X contained in the set S?” Compare H(X) to the expectednumber of questions required to determine X.2.2 Entropy of functions. Let X be a random variable taking on afinite number of values. What is the (general) inequality relation-ship of H(X) and H(Y) if(a) Y = 2X?(b) Y = cos X?2.3 Minimum entropy. What is the minimum value ofH(p1, . . . , pn) = H(p) as p ranges over the set of n-dimensionalprobability vectors? Find all p’s that achieve this minimum.2.4 Entropy of functions of a random variable. Let X be a discreterandom variable. Show that the entropy of a function of X is lessthan or equal to the entropy of X by justifying the following steps:H(X, g(X))(a)= H(X) + H(g(X) | X) (2.168)(b)= H(X), (2.169)H(X, g(X))(c)= H(g(X)) + H(X | g(X)) (2.170)(d)≥ H(g(X)). (2.171)Thus, H(g(X)) ≤ H(X).2.5 Zero conditional entropy. Show that if H(Y|X) = 0, then Y isa function of X [i.e., for all x with p(x) > 0, there is only onepossible value of y with p(x, y) > 0].2.6 Conditional mutual information vs. unconditional mutual informa-tion. Give examples of joint random variables X, Y, and Zsuch that(a) I (X; Y | Z) < I (X; Y).(b) I (X; Y | Z) > I (X; Y).2.7 Coin weighing. Suppose that one has n coins, among which theremay or may not be one counterfeit coin. If there is a counterfeitcoin, it may be either heavier or lighter than the other coins. Thecoins are to be weighed by a balance.(a) Find an upper bound on the number of coins n so that kweighings will find the counterfeit coin (if any) and correctlydeclare it to be heavier or lighter.
  • 66. PROBLEMS 45(b) (Difficult) What is the coin- weighing strategy for k = 3 weigh-ings and 12 coins?2.8 Drawing with and without replacement. An urn contains r red, wwhite, and b black balls. Which has higher entropy, drawing k ≥ 2balls from the urn with replacement or without replacement? Set itup and show why. (There is both a difficult way and a relativelysimple way to do this.)2.9 Metric. A function ρ(x, y) is a metric if for all x, y,• ρ(x, y) ≥ 0.• ρ(x, y) = ρ(y, x).• ρ(x, y) = 0 if and only if x = y.• ρ(x, y) + ρ(y, z) ≥ ρ(x, z).(a) Show that ρ(X, Y) = H(X|Y) + H(Y|X) satisfies the first,second, and fourth properties above. If we say that X = Y ifthere is a one-to-one function mapping from X to Y, the thirdproperty is also satisfied, and ρ(X, Y) is a metric.(b) Verify that ρ(X, Y) can also be expressed asρ(X, Y) = H(X) + H(Y) − 2I (X; Y) (2.172)= H(X, Y) − I (X; Y) (2.173)= 2H(X, Y) − H(X) − H(Y). (2.174)2.10 Entropy of a disjoint mixture. Let X1 and X2 be discrete randomvariables drawn according to probability mass functions p1(·) andp2(·) over the respective alphabets X1 = {1, 2, . . . , m} and X2 ={m + 1, . . . , n}. LetX =X1 with probability α,X2 with probability 1 − α.(a) Find H(X) in terms of H(X1), H(X2), and α.(b) Maximize over α to show that 2H(X) ≤ 2H(X1) + 2H(X2) andinterpret using the notion that 2H(X) is the effective alpha-bet size.2.11 Measure of correlation. Let X1 and X2 be identically distributedbut not necessarily independent. Letρ = 1 −H(X2 | X1)H(X1).
  • 67. 46 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATION(a) Show that ρ = I(X1;X2)H(X1) .(b) Show that 0 ≤ ρ ≤ 1.(c) When is ρ = 0?(d) When is ρ = 1?2.12 Example of joint entropy. Let p(x, y) be given byFind:(a) H(X), H(Y).(b) H(X | Y), H(Y | X).(c) H(X, Y).(d) H(Y) − H(Y | X).(e) I (X; Y).(f) Draw a Venn diagram for the quantities in parts (a) through (e).2.13 Inequality. Show that ln x ≥ 1 − 1x for x > 0.2.14 Entropy of a sum. Let X and Y be random variables that takeon values x1, x2, . . . , xr and y1, y2, . . . , ys, respectively. Let Z =X + Y.(a) Show that H(Z|X) = H(Y|X). Argue that if X, Y are inde-pendent, then H(Y) ≤ H(Z) and H(X) ≤ H(Z). Thus, theaddition of independent random variables adds uncertainty.(b) Give an example of (necessarily dependent) random variablesin which H(X) > H(Z) and H(Y) > H(Z).(c) Under what conditions does H(Z) = H(X) + H(Y)?2.15 Data processing. Let X1 → X2 → X3 → · · · → Xn form aMarkov chain in this order; that is, letp(x1, x2, . . . , xn) = p(x1)p(x2|x1) · · · p(xn|xn−1).Reduce I (X1; X2, . . . , Xn) to its simplest form.2.16 Bottleneck. Suppose that a (nonstationary) Markov chain startsin one of n states, necks down to k < n states, and thenfans back to m > k states. Thus, X1 → X2 → X3, that is,
  • 68. PROBLEMS 47p(x1, x2, x3) = p(x1)p(x2|x1)p(x3|x2), for all x1 ∈ {1, 2, . . . , n},x2 ∈ {1, 2, . . . , k}, x3 ∈ {1, 2, . . . , m}.(a) Show that the dependence of X1 and X3 is limited by thebottleneck by proving that I (X1; X3) ≤ log k.(b) Evaluate I (X1; X3) for k = 1, and conclude that no depen-dence can survive such a bottleneck.2.17 Pure randomness and bent coins. Let X1, X2, . . . , Xn denote theoutcomes of independent flips of a bent coin. Thus, Pr {Xi =1} = p, Pr {Xi = 0} = 1 − p, where p is unknown. We wishto obtain a sequence Z1, Z2, . . . , ZK of fair coin flips fromX1, X2, . . . , Xn. Toward this end, let f : Xn→ {0, 1}∗(where{0, 1}∗= { , 0, 1, 00, 01, . . .} is the set of all finite-length binarysequences) be a mapping f (X1, X2, . . . , Xn) = (Z1, Z2, . . . , ZK),where Zi ∼ Bernoulli (12), and K may depend on (X1, . . . , Xn).In order that the sequence Z1, Z2, . . . appear to be fair coin flips,the map f from bent coin flips to fair flips must have the prop-erty that all 2ksequences (Z1, Z2, . . . , Zk) of a given length khave equal probability (possibly 0), for k = 1, 2, . . .. For example,for n = 2, the map f (01) = 0, f (10) = 1, f (00) = f (11) =(the null string) has the property that Pr{Z1 = 1|K = 1} = Pr{Z1 =0|K = 1} = 12. Give reasons for the following inequalities:nH(p)(a)= H(X1, . . . , Xn)(b)≥ H(Z1, Z2, . . . , ZK, K)(c)= H(K) + H(Z1, . . . , ZK|K)(d)= H(K) + E(K)(e)≥ EK.Thus, no more than nH(p) fair coin tosses can be derived from(X1, . . . , Xn), on the average. Exhibit a good map f on sequencesof length 4.2.18 World Series. The World Series is a seven-game series that termi-nates as soon as either team wins four games. Let X be the randomvariable that represents the outcome of a World Series betweenteams A and B; possible values of X are AAAA, BABABAB, andBBBAAAA. Let Y be the number of games played, which rangesfrom 4 to 7. Assuming that A and B are equally matched and that
  • 69. 48 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONthe games are independent, calculate H(X), H(Y), H(Y|X), andH(X|Y).2.19 Infinite entropy. This problem shows that the entropy of a discreterandom variable can be infinite. Let A = ∞n=2(n log2n)−1. [It iseasy to show that A is finite by bounding the infinite sum by theintegral of (x log2x)−1.] Show that the integer-valued random vari-able X defined by Pr(X = n) = (An log2n)−1for n = 2, 3, . . .,has H(X) = +∞.2.20 Run-length coding. Let X1, X2, . . . , Xn be (possibly dependent)binary random variables. Suppose that one calculates the runlengths R = (R1, R2, . . .) of this sequence (in order as theyoccur). For example, the sequence X = 0001100100 yields runlengths R = (3, 2, 2, 1, 2). Compare H(X1, X2, . . . , Xn), H(R),and H(Xn, R). Show all equalities and inequalities, and bound allthe differences.2.21 Markov’s inequality for probabilities. Let p(x) be a probabilitymass function. Prove, for all d ≥ 0, thatPr {p(X) ≤ d} log1d≤ H(X). (2.175)2.22 Logical order of ideas. Ideas have been developed in order ofneed and then generalized if necessary. Reorder the following ideas,strongest first, implications following:(a) Chain rule for I (X1, . . . , Xn; Y), chain rule for D(p(x1, . . . ,xn)||q(x1, x2, . . . , xn)), and chain rule for H(X1, X2, . . . , Xn).(b) D(f ||g) ≥ 0, Jensen’s inequality, I (X; Y) ≥ 0.2.23 Conditional mutual information. Consider a sequence of n binaryrandom variables X1, X2, . . . , Xn. Each sequence with an evennumber of 1’s has probability 2−(n−1), and each sequence with anodd number of 1’s has probability 0. Find the mutual informationsI (X1; X2), I (X2; X3|X1), . . . , I (Xn−1; Xn|X1, . . . , Xn−2).2.24 Average entropy. Let H(p) = −p log2 p − (1 − p) log2(1 − p)be the binary entropy function.(a) Evaluate H(14) using the fact that log2 3 ≈ 1.584. (Hint: Youmay wish to consider an experiment with four equally likelyoutcomes, one of which is more interesting than the others.)
  • 70. PROBLEMS 49(b) Calculate the average entropy H(p) when the probability p ischosen uniformly in the range 0 ≤ p ≤ 1.(c) (Optional) Calculate the average entropy H(p1, p2, p3), where(p1, p2, p3) is a uniformly distributed probability vector. Gen-eralize to dimension n.2.25 Venn diagrams. There isn’t really a notion of mutual informationcommon to three random variables. Here is one attempt at a defini-tion: Using Venn diagrams, we can see that the mutual informationcommon to three random variables X, Y, and Z can be defined byI (X; Y; Z) = I (X; Y) − I (X; Y|Z) .This quantity is symmetric in X, Y, and Z, despite the precedingasymmetric definition. Unfortunately, I (X; Y; Z) is not necessar-ily nonnegative. Find X, Y, and Z such that I (X; Y; Z) < 0, andprove the following two identities:(a) I (X; Y; Z) = H(X, Y, Z) − H(X) − H(Y) − H(Z) +I (X; Y) + I (Y; Z) + I (Z; X).(b) I (X; Y; Z) = H(X, Y, Z) − H(X, Y) − H(Y, Z) −H(Z, X) + H(X) + H(Y) + H(Z).The first identity can be understood using the Venn diagram analogyfor entropy and mutual information. The second identity followseasily from the first.2.26 Another proof of nonnegativity of relative entropy. In view of thefundamental nature of the result D(p||q) ≥ 0, we will give anotherproof.(a) Show that ln x ≤ x − 1 for 0 < x < ∞.(b) Justify the following steps:−D(p||q) =xp(x) lnq(x)p(x)(2.176)≤xp(x)q(x)p(x)− 1 (2.177)≤ 0. (2.178)(c) What are the conditions for equality?2.27 Grouping rule for entropy. Let p = (p1, p2, . . . , pm) be a prob-ability distribution on m elements (i.e., pi ≥ 0 and mi=1 pi = 1).
  • 71. 50 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONDefine a new distribution q on m − 1 elements as q1 = p1, q2 = p2,. . . , qm−2 = pm−2, and qm−1 = pm−1 + pm [i.e., the distribution qis the same as p on {1, 2, . . . , m − 2}, and the probability of thelast element in q is the sum of the last two probabilities of p].Show thatH(p) = H(q) + (pm−1 + pm)Hpm−1pm−1 + pm,pmpm−1 + pm.(2.179)2.28 Mixing increases entropy. Show that the entropy of the proba-bility distribution, (p1, . . . , pi, . . . , pj , . . . , pm), is less than theentropy of the distribution (p1, . . . ,pi+pj2 , . . . ,pi+pj2 ,. . . , pm). Show that in general any transfer of probability thatmakes the distribution more uniform increases the entropy.2.29 Inequalities. Let X, Y, and Z be joint random variables. Provethe following inequalities and find conditions for equality.(a) H(X, Y|Z) ≥ H(X|Z).(b) I (X, Y; Z) ≥ I (X; Z).(c) H(X, Y, Z) − H(X, Y) ≤ H(X, Z) − H(X).(d) I (X; Z|Y) ≥ I (Z; Y|X) − I (Z; Y) + I (X; Z).2.30 Maximum entropy. Find the probability mass function p(x) thatmaximizes the entropy H(X) of a nonnegative integer-valued ran-dom variable X subject to the constraintEX =∞n=0np(n) = Afor a fixed value A > 0. Evaluate this maximum H(X).2.31 Conditional entropy. Under what conditions does H(X|g(Y)) =H(X|Y)?2.32 Fano. We are given the following joint distribution on (X, Y):
  • 72. PROBLEMS 51Let ˆX(Y) be an estimator for X (based on Y) and let Pe =Pr{ ˆX(Y) = X}.(a) Find the minimum probability of error estimator ˆX(Y) and theassociated Pe.(b) Evaluate Fano’s inequality for this problem and compare.2.33 Fano’s inequality. Let Pr(X = i) = pi, i = 1, 2, . . . , m, and letp1 ≥ p2 ≥ p3 ≥ · · · ≥ pm. The minimal probability of error pre-dictor of X is ˆX = 1, with resulting probability of error Pe =1 − p1. Maximize H(p) subject to the constraint 1 − p1 = Pe tofind a bound on Pe in terms of H. This is Fano’s inequality in theabsence of conditioning.2.34 Entropy of initial conditions. Prove that H(X0|Xn) is nondecreas-ing with n for any Markov chain.2.35 Relative entropy is not symmetric.Let the random variable X have three possible outcomes {a, b, c}.Consider two distributions on this random variable:Symbol p(x) q(x)a 1213b 1413c 1413Calculate H(p), H(q), D(p||q), and D(q||p). Verify that in thiscase, D(p||q) = D(q||p).2.36 Symmetric relative entropy. Although, as Problem 2.35 shows,D(p||q) = D(q||p) in general, there could be distributions forwhich equality holds. Give an example of two distributions p andq on a binary alphabet such that D(p||q) = D(q||p) (other thanthe trivial case p = q).2.37 Relative entropy. Let X, Y, Z be three random variables with ajoint probability mass function p(x, y, z). The relative entropybetween the joint distribution and the product of the marginals isD(p(x, y, z)||p(x)p(y)p(z)) = E logp(x, y, z)p(x)p(y)p(z). (2.180)Expand this in terms of entropies. When is this quantity zero?
  • 73. 52 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATION2.38 The value of a question. Let X ∼ p(x), x = 1, 2, . . . , m. We aregiven a set S ⊆ {1, 2, . . . , m}. We ask whether X ∈ S and receivethe answerY =1 if X ∈ S0 if X ∈ S.Suppose that Pr{X ∈ S} = α. Find the decrease in uncertaintyH(X) − H(X|Y).Apparently, any set S with a given α is as good as any other.2.39 Entropy and pairwise independence. Let X, Y, Z be three binaryBernoulli(12) random variables that are pairwise independent; thatis, I (X; Y) = I (X; Z) = I (Y; Z) = 0.(a) Under this constraint, what is the minimum value forH(X, Y, Z)?(b) Give an example achieving this minimum.2.40 Discrete entropies. Let X and Y be two independent integer-valued random variables. Let X be uniformly distributed over {1, 2,. . . , 8}, and let Pr{Y = k} = 2−k, k = 1, 2, 3, . . ..(a) Find H(X).(b) Find H(Y).(c) Find H(X + Y, X − Y).2.41 Random questions. One wishes to identify a random object X ∼p(x). A question Q ∼ r(q) is asked at random according to r(q).This results in a deterministic answer A = A(x, q) ∈ {a1, a2, . . .}.Suppose that X and Q are independent. Then I (X; Q, A) is theuncertainty in X removed by the question–answer (Q, A).(a) Show that I (X; Q, A) = H(A|Q). Interpret.(b) Now suppose that two i.i.d. questions Q1, Q2, ∼ r(q) areasked, eliciting answers A1 and A2. Show that two questionsare less valuable than twice a single question in the sense thatI (X; Q1, A1, Q2, A2) ≤ 2I (X; Q1, A1).2.42 Inequalities. Which of the following inequalities are generally≥, =, ≤? Label each with ≥, =, or ≤.(a) H(5X) vs. H(X)(b) I (g(X); Y) vs. I (X; Y)(c) H(X0|X−1) vs. H(X0|X−1, X1)(d) H(X, Y)/(H(X) + H(Y)) vs. 1
  • 74. PROBLEMS 532.43 Mutual information of heads and tails(a) Consider a fair coin flip. What is the mutual informationbetween the top and bottom sides of the coin?(b) A six-sided fair die is rolled. What is the mutual informationbetween the top side and the front face (the side most facingyou)?2.44 Pure randomness. We wish to use a three-sided coin to generatea fair coin toss. Let the coin X have probability mass functionX =A, pAB, pBC, pC,where pA, pB, pC are unknown.(a) How would you use two independent flips X1, X2 to generate(if possible) a Bernoulli(12 ) random variable Z?(b) What is the resulting maximum expected number of fair bitsgenerated?2.45 Finite entropy. Show that for a discrete random variable X ∈{1, 2, . . .}, if E log X < ∞, then H(X) < ∞.2.46 Axiomatic definition of entropy (Difficult). If we assume certainaxioms for our measure of information, we will be forced to use alogarithmic measure such as entropy. Shannon used this to justifyhis initial definition of entropy. In this book we rely more on theother properties of entropy rather than its axiomatic derivation tojustify its use. The following problem is considerably more difficultthan the other problems in this section.If a sequence of symmetric functions Hm(p1, p2, . . . , pm) satisfiesthe following properties:• Normalization: H212, 12 = 1,• Continuity: H2(p, 1 − p) is a continuous function of p,• Grouping: Hm(p1, p2, . . . , pm) = Hm−1(p1 + p2, p3, . . . , pm) +(p1 + p2)H2p1p1+p2, p2p1+p2,prove that Hm must be of the formHm(p1, p2, . . . , pm) = −mi=1pi log pi, m = 2, 3, . . . .(2.181)
  • 75. 54 ENTROPY, RELATIVE ENTROPY, AND MUTUAL INFORMATIONThere are various other axiomatic formulations which result in thesame definition of entropy. See, for example, the book by Csisz´arand K¨orner [149].2.47 Entropy of a missorted file. A deck of n cards in order 1, 2, . . . , nis provided. One card is removed at random, then replaced at ran-dom. What is the entropy of the resulting deck?2.48 Sequence length. How much information does the length of asequence give about the content of a sequence? Suppose that weconsider a Bernoulli (12) process {Xi}. Stop the process when thefirst 1 appears. Let N designate this stopping time. Thus, XNis anelement of the set of all finite-length binary sequences {0, 1}∗={0, 1, 00, 01, 10, 11, 000, . . . }.(a) Find I (N; XN ).(b) Find H(XN|N).(c) Find H(XN).Let’s now consider a different stopping time. For this part, againassume that Xi ∼ Bernoulli(12) but stop at time N = 6, with prob-ability 13 and stop at time N = 12 with probability 23. Let thisstopping time be independent of the sequence X1X2 · · · X12.(d) Find I (N; XN).(e) Find H(XN|N).(f) Find H(XN).HISTORICAL NOTESThe concept of entropy was introduced in thermodynamics, where itwas used to provide a statement of the second law of thermodynam-ics. Later, statistical mechanics provided a connection between thermo-dynamic entropy and the logarithm of the number of microstates in amacrostate of the system. This work was the crowning achievement ofBoltzmann, who had the equation S = k ln W inscribed as the epitaph onhis gravestone [361].In the 1930s, Hartley introduced a logarithmic measure of informa-tion for communication. His measure was essentially the logarithm of thealphabet size. Shannon [472] was the first to define entropy and mutualinformation as defined in this chapter. Relative entropy was first definedby Kullback and Leibler [339]. It is known under a variety of names,including the Kullback–Leibler distance, cross entropy, information diver-gence, and information for discrimination, and has been studied in detailby Csisz´ar [138] and Amari [22].
  • 76. HISTORICAL NOTES 55Many of the simple properties of these quantities were developed byShannon. Fano’s inequality was proved in Fano [201]. The notion ofsufficient statistic was defined by Fisher [209], and the notion of theminimal sufficient statistic was introduced by Lehmann and Scheff´e [350].The relationship of mutual information and sufficiency is due to Kullback[335]. The relationship between information theory and thermodynamicshas been discussed extensively by Brillouin [77] and Jaynes [294].The physics of information is a vast new subject of inquiry spawnedfrom statistical mechanics, quantum mechanics, and information theory.The key question is how information is represented physically. Quan-tum channel capacity (the logarithm of the number of distinguishablepreparations of a physical system) and quantum data compression [299]are well-defined problems with nice answers involving the von Neumannentropy. A new element of quantum information arises from the exis-tence of quantum entanglement and the consequences (exhibited in Bell’sinequality) that the observed marginal distribution of physical events arenot consistent with any joint distribution (no local realism). The funda-mental text by Nielsen and Chuang [395] develops the theory of quantuminformation and the quantum counterparts to many of the results in thisbook. There have also been attempts to determine whether there areany fundamental physical limits to computation, including work by Ben-nett [47] and Bennett and Landauer [48].
  • 77. CHAPTER 3ASYMPTOTIC EQUIPARTITIONPROPERTYIn information theory, the analog of the law of large numbers is theasymptotic equipartition property (AEP). It is a direct consequenceof the weak law of large numbers. The law of large numbers statesthat for independent, identically distributed (i.i.d.) random variables,1nni=1 Xi is close to its expected value EX for large values of n.The AEP states that 1n log 1p(X1,X2,...,Xn) is close to the entropy H, whereX1, X2, . . . , Xn are i.i.d. random variables and p(X1, X2, . . . , Xn) is theprobability of observing the sequence X1, X2, . . . , Xn. Thus, the proba-bility p(X1, X2, . . . , Xn) assigned to an observed sequence will be closeto 2−nH.This enables us to divide the set of all sequences into two sets, thetypical set, where the sample entropy is close to the true entropy, and thenontypical set, which contains the other sequences. Most of our attentionwill be on the typical sequences. Any property that is proved for the typicalsequences will then be true with high probability and will determine theaverage behavior of a large sample.First, an example. Let the random variable X ∈ {0, 1} have a probabilitymass function defined by p(1) = p and p(0) = q. If X1, X2, . . . , Xn arei.i.d. according to p(x), the probability of a sequence x1, x2, . . . , xn isni=1 p(xi). For example, the probability of the sequence (1, 0, 1, 1, 0, 1)is p Xi qn− Xi = p4q2. Clearly, it is not true that all 2n sequences oflength n have the same probability.However, we might be able to predict the probability of the sequencethat we actually observe. We ask for the probability p(X1, X2, . . . , Xn) ofthe outcomes X1, X2, . . . , Xn, where X1, X2, . . . are i.i.d. ∼ p(x). This isinsidiously self-referential, but well defined nonetheless. Apparently, weare asking for the probability of an event drawn according to the sameElements of Information Theory, Second Edition, By Thomas M. Cover and Joy A. ThomasCopyright  2006 John Wiley & Sons, Inc.57
probability distribution. Here it turns out that p(X_1, X_2, . . . , X_n) is close to 2^{-nH} with high probability.

We summarize this by saying, "Almost all events are almost equally surprising." This is a way of saying that

Pr{(X_1, X_2, . . . , X_n) : p(X_1, X_2, . . . , X_n) = 2^{-n(H ± ε)}} ≈ 1     (3.1)

if X_1, X_2, . . . , X_n are i.i.d. ∼ p(x).

In the example just given, where p(X_1, X_2, . . . , X_n) = p^{Σ X_i} q^{n − Σ X_i}, we are simply saying that the number of 1's in the sequence is close to np (with high probability), and all such sequences have (roughly) the same probability 2^{-nH(p)}. We use the idea of convergence in probability, defined as follows:

Definition (Convergence of random variables). Given a sequence of random variables X_1, X_2, . . . , we say that the sequence X_1, X_2, . . . converges to a random variable X:
1. In probability if for every ε > 0, Pr{|X_n − X| > ε} → 0
2. In mean square if E(X_n − X)^2 → 0
3. With probability 1 (also called almost surely) if Pr{lim_{n→∞} X_n = X} = 1

3.1 ASYMPTOTIC EQUIPARTITION PROPERTY THEOREM

The asymptotic equipartition property is formalized in the following theorem.

Theorem 3.1.1 (AEP) If X_1, X_2, . . . are i.i.d. ∼ p(x), then

−(1/n) log p(X_1, X_2, . . . , X_n) → H(X) in probability.     (3.2)

Proof: Functions of independent random variables are also independent random variables. Thus, since the X_i are i.i.d., so are log p(X_i). Hence, by the weak law of large numbers,

−(1/n) log p(X_1, X_2, . . . , X_n) = −(1/n) Σ_i log p(X_i)     (3.3)
  → −E log p(X) in probability     (3.4)
  = H(X),     (3.5)

which proves the theorem.
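To see Theorem 3.1.1 at work numerically, the short sketch below (not from the text; NumPy is assumed and p = 0.25 is an illustrative value) draws a single Bernoulli(p) sequence for several values of n and compares −(1/n) log₂ p(X_1, . . . , X_n) with H(X). The concentration around H(X) becomes visible as n grows.

```python
# A numerical illustration of Theorem 3.1.1 (a sketch, not part of the text):
# for i.i.d. Bernoulli(p) samples, -(1/n) log2 p(X1,...,Xn) concentrates
# around H(p) as n grows.
import numpy as np

def entropy_bits(p):
    """Binary entropy H(p) in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def sample_entropy(x, p):
    """-(1/n) log2 p(x1,...,xn) for a 0/1 sequence x under Bernoulli(p)."""
    n = len(x)
    k = x.sum()                      # number of 1's in the sequence
    return -(k * np.log2(p) + (n - k) * np.log2(1 - p)) / n

rng = np.random.default_rng(0)
p = 0.25
print(f"H(X) = {entropy_bits(p):.4f} bits")
for n in [10, 100, 1000, 10000]:
    x = rng.random(n) < p            # one realization of X1,...,Xn
    print(f"n = {n:5d}: -(1/n) log p(X^n) = {sample_entropy(x, p):.4f}")
```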
Definition The typical set A_ε^(n) with respect to p(x) is the set of sequences (x_1, x_2, . . . , x_n) ∈ X^n with the property

2^{-n(H(X)+ε)} ≤ p(x_1, x_2, . . . , x_n) ≤ 2^{-n(H(X)−ε)}.     (3.6)

As a consequence of the AEP, we can show that the set A_ε^(n) has the following properties:

Theorem 3.1.2
1. If (x_1, x_2, . . . , x_n) ∈ A_ε^(n), then H(X) − ε ≤ −(1/n) log p(x_1, x_2, . . . , x_n) ≤ H(X) + ε.
2. Pr{A_ε^(n)} > 1 − ε for n sufficiently large.
3. |A_ε^(n)| ≤ 2^{n(H(X)+ε)}, where |A| denotes the number of elements in the set A.
4. |A_ε^(n)| ≥ (1 − ε) 2^{n(H(X)−ε)} for n sufficiently large.

Thus, the typical set has probability nearly 1, all elements of the typical set are nearly equiprobable, and the number of elements in the typical set is nearly 2^{nH}.

Proof: The proof of property (1) is immediate from the definition of A_ε^(n). The second property follows directly from Theorem 3.1.1, since the probability of the event (X_1, X_2, . . . , X_n) ∈ A_ε^(n) tends to 1 as n → ∞. Thus, for any δ > 0, there exists an n_0 such that for all n ≥ n_0, we have

Pr{ |−(1/n) log p(X_1, X_2, . . . , X_n) − H(X)| < ε } > 1 − δ.     (3.7)

Setting δ = ε, we obtain the second part of the theorem. The identification of δ = ε will conveniently simplify notation later.

To prove property (3), we write

1 = Σ_{x ∈ X^n} p(x)     (3.8)
  ≥ Σ_{x ∈ A_ε^(n)} p(x)     (3.9)
  ≥ Σ_{x ∈ A_ε^(n)} 2^{-n(H(X)+ε)}     (3.10)
  = 2^{-n(H(X)+ε)} |A_ε^(n)|,     (3.11)
  • 80. 60 ASYMPTOTIC EQUIPARTITION PROPERTYwhere the second inequality follows from (3.6). Hence|A(n)| ≤ 2n(H(X)+ ). (3.12)Finally, for sufficiently large n, Pr{A(n)} > 1 − , so that1 − < Pr{A(n)} (3.13)≤x∈A(n)2−n(H(X)− )(3.14)= 2−n(H(X)− )|A(n)|, (3.15)where the second inequality follows from (3.6). Hence,|A(n)| ≥ (1 − )2n(H(X)− ), (3.16)which completes the proof of the properties of A(n).3.2 CONSEQUENCES OF THE AEP: DATA COMPRESSIONLet X1, X2, . . . , Xn be independent, identically distributed random vari-ables drawn from the probability mass function p(x). We wish to findshort descriptions for such sequences of random variables. We divide allsequences in Xninto two sets: the typical set A(n)and its complement,as shown in Figure 3.1.Non-typical setTypical set∋∋A(n): 2n(H + )elementsn:| |n elementsFIGURE 3.1. Typical sets and source coding.
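Before turning to data compression, the typical-set properties of Theorem 3.1.2 can be checked by brute force for a small blocklength. The sketch below (illustrative parameters, not part of the text) enumerates all binary sequences of length n = 16 for a Bernoulli(0.6) source and reports the size and probability of A_ε^(n) together with the bounds of properties (3) and (4); the probability bound of property (2) is only guaranteed for n sufficiently large.

```python
# Brute-force check of the typical-set properties in Theorem 3.1.2 for a
# Bernoulli(p) source and small n (a sketch; parameter values are illustrative).
import itertools
import math

def typical_set(n, p, eps):
    H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    members, prob = [], 0.0
    for x in itertools.product([0, 1], repeat=n):
        k = sum(x)
        px = p**k * (1 - p)**(n - k)
        sample_H = -math.log2(px) / n
        if H - eps <= sample_H <= H + eps:      # definition (3.6)
            members.append(x)
            prob += px
    return H, members, prob

n, p, eps = 16, 0.6, 0.1
H, A, prob_A = typical_set(n, p, eps)
print(f"H(X) = {H:.4f}, |A| = {len(A)}, Pr(A) = {prob_A:.4f}")
print(f"bounds on |A|: (1-eps) 2^(n(H-eps)) = {(1 - eps) * 2**(n*(H - eps)):.1f}, "
      f"2^(n(H+eps)) = {2**(n*(H + eps)):.1f}")
```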
  • 81. 3.2 CONSEQUENCES OF THE AEP: DATA COMPRESSION 61Non-typical setTypical setDescription: n log | | + 2 bitsDescription: n(H + ) + 2 bits∋FIGURE 3.2. Source code using the typical set.We order all elements in each set according to some order (e.g., lexi-cographic order). Then we can represent each sequence of A(n)by givingthe index of the sequence in the set. Since there are ≤ 2n(H+ )sequencesin A(n), the indexing requires no more than n(H + ) + 1 bits. [The extrabit may be necessary because n(H + ) may not be an integer.] We pre-fix all these sequences by a 0, giving a total length of ≤ n(H + ) + 2bits to represent each sequence in A(n)(see Figure 3.2). Similarly, we canindex each sequence not in A(n)by using not more than n log |X| + 1 bits.Prefixing these indices by 1, we have a code for all the sequences in Xn.Note the following features of the above coding scheme:• The code is one-to-one and easily decodable. The initial bit acts asa flag bit to indicate the length of the codeword that follows.• We have used a brute-force enumeration of the atypical set A(n)cwithout taking into account the fact that the number of elements inA(n)cis less than the number of elements in Xn. Surprisingly, this isgood enough to yield an efficient description.• The typical sequences have short descriptions of length ≈ nH.We use the notation xnto denote a sequence x1, x2, . . . , xn. Let l(xn)be the length of the codeword corresponding to xn. If n is sufficientlylarge so that Pr{A(n)} ≥ 1 − , the expected length of the codeword isE(l(Xn)) =xnp(xn)l(xn) (3.17)
  • 82. 62 ASYMPTOTIC EQUIPARTITION PROPERTY=xn∈A(n)p(xn)l(xn) +xn∈A(n)cp(xn)l(xn) (3.18)≤xn∈A(n)p(xn)(n(H + ) + 2)+xn∈A(n)cp(xn)(n log |X| + 2) (3.19)= Pr A(n)(n(H + ) + 2) + Pr A(n)c(n log |X| + 2)(3.20)≤ n(H + ) + n(log |X|) + 2 (3.21)= n(H + ), (3.22)where = + log |X| + 2n can be made arbitrarily small by an appro-priate choice of followed by an appropriate choice of n. Hence we haveproved the following theorem.Theorem 3.2.1 Let Xnbe i.i.d. ∼ p(x). Let > 0. Then there exists acode that maps sequences xnof length n into binary strings such that themapping is one-to-one (and therefore invertible) andE1nl(Xn) ≤ H(X) + (3.23)for n sufficiently large.Thus, we can represent sequences Xnusing nH(X) bits on the average.3.3 HIGH-PROBABILITY SETS AND THE TYPICAL SETFrom the definition of A(n), it is clear that A(n)is a fairly small set thatcontains most of the probability. But from the definition, it is not clearwhether it is the smallest such set. We will prove that the typical set hasessentially the same number of elements as the smallest set, to first orderin the exponent.Definition For each n = 1, 2, . . . , let B(n)δ ⊂ Xnbe the smallest setwithPr{B(n)δ } ≥ 1 − δ. (3.24)
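Theorem 3.2.1 can also be checked numerically before continuing. The sketch below (illustrative parameters; not from the text) computes the exact expected per-symbol length of the two-part code of Figure 3.2 for a Bernoulli(0.1) source by grouping sequences according to their number of 1's: typical sequences cost one flag bit plus ⌈n(H + ε)⌉ index bits, all other sequences one flag bit plus n index bits. As n grows, the rate approaches H(X) + ε, consistent with the theorem.

```python
# Expected per-symbol length of the typical-set code of Section 3.2, computed
# exactly by grouping binary sequences by their number of 1's (a sketch with
# illustrative parameters, not code from the text).
import math

def expected_rate(n, p, eps):
    H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    len_typ  = 1 + math.ceil(n * (H + eps))   # '0' flag + index within the typical set
    len_atyp = 1 + n                          # '1' flag + index within {0,1}^n
    EL = 0.0
    for k in range(n + 1):
        # log2-probability of one sequence with k ones, and of the whole class
        logpx = k * math.log2(p) + (n - k) * math.log2(1 - p)
        log_count = (math.lgamma(n + 1) - math.lgamma(k + 1)
                     - math.lgamma(n - k + 1)) / math.log(2)
        weight = 2.0 ** (log_count + logpx)   # Pr{exactly k ones}; underflows harmlessly
        typical = H - eps <= -logpx / n <= H + eps
        EL += weight * (len_typ if typical else len_atyp)
    return H, EL / n

p, eps = 0.1, 0.05
for n in [100, 1000, 10000]:
    H, rate = expected_rate(n, p, eps)
    print(f"n = {n:5d}: E[l(X^n)]/n = {rate:.4f} bits  (H(X) = {H:.4f})")
```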
  • 83. 3.3 HIGH-PROBABILITY SETS AND THE TYPICAL SET 63We argue that B(n)δ must have significant intersection with A(n)and there-fore must have about as many elements. In Problem 3.3.11, we outlinethe proof of the following theorem.Theorem 3.3.1 Let X1, X2, . . . , Xn be i.i.d. ∼ p(x). For δ < 12 andany δ > 0, if Pr{B(n)δ } > 1 − δ, then1nlog |B(n)δ | > H − δ for n sufficiently large. (3.25)Thus, B(n)δ must have at least 2nHelements, to first order in the expo-nent. But A(n)has 2n(H± )elements. Therefore, A(n)is about the samesize as the smallest high-probability set.We will now define some new notation to express equality to first orderin the exponent.Definition The notation an.= bn meanslimn→∞1nloganbn= 0. (3.26)Thus, an.= bn implies that an and bn are equal to the first order in theexponent.We can now restate the above results: If δn → 0 and n → 0, then|B(n)δn|.=|A(n)n|.=2nH. (3.27)To illustrate the difference between A(n)and B(n)δ , let us con-sider a Bernoulli sequence X1, X2, . . . , Xn with parameter p = 0.9. [ABernoulli(θ) random variable is a binary random variable that takes onthe value 1 with probability θ.] The typical sequences in this case are thesequences in which the proportion of 1’s is close to 0.9. However, thisdoes not include the most likely single sequence, which is the sequence ofall 1’s. The set B(n)δ includes all the most probable sequences and there-fore includes the sequence of all 1’s. Theorem 3.3.1 implies that A(n)andB(n)δ must both contain the sequences that have about 90% 1’s, and thetwo sets are almost equal in size.
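The contrast between A_ε^(n) and B_δ^(n) can be made concrete with a small computation. The sketch below (illustrative; not from the text) works with a Bernoulli(0.9) source at n = 100, builds the smallest set of probability at least 1 − δ greedily from the most probable sequences, checks that the all-ones sequence, although the single most likely one, is not typical, and compares the exponents (1/n) log |A| and (1/n) log |B| with H.

```python
# Comparing the typical set A_eps^(n) with the smallest high-probability set
# B_delta^(n) for a Bernoulli(0.9) source (an illustrative sketch).  B is built
# greedily from the most probable sequences, grouped by their number of 1's.
import math

n, p, eps, delta = 100, 0.9, 0.05, 0.1
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)

classes = []                              # (per-sequence prob, count, k)
for k in range(n + 1):
    px = p**k * (1 - p)**(n - k)
    classes.append((px, math.comb(n, k), k))

# Typical set: classes whose sample entropy is within eps of H
logA = math.log2(sum(c for px, c, k in classes
                     if H - eps <= -math.log2(px) / n <= H + eps))

# Smallest set with probability >= 1 - delta: take sequences in decreasing
# order of probability (largest k first, since p > 1/2)
size_B, prob_B = 0, 0.0
for px, c, k in sorted(classes, reverse=True):
    size_B += c
    prob_B += px * c
    if prob_B >= 1 - delta:
        break

print(f"H = {H:.3f}; (1/n)log|A| = {logA/n:.3f}; (1/n)log|B| = {math.log2(size_B)/n:.3f}")
print("all-ones sequence typical?",
      H - eps <= -math.log2(p**n) / n <= H + eps)
```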
  • 84. 64 ASYMPTOTIC EQUIPARTITION PROPERTYSUMMARYAEP. “Almost all events are almost equally surprising.” Specifically,if X1, X2, . . . are i.i.d. ∼ p(x), then−1nlog p(X1, X2, . . . , Xn) → H(X) in probability. (3.28)Definition. The typical set A(n)is the set of sequences x1, x2, . . . , xnsatisfying2−n(H(X)+ )≤ p(x1, x2, . . . , xn) ≤ 2−n(H(X)− ). (3.29)Properties of the typical set1. If (x1, x2, . . . , xn) ∈ A(n), then p(x1, x2, . . . , xn) = 2−n(H± ).2. Pr A(n)> 1 − for n sufficiently large.3. A(n) ≤ 2n(H(X)+ ), where |A| denotes the number of elements inset A.Definition. an.=bn means that 1n log anbn→ 0 as n → ∞.Smallest probable set. Let X1, X2, . . . , Xn be i.i.d. ∼ p(x), and forδ < 12, let B(n)δ ⊂ Xnbe the smallest set such that Pr{B(n)δ } ≥ 1 − δ.Then|B(n)δ |.=2nH. (3.30)PROBLEMS3.1 Markov’s inequality and Chebyshev’s inequality(a) (Markov’s inequality) For any nonnegative random variable Xand any t > 0, show thatPr {X ≥ t} ≤EXt. (3.31)Exhibit a random variable that achieves this inequality withequality.(b) (Chebyshev’s inequality) Let Y be a random variable withmean µ and variance σ2. By letting X = (Y − µ)2, show that
  • 85. PROBLEMS 65for any > 0,Pr {|Y − µ| > } ≤σ22. (3.32)(c) (Weak law of large numbers) Let Z1, Z2, . . . , Zn be a sequenceof i.i.d. random variables with mean µ and variance σ2. LetZn = 1nni=1Zi be the sample mean. Show thatPr Zn − µ > ≤σ2n 2. (3.33)Thus, Pr Zn − µ > → 0 as n → ∞. This is known as theweak law of large numbers.3.2 AEP and mutual information. Let (Xi, Yi) be i.i.d. ∼ p(x, y). Weform the log likelihood ratio of the hypothesis that X and Y areindependent vs. the hypothesis that X and Y are dependent. Whatis the limit of1nlogp(Xn)p(Yn)p(Xn, Yn)?3.3 Piece of cake.A cake is sliced roughly in half, the largest piece being chosen eachtime, the other pieces discarded. We will assume that a random cutcreates pieces of proportionsP =(23, 13) with probability 34(25, 35) with probability 14Thus, for example, the first cut (and choice of largest piece) mayresult in a piece of size 35. Cutting and choosing from this piecemight reduce it to size 3523 at time 2, and so on. How large, tofirst order in the exponent, is the piece of cake after n cuts?3.4 AEP. Let Xi be iid ∼ p(x), x ∈ {1, 2, . . . , m}. Let µ = EX andH = − p(x) log p(x). Let An= {xn∈ Xn: | − 1n log p(xn) −H| ≤ }. Let Bn= {xn∈ Xn: |1nni=1 Xi − µ| ≤ }.(a) Does Pr{Xn ∈ An} −→ 1?(b) Does Pr{Xn∈ An∩ Bn} −→ 1?
  • 86. 66 ASYMPTOTIC EQUIPARTITION PROPERTY(c) Show that |An∩ Bn| ≤ 2n(H+ )for all n.(d) Show that |An∩ Bn| ≥ 12 2n(H− )for n sufficiently large.3.5 Sets defined by probabilities. Let X1, X2, . . . be an i.i.d. sequenceof discrete random variables with entropy H(X). LetCn(t) = {xn∈ Xn: p(xn) ≥ 2−nt}denote the subset of n-sequences with probabilities ≥ 2−nt .(a) Show that |Cn(t)| ≤ 2nt.(b) For what values of t does P ({Xn∈ Cn(t)}) → 1?3.6 AEP-like limit. Let X1, X2, . . . be i.i.d. drawn according to prob-ability mass function p(x). Findlimn→∞(p(X1, X2, . . . , Xn))1n .3.7 AEP and source coding. A discrete memoryless source emits asequence of statistically independent binary digits with probabilitiesp(1) = 0.005 and p(0) = 0.995. The digits are taken 100 at a timeand a binary codeword is provided for every sequence of 100 digitscontaining three or fewer 1’s.(a) Assuming that all codewords are the same length, find the min-imum length required to provide codewords for all sequenceswith three or fewer 1’s.(b) Calculate the probability of observing a source sequence forwhich no codeword has been assigned.(c) Use Chebyshev’s inequality to bound the probability of observ-ing a source sequence for which no codeword has been assign-ed. Compare this bound with the actual probability computedin part (b).3.8 Products.LetX =1, with probability 122, with probability 143, with probability 14Let X1, X2, . . . be drawn i.i.d. according to this distribution. Findthe limiting behavior of the product(X1X2 · · · Xn)1n .
  • 87. PROBLEMS 673.9 AEP. Let X1, X2, . . . be independent, identically distributed ran-dom variables drawn according to the probability mass functionp(x), x ∈ {1, 2, . . . , m}. Thus, p(x1, x2, . . . , xn) = ni=1 p(xi). Weknow that −1n log p(X1, X2, . . . , Xn) → H(X) in probability. Letq(x1, x2, . . . , xn) = ni=1 q(xi), where q is another probabilitymass function on {1, 2, . . . , m}.(a) Evaluate lim −1n log q(X1, X2, . . . , Xn), where X1, X2, . . . arei.i.d. ∼ p(x).(b) Now evaluate the limit of the log likelihood ratio1n log q(X1,...,Xn)p(X1,...,Xn) when X1, X2, . . . are i.i.d. ∼ p(x). Thus, theodds favoring q are exponentially small when p is true.3.10 Random box size.An n-dimensional rectangular box with sides X1, X2, X3, . . . , Xn isto be constructed. The volume is Vn = ni=1 Xi. The edge length lof a n-cube with the same volume as the random box is l = V1/nn .Let X1, X2, . . . be i.i.d. uniform random variables over the unitinterval [0, 1]. Find limn→∞ V1/nn and compare to (EVn)1n . Clearly,the expected edge length does not capture the idea of the volumeof the box. The geometric mean, rather than the arithmetic mean,characterizes the behavior of products.3.11 Proof of Theorem 3.3.1. This problem shows that the size of thesmallest “probable” set is about 2nH. Let X1, X2, . . . , Xn be i.i.d.∼ p(x). Let B(n)δ ⊂ Xnsuch that Pr(B(n)δ ) > 1 − δ. Fix < 12.(a) Given any two sets A, B such that Pr(A) > 1 − 1 and Pr(B) >1 − 2, show that Pr(A ∩ B) > 1 − 1 − 2. Hence, Pr(A(n)∩B(n)δ ) ≥ 1 − − δ.(b) Justify the steps in the chain of inequalities1 − − δ ≤ Pr(A(n)∩ B(n)δ ) (3.34)=A(n)∩B(n)δp(xn) (3.35)≤A(n)∩B(n)δ2−n(H− )(3.36)= |A(n)∩ B(n)δ |2−n(H− )(3.37)≤ |B(n)δ |2−n(H− ). (3.38)(c) Complete the proof of the theorem.
  • 88. 68 ASYMPTOTIC EQUIPARTITION PROPERTY3.12 Monotonic convergence of the empirical distribution.Let ˆpn denote the empirical probability mass function correspond-ing to X1, X2, . . . , Xn i.i.d. ∼ p(x), x ∈ X. Specifically,ˆpn(x) =1nni=1I (Xi = x)is the proportion of times that Xi = x in the first n samples, whereI is the indicator function.(a) Show for X binary thatED( ˆp2n p) ≤ ED( ˆpn p).Thus, the expected relative entropy “distance” from the empir-ical distribution to the true distribution decreases with samplesize. (Hint: Write ˆp2n = 12 ˆpn + 12 ˆpn and use the convexityof D.)(b) Show for an arbitrary discrete X thatED( ˆpn p) ≤ ED( ˆpn−1 p).(Hint: Write ˆpn as the average of n empirical mass functionswith each of the n samples deleted in turn.)3.13 Calculation of typical set. To clarify the notion of a typical setA(n)and the smallest set of high probability B(n)δ , we will calculatethe set for a simple example. Consider a sequence of i.i.d. binaryrandom variables, X1, X2, . . . , Xn, where the probability that Xi =1 is 0.6 (and therefore the probability that Xi = 0 is 0.4).(a) Calculate H(X).(b) With n = 25 and = 0.1, which sequences fall in the typi-cal set A(n)? What is the probability of the typical set? Howmany elements are there in the typical set? (This involves com-putation of a table of probabilities for sequences with k 1’s,0 ≤ k ≤ 25, and finding those sequences that are in the typi-cal set.)(c) How many elements are there in the smallest set that has prob-ability 0.9?(d) How many elements are there in the intersection of the sets inparts (b) and (c)? What is the probability of this intersection?
  • 89. HISTORICAL NOTES 69knknkpk(1 − p)n−k−1nlog p(xn)0 1 0.000000 1.3219281 25 0.000000 1.2985302 300 0.000000 1.2751313 2300 0.000001 1.2517334 12650 0.000007 1.2283345 53130 0.000054 1.2049366 177100 0.000227 1.1815377 480700 0.001205 1.1581398 1081575 0.003121 1.1347409 2042975 0.013169 1.11134210 3268760 0.021222 1.08794311 4457400 0.077801 1.06454512 5200300 0.075967 1.04114613 5200300 0.267718 1.01774814 4457400 0.146507 0.99434915 3268760 0.575383 0.97095116 2042975 0.151086 0.94755217 1081575 0.846448 0.92415418 480700 0.079986 0.90075519 177100 0.970638 0.87735720 53130 0.019891 0.85395821 12650 0.997633 0.83056022 2300 0.001937 0.80716123 300 0.999950 0.78376324 25 0.000047 0.76036425 1 0.000003 0.736966HISTORICAL NOTESThe asymptotic equipartition property (AEP) was first stated by Shan-non in his original 1948 paper [472], where he proved the result fori.i.d. processes and stated the result for stationary ergodic processes.McMillan [384] and Breiman [74] proved the AEP for ergodic finitealphabet sources. The result is now referred to as the AEP or the Shan-non–McMillan–Breiman theorem. Chung [101] extended the theorem tothe case of countable alphabets and Moy [392], Perez [417], and Kieffer[312] proved the L1 convergence when {Xi} is continuous valued andergodic. Barron [34] and Orey [402] proved almost sure convergence forreal-valued ergodic processes; a simple sandwich argument (Algoet andCover [20]) will be used in Section 16.8 to prove the general AEP.
  • 90. CHAPTER 4ENTROPY RATESOF A STOCHASTIC PROCESSThe asymptotic equipartition property in Chapter 3 establishes thatnH(X) bits suffice on the average to describe n independent and iden-tically distributed random variables. But what if the random variablesare dependent? In particular, what if the random variables form a sta-tionary process? We will show, just as in the i.i.d. case, that the entropyH(X1, X2, . . . , Xn) grows (asymptotically) linearly with n at a rate H(X),which we will call the entropy rate of the process. The interpretation ofH(X) as the best achievable data compression will await the analysis inChapter 5.4.1 MARKOV CHAINSA stochastic process {Xi} is an indexed sequence of random variables.In general, there can be an arbitrary dependence among the random vari-ables. The process is characterized by the joint probability mass functionsPr{(X1, X2, . . . , Xn) = (x1, x2, . . . , xn)}= p(x1, x2, . . . , xn), (x1, x2, . . . ,xn) ∈ Xnfor n = 1, 2, . . . .Definition A stochastic process is said to be stationary if the jointdistribution of any subset of the sequence of random variables is invariantwith respect to shifts in the time index; that is,Pr{X1 = x1, X2 = x2, . . . , Xn = xn}= Pr{X1+l = x1, X2+l = x2, . . . , Xn+l = xn} (4.1)for every n and every shift l and for all x1, x2, . . . , xn ∈ X.Elements of Information Theory, Second Edition, By Thomas M. Cover and Joy A. ThomasCopyright  2006 John Wiley & Sons, Inc.71
  • 91. 72 ENTROPY RATES OF A STOCHASTIC PROCESSA simple example of a stochastic process with dependence is one inwhich each random variable depends only on the one preceding it andis conditionally independent of all the other preceding random variables.Such a process is said to be Markov.Definition A discrete stochastic process X1, X2, . . . is said to be aMarkov chain or a Markov process if for n = 1, 2, . . . ,Pr(Xn+1 = xn+1|Xn = xn, Xn−1 = xn−1, . . . , X1 = x1)= Pr (Xn+1 = xn+1|Xn = xn) (4.2)for all x1, x2, . . . , xn, xn+1 ∈ X.In this case, the joint probability mass function of the random variablescan be written asp(x1, x2, . . . , xn) = p(x1)p(x2|x1)p(x3|x2) · · · p(xn|xn−1). (4.3)Definition The Markov chain is said to be time invariant if the con-ditional probability p(xn+1|xn) does not depend on n; that is, for n =1, 2, . . . ,Pr{Xn+1 = b|Xn = a} = Pr{X2 = b|X1 = a} for all a, b ∈ X. (4.4)We will assume that the Markov chain is time invariant unless otherwisestated.If {Xi} is a Markov chain, Xn is called the state at time n. A time-invariant Markov chain is characterized by its initial state and a probabilitytransition matrix P = [Pij ], i, j ∈ {1, 2, . . . , m}, where Pij = Pr{Xn+1 =j|Xn = i}.If it is possible to go with positive probability from any state of theMarkov chain to any other state in a finite number of steps, the Markovchain is said to be irreducible. If the largest common factor of the lengthsof different paths from a state to itself is 1, the Markov chain is said toaperiodic.If the probability mass function of the random variable at time n isp(xn), the probability mass function at time n + 1 isp(xn+1) =xnp(xn)Pxnxn+1. (4.5)A distribution on the states such that the distribution at time n + 1 is thesame as the distribution at time n is called a stationary distribution. The
  • 92. 4.1 MARKOV CHAINS 73stationary distribution is so called because if the initial state of a Markovchain is drawn according to a stationary distribution, the Markov chainforms a stationary process.If the finite-state Markov chain is irreducible and aperiodic, the sta-tionary distribution is unique, and from any starting distribution, thedistribution of Xn tends to the stationary distribution as n → ∞.Example 4.1.1 Consider a two-state Markov chain with a probabilitytransition matrixP =1 − α αβ 1 − β(4.6)as shown in Figure 4.1.Let the stationary distribution be represented by a vector µ whose com-ponents are the stationary probabilities of states 1 and 2, respectively. Thenthe stationary probability can be found by solving the equation µP = µor, more simply, by balancing probabilities. For the stationary distribution,the net probability flow across any cut set in the state transition graph iszero. Applying this to Figure 4.1, we obtainµ1α = µ2β. (4.7)Since µ1 + µ2 = 1, the stationary distribution isµ1 =βα + β, µ2 =αα + β. (4.8)If the Markov chain has an initial state drawn according to the stationarydistribution, the resulting process will be stationary. The entropy of theState 1 State 21 − β1 − αβαFIGURE 4.1. Two-state Markov chain.
  • 93. 74 ENTROPY RATES OF A STOCHASTIC PROCESSstate Xn at time n isH(Xn) = Hβα + β,αα + β. (4.9)However, this is not the rate at which entropy grows for H(X1, X2, . . . ,Xn). The dependence among the Xi’s will take a steady toll.4.2 ENTROPY RATEIf we have a sequence of n random variables, a natural question to askis: How does the entropy of the sequence grow with n? We define theentropy rate as this rate of growth as follows.Definition The entropy of a stochastic process {Xi} is defined byH(X) = limn→∞1nH(X1, X2, . . . , Xn) (4.10)when the limit exists.We now consider some simple examples of stochastic processes andtheir corresponding entropy rates.1. Typewriter.Consider the case of a typewriter that has m equally likely outputletters. The typewriter can produce mnsequences of length n, allof them equally likely. Hence H(X1, X2, . . . , Xn) = log mnand theentropy rate is H(X) = log m bits per symbol.2. X1, X2, . . . are i.i.d. random variables. ThenH(X) = limH(X1, X2, . . . , Xn)n= limnH(X1)n= H(X1),(4.11)which is what one would expect for the entropy rate per symbol.3. Sequence of independent but not identically distributed random vari-ables. In this case,H(X1, X2, . . . , Xn) =ni=1H(Xi), (4.12)but the H(Xi)’s are all not equal. We can choose a sequence of dis-tributions on X1, X2, . . . such that the limit of 1n H(Xi) does notexist. An example of such a sequence is a random binary sequence
  • 94. 4.2 ENTROPY RATE 75where pi = P (Xi = 1) is not constant but a function of i, chosencarefully so that the limit in (4.10) does not exist. For example, letpi =0.5 if 2k < log log i ≤ 2k + 1,0 if 2k + 1 < log log i ≤ 2k + 2(4.13)for k = 0, 1, 2, . . . .Then there are arbitrarily long stretches where H(Xi) = 1, followedby exponentially longer segments where H(Xi) = 0. Hence, the run-ning average of the H(Xi) will oscillate between 0 and 1 and willnot have a limit. Thus, H(X) is not defined for this process.We can also define a related quantity for entropy rate:H (X) = limn→∞H(Xn|Xn−1, Xn−2, . . . , X1) (4.14)when the limit exists.The two quantities H(X) and H (X) correspond to two different notionsof entropy rate. The first is the per symbol entropy of the n random vari-ables, and the second is the conditional entropy of the last random variablegiven the past. We now prove the important result that for stationary pro-cesses both limits exist and are equal.Theorem 4.2.1 For a stationary stochastic process, the limits in (4.10)and (4.14) exist and are equal:H(X) = H (X). (4.15)We first prove that lim H(Xn|Xn−1, . . . , X1) exists.Theorem 4.2.2 For a stationary stochastic process, H(Xn|Xn−1, . . . ,X1) is nonincreasing in n and has a limit H (X).ProofH(Xn+1|X1, X2, . . . , Xn) ≤ H(Xn+1|Xn, . . . , X2) (4.16)= H(Xn|Xn−1, . . . , X1), (4.17)where the inequality follows from the fact that conditioning reduces en-tropy and the equality follows from the stationarity of the process. SinceH(Xn|Xn−1, . . . , X1) is a decreasing sequence of nonnegative numbers,it has a limit, H (X).
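The two limits in Theorems 4.2.1 and 4.2.2 can be observed directly for a small chain. The sketch below (illustrative values of α and β; not from the text) computes H(X_1, . . . , X_n) exactly for a stationary two-state chain by summing over all 2^n paths, and prints both the per-symbol entropy H(X_1, . . . , X_n)/n and the conditional entropy H(X_n | X_{n−1}, . . . , X_1); the conditional entropy drops from H(X_1) to its limit immediately, while the per-symbol entropy decreases toward the same value more slowly.

```python
# Exact check of Theorems 4.2.1 and 4.2.2 for a stationary two-state Markov
# chain (a sketch; alpha and beta are illustrative).  H(X_n | X_{n-1},...,X_1)
# is computed as H(X_1,...,X_n) - H(X_1,...,X_{n-1}) by enumerating all paths.
import itertools
import math

alpha, beta = 0.2, 0.6
P = {0: {0: 1 - alpha, 1: alpha}, 1: {0: beta, 1: 1 - beta}}
mu = {0: beta / (alpha + beta), 1: alpha / (alpha + beta)}   # stationary start

def block_entropy(n):
    Hn = 0.0
    for path in itertools.product([0, 1], repeat=n):
        p = mu[path[0]]
        for a, b in zip(path, path[1:]):
            p *= P[a][b]
        Hn -= p * math.log2(p)
    return Hn

prev = 0.0
for n in range(1, 11):
    Hn = block_entropy(n)
    print(f"n={n:2d}  H(X^n)/n = {Hn/n:.4f}   H(X_n|X^(n-1)) = {Hn - prev:.4f}")
    prev = Hn
```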
  • 95. 76 ENTROPY RATES OF A STOCHASTIC PROCESSWe now use the following simple result from analysis.Theorem 4.2.3 (Ces´aro mean) If an → a and bn = 1nni=1 ai, thenbn → a.Proof: (Informal outline). Since most of the terms in the sequence {ak}are eventually close to a, then bn, which is the average of the first n terms,is also eventually close to a.Formal Proof: Let > 0. Since an → a, there exists a number N( )such that |an − a| ≤ for all n ≥ N( ). Hence,|bn − a| =1nni=1(ai − a) (4.18)≤1nni=1|(ai − a)| (4.19)≤1nN( )i=1|ai − a| +n − N( )n(4.20)≤1nN( )i=1|ai − a| + (4.21)for all n ≥ N( ). Since the first term goes to 0 as n → ∞, we can make|bn − a| ≤ 2 by taking n large enough. Hence, bn → a as n → ∞.Proof of Theorem 4.2.1: By the chain rule,H(X1, X2, . . . , Xn)n=1nni=1H(Xi|Xi−1, . . . , X1), (4.22)that is, the entropy rate is the time average of the conditional entropies.But we know that the conditional entropies tend to a limit H . Hence, byTheorem 4.2.3, their running average has a limit, which is equal to thelimit H of the terms. Thus, by Theorem 4.2.2,H(X) = limH(X1, X2, . . . , Xn)n= lim H(Xn|Xn−1, . . . , X1)= H (X). (4.23)
  • 96. 4.2 ENTROPY RATE 77The significance of the entropy rate of a stochastic process arises fromthe AEP for a stationary ergodic process. We prove the general AEP inSection 16.8, where we show that for any stationary ergodic process,−1nlog p(X1, X2, . . . , Xn) → H(X) (4.24)with probability 1. Using this, the theorems of Chapter 3 can easily beextended to a general stationary ergodic process. We can define a typicalset in the same way as we did for the i.i.d. case in Chapter 3. By thesame arguments, we can show that the typical set has a probability closeto 1 and that there are about 2nH(X)typical sequences of length n, eachwith probability about 2−nH(X). We can therefore represent the typicalsequences of length n using approximately nH(X) bits. This shows thesignificance of the entropy rate as the average description length for astationary ergodic process.The entropy rate is well defined for all stationary processes. The entropyrate is particularly easy to calculate for Markov chains.Markov Chains. For a stationary Markov chain, the entropy rate isgiven byH(X) = H (X) = lim H(Xn|Xn−1, . . . , X1) = lim H(Xn|Xn−1)= H(X2|X1), (4.25)where the conditional entropy is calculated using the given stationarydistribution. Recall that the stationary distribution µ is the solution of theequationsµj =iµiPij for all j. (4.26)We express the conditional entropy explicitly in the following theorem.Theorem 4.2.4 Let {Xi} be a stationary Markov chain with station-ary distribution µ and transition matrix P . Let X1 ∼ µ. Then the entropyrate isH(X) = −ijµiPij log Pij . (4.27)Proof: H(X) = H(X2|X1) = i µi j −Pij log Pij .
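The formula of Theorem 4.2.4 is easy to evaluate numerically. The sketch below (not from the text; the 3 × 3 transition matrix is arbitrary and NumPy is assumed) solves μP = μ for the stationary distribution and evaluates H(X) = −Σ_{ij} μ_i P_{ij} log P_{ij}; it also checks a two-state chain against the closed form given in Example 4.2.1 just below.

```python
# Entropy rate of a stationary Markov chain via Theorem 4.2.4 (a sketch):
# solve mu P = mu for the stationary distribution, then evaluate
# H(X) = -sum_ij mu_i P_ij log P_ij.  The transition matrices are illustrative.
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    mu = np.real(v[:, np.argmin(np.abs(w - 1))])
    return mu / mu.sum()

def entropy_rate(P):
    mu = stationary_distribution(P)
    safe = np.where(P > 0, P, 1.0)          # avoid log(0); those terms contribute 0
    return -np.sum(mu[:, None] * P * np.log2(safe)), mu

P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
H, mu = entropy_rate(P)
print("stationary distribution:", np.round(mu, 4))
print("entropy rate H(X) =", round(H, 4), "bits per symbol")

# Check against the two-state formula of Example 4.2.1 below
a, b = 0.2, 0.6
P2 = np.array([[1 - a, a], [b, 1 - b]])
H2, _ = entropy_rate(P2)
h = lambda q: -q * np.log2(q) - (1 - q) * np.log2(1 - q)
print("two-state check:", round(H2, 4), "vs", round((b * h(a) + a * h(b)) / (a + b), 4))
```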
  • 97. 78 ENTROPY RATES OF A STOCHASTIC PROCESSExample 4.2.1 (Two-state Markov chain) The entropy rate of the two-state Markov chain in Figure 4.1 isH(X) = H(X2|X1) =βα + βH(α) +αα + βH(β). (4.28)Remark If the Markov chain is irreducible and aperiodic, it has a uniquestationary distribution on the states, and any initial distribution tends tothe stationary distribution as n → ∞. In this case, even though the initialdistribution is not the stationary distribution, the entropy rate, which isdefined in terms of long-term behavior, is H(X), as defined in (4.25) and(4.27).4.3 EXAMPLE: ENTROPY RATE OF A RANDOM WALKON A WEIGHTED GRAPHAs an example of a stochastic process, let us consider a random walk ona connected graph (Figure 4.2). Consider a graph with m nodes labeled{1, 2, . . . , m}, with weight Wij ≥ 0 on the edge joining node i to nodej. (The graph is assumed to be undirected, so that Wij = Wji. We setWij = 0 if there is no edge joining nodes i and j.)A particle walks randomly from node to node in this graph. The ran-dom walk {Xn}, Xn ∈ {1, 2, . . . , m}, is a sequence of vertices of thegraph. Given Xn = i, the next vertex j is chosen from among the nodesconnected to node i with a probability proportional to the weight of theedge connecting i to j. Thus, Pij = Wij / k Wik.12534FIGURE 4.2. Random walk on a graph.
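A short computation can accompany this random walk. The sketch below (illustrative; not from the text, with NumPy assumed) builds the walk on the unweighted king-move graph of an 8 × 8 board, checks numerically that μ_i = W_i/2W is stationary (as derived next), and evaluates the entropy rate; the result can be compared with the chessboard example later in this section, where the rate is stated as about 0.92 log 8.

```python
# Random walk on a weighted graph (a sketch): build P_ij = W_ij / sum_k W_ik,
# verify that mu_i = W_i / 2W is stationary, and compute the entropy rate.
# The example graph is the 8x8 king-move graph with unit edge weights.
import numpy as np

def king_weights(size=8):
    """Weight (adjacency) matrix of the king-move graph on a size x size board."""
    n = size * size
    W = np.zeros((n, n))
    for r in range(size):
        for c in range(size):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr, dc) != (0, 0) and 0 <= r + dr < size and 0 <= c + dc < size:
                        W[r * size + c, (r + dr) * size + (c + dc)] = 1.0
    return W

W = king_weights(8)
Wi = W.sum(axis=1)                       # total weight leaving node i
mu = Wi / Wi.sum()                       # candidate stationary distribution W_i / 2W
P = W / Wi[:, None]
print("mu P = mu ?", np.allclose(mu @ P, mu))

safe = np.where(P > 0, P, 1.0)           # avoid log(0); those terms contribute 0
H = -np.sum(mu[:, None] * P * np.log2(safe))
print(f"entropy rate = {H:.4f} bits  (= {H / np.log2(8):.3f} x log 8)")
```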
  • 98. 4.3 EXAMPLE: ENTROPY RATE OF A RANDOM WALK ON A WEIGHTED GRAPH 79In this case, the stationary distribution has a surprisingly simple form,which we will guess and verify. The stationary distribution for this Markovchain assigns probability to node i proportional to the total weight of theedges emanating from node i. LetWi =jWij (4.29)be the total weight of edges emanating from node i, and letW =i,j:j>iWij (4.30)be the sum of the weights of all the edges. Then i Wi = 2W.We now guess that the stationary distribution isµi =Wi2W. (4.31)We verify that this is the stationary distribution by checking that µP = µ.HereiµiPij =iWi2WWijWi(4.32)=i12WWij (4.33)=Wj2W(4.34)= µj . (4.35)Thus, the stationary probability of state i is proportional to the weight ofedges emanating from node i. This stationary distribution has an inter-esting property of locality: It depends only on the total weight and theweight of edges connected to the node and hence does not change if theweights in some other part of the graph are changed while keeping thetotal weight constant. We can now calculate the entropy rate asH(X) = H(X2|X1) (4.36)= −iµijPij log Pij (4.37)
  • 99. 80 ENTROPY RATES OF A STOCHASTIC PROCESS= −iWi2WjWijWilogWijWi(4.38)= −i jWij2WlogWijWi(4.39)= −i jWij2WlogWij2W+i jWij2WlogWi2W(4.40)= H . . . ,Wij2W, . . . − H . . . ,Wi2W, . . . . (4.41)If all the edges have equal weight, the stationary distribution putsweight Ei/2E on node i, where Ei is the number of edges emanatingfrom node i and E is the total number of edges in the graph. In this case,the entropy rate of the random walk isH(X) = log(2E) − HE12E,E22E, . . . ,Em2E. (4.42)This answer for the entropy rate is so simple that it is almost mislead-ing. Apparently, the entropy rate, which is the average transition entropy,depends only on the entropy of the stationary distribution and the totalnumber of edges.Example 4.3.1 (Random walk on a chessboard) Let a king move atrandom on an 8 × 8 chessboard. The king has eight moves in the interior,five moves at the edges, and three moves at the corners. Using this andthe preceding results, the stationary probabilities are, respectively, 8420,5420, and 3420, and the entropy rate is 0.92 log 8. The factor of 0.92 is dueto edge effects; we would have an entropy rate of log 8 on an infinitechessboard.Similarly, we can find the entropy rate of rooks (log 14 bits, since therook always has 14 possible moves), bishops, and queens. The queencombines the moves of a rook and a bishop. Does the queen have moreor less freedom than the pair?Remark It is easy to see that a stationary random walk on a graph istime-reversible; that is, the probability of any sequence of states is the
  • 100. 4.4 SECOND LAW OF THERMODYNAMICS 81same forward or backward:Pr(X1 = x1, X2 = x2, . . . , Xn = xn)= Pr(Xn = x1, Xn−1 = x2, . . . , X1 = xn). (4.43)Rather surprisingly, the converse is also true; that is, any time-reversibleMarkov chain can be represented as a random walk on an undirectedweighted graph.4.4 SECOND LAW OF THERMODYNAMICSOne of the basic laws of physics, the second law of thermodynamics,states that the entropy of an isolated system is nondecreasing. We nowexplore the relationship between the second law and the entropy functionthat we defined earlier in this chapter.In statistical thermodynamics, entropy is often defined as the log ofthe number of microstates in the system. This corresponds exactly to ournotion of entropy if all the states are equally likely. But why does entropyincrease?We model the isolated system as a Markov chain with transitions obey-ing the physical laws governing the system. Implicit in this assumption isthe notion of an overall state of the system and the fact that knowing thepresent state, the future of the system is independent of the past. In sucha system we can find four different interpretations of the second law. Itmay come as a shock to find that the entropy does not always increase.However, relative entropy always decreases.1. Relative entropy D(µn||µn) decreases with n. Let µn and µn be twoprobability distributions on the state space of a Markov chain at timen, and let µn+1 and µn+1 be the corresponding distributions at timen + 1. Let the corresponding joint mass functions be denoted byp and q. Thus, p(xn, xn+1) = p(xn)r(xn+1|xn) and q(xn, xn+1) =q(xn)r(xn+1|xn), where r(·|·) is the probability transition functionfor the Markov chain. Then by the chain rule for relative entropy,we have two expansions:D(p(xn, xn+1)||q(xn, xn+1)) = D(p(xn)||q(xn))+ D(p(xn+1|xn)||q(xn+1|xn))
  • 101. 82 ENTROPY RATES OF A STOCHASTIC PROCESS= D(p(xn+1)||q(xn+1))+ D(p(xn|xn+1)||q(xn|xn+1)).Since both p and q are derived from the Markov chain, the con-ditional probability mass functions p(xn+1|xn) and q(xn+1|xn) areboth equal to r(xn+1|xn), and hence D(p(xn+1|xn)||q(xn+1|xn)) = 0.Now using the nonnegativity of D(p(xn|xn+1)||q(xn|xn+1)) (Corol-lary to Theorem 2.6.3), we haveD(p(xn)||q(xn)) ≥ D(p(xn+1)||q(xn+1)) (4.44)orD(µn||µn) ≥ D(µn+1||µn+1). (4.45)Consequently, the distance between the probability mass functionsis decreasing with time n for any Markov chain.An example of one interpretation of the preceding inequality isto suppose that the tax system for the redistribution of wealth isthe same in Canada and in England. Then if µn and µn representthe distributions of wealth among people in the two countries, thisinequality shows that the relative entropy distance between the twodistributions decreases with time. The wealth distributions in Canadaand England become more similar.2. Relative entropy D(µn||µ) between a distribution µn on the states attime n and a stationary distribution µ decreases with n. In (4.45),µn is any distribution on the states at time n. If we let µn be anystationary distribution µ, the distribution µn+1 at the next time isalso equal to µ. Hence,D(µn||µ) ≥ D(µn+1||µ), (4.46)which implies that any state distribution gets closer and closer toeach stationary distribution as time passes. The sequence D(µn||µ)is a monotonically nonincreasing nonnegative sequence and musttherefore have a limit. The limit is zero if the stationary distributionis unique, but this is more difficult to prove.3. Entropy increases if the stationary distribution is uniform. In gen-eral, the fact that the relative entropy decreases does not imply thatthe entropy increases. A simple counterexample is provided by anyMarkov chain with a nonuniform stationary distribution. If we start
  • 102. 4.4 SECOND LAW OF THERMODYNAMICS 83this Markov chain from the uniform distribution, which already isthe maximum entropy distribution, the distribution will tend to thestationary distribution, which has a lower entropy than the uniform.Here, the entropy decreases with time.If, however, the stationary distribution is the uniform distribution,we can express the relative entropy asD(µn||µ) = log |X| − H(µn) = log |X| − H(Xn). (4.47)In this case the monotonic decrease in relative entropy implies amonotonic increase in entropy. This is the explanation that ties inmost closely with statistical thermodynamics, where all the micro-states are equally likely. We now characterize processes having auniform stationary distribution.Definition A probability transition matrix [Pij ], Pij = Pr{Xn+1 =j|Xn = i}, is called doubly stochastic ifiPij = 1, j = 1, 2, . . . (4.48)andjPij = 1, i = 1, 2, . . . . (4.49)Remark The uniform distribution is a stationary distribution of P ifand only if the probability transition matrix is doubly stochastic (seeProblem 4.1).4. The conditional entropy H(Xn|X1) increases with n for a station-ary Markov process. If the Markov process is stationary, H(Xn) isconstant. So the entropy is nonincreasing. However, we will provethat H(Xn|X1) increases with n. Thus, the conditional uncertaintyof the future increases. We give two alternative proofs of this result.First, we use the properties of entropy,H(Xn|X1) ≥ H(Xn|X1, X2) (conditioning reduces entropy)(4.50)= H(Xn|X2) (by Markovity) (4.51)= H(Xn−1|X1) (by stationarity). (4.52)
  • 103. 84 ENTROPY RATES OF A STOCHASTIC PROCESSAlternatively, by an application of the data-processing inequality tothe Markov chain X1 → Xn−1 → Xn, we haveI (X1; Xn−1) ≥ I (X1; Xn). (4.53)Expanding the mutual informations in terms of entropies, we haveH(Xn−1) − H(Xn−1|X1) ≥ H(Xn) − H(Xn|X1). (4.54)By stationarity, H(Xn−1) = H(Xn), and hence we haveH(Xn−1|X1) ≤ H(Xn|X1). (4.55)[These techniques can also be used to show that H(X0|Xn) isincreasing in n for any Markov chain.]5. Shuffles increase entropy. If T is a shuffle (permutation) of a deckof cards and X is the initial (random) position of the cards in thedeck, and if the choice of the shuffle T is independent of X, thenH(T X) ≥ H(X), (4.56)where T X is the permutation of the deck induced by the shuffle Ton the initial permutation X. Problem 4.3 outlines a proof.4.5 FUNCTIONS OF MARKOV CHAINSHere is an example that can be very difficult if done the wrongway. It illustrates the power of the techniques developed so far. LetX1, X2, . . . , Xn, . . . be a stationary Markov chain, and let Yi = φ(Xi) bea process each term of which is a function of the corresponding statein the Markov chain. What is the entropy rate H(Y)? Such functions ofMarkov chains occur often in practice. In many situations, one has onlypartial information about the state of the system. It would simplify mattersgreatly if Y1, Y2, . . . , Yn also formed a Markov chain, but in many cases,this is not true. Since the Markov chain is stationary, so is Y1, Y2, . . . , Yn,and the entropy rate is well defined. However, if we wish to computeH(Y), we might compute H(Yn|Yn−1, . . . , Y1) for each n and find thelimit. Since the convergence can be arbitrarily slow, we will never knowhow close we are to the limit. (We can’t look at the change between thevalues at n and n + 1, since this difference may be small even when weare far away from the limit—consider, for example, 1n.)
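The monotonicity statements above are easy to watch numerically. In the sketch below (not from the text; both transition matrices are arbitrary illustrations, and NumPy is assumed), a chain with a nonuniform stationary distribution shows D(μ_n‖μ) decreasing with n, while a doubly stochastic chain started from a point mass shows H(X_n) increasing toward log |X|.

```python
# Numerical illustration of the second-law statements of Section 4.4 (a sketch
# with arbitrary 3-state chains): D(mu_n || mu) decreases with n, and for a
# doubly stochastic chain the entropy H(X_n) increases toward log |X|.
import numpy as np

def relative_entropy(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def entropy(p):
    mask = p > 0
    return float(-np.sum(p[mask] * np.log2(p[mask])))

P = np.array([[0.6, 0.3, 0.1],          # irreducible, nonuniform stationary dist.
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu /= mu.sum()

Q = np.array([[0.5, 0.3, 0.2],          # doubly stochastic: rows and columns sum to 1
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

mun = np.array([1.0, 0.0, 0.0])         # start both chains from a point mass
nun = np.array([1.0, 0.0, 0.0])
for n in range(6):
    print(f"n={n}:  D(mu_n||mu) = {relative_entropy(mun, mu):.4f}   "
          f"H(X_n) under doubly stochastic chain = {entropy(nun):.4f}")
    mun = mun @ P
    nun = nun @ Q
```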
  • 104. 4.5 FUNCTIONS OF MARKOV CHAINS 85It would be useful computationally to have upper and lower bounds con-verging to the limit from above and below. We can halt the computationwhen the difference between upper and lower bounds is small, and wewill then have a good estimate of the limit.We already know that H(Yn|Yn−1, . . . , Y1) converges monoton-ically to H(Y) from above. For a lower bound, we will useH(Yn|Yn−1, . . . , Y1, X1). This is a neat trick based on the idea that X1contains as much information about Yn as Y1, Y0, Y−1, . . . .Lemma 4.5.1H(Yn|Yn−1, . . . , Y2, X1) ≤ H(Y). (4.57)Proof: We have for k = 1, 2, . . . ,H(Yn|Yn−1, . . . , Y2, X1)(a)= H(Yn|Yn−1, . . . , Y2, Y1, X1) (4.58)(b)= H(Yn|Yn−1, . . . , Y1, X1, X0, X−1, . . . , X−k)(4.59)(c)= H(Yn|Yn−1, . . . , Y1, X1, X0, X−1, . . . ,X−k, Y0, . . . , Y−k) (4.60)(d)≤ H(Yn|Yn−1, . . . , Y1, Y0, . . . , Y−k) (4.61)(e)= H(Yn+k+1|Yn+k, . . . , Y1), (4.62)where (a) follows from that fact that Y1 is a function of X1, and (b) followsfrom the Markovity of X, (c) follows from the fact that Yi is a functionof Xi, (d) follows from the fact that conditioning reduces entropy, and (e)follows by stationarity. Since the inequality is true for all k, it is true inthe limit. Thus,H(Yn|Yn−1, . . . , Y1, X1) ≤ limkH(Yn+k+1|Yn+k, . . . , Y1) (4.63)= H(Y). (4.64)The next lemma shows that the interval between the upper and thelower bounds decreases in length.Lemma 4.5.2H(Yn|Yn−1, . . . , Y1) − H(Yn|Yn−1, . . . , Y1, X1) → 0. (4.65)
  • 105. 86 ENTROPY RATES OF A STOCHASTIC PROCESSProof: The interval length can be rewritten asH(Yn|Yn−1, . . . , Y1) − H(Yn|Yn−1, . . . , Y1, X1)= I (X1; Yn|Yn−1, . . . , Y1). (4.66)By the properties of mutual information,I (X1; Y1, Y2, . . . , Yn) ≤ H(X1), (4.67)and I (X1; Y1, Y2, . . . , Yn) increases with n. Thus, lim I (X1; Y1, Y2, . . . ,Yn) exists andlimn→∞I (X1; Y1, Y2, . . . , Yn) ≤ H(X1). (4.68)By the chain rule,H(X) ≥ limn→∞I (X1; Y1, Y2, . . . , Yn) (4.69)= limn→∞ni=1I (X1; Yi|Yi−1, . . . , Y1) (4.70)=∞i=1I (X1; Yi|Yi−1, . . . , Y1). (4.71)Since this infinite sum is finite and the terms are nonnegative, the termsmust tend to 0; that is,lim I (X1; Yn|Yn−1, . . . , Y1) = 0, (4.72)which proves the lemma.Combining Lemmas 4.5.1 and 4.5.2, we have the following theorem.Theorem 4.5.1 If X1, X2, . . . , Xn form a stationary Markov chain, andYi = φ(Xi), thenH(Yn|Yn−1, . . . , Y1, X1) ≤ H(Y) ≤ H(Yn|Yn−1, . . . , Y1) (4.73)andlim H(Yn|Yn−1, . . . , Y1, X1) = H(Y) = lim H(Yn|Yn−1, . . . , Y1). (4.74)In general, we could also consider the case where Yi is a stochasticfunction (as opposed to a deterministic function) of Xi. Consider a Markov
  • 106. SUMMARY 87process X1, X2, . . . , Xn, and define a new process Y1, Y2, . . . , Yn, whereeach Yi is drawn according to p(yi|xi), conditionally independent of allthe other Xj , j = i; that is,p(xn, yn) = p(x1)n−1i=1p(xi+1|xi)ni=1p(yi|xi). (4.75)Such a process, called a hidden Markov model (HMM), is used extensivelyin speech recognition, handwriting recognition, and so on. The same argu-ment as that used above for functions of a Markov chain carry over tohidden Markov models, and we can lower bound the entropy rate of ahidden Markov model by conditioning it on the underlying Markov state.The details of the argument are left to the reader.SUMMARYEntropy rate. Two definitions of entropy rate for a stochastic processareH(X) = limn→∞1nH(X1, X2, . . . , Xn), (4.76)H (X) = limn→∞H(Xn|Xn−1, Xn−2, . . . , X1). (4.77)For a stationary stochastic process,H(X) = H (X). (4.78)Entropy rate of a stationary Markov chainH(X) = −ijµiPij log Pij . (4.79)Second law of thermodynamics. For a Markov chain:1. Relative entropy D(µn||µn) decreases with time2. Relative entropy D(µn||µ) between a distribution and the stationarydistribution decreases with time.3. Entropy H(Xn) increases if the stationary distribution is uniform.
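The sandwich of Theorem 4.5.1, restated as (4.80) and (4.81) in the summary, can be computed exactly for a small example. The sketch below (illustrative; not from the text) uses a three-state doubly stochastic chain and a lumping function φ that merges states 1 and 2, and evaluates both H(Y_n | Y_{n−1}, . . . , Y_1) and H(Y_n | Y_{n−1}, . . . , Y_1, X_1) by exhaustive summation over state paths; the two bounds pinch down on the entropy rate H(Y).

```python
# Numerical illustration of the bounds in Theorem 4.5.1 (a sketch with an
# illustrative 3-state chain and lumping function phi).  Both bounds are
# computed exactly by summing over all state paths of length n.
import itertools
import math
from collections import defaultdict

P = [[0.6, 0.3, 0.1],
     [0.1, 0.6, 0.3],
     [0.3, 0.1, 0.6]]
mu = [1/3, 1/3, 1/3]                 # uniform is stationary (P is doubly stochastic)
phi = lambda x: 0 if x == 0 else 1   # Y_i = phi(X_i): lump states 1 and 2 together

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def bounds(n):
    py, pyx1, py_prev, pyx1_prev = (defaultdict(float) for _ in range(4))
    for path in itertools.product(range(3), repeat=n):
        p = mu[path[0]]
        for a, b in zip(path, path[1:]):
            p *= P[a][b]
        y = tuple(phi(x) for x in path)
        py[y] += p;             pyx1[(path[0], y)] += p
        py_prev[y[:-1]] += p;   pyx1_prev[(path[0], y[:-1])] += p
    upper = H(py) - H(py_prev)               # H(Y_n | Y^(n-1))
    lower = H(pyx1) - H(pyx1_prev)           # H(Y_n | Y^(n-1), X_1)
    return lower, upper

for n in range(2, 9):
    lo, up = bounds(n)
    print(f"n={n}:  {lo:.4f} <= H(Y) <= {up:.4f}")
```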
  • 107. 88 ENTROPY RATES OF A STOCHASTIC PROCESS4. The conditional entropy H(Xn|X1) increases with time for a sta-tionary Markov chain.5. The conditional entropy H(X0|Xn) of the initial condition X0 in-creases for any Markov chain.Functions of a Markov chain. If X1, X2, . . . , Xn form a stationaryMarkov chain and Yi = φ(Xi), thenH(Yn|Yn−1, . . . , Y1, X1) ≤ H(Y) ≤ H(Yn|Yn−1, . . . , Y1) (4.80)andlimn→∞H(Yn|Yn−1, . . . , Y1, X1) = H(Y) = limn→∞H(Yn|Yn−1, . . . , Y1).(4.81)PROBLEMS4.1 Doubly stochastic matrices. An n × n matrix P = [Pij ] is saidto be doubly stochastic if Pij ≥ 0 and j Pij = 1 for all i andi Pij = 1 for all j. An n × n matrix P is said to be a permu-tation matrix if it is doubly stochastic and there is precisely onePij = 1 in each row and each column. It can be shown that everydoubly stochastic matrix can be written as the convex combinationof permutation matrices.(a) Let at= (a1, a2, . . . , an), ai ≥ 0, ai = 1, be a probabilityvector. Let b = aP , where P is doubly stochastic. Show that bis a probability vector and that H(b1, b2, . . . , bn) ≥ H(a1, a2,. . . , an). Thus, stochastic mixing increases entropy.(b) Show that a stationary distribution µ for a doubly stochasticmatrix P is the uniform distribution.(c) Conversely, prove that if the uniform distribution is a stationarydistribution for a Markov transition matrix P , then P is doublystochastic.4.2 Time’s arrow. Let {Xi}∞i=−∞ be a stationary stochastic process.Prove thatH(X0|X−1, X−2, . . . , X−n) = H(X0|X1, X2, . . . , Xn).
  • 108. PROBLEMS 89In other words, the present has a conditional entropy given the pastequal to the conditional entropy given the future. This is true eventhough it is quite easy to concoct stationary random processes forwhich the flow into the future looks quite different from the flowinto the past. That is, one can determine the direction of time bylooking at a sample function of the process. Nonetheless, giventhe present state, the conditional uncertainty of the next symbol inthe future is equal to the conditional uncertainty of the previoussymbol in the past.4.3 Shuffles increase entropy. Argue that for any distribution on shuf-fles T and any distribution on card positions X thatH(T X) ≥ H(T X|T ) (4.82)= H(T −1T X|T ) (4.83)= H(X|T ) (4.84)= H(X) (4.85)if X and T are independent.4.4 Second law of thermodynamics. Let X1, X2, X3, . . . be a station-ary first-order Markov chain. In Section 4.4 it was shown thatH(Xn | X1) ≥ H(Xn−1 | X1) for n = 2, 3, . . . . Thus, conditionaluncertainty about the future grows with time. This is true althoughthe unconditional uncertainty H(Xn) remains constant. However,show by example that H(Xn|X1 = x1) does not necessarily growwith n for every x1.4.5 Entropy of a random tree. Consider the following method of gen-erating a random tree with n nodes. First expand the root node:Then expand one of the two terminal nodes at random:At time k, choose one of the k − 1 terminal nodes according to auniform distribution and expand it. Continue until n terminal nodes
  • 109. 90 ENTROPY RATES OF A STOCHASTIC PROCESShave been generated. Thus, a sequence leading to a five-node treemight look like this:Surprisingly, the following method of generating random treesyields the same probability distribution on trees with n termi-nal nodes. First choose an integer N1 uniformly distributed on{1, 2, . . . , n − 1}. We then have the pictureN1 n − N1Then choose an integer N2 uniformly distributed over{1, 2, . . . , N1 − 1}, and independently choose another integer N3uniformly over {1, 2, . . . , (n − N1) − 1}. The picture is nowN2 N3 n − N1 − N3N1 − N2Continue the process until no further subdivision can be made.(The equivalence of these two tree generation schemes follows, forexample, from Polya’s urn model.)Now let Tn denote a random n-node tree generated as described. Theprobability distribution on such trees seems difficult to describe, butwe can find the entropy of this distribution in recursive form.First some examples. For n = 2, we have only one tree. Thus,H(T2) = 0. For n = 3, we have two equally probable trees:
  • 110. PROBLEMS 91Thus, H(T3) = log 2. For n = 4, we have five possible trees, withprobabilities 13, 16, 16, 16, 16.Now for the recurrence relation. Let N1(Tn) denote the number ofterminal nodes of Tn in the right half of the tree. Justify each ofthe steps in the following:H(Tn)(a)= H(N1, Tn) (4.86)(b)= H(N1) + H(Tn|N1) (4.87)(c)= log(n − 1) + H(Tn|N1) (4.88)(d)= log(n − 1) +1n − 1n−1k=1(H(Tk) + H(Tn−k)) (4.89)(e)= log(n − 1) +2n − 1n−1k=1H(Tk) (4.90)= log(n − 1) +2n − 1n−1k=1Hk. (4.91)(f) Use this to show that(n − 1)Hn = nHn−1 + (n − 1) log(n − 1) − (n − 2) log(n − 2)(4.92)orHnn=Hn−1n − 1+ cn (4.93)for appropriately defined cn. Since cn = c < ∞, you have provedthat 1nH(Tn) converges to a constant. Thus, the expected number ofbits necessary to describe the random tree Tn grows linearly with n.4.6 Monotonicity of entropy per element. For a stationary stochasticprocess X1, X2, . . . , Xn, show that(a)H(X1, X2, . . . , Xn)n≤H(X1, X2, . . . , Xn−1)n − 1. (4.94)(b)H(X1, X2, . . . , Xn)n≥ H(Xn|Xn−1, . . . , X1). (4.95)
  • 111. 92 ENTROPY RATES OF A STOCHASTIC PROCESS4.7 Entropy rates of Markov chains(a) Find the entropy rate of the two-state Markov chain with tran-sition matrixP =1 − p01 p01p10 1 − p10.(b) What values of p01, p10 maximize the entropy rate?(c) Find the entropy rate of the two-state Markov chain with tran-sition matrixP =1 − p p1 0.(d) Find the maximum value of the entropy rate of the Markovchain of part (c). We expect that the maximizing value of pshould be less than 12, since the 0 state permits more informa-tion to be generated than the 1 state.(e) Let N(t) be the number of allowable state sequences of length tfor the Markov chain of part (c). Find N(t) and calculateH0 = limt→∞1tlog N(t).[Hint: Find a linear recurrence that expresses N(t) in termsof N(t − 1) and N(t − 2). Why is H0 an upper bound on theentropy rate of the Markov chain? Compare H0 with the max-imum entropy found in part (d).]4.8 Maximum entropy process. A discrete memoryless source has thealphabet {1, 2}, where the symbol 1 has duration 1 and the sym-bol 2 has duration 2. The probabilities of 1 and 2 are p1 and p2,respectively. Find the value of p1 that maximizes the source entropyper unit time H(X) = H(X)ET . What is the maximum value H(X)?4.9 Initial conditions. Show, for a Markov chain, thatH(X0|Xn) ≥ H(X0|Xn−1).Thus, initial conditions X0 become more difficult to recover as thefuture Xn unfolds.4.10 Pairwise independence. Let X1, X2, . . . , Xn−1 be i.i.d. randomvariables taking values in {0, 1}, with Pr{Xi = 1} = 12 . Let Xn = 1if n−1i=1 Xi is odd and Xn = 0 otherwise. Let n ≥ 3.
  • 112. PROBLEMS 93(a) Show that Xi and Xj are independent for i = j, i, j ∈ {1, 2,. . . , n}.(b) Find H(Xi, Xj ) for i = j.(c) Find H(X1, X2, . . . , Xn). Is this equal to nH(X1)?4.11 Stationary processes. Let . . . , X−1, X0, X1, . . . be a stationary(not necessarily Markov) stochastic process. Which of the follow-ing statements are true? Prove or provide a counterexample.(a) H(Xn|X0) = H(X−n|X0) .(b) H(Xn|X0) ≥ H(Xn−1|X0) .(c) H(Xn|X1, X2, . . . , Xn−1, Xn+1) is nonincreasing in n.(d) H(Xn|X1, . . . , Xn−1, Xn+1, . . . , X2n) is nonincreasing in n.4.12 Entropy rate of a dog looking for a bone. A dog walks on theintegers, possibly reversing direction at each step with probabilityp = 0.1. Let X0 = 0. The first step is equally likely to be positiveor negative. A typical walk might look like this:(X0, X1, . . .) = (0, −1, −2, −3, −4, −3, −2, −1, 0, 1, . . .).(a) Find H(X1, X2, . . . , Xn).(b) Find the entropy rate of the dog.(c) What is the expected number of steps that the dog takes beforereversing direction?4.13 The past has little to say about the future. For a stationary stochas-tic process X1, X2, . . . , Xn, . . . , show thatlimn→∞12nI (X1, X2, . . . , Xn; Xn+1, Xn+2, . . . , X2n) = 0. (4.96)Thus, the dependence between adjacent n-blocks of a stationaryprocess does not grow linearly with n.4.14 Functions of a stochastic process(a) Consider a stationary stochastic process X1, X2, . . . , Xn, andlet Y1, Y2, . . . , Yn be defined byYi = φ(Xi), i = 1, 2, . . . (4.97)for some function φ. Prove thatH(Y) ≤ H(X). (4.98)
  • 113. 94 ENTROPY RATES OF A STOCHASTIC PROCESS(b) What is the relationship between the entropy rates H(Z) andH(X) ifZi = ψ(Xi, Xi+1), i = 1, 2, . . . (4.99)for some function ψ?4.15 Entropy rate. Let {Xi} be a discrete stationary stochastic processwith entropy rate H(X). Show that1nH(Xn, . . . , X1 | X0, X−1, . . . , X−k) → H(X) (4.100)for k = 1, 2, . . ..4.16 Entropy rate of constrained sequences. In magnetic recording, themechanism of recording and reading the bits imposes constraintson the sequences of bits that can be recorded. For example, toensure proper synchronization, it is often necessary to limit thelength of runs of 0’s between two 1’s. Also, to reduce intersymbolinterference, it may be necessary to require at least one 0 betweenany two 1’s. We consider a simple example of such a constraint.Suppose that we are required to have at least one 0 and at mosttwo 0’s between any pair of 1’s in a sequences. Thus, sequenceslike 101001 and 0101001 are valid sequences, but 0110010 and0000101 are not. We wish to calculate the number of valid se-quences of length n.(a) Show that the set of constrained sequences is the same as theset of allowed paths on the following state diagram:(b) Let Xi(n) be the number of valid paths of length n ending atstate i. Argue that X(n) = [X1(n) X2(n) X3(n)]tsatisfies the
  • 114. PROBLEMS 95following recursion:X1(n)X2(n)X3(n) =0 1 11 0 00 1 0X1(n − 1)X2(n − 1)X3(n − 1), (4.101)with initial conditions X(1) = [1 1 0]t.(c) LetA =0 1 11 0 00 1 0. (4.102)Then we have by inductionX(n) = AX(n − 1) = A2X(n − 2) = · · · = An−1X(1).(4.103)Using the eigenvalue decomposition of A for the case of distincteigenvalues, we can write A = U−1U, where is the diag-onal matrix of eigenvalues. Then An−1= U−1 n−1U. Showthat we can writeX(n) = λn−11 Y1 + λn−12 Y2 + λn−13 Y3, (4.104)where Y1, Y2, Y3 do not depend on n. For large n, this sumis dominated by the largest term. Therefore, argue that for i =1, 2, 3, we have1nlog Xi(n) → log λ, (4.105)where λ is the largest (positive) eigenvalue. Thus, the numberof sequences of length n grows as λnfor large n. Calculate λfor the matrix A above. (The case when the eigenvalues arenot distinct can be handled in a similar manner.)(d) We now take a different approach. Consider a Markov chainwhose state diagram is the one given in part (a), but witharbitrary transition probabilities. Therefore, the probability tran-sition matrix of this Markov chain isP =0 1 0α 0 1 − α1 0 0. (4.106)Show that the stationary distribution of this Markov chain isµ =13 − α,13 − α,1 − α3 − α. (4.107)
  • 115. 96 ENTROPY RATES OF A STOCHASTIC PROCESS(e) Maximize the entropy rate of the Markov chain over choicesof α. What is the maximum entropy rate of the chain?(f) Compare the maximum entropy rate in part (e) with log λ inpart (c). Why are the two answers the same?4.17 Recurrence times are insensitive to distributions. Let X0, X1, X2,. . . be drawn i.i.d. ∼ p(x), x ∈ X = {1, 2, . . . , m}, and let N be thewaiting time to the next occurrence of X0. Thus N = minn{Xn =X0}.(a) Show that EN = m.(b) Show that E log N ≤ H(X).(c) (Optional) Prove part (a) for {Xi} stationary and ergodic.4.18 Stationary but not ergodic process. A bin has two biased coins,one with probability of heads p and the other with probability ofheads 1 − p. One of these coins is chosen at random (i.e., withprobability 12) and is then tossed n times. Let X denote the identityof the coin that is picked, and let Y1 and Y2 denote the results ofthe first two tosses.(a) Calculate I (Y1; Y2|X).(b) Calculate I (X; Y1, Y2).(c) Let H(Y) be the entropy rate of the Y process (the se-quence of coin tosses). Calculate H(Y). [Hint: Relate this tolim 1nH(X, Y1, Y2, . . . , Yn).]You can check the answer by considering the behavior as p → 12.4.19 Random walk on graph. Consider a random walk on the followinggraph:15432(a) Calculate the stationary distribution.
  • 116. PROBLEMS 97(b) What is the entropy rate?(c) Find the mutual information I (Xn+1; Xn) assuming that theprocess is stationary.4.20 Random walk on chessboard. Find the entropy rate of the Markovchain associated with a random walk of a king on the 3 × 3 chess-board1 2 34 5 67 8 9What about the entropy rate of rooks, bishops, and queens? Thereare two types of bishops.4.21 Maximal entropy graphs. Consider a random walk on a connectedgraph with four edges.(a) Which graph has the highest entropy rate?(b) Which graph has the lowest?4.22 Three-dimensional maze. A bird is lost in a 3 × 3 × 3 cubicalmaze. The bird flies from room to room going to adjoining roomswith equal probability through each of the walls. For example, thecorner rooms have three exits.(a) What is the stationary distribution?(b) What is the entropy rate of this random walk?4.23 Entropy rate. Let {Xi} be a stationary stochastic process withentropy rate H(X).(a) Argue that H(X) ≤ H(X1).(b) What are the conditions for equality?4.24 Entropy rates. Let {Xi} be a stationary process. Let Yi = (Xi,Xi+1). Let Zi = (X2i, X2i+1). Let Vi = X2i. Consider the entropyrates H(X), H(Y), H(Z), and H(V) of the processes {Xi},{Yi},{Zi}, and {Vi}. What is the inequality relationship ≤, =, or ≥between each of the pairs listed below?(a) H(X)<H(Y).(b) H(X)<H(Z).(c) H(X)<H(V).(d) H(Z)<H(X).4.25 Monotonicity(a) Show that I (X; Y1, Y2, . . . , Yn) is nondecreasing in n.
  • 117. 98 ENTROPY RATES OF A STOCHASTIC PROCESS(b) Under what conditions is the mutual information constant forall n?4.26 Transitions in Markov chains. Suppose that {Xi} forms an irre-ducible Markov chain with transition matrix P and stationary distri-bution µ. Form the associated “edge process” {Yi} by keeping trackonly of the transitions. Thus, the new process {Yi} takes values inX × X, and Yi = (Xi−1, Xi). For example,Xn= 3, 2, 8, 5, 7, . . .becomesYn= (∅, 3), (3, 2), (2, 8), (8, 5), (5, 7), . . . .Find the entropy rate of the edge process {Yi}.4.27 Entropy rate. Let {Xi} be a stationary {0, 1}-valued stochasticprocess obeyingXk+1 = Xk ⊕ Xk−1 ⊕ Zk+1,where {Zi} is Bernoulli(p)and ⊕ denotes mod 2 addition. What isthe entropy rate H(X)?4.28 Mixture of processes. Suppose that we observe one of twostochastic processes but don’t know which. What is the entropyrate? Specifically, let X11, X12, X13, . . . be a Bernoulli process withparameter p1, and let X21, X22, X23, . . . be Bernoulli(p2). Letθ =1 with probability 122 with probability 12and let Yi = Xθi, i = 1, 2, . . . , be the stochastic process observed.Thus, Y observes the process {X1i} or {X2i}. Eventually, Y willknow which.(a) Is {Yi} stationary?(b) Is {Yi} an i.i.d. process?(c) What is the entropy rate H of {Yi}?(d) Does−1nlog p(Y1, Y2, . . . Yn) −→ H?(e) Is there a code that achieves an expected per-symbol descriptionlength 1nELn −→ H?
• 118. PROBLEMS 99
Now let θ_i be Bern(1/2). Observe that
Z_i = X_{θ_i i}, i = 1, 2, . . . .
Thus, θ is not fixed for all time, as it was in the first part, but is chosen i.i.d. each time. Answer parts (a), (b), (c), (d), (e) for the process {Z_i}, labeling the answers (a′), (b′), (c′), (d′), (e′).
4.29 Waiting times. Let X be the waiting time for the first heads to appear in successive flips of a fair coin. For example, Pr{X = 3} = (1/2)^3. Let S_n be the waiting time for the nth head to appear. Thus,
S_0 = 0
S_{n+1} = S_n + X_{n+1},
where X_1, X_2, X_3, . . . are i.i.d. according to the distribution above.
(a) Is the process {S_n} stationary?
(b) Calculate H(S_1, S_2, . . . , S_n).
(c) Does the process {S_n} have an entropy rate? If so, what is it? If not, why not?
(d) What is the expected number of fair coin flips required to generate a random variable having the same distribution as S_n?
4.30 Markov chain transitions.
P = [P_ij] =
[ 1/2 1/4 1/4 ]
[ 1/4 1/2 1/4 ]
[ 1/4 1/4 1/2 ]
Let X_1 be distributed uniformly over the states {0, 1, 2}. Let {X_i}_1^∞ be a Markov chain with transition matrix P; thus, P(X_{n+1} = j | X_n = i) = P_ij, i, j ∈ {0, 1, 2}.
(a) Is {X_n} stationary?
(b) Find lim_{n→∞} (1/n) H(X_1, . . . , X_n).
Now consider the derived process Z_1, Z_2, . . . , Z_n, where
Z_1 = X_1
Z_i = X_i − X_{i−1} (mod 3), i = 2, . . . , n.
Thus, Z^n encodes the transitions, not the states.
(c) Find H(Z_1, Z_2, . . . , Z_n).
(d) Find H(Z_n) and H(X_n) for n ≥ 2.
• 119. 100 ENTROPY RATES OF A STOCHASTIC PROCESS
(e) Find H(Z_n | Z_{n−1}) for n ≥ 2.
(f) Are Z_{n−1} and Z_n independent for n ≥ 2?
4.31 Markov. Let {X_i} ∼ Bernoulli(p). Consider the associated Markov chain {Y_i}_{i=1}^n, where Y_i = (the number of 1's in the current run of 1's). For example, if X^n = 101110 . . . , we have Y^n = 101230 . . . .
(a) Find the entropy rate of X^n.
(b) Find the entropy rate of Y^n.
4.32 Time symmetry. Let {X_n} be a stationary Markov process. We condition on (X_0, X_1) and look into the past and future. For what index k is
H(X_{−n} | X_0, X_1) = H(X_k | X_0, X_1)?
Give the argument.
4.33 Chain inequality. Let X_1 → X_2 → X_3 → X_4 form a Markov chain. Show that
I(X_1; X_3) + I(X_2; X_4) ≤ I(X_1; X_4) + I(X_2; X_3).   (4.108)
4.34 Broadcast channel. Let X → Y → (Z, W) form a Markov chain [i.e., p(x, y, z, w) = p(x)p(y|x)p(z, w|y) for all x, y, z, w]. Show that
I(X; Z) + I(X; W) ≤ I(X; Y) + I(Z; W).   (4.109)
4.35 Concavity of second law. Let {X_n}_{n=−∞}^∞ be a stationary Markov process. Show that H(X_n | X_0) is concave in n. Specifically, show that
H(X_n | X_0) − H(X_{n−1} | X_0) − (H(X_{n−1} | X_0) − H(X_{n−2} | X_0)) = −I(X_1; X_{n−1} | X_0, X_n) ≤ 0.   (4.110)
Thus, the second difference is negative, establishing that H(X_n | X_0) is a concave function of n.
HISTORICAL NOTES
The entropy rate of a stochastic process was introduced by Shannon [472], who also explored some of the connections between the entropy rate of the process and the number of possible sequences generated by the process. Since Shannon, there have been a number of results extending the basic
  • 120. HISTORICAL NOTES 101theorems of information theory to general stochastic processes. The AEPfor a general stationary stochastic process is proved in Chapter 16.Hidden Markov models are used for a number of applications, suchas speech recognition [432]. The calculation of the entropy rate for con-strained sequences was introduced by Shannon [472]. These sequencesare used for coding for magnetic and optical channels [288].
  • 121. CHAPTER 5DATA COMPRESSIONWe now put content in the definition of entropy by establishing the funda-mental limit for the compression of information. Data compression can beachieved by assigning short descriptions to the most frequent outcomesof the data source, and necessarily longer descriptions to the less fre-quent outcomes. For example, in Morse code, the most frequent symbolis represented by a single dot. In this chapter we find the shortest averagedescription length of a random variable.We first define the notion of an instantaneous code and then prove theimportant Kraft inequality, which asserts that the exponentiated codewordlength assignments must look like a probability mass function. Elemen-tary calculus then shows that the expected description length must begreater than or equal to the entropy, the first main result. Then Shan-non’s simple construction shows that the expected description length canachieve this bound asymptotically for repeated descriptions. This estab-lishes the entropy as a natural measure of efficient description length.The famous Huffman coding procedure for finding minimum expecteddescription length assignments is provided. Finally, we show that Huff-man codes are competitively optimal and that it requires roughly H faircoin flips to generate a sample of a random variable having entropy H.Thus, the entropy is the data compression limit as well as the number ofbits needed in random number generation, and codes achieving H turnout to be optimal from many points of view.5.1 EXAMPLES OF CODESDefinition A source code C for a random variable X is a mapping fromX, the range of X, to D∗, the set of finite-length strings of symbols froma D-ary alphabet. Let C(x) denote the codeword corresponding to x andlet l(x) denote the length of C(x).Elements of Information Theory, Second Edition, By Thomas M. Cover and Joy A. ThomasCopyright  2006 John Wiley & Sons, Inc.103
• 122. 104 DATA COMPRESSION
For example, C(red) = 00, C(blue) = 11 is a source code for X = {red, blue} with alphabet D = {0, 1}.
Definition The expected length L(C) of a source code C(x) for a random variable X with probability mass function p(x) is given by
L(C) = Σ_{x∈X} p(x) l(x),   (5.1)
where l(x) is the length of the codeword associated with x.
Without loss of generality, we can assume that the D-ary alphabet is D = {0, 1, . . . , D − 1}.
Some examples of codes follow.
Example 5.1.1 Let X be a random variable with the following distribution and codeword assignment:
Pr(X = 1) = 1/2, codeword C(1) = 0
Pr(X = 2) = 1/4, codeword C(2) = 10
Pr(X = 3) = 1/8, codeword C(3) = 110
Pr(X = 4) = 1/8, codeword C(4) = 111.   (5.2)
The entropy H(X) of X is 1.75 bits, and the expected length L(C) = El(X) of this code is also 1.75 bits. Here we have a code that has the same average length as the entropy. We note that any sequence of bits can be uniquely decoded into a sequence of symbols of X. For example, the bit string 0110111100110 is decoded as 134213.
Example 5.1.2 Consider another simple example of a code for a random variable:
Pr(X = 1) = 1/3, codeword C(1) = 0
Pr(X = 2) = 1/3, codeword C(2) = 10
Pr(X = 3) = 1/3, codeword C(3) = 11.   (5.3)
Just as in Example 5.1.1, the code is uniquely decodable. However, in this case the entropy is log 3 = 1.58 bits and the average length of the encoding is 1.66 bits. Here El(X) > H(X).
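The numbers in Examples 5.1.1 and 5.1.2 are easy to verify. The short sketch below is ours, not part of the text (the function names are arbitrary); it computes the entropy H(X) and the expected length L(C) for both codes. The match in the first case reflects the fact that every probability there is a power of 1/2.

    from math import log2

    def entropy(p):
        """Entropy in bits of a pmf given as a list of probabilities."""
        return -sum(pi * log2(pi) for pi in p if pi > 0)

    def expected_length(p, code):
        """Expected codeword length: sum over x of p(x) * l(x)."""
        return sum(pi * len(c) for pi, c in zip(p, code))

    # Example 5.1.1: L(C) equals H(X) = 1.75 bits
    p1, code1 = [1/2, 1/4, 1/8, 1/8], ["0", "10", "110", "111"]
    print(entropy(p1), expected_length(p1, code1))   # 1.75 1.75

    # Example 5.1.2: L(C) exceeds H(X)
    p2, code2 = [1/3, 1/3, 1/3], ["0", "10", "11"]
    print(entropy(p2), expected_length(p2, code2))   # ~1.585 ~1.667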
• 123. 5.1 EXAMPLES OF CODES 105
Example 5.1.3 (Morse code) The Morse code is a reasonably efficient code for the English alphabet using an alphabet of four symbols: a dot, a dash, a letter space, and a word space. Short sequences represent frequent letters (e.g., a single dot represents E) and long sequences represent infrequent letters (e.g., Q is represented by "dash, dash, dot, dash"). This is not the optimal representation for the alphabet in four symbols; in fact, many possible codewords are not utilized because the codewords for letters do not contain spaces except for a letter space at the end of every codeword, and no space can follow another space. It is an interesting problem to calculate the number of sequences that can be constructed under these constraints. The problem was solved by Shannon in his original 1948 paper. The problem is also related to coding for magnetic recording, where long strings of 0's are prohibited [5], [370].
We now define increasingly more stringent conditions on codes. Let x^n denote (x_1, x_2, . . . , x_n).
Definition A code is said to be nonsingular if every element of the range of X maps into a different string in D*; that is,
x ≠ x′ ⇒ C(x) ≠ C(x′).   (5.4)
Nonsingularity suffices for an unambiguous description of a single value of X. But we usually wish to send a sequence of values of X. In such cases we can ensure decodability by adding a special symbol (a "comma") between any two codewords. But this is an inefficient use of the special symbol; we can do better by developing the idea of self-punctuating or instantaneous codes. Motivated by the necessity to send sequences of symbols X, we define the extension of a code as follows:
Definition The extension C* of a code C is the mapping from finite-length strings of X to finite-length strings of D, defined by
C(x_1 x_2 · · · x_n) = C(x_1) C(x_2) · · · C(x_n),   (5.5)
where C(x_1) C(x_2) · · · C(x_n) indicates concatenation of the corresponding codewords.
Example 5.1.4 If C(x_1) = 00 and C(x_2) = 11, then C(x_1 x_2) = 0011.
Definition A code is called uniquely decodable if its extension is nonsingular.
In other words, any encoded string in a uniquely decodable code has only one possible source string producing it. However, one may have to look at the entire string to determine even the first symbol in the corresponding source string.
• 124. 106 DATA COMPRESSION
Definition A code is called a prefix code or an instantaneous code if no codeword is a prefix of any other codeword.
An instantaneous code can be decoded without reference to future codewords since the end of a codeword is immediately recognizable. Hence, for an instantaneous code, the symbol x_i can be decoded as soon as we come to the end of the codeword corresponding to it. We need not wait to see the codewords that come later. An instantaneous code is a self-punctuating code; we can look down the sequence of code symbols and add the commas to separate the codewords without looking at later symbols. For example, the binary string 01011111010 produced by the code of Example 5.1.1 is parsed as 0,10,111,110,10.
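Because no codeword is a prefix of any other, a decoder can emit a symbol the moment the bits read so far match a codeword. A minimal sketch of such a greedy decoder (ours, not the book's) reproduces the parse above for the code of Example 5.1.1.

    def decode_prefix_code(bits, code):
        """Decode a bit string with an instantaneous (prefix) code.

        Since no codeword is a prefix of another, each codeword can be
        emitted as soon as it is recognized, with no look-ahead.
        """
        inverse = {w: x for x, w in code.items()}
        symbols, current = [], ""
        for b in bits:
            current += b
            if current in inverse:          # end of a codeword reached
                symbols.append(inverse[current])
                current = ""
        if current:
            raise ValueError("leftover bits do not form a codeword")
        return symbols

    code = {1: "0", 2: "10", 3: "110", 4: "111"}    # Example 5.1.1
    print(decode_prefix_code("01011111010", code))  # [1, 2, 4, 3, 2]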
• 125. 5.2 KRAFT INEQUALITY 107
The nesting of these definitions is shown in Figure 5.1.
[FIGURE 5.1. Classes of codes: all codes ⊃ nonsingular codes ⊃ uniquely decodable codes ⊃ instantaneous codes.]
To illustrate the differences between the various kinds of codes, consider the examples of codeword assignments C(x) to x ∈ X in Table 5.1.
TABLE 5.1 Classes of Codes
X | Singular | Nonsingular, But Not Uniquely Decodable | Uniquely Decodable, But Not Instantaneous | Instantaneous
1 | 0 | 0   | 10  | 0
2 | 0 | 010 | 00  | 10
3 | 0 | 01  | 11  | 110
4 | 0 | 10  | 110 | 111
For the nonsingular code, the code string 010 has three possible source sequences: 2 or 14 or 31, and hence the code is not uniquely decodable. The uniquely decodable code is not prefix-free and hence is not instantaneous. To see that it is uniquely decodable, take any code string and start from the beginning. If the first two bits are 00 or 10, they can be decoded immediately. If the first two bits are 11, we must look at the following bits. If the next bit is a 1, the first source symbol is a 3. If the length of the string of 0's immediately following the 11 is odd, the first codeword must be 110 and the first source symbol must be 4; if the length of the string of 0's is even, the first source symbol is a 3. By repeating this argument, we can see that this code is uniquely decodable. Sardinas and Patterson [455] have devised a finite test for unique decodability, which involves forming sets of possible suffixes to the codewords and eliminating them systematically. The test is described more fully in Problem 5.27. The fact that the last code in Table 5.1 is instantaneous is obvious since no codeword is a prefix of any other.
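As a rough illustration of the Sardinas–Patterson idea (our sketch, not the book's; the book develops the test in the problem cited above), the following code forms sets of dangling suffixes starting from codewords that are prefixes of other codewords, and declares the code uniquely decodable exactly when no codeword ever appears as a dangling suffix.

    def is_uniquely_decodable(codewords):
        """Sardinas-Patterson test (sketch).

        Returns True iff no dangling suffix generated from the
        codeword set is itself a codeword.
        """
        C = set(codewords)

        def suffixes(A, B):
            """Dangling suffixes: b = a + s with a in A, b in B, a a proper prefix of b."""
            return {b[len(a):] for a in A for b in B
                    if a != b and b.startswith(a)}

        S = suffixes(C, C)                  # suffixes arising within the code itself
        seen = set()
        while S and not (S & C):            # a codeword as dangling suffix => not UD
            seen |= S
            S = (suffixes(C, S) | suffixes(S, C)) - seen
        return not (S & C)

    print(is_uniquely_decodable(["10", "00", "11", "110"]))  # True  (third code in Table 5.1)
    print(is_uniquely_decodable(["0", "010", "01", "10"]))   # False (nonsingular code; 010 has several parses)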
• 126. 108 DATA COMPRESSION
5.2 KRAFT INEQUALITY
We wish to construct instantaneous codes of minimum expected length to describe a given source. It is clear that we cannot assign short codewords to all source symbols and still be prefix-free. The set of codeword lengths possible for instantaneous codes is limited by the following inequality.
Theorem 5.2.1 (Kraft inequality) For any instantaneous code (prefix code) over an alphabet of size D, the codeword lengths l_1, l_2, . . . , l_m must satisfy the inequality
Σ_i D^{-l_i} ≤ 1.   (5.6)
Conversely, given a set of codeword lengths that satisfy this inequality, there exists an instantaneous code with these word lengths.
Proof: Consider a D-ary tree in which each node has D children. Let the branches of the tree represent the symbols of the codeword. For example, the D branches arising from the root node represent the D possible values of the first symbol of the codeword. Then each codeword is represented by a leaf on the tree. The path from the root traces out the symbols of the codeword. A binary example of such a tree is shown in Figure 5.2.
[FIGURE 5.2. Code tree for the Kraft inequality: a binary tree grown from the root, with leaves at the codewords 0, 10, 110, 111.]
The prefix condition on the codewords implies that no codeword is an ancestor of any other codeword on the tree. Hence, each codeword eliminates its descendants as possible codewords.
Let l_max be the length of the longest codeword of the set of codewords. Consider all nodes of the tree at level l_max. Some of them are codewords, some are descendants of codewords, and some are neither. A codeword at level l_i has D^{l_max − l_i} descendants at level l_max. Each of these descendant sets must be disjoint. Also, the total number of nodes in these sets must be less than or equal to D^{l_max}. Hence, summing over all the codewords, we have
Σ_i D^{l_max − l_i} ≤ D^{l_max}   (5.7)
or
Σ_i D^{-l_i} ≤ 1,   (5.8)
which is the Kraft inequality.
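A quick numerical check (ours, not from the text): the codeword lengths of the instantaneous codes in Section 5.1 satisfy the inequality, whereas, for example, four binary codewords of lengths 1, 1, 2, 2 cannot form a prefix code.

    def kraft_sum(lengths, D=2):
        """Sum of D^{-l_i}; a D-ary prefix code with these lengths exists iff the sum is <= 1."""
        return sum(D ** (-l) for l in lengths)

    print(kraft_sum([1, 2, 3, 3]))   # 1.0  (lengths of 0, 10, 110, 111 in Example 5.1.1)
    print(kraft_sum([1, 2, 2]))      # 1.0  (Example 5.1.2)
    print(kraft_sum([1, 1, 2, 2]))   # 1.5 > 1, so no binary prefix code has these lengths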
• 127. 5.2 KRAFT INEQUALITY 109
Conversely, given any set of codeword lengths l_1, l_2, . . . , l_m that satisfy the Kraft inequality, we can always construct a tree like the one in Figure 5.2. Label the first node (lexicographically) of depth l_1 as codeword 1, and remove its descendants from the tree. Then label the first remaining node of depth l_2 as codeword 2, and so on. Proceeding this way, we construct a prefix code with the specified l_1, l_2, . . . , l_m.
We now show that an infinite prefix code also satisfies the Kraft inequality.
Theorem 5.2.2 (Extended Kraft Inequality) For any countably infinite set of codewords that form a prefix code, the codeword lengths satisfy the extended Kraft inequality,
Σ_{i=1}^∞ D^{-l_i} ≤ 1.   (5.9)
Conversely, given any l_1, l_2, . . . satisfying the extended Kraft inequality, we can construct a prefix code with these codeword lengths.
Proof: Let the D-ary alphabet be {0, 1, . . . , D − 1}. Consider the ith codeword y_1 y_2 · · · y_{l_i}. Let 0.y_1 y_2 · · · y_{l_i} be the real number given by the D-ary expansion
0.y_1 y_2 · · · y_{l_i} = Σ_{j=1}^{l_i} y_j D^{-j}.   (5.10)
This codeword corresponds to the interval
[ 0.y_1 y_2 · · · y_{l_i} , 0.y_1 y_2 · · · y_{l_i} + 1/D^{l_i} ),   (5.11)
the set of all real numbers whose D-ary expansion begins with 0.y_1 y_2 · · · y_{l_i}. This is a subinterval of the unit interval [0, 1]. By the prefix condition, these intervals are disjoint. Hence, the sum of their lengths has to be less than or equal to 1. This proves that
Σ_{i=1}^∞ D^{-l_i} ≤ 1.   (5.12)
Just as in the finite case, we can reverse the proof to construct the code for a given l_1, l_2, . . . that satisfies the Kraft inequality. First, reorder the indexing so that l_1 ≤ l_2 ≤ · · · .
• 128. 110 DATA COMPRESSION
Then simply assign the intervals in order from the low end of the unit interval. For example, if we wish to construct a binary code with l_1 = 1, l_2 = 2, . . . , we assign the intervals [0, 1/2), [1/2, 3/4), . . . to the symbols, with corresponding codewords 0, 10, . . . .
In Section 5.5 we show that the lengths of codewords for a uniquely decodable code also satisfy the Kraft inequality. Before we do that, we consider the problem of finding the shortest instantaneous code.
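The converse construction can be carried out mechanically. The sketch below is ours, not the book's; it follows the interval method just described: sort the lengths, walk up from the low end of the unit interval, and read off the first l digits of each interval's left endpoint as the codeword.

    def prefix_code_from_lengths(lengths, D=2):
        """Build a D-ary prefix code with the given codeword lengths
        (possible iff the Kraft sum is at most 1), assigning intervals
        from the low end of [0, 1)."""
        digits = "0123456789"[:D]
        assert sum(D ** (-l) for l in lengths) <= 1, "Kraft inequality violated"
        codewords = []
        left = 0          # left endpoint of the next free interval, scaled by D**cur_len
        cur_len = 0
        for l in sorted(lengths):
            left *= D ** (l - cur_len)    # re-express the endpoint to precision l
            cur_len = l
            w, x = "", left               # write `left` as exactly l base-D digits
            for _ in range(l):
                x, r = divmod(x, D)
                w = digits[r] + w
            codewords.append(w)
            left += 1                     # skip past this codeword's interval of width D**(-l)
        return codewords

    print(prefix_code_from_lengths([1, 2, 3, 3]))  # ['0', '10', '110', '111']  (Example 5.1.1)
    print(prefix_code_from_lengths([2, 1, 2]))     # ['0', '10', '11']          (Example 5.1.2)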
• 129. 5.3 OPTIMAL CODES 111
5.3 OPTIMAL CODES
In Section 5.2 we proved that any codeword set that satisfies the prefix condition has to satisfy the Kraft inequality and that the Kraft inequality is a sufficient condition for the existence of a codeword set with the specified set of codeword lengths. We now consider the problem of finding the prefix code with the minimum expected length. From the results of Section 5.2, this is equivalent to finding the set of lengths l_1, l_2, . . . , l_m satisfying the Kraft inequality and whose expected length L = Σ p_i l_i is less than the expected length of any other prefix code. This is a standard optimization problem: Minimize
L = Σ p_i l_i   (5.13)
over all integers l_1, l_2, . . . , l_m satisfying
Σ D^{-l_i} ≤ 1.   (5.14)
A simple analysis by calculus suggests the form of the minimizing l*_i. We neglect the integer constraint on l_i and assume equality in the constraint. Hence, we can write the constrained minimization using Lagrange multipliers as the minimization of
J = Σ p_i l_i + λ Σ D^{-l_i}.   (5.15)
Differentiating with respect to l_i, we obtain
∂J/∂l_i = p_i − λ D^{-l_i} log_e D.   (5.16)
Setting the derivative to 0, we obtain
D^{-l_i} = p_i / (λ log_e D).   (5.17)
Substituting this in the constraint to find λ, we find λ = 1/log_e D, and hence
p_i = D^{-l_i},   (5.18)
yielding optimal code lengths,
l*_i = − log_D p_i.   (5.19)
This noninteger choice of codeword lengths yields expected codeword length
L* = Σ p_i l*_i = − Σ p_i log_D p_i = H_D(X).   (5.20)
But since the l_i must be integers, we will not always be able to set the codeword lengths as in (5.19). Instead, we should choose a set of codeword lengths l_i "close" to the optimal set. Rather than demonstrate by calculus that l*_i = − log_D p_i is a global minimum, we verify optimality directly in the proof of the following theorem.
Theorem 5.3.1 The expected length L of any instantaneous D-ary code for a random variable X is greater than or equal to the entropy H_D(X); that is,
L ≥ H_D(X),   (5.21)
with equality if and only if D^{-l_i} = p_i.
Proof: We can write the difference between the expected length and the entropy as
L − H_D(X) = Σ p_i l_i − Σ p_i log_D (1/p_i)   (5.22)
= − Σ p_i log_D D^{-l_i} + Σ p_i log_D p_i.   (5.23)
Letting r_i = D^{-l_i} / Σ_j D^{-l_j} and c = Σ_i D^{-l_i}, we obtain
L − H = Σ p_i log_D (p_i / r_i) − log_D c   (5.24)
= D(p||r) + log_D (1/c)   (5.25)
≥ 0   (5.26)
• 130. 112 DATA COMPRESSION
by the nonnegativity of relative entropy and the fact (Kraft inequality) that c ≤ 1. Hence, L ≥ H with equality if and only if p_i = D^{-l_i} (i.e., if and only if − log_D p_i is an integer for all i).
Definition A probability distribution is called D-adic if each of the probabilities is equal to D^{-n} for some n. Thus, we have equality in the theorem if and only if the distribution of X is D-adic.
The preceding proof also indicates a procedure for finding an optimal code: Find the D-adic distribution that is closest (in the relative entropy sense) to the distribution of X. This distribution provides the set of codeword lengths. Construct the code by choosing the first available node as in the proof of the Kraft inequality. We then have an optimal code for X.
However, this procedure is not easy, since the search for the closest D-adic distribution is not obvious. In the next section we give a good suboptimal procedure (Shannon–Fano coding). In Section 5.6 we describe a simple procedure (Huffman coding) for actually finding the optimal code.
5.4 BOUNDS ON THE OPTIMAL CODE LENGTH
We now demonstrate a code that achieves an expected description length L within 1 bit of the lower bound; that is,
H(X) ≤ L < H(X) + 1.   (5.27)
Recall the setup of Section 5.3: We wish to minimize L = Σ p_i l_i subject to the constraint that l_1, l_2, . . . , l_m are integers and Σ D^{-l_i} ≤ 1. We proved that the optimal codeword lengths can be found by finding the D-adic probability distribution closest to the distribution of X in relative entropy, that is, by finding the D-adic r (r_i = D^{-l_i} / Σ_j D^{-l_j}) minimizing
L − H_D = D(p||r) − log_D (Σ_i D^{-l_i}) ≥ 0.   (5.28)
The choice of word lengths l_i = log_D (1/p_i) yields L = H. Since log_D (1/p_i) may not equal an integer, we round it up to give integer word-length assignments,
l_i = ⌈log_D (1/p_i)⌉,   (5.29)
• 131. 5.4 BOUNDS ON THE OPTIMAL CODE LENGTH 113
where ⌈x⌉ is the smallest integer ≥ x. These lengths satisfy the Kraft inequality since
Σ D^{-⌈log_D (1/p_i)⌉} ≤ Σ D^{-log_D (1/p_i)} = Σ p_i = 1.   (5.30)
This choice of codeword lengths satisfies
log_D (1/p_i) ≤ l_i < log_D (1/p_i) + 1.   (5.31)
Multiplying by p_i and summing over i, we obtain
H_D(X) ≤ L < H_D(X) + 1.   (5.32)
Since an optimal code can only be better than this code, we have the following theorem.
Theorem 5.4.1 Let l*_1, l*_2, . . . , l*_m be optimal codeword lengths for a source distribution p and a D-ary alphabet, and let L* be the associated expected length of an optimal code (L* = Σ p_i l*_i). Then
H_D(X) ≤ L* < H_D(X) + 1.   (5.33)
Proof: Let l_i = ⌈log_D (1/p_i)⌉. Then l_i satisfies the Kraft inequality and from (5.32) we have
H_D(X) ≤ L = Σ p_i l_i < H_D(X) + 1.   (5.34)
But since L*, the expected length of the optimal code, is less than L = Σ p_i l_i, and since L* ≥ H_D from Theorem 5.3.1, we have the theorem.
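The rounded-up length assignment of (5.29) is easy to experiment with. The sketch below is ours (the example distributions are arbitrary); it computes l_i = ⌈log(1/p_i)⌉ for a binary alphabet and checks that the expected length falls in the interval [H(X), H(X) + 1).

    from math import ceil, log2

    def shannon_code_lengths(p):
        """Codeword lengths l_i = ceil(log2(1/p_i)) for a binary alphabet."""
        return [ceil(log2(1 / pi)) for pi in p]

    def report(p):
        H = -sum(pi * log2(pi) for pi in p)
        lengths = shannon_code_lengths(p)
        L = sum(pi * li for pi, li in zip(p, lengths))
        kraft = sum(2 ** (-l) for l in lengths)
        print(f"H = {H:.3f}, L = {L:.3f}, Kraft sum = {kraft:.3f}")
        assert kraft <= 1 and H <= L < H + 1

    report([0.5, 0.25, 0.125, 0.125])   # dyadic distribution: L = H = 1.75
    report([0.45, 0.25, 0.2, 0.1])      # L exceeds H, but by less than 1 bit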
• 132. 114 DATA COMPRESSION
In Theorem 5.4.1 there is an overhead that is at most 1 bit, due to the fact that log (1/p_i) is not always an integer. We can reduce the overhead per symbol by spreading it out over many symbols. With this in mind, let us consider a system in which we send a sequence of n symbols from X. The symbols are assumed to be drawn i.i.d. according to p(x). We can consider these n symbols to be a supersymbol from the alphabet X^n.
Define L_n to be the expected codeword length per input symbol, that is, if l(x_1, x_2, . . . , x_n) is the length of the binary codeword associated with (x_1, x_2, . . . , x_n) (for the rest of this section, we assume that D = 2, for simplicity), then
L_n = (1/n) Σ p(x_1, x_2, . . . , x_n) l(x_1, x_2, . . . , x_n) = (1/n) E l(X_1, X_2, . . . , X_n).   (5.35)
We can now apply the bounds derived above to the code:
H(X_1, X_2, . . . , X_n) ≤ E l(X_1, X_2, . . . , X_n) < H(X_1, X_2, . . . , X_n) + 1.   (5.36)
Since X_1, X_2, . . . , X_n are i.i.d., H(X_1, X_2, . . . , X_n) = Σ H(X_i) = nH(X). Dividing (5.36) by n, we obtain
H(X) ≤ L_n < H(X) + 1/n.   (5.37)
Hence, by using large block lengths we can achieve an expected codelength per symbol arbitrarily close to the entropy.
We can use the same argument for a sequence of symbols from a stochastic process that is not necessarily i.i.d. In this case, we still have the bound
H(X_1, X_2, . . . , X_n) ≤ E l(X_1, X_2, . . . , X_n) < H(X_1, X_2, . . . , X_n) + 1.   (5.38)
Dividing by n again and defining L_n to be the expected description length per symbol, we obtain
H(X_1, X_2, . . . , X_n)/n ≤ L_n < H(X_1, X_2, . . . , X_n)/n + 1/n.   (5.39)
If the stochastic process is stationary, then H(X_1, X_2, . . . , X_n)/n → H(X), and the expected description length tends to the entropy rate as n → ∞. Thus, we have the following theorem:
Theorem 5.4.2 The minimum expected codeword length per symbol satisfies
H(X_1, X_2, . . . , X_n)/n ≤ L*_n < H(X_1, X_2, . . . , X_n)/n + 1/n.   (5.40)
Moreover, if X_1, X_2, . . . , X_n is a stationary stochastic process,
L*_n → H(X),   (5.41)
where H(X) is the entropy rate of the process.
• 133. 5.5 KRAFT INEQUALITY FOR UNIQUELY DECODABLE CODES 115
This theorem provides another justification for the definition of entropy rate: it is the expected number of bits per symbol required to describe the process.
Finally, we ask what happens to the expected description length if the code is designed for the wrong distribution. For example, the wrong distribution may be the best estimate that we can make of the unknown true distribution. We consider the Shannon code assignment l(x) = ⌈log (1/q(x))⌉ designed for the probability mass function q(x). Suppose that the true probability mass function is p(x). Thus, we will not achieve expected length L ≈ H(p) = − Σ p(x) log p(x). We now show that the increase in expected description length due to the incorrect distribution is the relative entropy D(p||q). Thus, D(p||q) has a concrete interpretation as the increase in descriptive complexity due to incorrect information.
Theorem 5.4.3 (Wrong code) The expected length under p(x) of the code assignment l(x) = ⌈log (1/q(x))⌉ satisfies
H(p) + D(p||q) ≤ E_p l(X) < H(p) + D(p||q) + 1.   (5.42)
Proof: The expected codelength is
E l(X) = Σ_x p(x) ⌈log (1/q(x))⌉   (5.43)
< Σ_x p(x) ( log (1/q(x)) + 1 )   (5.44)
= Σ_x p(x) log ( (p(x)/q(x)) · (1/p(x)) ) + 1   (5.45)
= Σ_x p(x) log (p(x)/q(x)) + Σ_x p(x) log (1/p(x)) + 1   (5.46)
= D(p||q) + H(p) + 1.   (5.47)
The lower bound can be derived similarly.
Thus, believing that the distribution is q(x) when the true distribution is p(x) incurs a penalty of D(p||q) in the average description length.
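Theorem 5.4.3 can also be checked numerically. In the sketch below (ours; p and q are arbitrary example distributions), a Shannon code is designed for the wrong pmf q and its expected length is evaluated under the true pmf p, then compared with H(p) + D(p||q).

    from math import ceil, log2

    def entropy(p):
        return -sum(pi * log2(pi) for pi in p if pi > 0)

    def kl_divergence(p, q):
        return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    p = [0.5, 0.25, 0.125, 0.125]    # true distribution
    q = [0.25, 0.25, 0.25, 0.25]     # assumed (wrong) distribution

    lengths = [ceil(log2(1 / qi)) for qi in q]        # Shannon code designed for q
    L = sum(pi * li for pi, li in zip(p, lengths))    # expected length under p

    lower = entropy(p) + kl_divergence(p, q)
    print(L, lower)                  # 2.0 and 2.0 here (q is dyadic, so rounding costs nothing)
    assert lower <= L < lower + 1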
• 134. 116 DATA COMPRESSION
5.5 KRAFT INEQUALITY FOR UNIQUELY DECODABLE CODES
We have proved that any instantaneous code must satisfy the Kraft inequality. The class of uniquely decodable codes is larger than the class of instantaneous codes, so one expects to achieve a lower expected codeword length if L is minimized over all uniquely decodable codes. In this section we prove that the class of uniquely decodable codes does not offer any further possibilities for the set of codeword lengths than do instantaneous codes. We now give Karush's elegant proof of the following theorem.
Theorem 5.5.1 (McMillan) The codeword lengths of any uniquely decodable D-ary code must satisfy the Kraft inequality
Σ D^{-l_i} ≤ 1.   (5.48)
Conversely, given a set of codeword lengths that satisfy this inequality, it is possible to construct a uniquely decodable code with these codeword lengths.
Proof: Consider C^k, the kth extension of the code (i.e., the code formed by the concatenation of k repetitions of the given uniquely decodable code C). By the definition of unique decodability, the kth extension of the code is nonsingular. Since there are only D^n different D-ary strings of length n, unique decodability implies that the number of code sequences of length n in the kth extension of the code must be no greater than D^n. We now use this observation to prove the Kraft inequality.
Let the codeword lengths of the symbols x ∈ X be denoted by l(x). For the extension code, the length of the code sequence is
l(x_1, x_2, . . . , x_k) = Σ_{i=1}^k l(x_i).   (5.49)
The inequality that we wish to prove is
Σ_{x∈X} D^{-l(x)} ≤ 1.   (5.50)
The trick is to consider the kth power of this quantity. Thus,
( Σ_{x∈X} D^{-l(x)} )^k = Σ_{x_1∈X} Σ_{x_2∈X} · · · Σ_{x_k∈X} D^{-l(x_1)} D^{-l(x_2)} · · · D^{-l(x_k)}   (5.51)
= Σ_{x_1, x_2, . . . , x_k ∈ X^k} D^{-l(x_1)} D^{-l(x_2)} · · · D^{-l(x_k)}   (5.52)
= Σ_{x^k ∈ X^k} D^{-l(x^k)},   (5.53)
• 135. 5.5 KRAFT INEQUALITY FOR UNIQUELY DECODABLE CODES 117
by (5.49). We now gather the terms by word lengths to obtain
Σ_{x^k ∈ X^k} D^{-l(x^k)} = Σ_{m=1}^{k·l_max} a(m) D^{-m},   (5.54)
where l_max is the maximum codeword length and a(m) is the number of source sequences x^k mapping into codewords of length m. But the code is uniquely decodable, so there is at most one sequence mapping into each code m-sequence and there are at most D^m code m-sequences. Thus, a(m) ≤ D^m, and we have
( Σ_{x∈X} D^{-l(x)} )^k = Σ_{m=1}^{k·l_max} a(m) D^{-m}   (5.55)