Probabilistic Methods Of Signal And System Analysis, 3rd Edition


Probabilistic Methods of Signal and System Analysis, 3/e stresses the engineering applications of probability theory, presenting the material at a level and in a manner ideally suited to engineering students at the junior or senior level. It is also useful as a review for graduate students and practicing engineers.

Thoroughly revised and updated, this third edition incorporates increased use of the computer in both text examples and selected problems. It utilizes MATLAB as a computational tool and includes new sections relating to Bernoulli trials, correlation of data sets, smoothing of data, computer computation of correlation functions and spectral densities, and computer simulation of systems. All computer examples can be run using the Student Version of MATLAB. Almost all of the examples and many of the problems have been modified or changed entirely, and a number of new problems have been added. A separate appendix discusses and illustrates the application of computers to signal and system analysis.

Probabilistic Methods of Signal and System Analysis
THIRD EDITION
George R. Cooper
Clare D. McGillem
THE OXFORD SERIES IN ELECTRICAL AND COMPUTER ENGINEERING

SERIES EDITORS
Adel S. Sedra, Series Editor, Electrical Engineering
Michael R. Lightner, Series Editor, Computer Engineering

SERIES TITLES
Allen and Holberg, CMOS Analog Circuit Design
Bobrow, Elementary Linear Circuit Analysis, 2nd Ed.
Bobrow, Fundamentals of Electrical Engineering, 2nd Ed.
Campbell, The Science and Engineering of Microelectronic Fabrication
Chen, Analog & Digital Control System Design
Chen, Linear System Theory and Design, 3rd Ed.
Chen, System and Signal Analysis, 2nd Ed.
Comer, Digital Logic and State Machine Design, 3rd Ed.
Cooper and McGillem, Probabilistic Methods of Signal and System Analysis, 3rd Ed.
Franco, Electric Circuits Fundamentals
Fortney, Principles of Electronics: Analog & Digital
Granzow, Digital Transmission Lines
Guru and Hiziroglu, Electric Machinery & Transformers, 2nd Ed.
Hoole and Hoole, A Modern Short Course in Engineering Electromagnetics
Jones, Introduction to Optical Fiber Communication Systems
Krein, Elements of Power Electronics
Kuo, Digital Control Systems, 3rd Ed.
Lathi, Modern Digital and Analog Communications Systems, 3rd Ed.
McGillem and Cooper, Continuous and Discrete Signal and System Analysis, 3rd Ed.
Miner, Lines and Electromagnetic Fields for Engineers
Roberts and Sedra, SPICE, 2nd Ed.
Roulston, An Introduction to the Physics of Semiconductor Devices
Sadiku, Elements of Electromagnetics, 2nd Ed.
Santina, Stubberud, and Hostetter, Digital Control System Design, 2nd Ed.
Schwarz, Electromagnetics for Engineers
Schwarz and Oldham, Electrical Engineering: An Introduction, 2nd Ed.
Sedra and Smith, Microelectronic Circuits, 4th Ed.
Stefani, Savant, Shahian, and Hostetter, Design of Feedback Control Systems, 3rd Ed.
Van Valkenburg, Analog Filter Design
Warner and Grung, Semiconductor Device Electronics
Wolovich, Automatic Control Systems
Yariv, Optical Electronics in Modern Communications, 5th Ed.
CONTENTS

Preface

1 Introduction to Probability
  1-1 Engineering Applications of Probability
  1-2 Random Experiments and Events
  1-3 Definitions of Probability
  1-4 The Relative-Frequency Approach
  1-5 Elementary Set Theory
  1-6 The Axiomatic Approach
  1-7 Conditional Probability
  1-8 Independence
  1-9 Combined Experiments
  1-10 Bernoulli Trials
  1-11 Applications of Bernoulli Trials
  Problems
  References

2 Random Variables
  2-1 Concept of a Random Variable
  2-2 Distribution Functions
  2-3 Density Functions
  2-4 Mean Values and Moments
  2-5 The Gaussian Random Variable
  2-6 Density Functions Related to Gaussian
  2-7 Other Probability Density Functions
  2-8 Conditional Probability Distribution and Density Functions
  2-9 Examples and Applications
  Problems
  References

3 Several Random Variables
  3-1 Two Random Variables
  3-2 Conditional Probability-Revisited
  3-3 Statistical Independence
  3-4 Correlation between Random Variables
  3-5 Density Function of the Sum of Two Random Variables
  3-6 Probability Density Function of a Function of Two Random Variables
  3-7 The Characteristic Function
  Problems
  References

4 Elements of Statistics
  4-1 Introduction
  4-2 Sampling Theory-The Sample Mean
  4-3 Sampling Theory-The Sample Variance
  4-4 Sampling Distributions and Confidence Intervals
  4-5 Hypothesis Testing
  4-6 Curve Fitting and Linear Regression
  4-7 Correlation between Two Sets of Data
  Problems
  References

5 Random Processes
  5-1 Introduction
  5-2 Continuous and Discrete Random Processes
  5-3 Deterministic and Nondeterministic Random Processes
  5-4 Stationary and Nonstationary Random Processes
  5-5 Ergodic and Nonergodic Random Processes
  5-6 Measurement of Process Parameters
  5-7 Smoothing Data with a Moving Window Average
  Problems
  References

6 Correlation Functions
  6-1 Introduction
  6-2 Example: Autocorrelation Function of a Binary Process
  6-3 Properties of Autocorrelation Functions
  6-4 Measurement of Autocorrelation Functions
  6-5 Examples of Autocorrelation Functions
  6-6 Crosscorrelation Functions
  6-7 Properties of Crosscorrelation Functions
  6-8 Examples and Applications of Crosscorrelation Functions
  6-9 Correlation Matrices for Sampled Functions
  Problems
  References

7 Spectral Density
  7-1 Introduction
  7-2 Relation of Spectral Density to the Fourier Transform
  7-3 Properties of Spectral Density
  7-4 Spectral Density and the Complex Frequency Plane
  7-5 Mean-Square Values from Spectral Density
  7-6 Relation of Spectral Density to the Autocorrelation Function
  7-7 White Noise
  7-8 Cross-Spectral Density
  7-9 Autocorrelation Function Estimate of Spectral Density
  7-10 Periodogram Estimate of Spectral Density
  7-11 Examples and Applications of Spectral Density
  Problems
  References

8 Response of Linear Systems to Random Inputs
  8-1 Introduction
  8-2 Analysis in the Time Domain
  8-3 Mean and Mean-Square Value of System Output
  8-4 Autocorrelation Function of System Output
  8-5 Crosscorrelation between Input and Output
  8-6 Examples of Time-Domain System Analysis
  8-7 Analysis in the Frequency Domain
  8-8 Spectral Density at the System Output
  8-9 Cross-Spectral Densities between Input and Output
  8-10 Examples of Frequency-Domain Analysis
  8-11 Numerical Computation of System Output
  Problems
  References

9 Optimum Linear Systems
  9-1 Introduction
  9-2 Criteria of Optimality
  9-3 Restrictions on the Optimum System
  9-4 Optimization by Parameter Adjustment
  9-5 Systems That Maximize Signal-to-Noise Ratio
  9-6 Systems That Minimize Mean-Square Error
  Problems
  References

Appendices
  A Mathematical Tables
    A-1 Trigonometric Identities
    A-2 Indefinite Integrals
    A-3 Definite Integrals
    A-4 Fourier Transform Operations
    A-5 Fourier Transforms
    A-6 One-Sided Laplace Transforms
  B Frequently Encountered Probability Distributions
    B-1 Discrete Probability Functions
    B-2 Continuous Distributions
  C Binomial Coefficients
  D Normal Probability Distribution Function
  E The Q-Function
  F Student's t Distribution Function
  G Computer Computations
  H Table of Correlation Function-Spectral Density Pairs
  I Contour Integration

Index
PREFACE

The goals of the Third Edition are essentially the same as those of the earlier editions, viz., to provide an introduction to the applications of probability theory to the solution of problems arising in the analysis of signals and systems that is appropriate for engineering students at the junior or senior level. However, it may also serve graduate students and engineers as a concise review of material that they previously encountered in widely scattered sources.

This edition differs from the first and second in several respects. In this edition use of the computer is introduced both in text examples and in selected problems. The computer examples are carried out using MATLAB (a registered trademark of The MathWorks, Inc., Natick, MA) and the problems are such that they can be handled with the Student Edition of MATLAB as well as with other computer mathematics applications. In addition to the introduction of computer usage in solving problems involving statistics and random processes, other changes have also been made. In particular, a number of new sections have been added, virtually all of the exercises have been modified or changed, a number of the problems have been modified, and a number of new problems have been added.

Since this is an engineering text, the treatment is heuristic rather than rigorous, and the student will find many examples of the application of these concepts to engineering problems. However, it is not completely devoid of the mathematical subtleties, and considerable attention has been devoted to pointing out some of the difficulties that make a more advanced study of the subject essential if one is to master it. The authors believe that the educational process is best served by repeated exposure to difficult subject matter; this text is intended to be the first exposure to probability and random processes and, we hope, not the last. The book is not comprehensive, but deals selectively with those topics that the authors have found most useful in the solution of engineering problems.

A brief discussion of some of the significant features of this book will help set the stage for a discussion of the various ways it can be used. Elementary concepts of discrete probability are introduced in Chapter 1: first from the intuitive standpoint of the relative frequency approach and then from the more rigorous standpoint of axiomatic probability. Simple examples illustrate all these concepts and are more meaningful to engineers than are the traditional examples of selecting red and white balls from urns. The concept of a random variable is introduced in Chapter 2 along with the ideas of probability distribution and density functions, mean values, and conditional probability. A significant feature of this chapter is an extensive discussion of
many different probability density functions and the physical situations in which they may occur. Chapter 3 extends the random variable concept to situations involving two or more random variables and introduces the concepts of statistical independence and correlation.

In Chapter 4, sampling theory, as applied to statistical estimation, is considered in some detail and a thorough discussion of sample mean and sample variance is given. The distribution of the sample is described and the use of confidence intervals in making statistical decisions is both considered and illustrated by many examples of hypothesis testing. The problem of fitting smooth curves to experimental data is analyzed, and the use of linear regression is illustrated by practical examples. The problem of determining the correlation between data sets is examined.

A general discussion of random processes and their classification is given in Chapter 5. The emphasis here is on selecting probability models that are useful in solving engineering problems. Accordingly, a great deal of attention is devoted to the physical significance of the various process classifications, with no attempt at mathematical rigor. A unique feature of this chapter, which is continued in subsequent chapters, is an introduction to the practical problem of estimating the mean of a random process from an observed sample function. The technique of smoothing data with a moving window is discussed.

Properties and applications of autocorrelation and crosscorrelation functions are discussed in Chapter 6. Many examples are presented in an attempt to develop some insight into the nature of correlation functions. The important problem of estimating autocorrelation functions is discussed in some detail and illustrated with several computer examples.

Chapter 7 turns to a frequency-domain representation of random processes by introducing the concept of spectral density. Unlike most texts, which simply define spectral density as the Fourier transform of the correlation function, a more fundamental approach is adopted here in order to bring out the physical significance of the concept. This chapter is the most difficult one in the book, but the authors believe the material should be presented in this way. Methods of estimating the spectral density from the autocorrelation function and from the periodogram are developed and illustrated with appropriate computer-based examples. The use of window functions to improve estimates is illustrated as well as the use of the computer to carry out integration of the spectral density using both the real and complex frequency representations.

Chapter 8 utilizes the concepts of correlation functions and spectral density to analyze the response of linear systems to random inputs. In a sense, this chapter is a culmination of all that preceded it, and is particularly significant to engineers who must use these concepts. It contains many examples that are relevant to engineering problems and emphasizes the need for mathematical models that are both realistic and manageable. The computation of system output through simulation is examined and illustrated with computer examples.

Chapter 9 extends the concepts of systems analysis to consider systems that are optimum in some sense. Both the classical matched filter for known signals and the Wiener filter for random signals are considered from an elementary standpoint.
Computer examples of optimization are considered and illustrated with an example of an adaptive filter.

Several Appendices are included to provide useful mathematical and statistical tables and data. Appendix G contains a detailed discussion, with examples, of the application of computers to the analysis of signals and systems and can serve as an introduction to some of the ways MATLAB can be used to solve such problems.
In a more general vein, each chapter contains references that the reader may use to extend his or her knowledge. There is also a wide selection of problems at the end of each chapter. A solution manual for these problems is available to the instructor.

As an additional aid to learning and using the concepts and methods discussed in this text, there are exercises at the end of each major section. The reader should consider these exercises as part of the reading assignment and should make every effort to solve each one before going on to the next section. Answers are provided so that the reader may know when his or her efforts have been successful. It should be noted, however, that the answers to each exercise may not be listed in the same order as the questions. This is intended to provide an additional challenge. The presence of these exercises should substantially reduce the number of additional problems that need to be assigned by the instructor.

The material in this text is appropriate for a one-semester, three-credit course offered in the junior year. Not all sections of the text need be used in such a course but 90% of it can be covered in reasonable detail. Sections that may be omitted include 3-6, 3-7, 5-7, 6-4, 6-9, 7-9, and part of Chapter 9; but other choices may be made at the discretion of the instructor. There are, of course, many other ways in which the text material could be utilized. For those schools on a quarter system, the material noted above could be covered in a four-credit course. Alternatively, if a three-credit course were desired, it is suggested that, in addition to the omissions noted above, Sections 1-5, 1-6, 1-7, 1-9, 2-6, 3-5, 7-2, 7-8, 7-10, 8-9, and all of Chapter 9 can be omitted if the instructor supplies a few explanatory words to bridge the gaps. Obviously, there are also many other possibilities that are open to the experienced instructor.

It is a pleasure for the authors to acknowledge the very substantial aid and encouragement that they have received from their colleagues and students over the years. In particular, special thanks are due to Prof. David Landgrebe of Purdue University for his helpful suggestions regarding incorporation of computer usage in presenting this material.

September 1997
George R. Cooper
Clare D. McGillem
CHAPTER 1
Introduction to Probability

1-1 Engineering Applications of Probability

Before embarking on a study of elementary probability theory, it is essential to motivate such a study by considering why probability theory is useful in the solution of engineering problems. This can be done in two different ways. The first is to suggest a viewpoint, or philosophy, concerning probability that emphasizes its universal physical reality rather than treating it as another mathematical discipline that may be useful occasionally. The second is to note some of the many different types of situations that arise in normal engineering practice in which the use of probability concepts is indispensable.

A characteristic feature of probability theory is that it concerns itself with situations that involve uncertainty in some form. The popular conception of this relates probability to such activities as tossing dice, drawing cards, and spinning roulette wheels. Because the rules of probability are not widely known, and because such situations can become quite complex, the prevalent attitude is that probability theory is a mysterious and esoteric branch of mathematics that is accessible only to trained mathematicians and is of limited value in the real world. Since probability theory does deal with uncertainty, another prevalent attitude is that a probabilistic treatment of physical problems is an inferior substitute for a more desirable exact analysis and is forced on the analyst by a lack of complete information. Both of these attitudes are false.

Regarding the alleged difficulty of probability theory, it is doubtful there is any other branch of mathematics or analysis that is so completely based on such a small number of easily understood basic concepts. Subsequent discussion reveals that the major body of probability theory can be deduced from only three axioms that are almost self-evident. Once these axioms and their applications are understood, the remaining concepts follow in a logical manner.

The attitude that regards probability theory as a substitute for exact analysis stems from the current educational practice of presenting physical laws as deterministic, immutable, and strictly
true under all circumstances. Thus, a law that describes the response of a dynamic system is supposed to predict that response precisely if the system excitation is known precisely. For example, Ohm's law

    v(t) = Ri(t)

is assumed to be exactly true at every instant of time, and, on a macroscopic basis, this assumption may be well justified. On a microscopic basis, however, this assumption is patently false, a fact that is immediately obvious to anyone who has tried to connect a large resistor to the input of a high-gain amplifier and listened to the resulting noise.

In the light of modern physics and our emerging knowledge of the nature of matter, the viewpoint that natural laws are deterministic and exact is untenable. They are, at best, a representation of the average behavior of nature. In many important cases this average behavior is close enough to that actually observed so that the deviations are unimportant. In such cases, the deterministic laws are extremely valuable because they make it possible to predict system behavior with a minimum of effort. In other equally important cases, the random deviations may be significant, perhaps even more significant than the deterministic response. For these cases, analytic methods derived from the concepts of probability are essential.

From the above discussion, it should be clear that the so-called exact solution is not exact at all, but, in fact, represents an idealized special case that actually never arises in nature. The probabilistic approach, on the other hand, far from being a poor substitute for exactness, is actually the method that most nearly represents physical reality. Furthermore, it includes the deterministic result as a special case.

It is now appropriate to discuss the types of situations in which probability concepts arise in engineering. The examples presented here emphasize situations that arise in systems studies; but they do serve to illustrate the essential point that engineering applications of probability tend to be the rule rather than the exception.

Random Input Signals

For a physical system to perform a useful task, it is usually necessary that some sort of forcing function (the input signal) be applied to it. Input signals that have simple mathematical representations are convenient for pedagogical purposes or for certain types of system analysis, but they seldom arise in actual applications. Instead, the input signal is more likely to involve a certain amount of uncertainty and unpredictability that justifies treating it as a random signal. There are many examples of this: speech and music signals that serve as inputs to communication systems; random digits applied to a computer; random command signals applied to an aircraft flight control system; random signals derived from measuring some characteristic of a manufactured product, and used as inputs to a process control system; steering wheel movements in an automobile power-steering system; the sequence in which the call and operating buttons of an elevator are pushed; the number of vehicles passing various checkpoints in a traffic control system; outside and inside temperature fluctuations as inputs to a building heating and air conditioning system; and many others.
Random Disturbances

Many systems have unwanted disturbances applied to their input or output in addition to the desired signals. Such disturbances are almost always random in nature and call for the use of probabilistic methods even if the desired signal does not. A few specific cases serve to illustrate several different types of disturbances. If, for a first example, the output of a high-gain amplifier is connected to a loudspeaker, one frequently hears a variety of snaps, crackles, and pops. This random noise arises from thermal motion of the conduction electrons in the amplifier input circuit or from random variations in the number of electrons (or holes) passing through the transistors. It is obvious that one cannot hope to calculate the value of this noise at every instant of time since this value represents the combined effects of literally billions of individual moving charges. It is possible, however, to calculate the average power of this noise, its frequency spectrum, and even the probability of observing a noise value larger than some specified value. As a practical matter, these quantities are more important in determining the quality of the amplifier than is a knowledge of the instantaneous waveforms.

As a second example, consider a radio or television receiver. In addition to noise generated within the receiver by the mechanisms noted, there is random noise arriving at the antenna. This results from distant electrical storms, manmade disturbances, radiation from space, or thermal radiation from surrounding objects. Hence, even if perfect receivers and amplifiers were available, the received signal would be combined with random noise. Again, the calculation of such quantities as average power and frequency spectrum may be more significant than the determination of instantaneous value.

A different type of system is illustrated by a large radar antenna, which may be pointed in any direction by means of an automatic control system. The wind blowing on the antenna produces random forces that must be compensated for by the control system. Since the compensation is never perfect, there is always some random fluctuation in the antenna direction; it is important to be able to calculate the effective value and frequency content of this fluctuation.

A still different situation is illustrated by an airplane flying in turbulent air, a ship sailing in stormy seas, or an army truck traveling over rough terrain. In all these cases, random disturbing forces, acting on complex mechanical systems, interfere with the proper control or guidance of the system. It is essential to determine how the system responds to these random input signals.

Random System Characteristics

The system itself may have characteristics that are unknown and that vary in a random fashion from time to time. Some typical examples are aircraft in which the load (that is, the number of passengers or the weight of the cargo) varies from flight to flight; troposcatter communication systems in which the path attenuation varies radically from moment to moment; an electric power system in which the load (that is, the amount of energy being used) fluctuates randomly; and a telephone system in which the number of users changes from instant to instant.

There are also many electronic systems in which the parameters may be random. For example, it is customary to specify the properties of many solid-state devices such as diodes, transistors, digital gates, shift registers, flip-flops, etc. by listing a range of values for the more important
items. The actual values of the parameters are random quantities that lie somewhere in this range but are not known a priori.

System Reliability

All systems are composed of many individual elements, and one or more of these elements may fail, thus causing the entire system, or part of the system, to fail. The times at which such failures will occur are unknown, but it is often possible to determine the probability of failure for the individual elements and from these to determine the "mean time to failure" for the system. Such reliability studies are deeply involved with probability and are extremely important in engineering design. As systems become more complex, more costly, and contain larger numbers of elements, the problems of reliability become more difficult and take on added significance.

Quality Control

An important method of improving system reliability is to improve the quality of the individual elements, and this can often be done by an inspection process. As it may be too costly to inspect every element after every step during its manufacture, it is necessary to develop rules for inspecting elements selected at random. These rules are based on probabilistic concepts and serve the valuable purpose of maintaining the quality of the product with the least expense.

Information Theory

A major objective of information theory is to provide a quantitative measure for the information content of messages such as printed pages, speech, pictures, graphical data, numerical data, or physical observations of temperature, distance, velocity, radiation intensity, and rainfall. This quantitative measure is necessary to provide communication channels that are both adequate and efficient for conveying this information from one place to another. Since such messages and observations are almost invariably unknown in advance and random in nature, they can be described only in terms of probability. Hence, the appropriate information measure is a probabilistic one. Furthermore, the communication channels are subject to random disturbances (noise) that limit their ability to convey information, and again a probabilistic description is required.

Simulation

It is frequently useful to investigate system performance by computer simulation. This can often be carried out successfully even when a mathematical analysis is impossible or impractical. For example, when there are nonlinearities present in a system it is often not possible to make an exact analysis. However, it is generally possible to carry out a simulation if mathematical expressions for the nonlinearities can be obtained. When inputs have unusual statistical properties, simulation
may be the only way to obtain detailed information about system performance. It is possible through simulation to see the effects of applying a wide range of random and nonrandom inputs to a system and to investigate the effects of random variations in component values. Selection of optimum component values can be made by simulation studies when other methods are not feasible.

It should be clear from the above partial listing that almost any engineering endeavor involves a degree of uncertainty or randomness that makes the use of probabilistic concepts an essential tool for the present-day engineer. In the case of system analysis, it is necessary to have some description of random signals and disturbances. There are two general methods of describing random signals mathematically. The first, and most basic, is a probabilistic description in which the random quantity is characterized by a probability model. This method is discussed later in this chapter.

The probabilistic description of random signals cannot be used directly in system analysis since it indicates very little about how the random signal varies with time or what its frequency spectrum is. It does, however, lead to the statistical description of random signals, which is useful in system analysis. In this case the random signal is characterized by a statistical model, which consists of an appropriate set of average values such as the mean, variance, correlation function, spectral density, and others. These average values represent a less precise description of the random signal than that offered by the probability model, but they are more useful for system analysis because they can be computed by using straightforward and relatively simple methods. Some of the statistical averages are discussed in subsequent chapters.

There are many steps that need to be taken before it is possible to apply the probabilistic and statistical concepts to system analysis. In order that the reader may understand that even the most elementary steps are important to the final objective, it is desirable to outline these steps briefly. The first step is to introduce the concepts of probability by considering discrete random events. These concepts are then extended to continuous random variables and subsequently to random functions of time. Finally, several of the average values associated with random signals are introduced. At this point, the tools are available to consider ways of analyzing the response of linear systems to random inputs.

1-2 Random Experiments and Events

The concepts of experiment and event are fundamental to an understanding of elementary probability concepts. An experiment is some action that results in an outcome. A random experiment is one in which the outcome is uncertain before the experiment is performed. Although there is a precise mathematical definition of a random experiment, a better understanding may be gained by listing some examples of well-defined random experiments and their possible outcomes. This is done in Table 1-1. It should be noted, however, that the possible outcomes often may be defined in several different ways, depending upon the wishes of the experimenter. The initial discussion is concerned with a single performance of a well-defined experiment. This single performance is referred to as a trial.

An important concept in connection with random events is that of equally likely events. For example, if we toss a coin we expect that the event of getting a head and the event of getting a tail
are equally likely. Likewise, if we roll a die we expect that the events of getting any number from 1 to 6 are equally likely. Also, when a card is drawn from a deck, each of the 52 cards is equally likely. A term that is often used to be synonymous with the concept of equally likely events is that of selected at random. For example, when we say that a card is selected at random from a deck, we are implying that all cards in the deck are equally likely to have been chosen. In general, we assume that the outcomes of an experiment are equally likely unless there is some clear physical reason why they should not be. In the discussions that follow, there will be examples of events that are assumed to be equally likely and events that are not assumed to be equally likely. The reader should clearly understand the physical reasons for the assumptions in both cases.

It is also important to distinguish between elementary events and composite events. An elementary event is one for which there is only one outcome. Examples of elementary events include such things as tossing a coin or rolling a die when the events are defined in a specific way. When a coin is tossed, the event of getting a head or the event of getting a tail can be achieved in only one way. Likewise, when a die is rolled the event of getting any integer from 1 to 6 can be achieved in only one way. Hence, in both cases, the defined events are elementary events. On the other hand, it is possible to define events associated with rolling a die that are not elementary. For example, let one event be that of obtaining an even number while another event is that of obtaining an odd number. In this case, each event can be achieved in three different ways and, hence, these events are composite.

There are many different random experiments in which the events can be defined to be either elementary or composite. For example, when a card is selected at random from a deck of 52 cards, there are 52 elementary events corresponding to the selection of each of the cards. On the other hand, the event of selecting a heart is a composite event containing 13 different outcomes. Likewise, the event of selecting an ace is a composite event containing 4 outcomes. Clearly, there are many other ways in which composite events could be defined.

When the number of outcomes of an experiment are countable (that is, they can be put in one-to-one correspondence with the integers), the outcomes are said to be discrete. All of the examples discussed above represent discrete outcomes. However, there are many experiments in which the outcomes are not countable. For example, if a random voltage is observed, and the outcome taken to be the value of the voltage, there may be an infinite and noncountable number of possible values that can be obtained. In this case, the outcomes are said to form a continuum.

Table 1-1 Possible Experiments and Their Outcomes

Experiment             Possible Outcomes
Flipping a coin        Heads (H), tails (T)
Throwing a die         1, 2, 3, 4, 5, 6
Drawing a card         Any of the 52 possible cards
Observing a voltage    Greater than 0, less than 0
Observing a voltage    Greater than V, less than V
Observing a voltage    Between V1 and V2, not between V1 and V2
The concept of an elementary event does not apply in this case.

It is also possible to conduct more complicated experiments with more complicated sets of events. The experiment may consist of tossing 10 coins, and it is apparent in this case that there are many different possible outcomes, each of which may be an event. Another situation, which has more of an engineering flavor, is that of a telephone system having 10,000 telephones connected to it. At any given time, a possible event is that 2000 of these telephones are in use. Obviously, there are a great many other possible events.

If the outcome of an experiment is uncertain before the experiment is performed, the possible outcomes are random events. To each of these events it is possible to assign a number, called the probability of that event, and this number is a measure of how likely that event is. Usually, these numbers are assumed, the assumed values being based on our intuition about the experiment. For example, if we toss a coin, we would expect that the possible outcomes of heads and tails would be equally likely. Therefore, we would assume the probabilities of these two events to be the same.

1-3 Definitions of Probability

One of the most serious stumbling blocks in the study of elementary probability is that of arriving at a satisfactory definition of the term "probability." There are, in fact, four or five different definitions for probability that have been proposed and used with varying degrees of success. They all suffer from deficiencies in concept or application. Ironically, the most successful "definition" leaves the term probability undefined.

Of the various approaches to probability, the two that appear to be most useful are the relative-frequency approach and the axiomatic approach. The relative-frequency approach is useful because it attempts to attach some physical significance to the concept of probability and, thereby, makes it possible to relate probabilistic concepts to the real world. Hence, the application of probability to engineering problems is almost always accomplished by invoking the concepts of relative frequency, even when engineers may not be conscious of doing so.

The limitation of the relative-frequency approach is the difficulty of using it to deduce the appropriate mathematical structure for situations that are too complicated to be analyzed readily by physical reasoning. This is not to imply that this approach cannot be used in such situations, for it can, but it does suggest that there may be a much easier way to deal with these cases. The easier way turns out to be the axiomatic approach.

The axiomatic approach treats the probability of an event as a number that satisfies certain postulates but is otherwise undefined. Whether or not this number relates to anything in the real world is of no concern in developing the mathematical structure that evolves from these postulates. Engineers may object to this approach as being too artificial and too removed from reality, but they should remember that the whole body of circuit theory was developed in essentially the same way. In the case of circuit theory, the basic postulates are Kirchhoff's laws and the conservation of energy. The same mathematical structure emerges regardless of what physical quantities are identified with the abstract symbols, or even if no physical quantities are associated with them. It is the task of the engineer to relate this mathematical structure to
the real world in a way that is admittedly not exact, but that leads to useful solutions to real problems.

From the above discussion, it appears that the most useful approach to probability for engineers is a two-pronged one, in which the relative-frequency concept is employed to relate simple results to physical reality, and the axiomatic approach is employed to develop the appropriate mathematics for more complicated situations. It is this philosophy that is presented here.

1-4 The Relative-Frequency Approach

As its name implies, the relative-frequency approach to probability is closely linked to the frequency of occurrence of the defined events. For any given event, the frequency of occurrence is used to define a number called the probability of that event and this number is a measure of how likely that event is. Usually, these numbers are assumed, the assumed values being based on our intuition about the experiment or on the assumption that the events are equally likely.

To make this concept more precise, consider an experiment that is performed N times and for which there are four possible outcomes that are considered to be the elementary events A, B, C, and D. Let N_A be the number of times that event A occurs, with a similar notation for the other events. It is clear that

    N_A + N_B + N_C + N_D = N    (1-1)

We now define the relative frequency of A, r(A), as

    r(A) = N_A / N    (1-2)

From (1-1) it is apparent that

    r(A) + r(B) + r(C) + r(D) = 1

Now imagine that N increases without limit. When a phenomenon known as statistical regularity applies, the relative frequency r(A) tends to stabilize and approach a number, Pr(A), that can be taken as the probability of the elementary event A. That is

    Pr(A) = lim[N→∞] r(A)    (1-3)

From the relation given above, it follows that

    Pr(A) + Pr(B) + Pr(C) + ··· + Pr(M) = 1    (1-4)

and we can conclude that the sum of the probabilities of all of the mutually exclusive events associated with a given experiment must be unity.
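The stabilization of r(A) described by (1-3) is easy to observe by simulation. The following short Python sketch is our illustration, in the spirit of the book's MATLAB computer examples but not taken from the text; the function name relative_frequency is an assumed name. It rolls a fair die N times for increasing N and prints the relative frequency of the elementary event A = {1}, which settles near Pr(A) = 1/6.

    import random

    random.seed(1)                        # make the runs repeatable

    def relative_frequency(n_trials, face=1):
        """r(A) = N_A / N for the event A = {face} in n_trials rolls of a fair die."""
        n_a = sum(1 for _ in range(n_trials) if random.randint(1, 6) == face)
        return n_a / n_trials

    for n in (100, 10_000, 1_000_000):
        print(n, relative_frequency(n))   # tends toward Pr(A) = 1/6 = 0.1667...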
These concepts can be summarized by the following set of statements:

1. 0 ≤ Pr(A) ≤ 1.
2. Pr(A) + Pr(B) + Pr(C) + ··· + Pr(M) = 1, for a complete set of mutually exclusive events.
3. An impossible event is represented by Pr(A) = 0.
4. A certain event is represented by Pr(A) = 1.

To make some of these ideas more specific, consider the following hypothetical example. Assume that a large bin contains an assortment of resistors of different sizes, which are thoroughly mixed. In particular, let there be 100 resistors having a marked value of 1 Ω, 500 resistors marked 10 Ω, 150 resistors marked 100 Ω, and 250 resistors marked 1000 Ω. Someone reaches into the bin and pulls out one resistor at random. There are now four possible outcomes corresponding to the value of the particular resistor selected. To determine the probability of each of these events we assume that the probability of each event is proportional to the number of resistors in the bin corresponding to that event. Since there are 1000 resistors in the bin all together, the resulting probabilities are

    Pr(1 Ω) = 100/1000 = 0.1        Pr(10 Ω) = 500/1000 = 0.5
    Pr(100 Ω) = 150/1000 = 0.15     Pr(1000 Ω) = 250/1000 = 0.25

Note that these probabilities are all positive, less than 1, and do add up to 1.

Many times one is interested in more than one event at a time. If a coin is tossed twice, one may wish to determine the probability that a head will occur on both tosses. Such a probability is referred to as a joint probability. In this particular case, one assumes that all four possible outcomes (HH, HT, TH, and TT) are equally likely and, hence, the probability of each is one-fourth. In a more general case the situation is not this simple, so it is necessary to look at a more complicated situation in order to deduce the true nature of joint probability. The notation employed is Pr(A, B) and signifies the probability of the joint occurrence of events A and B.

Consider again the bin of resistors and specify that in addition to having different resistance values, they also have different power ratings. Let the different power ratings be 1 W, 2 W, and 5 W; the number having each rating is indicated in Table 1-2.

Table 1-2 Resistance Values and Power Ratings

Power Rating     1 Ω    10 Ω    100 Ω    1000 Ω    Totals
1 W               50     300       90         0       440
2 W               50      50        0       100       200
5 W                0     150       60       150       360
Totals           100     500      150       250      1000

Before using this example to illustrate joint probabilities, consider the probability (now referred to as a marginal probability) of selecting a resistor having a given power rating without regard to its resistance value. From the totals given in the right-hand column, it is clear that these probabilities are

    Pr(1 W) = 440/1000 = 0.44      Pr(2 W) = 200/1000 = 0.20
    Pr(5 W) = 360/1000 = 0.36
We now ask what the joint probability is of selecting a resistor of 10 Ω having a 5-W power rating. Since there are 150 such resistors in the bin, this joint probability is clearly

    Pr(10 Ω, 5 W) = 150/1000 = 0.15

The 11 other joint probabilities can be determined in a similar way. Note that some of the joint probabilities are zero [for example, Pr(1 Ω, 5 W) = 0] simply because a particular combination of resistance and power does not exist.

It is necessary at this point to relate the joint probabilities to the marginal probabilities. In the example of tossing a coin two times, the relationship is simply a product. That is,

    Pr(H, H) = Pr(H) Pr(H) = 1/2 × 1/2 = 1/4

But this relationship is obviously not true for the resistor bin example. Note that

    Pr(5 W) = 360/1000 = 0.36

and it was previously shown that

    Pr(10 Ω) = 0.5

Thus,

    Pr(10 Ω) Pr(5 W) = 0.5 × 0.36 = 0.18 ≠ Pr(10 Ω, 5 W) = 0.15

and the joint probability is not the product of the marginal probabilities.

To clarify this point, it is necessary to introduce the concept of conditional probability. This is the probability of one event A, given that another event B has occurred; it is designated as Pr(A|B). In terms of the resistor bin, consider the conditional probability of selecting a 10-Ω resistor when it is already known that the chosen resistor is 5 W. Since there are 360 5-W resistors, and 150 of these are 10 Ω, the required conditional probability is
    Pr(10 Ω|5 W) = 150/360 = 0.417

Now consider the product of this conditional probability and the marginal probability of selecting a 5-W resistor.

    Pr(10 Ω|5 W) Pr(5 W) = 0.417 × 0.36 = 0.15 = Pr(10 Ω, 5 W)

It is seen that this product is indeed the joint probability.

The same result can also be obtained another way. Consider the conditional probability

    Pr(5 W|10 Ω) = 150/500 = 0.30

since there are 150 5-W resistors out of the 500 10-Ω resistors. Then form the product

    Pr(5 W|10 Ω) Pr(10 Ω) = 0.30 × 0.5 = Pr(10 Ω, 5 W)    (1-5)

Again, the product is the joint probability.

The foregoing ideas concerning joint probability can be summarized in the general equation

    Pr(A, B) = Pr(A|B) Pr(B) = Pr(B|A) Pr(A)    (1-6)

which indicates that the joint probability of two events can always be expressed as the product of the marginal probability of one event and the conditional probability of the other event given the first event.

We now return to the coin-tossing problem, in which it is indicated that the joint probability can be obtained as the product of two marginal probabilities. Under what conditions will this be true? From equation (1-6) it appears that this can be true if

    Pr(A|B) = Pr(A)  and  Pr(B|A) = Pr(B)

These statements imply that the probability of event A does not depend upon whether or not event B has occurred. This is certainly true in coin tossing, since the outcome of the second toss cannot be influenced in any way by the outcome of the first toss. Such events are said to be statistically independent. More precisely, two random events are statistically independent if and only if

    Pr(A, B) = Pr(A) Pr(B)    (1-7)

The preceding paragraphs provide a very brief discussion of many of the basic concepts of discrete probability. They have been presented in a heuristic fashion without any attempt to justify them mathematically. Instead, all of the probabilities have been formulated by invoking the concepts of relative frequency and equally likely events in terms of specific numerical
examples. It is clear from these examples that it is not difficult to assign reasonable numbers to the probabilities of various events (by employing the relative-frequency approach) when the physical situation is not very involved. It should also be apparent, however, that such an approach might become unmanageable when there are many possible outcomes to any experiment and many different ways of defining events. This is particularly true when one attempts to extend the results for the discrete case to the continuous case. It becomes necessary, therefore, to reconsider all of the above ideas in a more precise manner and to introduce a measure of mathematical rigor that provides a more solid footing for subsequent extensions.

Exercise 1-4.1

a) A box contains 50 diodes of which 10 are known to be bad. A diode is selected at random. What is the probability that it is bad?

b) If the first diode drawn from the box was good, what is the probability that a second diode drawn will be good?

c) If two diodes are drawn from the box what is the probability that they are both good?

Answers: 39/49, 156/245, 1/5

(Note: In the exercise above, and in others throughout the book, answers are not necessarily given in the same order as the questions.)

Exercise 1-4.2

A telephone switching center survey indicates that one of four calls is a business call, that one-tenth of business calls are long distance, and one-twentieth of nonbusiness calls are long distance.

a) What is the probability that the next call will be a nonbusiness long-distance call?

b) What is the probability that the next call will be a business call given that it is a long-distance call?

c) What is the probability that the next call will be a nonbusiness call given that the previous call was long distance?

Answers: 3/80, 3/4, 2/5
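Bookkeeping of the kind done above for the resistor bin reduces to counting entries of Table 1-2, which is easy to mechanize. The Python sketch below is our illustration (the book's own computer examples use MATLAB); the helper names pr_joint, pr_power, pr_res, and pr_res_given_power are assumed, not from the text. It reproduces the joint, marginal, and conditional probabilities and confirms the product rule Pr(A, B) = Pr(A|B) Pr(B) of equation (1-6).

    from fractions import Fraction

    # Resistor counts from Table 1-2: (power rating, resistance) -> count.
    counts = {
        ("1 W", "1 ohm"): 50,  ("1 W", "10 ohm"): 300,
        ("1 W", "100 ohm"): 90,  ("1 W", "1000 ohm"): 0,
        ("2 W", "1 ohm"): 50,  ("2 W", "10 ohm"): 50,
        ("2 W", "100 ohm"): 0,  ("2 W", "1000 ohm"): 100,
        ("5 W", "1 ohm"): 0,  ("5 W", "10 ohm"): 150,
        ("5 W", "100 ohm"): 60,  ("5 W", "1000 ohm"): 150,
    }
    N = sum(counts.values())          # 1000 resistors in the bin
    POWERS = ("1 W", "2 W", "5 W")
    RESISTANCES = ("1 ohm", "10 ohm", "100 ohm", "1000 ohm")

    def pr_joint(power, res):
        """Joint probability Pr(power, resistance)."""
        return Fraction(counts[(power, res)], N)

    def pr_power(power):
        """Marginal probability of a power rating (row total / N)."""
        return sum(pr_joint(power, r) for r in RESISTANCES)

    def pr_res(res):
        """Marginal probability of a resistance value (column total / N)."""
        return sum(pr_joint(p, res) for p in POWERS)

    def pr_res_given_power(res, power):
        """Conditional probability Pr(resistance | power)."""
        return pr_joint(power, res) / pr_power(power)

    print(pr_joint("5 W", "10 ohm"))               # 3/20, i.e., 0.15
    print(pr_res("10 ohm") * pr_power("5 W"))      # 9/50, i.e., 0.18, not the joint probability
    assert pr_res_given_power("10 ohm", "5 W") * pr_power("5 W") == pr_joint("5 W", "10 ohm")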
1-5 Elementary Set Theory

The more precise formulation mentioned in Section 1-4 is accomplished by putting the ideas introduced in that section into the framework of the axiomatic approach. To do this, however, it is first necessary to review some of the elementary concepts of set theory.

A set is a collection of objects known as elements. It will be designated as

    A = {a1, a2, ..., an}

where the set is A and the elements are a1, ..., an. For example, the set A may consist of the integers from 1 to 6 so that a1 = 1, a2 = 2, ..., a6 = 6 are the elements. A subset of A is any set all of whose elements are also elements of A. B = {1, 2, 3} is a subset of the set A = {1, 2, 3, 4, 5, 6}. The general notation for indicating that B is a subset of A is B ⊂ A. Note that every set is a subset of itself.

All sets of interest in probability theory have elements taken from the largest set called a space and designated as S. Hence, all sets will be subsets of the space S. The relation of S and its subsets to probability will become clear shortly, but in the meantime, an illustration may be helpful. Suppose that the elements of a space consist of the six faces of a die, and that these faces are designated as 1, 2, ..., 6. Thus,

    S = {1, 2, 3, 4, 5, 6}

There are many ways in which subsets might be formed, depending upon the number of elements belonging to each subset. In fact, if one includes the null set or empty set, which has no elements in it and is denoted by ∅, there are 2⁶ = 64 subsets and they may be denoted as

    ∅, {1}, ..., {6}, {1, 2}, {1, 3}, ..., {5, 6}, {1, 2, 3}, ..., S

In general, if S contains n elements, then there are 2ⁿ subsets. The proof of this is left as an exercise for the student.

One of the reasons for using set theory to develop probability concepts is that the important operations are already defined for sets and have simple geometric representations that aid in visualizing and understanding these operations. The geometric representation is the Venn diagram in which the space S is represented by a square and the various sets are represented by closed plane figures. For example, the Venn diagram shown in Figure 1-1 shows that B is a subset of A and that C is a subset of B (and also of A). The various operations are now defined and represented by Venn diagrams.

Equality

Set A equals set B iff (if and only if) every element of A is an element of B and every element of B is an element of A. Thus

    A = B iff A ⊂ B and B ⊂ A
Figure 1-1 Venn diagram for C ⊂ B ⊂ A.

The Venn diagram is obvious and will not be shown.

Sums

The sum or union of two sets is a set consisting of all the elements that are elements of A or of B or of both. It is designated as A ∪ B. This is shown in Figure 1-2.

Figure 1-2 The sum of two sets, A ∪ B.

Since the associative law holds, the sum of more than two sets can be written without parentheses. That is

    (A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C

The commutative law also holds, so that

    A ∪ B = B ∪ A
    A ∪ A = A
    A ∪ ∅ = A
    A ∪ S = S
    A ∪ B = A, if B ⊂ A

Products

The product or intersection of two sets is the set consisting of all the elements that are common to both sets. It is designated as A ∩ B and is illustrated in Figure 1-3.

Figure 1-3 The intersection of two sets, A ∩ B.

A number of results apparent from the Venn diagram are
    A ∩ B = B ∩ A    (Commutative law)
    A ∩ A = A
    A ∩ ∅ = ∅
    A ∩ S = A
    A ∩ B = B, if B ⊂ A

If there are more than two sets involved in the product, the Venn diagram of Figure 1-4 is appropriate.

Figure 1-4 Intersections for three sets.

From this it is seen that

    (A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C    (Associative law)
    A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

Two sets A and B are mutually exclusive or disjoint if A ∩ B = ∅. Representations of such sets in the Venn diagram do not overlap.
Complement

The complement of a set A is a set containing all the elements of S that are not in A. It is denoted Aᶜ and is shown in Figure 1-5.

Figure 1-5 The complement of A.

It is clear that

    ∅ᶜ = S
    Sᶜ = ∅
    (Aᶜ)ᶜ = A
    A ∪ Aᶜ = S
    A ∩ Aᶜ = ∅
    Aᶜ ⊂ Bᶜ, if B ⊂ A
    Aᶜ = Bᶜ, if A = B

Two additional relations that are usually referred to as DeMorgan's laws are

    (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
    (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ

Differences

The difference of two sets, A − B, is a set consisting of the elements of A that are not in B. This is shown in Figure 1-6.

Figure 1-6 The difference of two sets.

The difference may also be expressed as

    A − B = A ∩ Bᶜ = A − (A ∩ B)
The notation (A − B) is often read as "A take away B." The following results are also apparent from the Venn diagram:

    (A − B) ∪ B ≠ A
    (A ∪ A) − A = ∅
    A ∪ (A − A) = A
    A − ∅ = A
    A − S = ∅
    S − A = Aᶜ

Note that when differences are involved, the parentheses cannot be omitted.
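These identities can be checked mechanically for any particular choice of sets. The following Python fragment is a minimal sketch of such a check (ours, not from the text), using the built-in set type and an assumed helper named complement defined relative to the space S; the values of S, A, and B are chosen to match the worked example that follows.

    # Verify set identities numerically using Python's built-in set type.
    S = {1, 2, 3, 4, 5, 6}        # the space: six faces of a die
    A = {2, 4, 6}
    B = {1, 2, 3, 4}

    def complement(X):
        """Complement of X relative to the space S."""
        return S - X

    # DeMorgan's laws
    assert complement(A | B) == complement(A) & complement(B)
    assert complement(A & B) == complement(A) | complement(B)

    # Difference identities: A - B = A ∩ Bᶜ = A - (A ∩ B)
    assert A - B == A & complement(B) == A - (A & B)

    # Parentheses matter: (A - B) ∪ B is not A in general
    print((A - B) | B)            # {1, 2, 3, 4, 6}, which differs from A

Note that the final print reproduces (A − B) ∪ B = {1, 2, 3, 4, 6} from the example below, confirming that it differs from A.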
It is desirable to illustrate all of the above operations with a specific example. In order to do this, let the elements of the space S be the integers from 1 to 6, as before:

    S = {1, 2, 3, 4, 5, 6}

and define certain sets as

    A = {2, 4, 6},   B = {1, 2, 3, 4},   C = {1, 3, 5}

From the definitions just presented, it is clear that

    (A ∪ B) = {1, 2, 3, 4, 6},   (B ∪ C) = {1, 2, 3, 4, 5}
    A ∪ B ∪ C = {1, 2, 3, 4, 5, 6} = S = A ∪ C
    A ∩ B = {2, 4},   B ∩ C = {1, 3},   A ∩ C = ∅
    A ∩ B ∩ C = ∅,   Aᶜ = {1, 3, 5} = C,   Bᶜ = {5, 6}
    Cᶜ = {2, 4, 6} = A,   A − B = {6},   B − A = {1, 3}
    A − C = {2, 4, 6} = A,   C − A = {1, 3, 5} = C,   B − C = {2, 4}
    C − B = {5},   (A − B) ∪ B = {1, 2, 3, 4, 6}

The student should verify these results.

Exercise 1-5.1

If A and B are subsets of the same space, S, find

a) (A ∩ B) ∪ (A − B)
b) Aᶜ ∩ (A − B)
c) (A ∩ B) ∩ (B ∪ A)

Answers: A ∩ B, ∅, A

Exercise 1-5.2

Using the algebra of sets show that the following relations are true:
a) A ∪ (A ∩ B) = A
b) A ∪ (Aᶜ ∩ B) = A ∪ B

1-6 The Axiomatic Approach

It is now necessary to relate probability theory to the set concepts that have just been discussed. This relationship is established by defining a probability space whose elements are all the outcomes (of a possible set of outcomes) from an experiment. For example, if an experimenter chooses to view the six faces of a die as the possible outcomes, then the probability space associated with throwing a die is the set

    S = {1, 2, 3, 4, 5, 6}

The various subsets of S can be identified with the events. For example, in the case of throwing a die, the event {2} corresponds to obtaining the outcome 2, while the event {1, 2, 3} corresponds to the outcomes of either 1, or 2, or 3. Since at least one outcome must be obtained on each trial, the space S corresponds to the certain event and the empty set ∅ corresponds to the impossible event. Any event consisting of a single element is called an elementary event.

The next step is to assign to each event a number called, as before, the probability of the event. If the event is denoted as A, the probability of event A is denoted as Pr(A). This number is chosen so as to satisfy the following three conditions or axioms:

    Pr(A) ≥ 0    (1-9)
    Pr(S) = 1    (1-10)
    If A ∩ B = ∅, then Pr(A ∪ B) = Pr(A) + Pr(B)    (1-11)

The whole body of probability can be deduced from these axioms. It should be emphasized, however, that axioms are postulates and, as such, it is meaningless to try to prove them. The only possible test of their validity is whether the resulting theory adequately represents the real world. The same is true of any physical theory.

A large number of corollaries can be deduced from these axioms and a few are developed here. First, since

    S ∩ ∅ = ∅  and  S ∪ ∅ = S

it follows from (1-11) that

    Pr(S ∪ ∅) = Pr(S) = Pr(S) + Pr(∅)
Hence,

    Pr(∅) = 0    (1-12)

Next, since

    A ∩ Aᶜ = ∅  and  A ∪ Aᶜ = S

it also follows from (1-11) and (1-10) that

    Pr(A ∪ Aᶜ) = Pr(A) + Pr(Aᶜ) = Pr(S) = 1    (1-13)

From this and from (1-9)

    Pr(A) = 1 − Pr(Aᶜ) ≤ 1    (1-14)

Therefore, the probability of an event must be a number between 0 and 1.

If A and B are not mutually exclusive, then (1-11) usually does not hold. A more general result can be obtained, however. From the Venn diagram of Figure 1-3 it is apparent that

    A ∪ B = A ∪ (Aᶜ ∩ B)

and that A and Aᶜ ∩ B are mutually exclusive. Hence, from (1-11) it follows that

    Pr(A ∪ B) = Pr[A ∪ (Aᶜ ∩ B)] = Pr(A) + Pr(Aᶜ ∩ B)

From the same figure it is also apparent that

    B = (A ∩ B) ∪ (Aᶜ ∩ B)

and that A ∩ B and Aᶜ ∩ B are mutually exclusive. From (1-11)

    Pr(B) = Pr[(A ∩ B) ∪ (Aᶜ ∩ B)] = Pr(A ∩ B) + Pr(Aᶜ ∩ B)    (1-15)

Upon eliminating Pr(Aᶜ ∩ B), it follows that

    Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B) ≤ Pr(A) + Pr(B)    (1-16)

which is the desired result.

Now that the formalism of the axiomatic approach has been established, it is desirable to look at the problem of constructing probability spaces. First consider the case of throwing a single die and the associated probability space of S = {1, 2, 3, 4, 5, 6}. The elementary events are simply the integers associated with the upper face of the die and these are clearly mutually exclusive. If the elementary events are assumed to be equally probable, then the probability associated with each is simply
Note that this assumption is consistent with the relative-frequency approach, but within the framework of the axiomatic approach it is only an assumption, and any number of other assumptions could have been made.

For this same probability space, consider the event A = {1, 3} = {1} ∪ {3}. From (1-11)

Pr(A) = Pr{1} + Pr{3} = 1/6 + 1/6 = 1/3

and this can be interpreted as the probability of throwing either a 1 or a 3. A somewhat more complex situation arises when A = {1, 3}, B = {3, 5} and it is desired to determine Pr(A ∪ B). Since A and B are not mutually exclusive, the result of (1-16) must be used. From the calculation above, it is clear that Pr(A) = Pr(B) = 1/3. However, since A ∩ B = {3}, an elementary event, it must be that Pr(A ∩ B) = 1/6. Hence, from (1-16)

Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B) = 1/3 + 1/3 − 1/6 = 1/2

An alternative approach is to note that A ∪ B = {1, 3, 5}, which is composed of three mutually exclusive elementary events. Using (1-11) twice leads immediately to

Pr(A ∪ B) = Pr{1} + Pr{3} + Pr{5} = 1/6 + 1/6 + 1/6 = 1/2

Note that this can be interpreted as the probability of either A occurring or B occurring or both occurring.
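The result Pr(A ∪ B) = 1/2 can also be checked against the relative-frequency interpretation by simulation. A minimal MATLAB sketch (the trial count N is an arbitrary choice of ours):

   N = 100000;                      % number of simulated throws
   x = randi(6, N, 1);              % equally likely faces 1 through 6
   mean(ismember(x, [1 3 5]))       % relative frequency of A u B; near 0.5

As N is made larger, the relative frequency settles near the axiomatic value of 0.5, which illustrates the consistency between the two points of view.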
Exercise 1-6.1

A roulette wheel has 36 slots painted alternately red and black and numbered from 1 to 36. A 37th slot is painted green and numbered zero. Bets can be made in two ways: selecting a number from 1 to 36, which pays 35:1 if that number wins, or selecting two adjacent numbers, which pays 17:1 if either number wins. Let event A be the occurrence of the number 1 when the wheel is spun and event B be the occurrence of the number 2.

a) Find Pr(A) and the probable return on a $1 bet on this number.

b) Find Pr(A ∪ B) and the probable return on a $1 bet on A ∪ B.

Answers: 1/37, 36/37, 36/37, 2/37

Exercise 1-6.2

Draw a Venn diagram showing three subsets that are not mutually exclusive. Using this diagram derive an expression for Pr(A ∪ B ∪ C).

Answer: Pr(A) + Pr(B) + Pr(C) − Pr(A ∩ B) − Pr(A ∩ C) − Pr(B ∩ C) + Pr(A ∩ B ∩ C)

1-7 Conditional Probability

The concept of conditional probability was introduced in Section 1-3 on the basis of the relative frequency of one event when another event is specified to have occurred. In the axiomatic approach, conditional probability is a defined quantity. If an event B is assumed to have a nonzero probability, then the conditional probability of an event A, given B, is defined as

Pr(A|B) = Pr(A ∩ B) / Pr(B),   Pr(B) > 0        (1-17)

where Pr(A ∩ B) is the probability of the event A ∩ B. In the previous discussion, the numerator of (1-17) was written as Pr(A, B) and was called the joint probability of events A and B. This interpretation is still correct if A and B are elementary events, but in the more general case the proper interpretation must be based on the set theory concept of the product, A ∩ B, of two sets. Obviously, if A and B are mutually exclusive, then A ∩ B is the empty set and Pr(A ∩ B) = 0. On the other hand, if A is contained in B (that is, A ⊂ B), then A ∩ B = A and

Pr(A|B) = Pr(A) / Pr(B) ≥ Pr(A)

Finally, if B ⊂ A, then A ∩ B = B and

Pr(A|B) = Pr(B) / Pr(B) = 1

In general, however, when neither A ⊂ B nor B ⊂ A, nothing can be asserted regarding the relative magnitudes of Pr(A) and Pr(A|B).

So far it has not yet been shown that conditional probabilities are really probabilities in the sense that they satisfy the basic axioms. In the relative-frequency approach they are clearly probabilities in that they could be defined as ratios of the numbers of favorable occurrences to the total number of trials, but in the axiomatic approach conditional probabilities are defined quantities; hence, it is necessary to verify independently their validity as probabilities.

The first axiom is

Pr(A|B) ≥ 0
and this is obviously true from the definition (1-17) since both numerator and denominator are positive numbers. The second axiom is

Pr(S|B) = 1

and this is also apparent since B ⊂ S, so that S ∩ B = B and Pr(S ∩ B) = Pr(B). To verify that the third axiom holds, consider another event, C, such that A ∩ C = ∅ (that is, A and C are mutually exclusive). Then

Pr[(A ∪ C) ∩ B] = Pr[(A ∩ B) ∪ (C ∩ B)] = Pr(A ∩ B) + Pr(C ∩ B)

since (A ∩ B) and (C ∩ B) are also mutually exclusive events and (1-11) holds for such events. So, from (1-17)

Pr[(A ∪ C)|B] = Pr[(A ∪ C) ∩ B] / Pr(B) = Pr(A ∩ B)/Pr(B) + Pr(C ∩ B)/Pr(B) = Pr(A|B) + Pr(C|B)

Thus the third axiom does hold, and it is now clear that conditional probabilities are valid probabilities in every sense.

Before extending the topic of conditional probabilities, it is desirable to consider an example in which the events are not elementary events. Let the experiment be the throwing of a single die and let the outcomes be the integers from 1 to 6. Then define event A as A = {1, 2}, that is, the occurrence of a 1 or a 2. From previous considerations it is clear that Pr(A) = 1/6 + 1/6 = 1/3. Define B as the event of obtaining an even number. That is, B = {2, 4, 6} and Pr(B) = 1/2 since it is composed of three elementary events. The event A ∩ B is A ∩ B = {2}, from which Pr(A ∩ B) = 1/6. The conditional probability, Pr(A|B), is now given by

Pr(A|B) = Pr(A ∩ B) / Pr(B) = (1/6) / (1/2) = 1/3

This indicates that the conditional probability of throwing a 1 or a 2, given that the outcome is even, is 1/3.

On the other hand, suppose it is desired to find the conditional probability of throwing an even number given that the outcome was a 1 or a 2. This is

Pr(B|A) = Pr(A ∩ B) / Pr(A) = (1/6) / (1/3) = 1/2

a result that is intuitively correct.
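For equally likely elementary events, conditional probabilities of this kind reduce to counting elements, as the following MATLAB sketch of the same example illustrates (variable names are ours):

   S = 1:6;                                   % equally likely outcomes
   A = [1 2];  B = [2 4 6];
   PrAB = numel(intersect(A, B))/numel(S);    % Pr(A n B) = 1/6
   PrAB / (numel(B)/numel(S))                 % Pr(A|B) = 1/3
   PrAB / (numel(A)/numel(S))                 % Pr(B|A) = 1/2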
One of the uses of conditional probability is in the evaluation of total probability. Suppose there are n mutually exclusive events A₁, A₂, ..., Aₙ and an arbitrary event B as shown in the Venn diagram of Figure 1-7.

[Figure 1-7 Venn diagram for total probability.]

The events Aᵢ occupy the entire space, S, so that

A₁ ∪ A₂ ∪ ··· ∪ Aₙ = S        (1-18)

Since Aᵢ and Aⱼ (i ≠ j) are mutually exclusive, it follows that B ∩ Aᵢ and B ∩ Aⱼ are also mutually exclusive. Further,

B = (B ∩ A₁) ∪ (B ∩ A₂) ∪ ··· ∪ (B ∩ Aₙ)

because of (1-18). Hence, from (1-11),

Pr(B) = Pr(B ∩ A₁) + Pr(B ∩ A₂) + ··· + Pr(B ∩ Aₙ)        (1-19)

But from (1-17)

Pr(B ∩ Aᵢ) = Pr(B|Aᵢ) Pr(Aᵢ)

Substituting into (1-19) yields
Pr(B) = Pr(B|A₁) Pr(A₁) + Pr(B|A₂) Pr(A₂) + ··· + Pr(B|Aₙ) Pr(Aₙ)        (1-20)

The quantity Pr(B) is the total probability and is expressed in (1-20) in terms of its various conditional probabilities.

An example serves to illustrate an application of total probability. Consider a resistor carrousel containing six bins. Each bin contains an assortment of resistors as shown in Table 1-3.

Table 1-3 Resistance Values

                          Bin Numbers
Ohms         1      2      3      4      5      6    Total
10 Ω       500      0    200    800   1200   1000     3700
100 Ω      300    400    600    200    800      0     2300
1000 Ω     200    600    200    600      0   1000     2600
Totals    1000   1000   1000   1600   2000   2000     8600

If one of the bins is selected at random,¹ and a single resistor drawn from that bin at random, what is the probability that the resistor chosen will be 10 Ω? The Aᵢ events in (1-20) can be associated with the bin chosen, so that

Pr(Aᵢ) = 1/6,   i = 1, 2, 3, 4, 5, 6

since it is assumed that the choices of bins are equally likely. The event B is the selection of a 10-Ω resistor and the conditional probabilities can be related to the numbers of such resistors in each bin. Thus

Pr(B|A₁) = 500/1000 = 1/2        Pr(B|A₂) = 0/1000 = 0
Pr(B|A₃) = 200/1000 = 2/10       Pr(B|A₄) = 800/1600 = 1/2
Pr(B|A₅) = 1200/2000 = 6/10      Pr(B|A₆) = 1000/2000 = 1/2

Hence, from (1-20) the total probability of selecting a 10-Ω resistor is

Pr(B) = (1/2)(1/6) + 0·(1/6) + (2/10)(1/6) + (1/2)(1/6) + (6/10)(1/6) + (1/2)(1/6) = 0.3833

It is worth noting that the concepts of equally likely events and relative frequency have been used in assigning values to the conditional probabilities above, but that the basic relationship expressed by (1-20) is derived from the axiomatic approach.

The probabilities Pr(Aᵢ) in (1-20) are often referred to as a priori probabilities because they are the ones that describe the probabilities of the events Aᵢ before any experiment is performed. After an experiment is performed, and event B observed, the probabilities that describe the events Aᵢ are the conditional probabilities Pr(Aᵢ|B). These probabilities may be expressed in terms of those already discussed by rewriting (1-17) as

Pr(Aᵢ ∩ B) = Pr(Aᵢ|B) Pr(B) = Pr(B|Aᵢ) Pr(Aᵢ)

¹ The phrase "at random" is usually interpreted to mean "with equal probability."
The last form in the above is obtained by simply interchanging the roles of B and Aᵢ. The second equality may now be written

Pr(Aᵢ|B) = Pr(B|Aᵢ) Pr(Aᵢ) / Pr(B),   Pr(B) ≠ 0        (1-21)

into which (1-20) may be substituted to yield

Pr(Aᵢ|B) = Pr(B|Aᵢ) Pr(Aᵢ) / [Pr(B|A₁) Pr(A₁) + ··· + Pr(B|Aₙ) Pr(Aₙ)]        (1-22)

The conditional probability Pr(Aᵢ|B) is often called the a posteriori probability because it applies after the experiment is performed; and either (1-21) or (1-22) is referred to as Bayes theorem.

The a posteriori probability may be illustrated by continuing the example just discussed. Suppose the resistor that is chosen from the carrousel is found to be a 10-Ω resistor. What is the probability that it came from bin three? Since B is still the event of selecting a 10-Ω resistor, the conditional probabilities Pr(B|Aᵢ) are the same as tabulated before. Furthermore, the a priori probabilities are still 1/6. Thus, from (1-21), and the previous evaluation of Pr(B),

Pr(A₃|B) = (2/10)(1/6) / 0.3833 = 0.0869

This is the probability that the 10-Ω resistor, chosen at random, came from bin three.

Exercise 1-7.1

Using the data of Table 1-3, find the probabilities:

a) a 1000-Ω resistor that is selected came from bin 4.

b) a 10-Ω resistor that is selected came from bin 3.

Answers: 0.20000, 0.08696
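The total-probability and Bayes calculations for the resistor example are also convenient to organize as matrix operations. A short MATLAB sketch, with the bin counts of Table 1-3 entered as a matrix (the layout and variable names are ours):

   % Rows: 10 ohm, 100 ohm, 1000 ohm; columns: bins 1 through 6.
   counts = [ 500    0  200  800 1200 1000
              300  400  600  200  800    0
              200  600  200  600    0 1000 ];
   PrA   = ones(1, 6)/6;                    % a priori bin probabilities Pr(Ai)
   PrB_A = counts(1, :) ./ sum(counts, 1);  % Pr(B|Ai): 10-ohm fraction in each bin
   PrB   = sum(PrB_A .* PrA)                % total probability (1-20); 0.3833
   PrB_A(3)*PrA(3) / PrB                    % a posteriori Pr(A3|B) from (1-21); 0.0869

Replacing the first row index by 2 or 3 gives the corresponding results for the 100-Ω and 1000-Ω resistors.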
Exercise 1-7.2

A manufacturer of electronic equipment purchases 1000 ICs from supplier A, 2000 ICs from supplier B, and 3000 ICs from supplier C. Testing reveals that the conditional probability of an IC failing during burn-in is, for devices from each of the suppliers,

Pr(F|A) = 0.05,  Pr(F|B) = 0.10,  Pr(F|C) = 0.10

The ICs from all suppliers are mixed together and one device is selected at random.

a) What is the probability that it will fail during burn-in?

b) Given that the device fails, what is the probability that the device came from supplier A?

Answers: 0.09091, 0.09167

1-8 Independence

The concept of statistical independence is a very important one in probability. It was introduced in connection with the relative-frequency approach by considering two trials of an experiment, such as tossing a coin, in which it is clear that the second trial cannot depend upon the outcome of the first trial in any way. Now that a more general formulation of events is available, this concept can be extended. The basic definition is unchanged, however:

Two events, A and B, are independent if and only if

Pr(A ∩ B) = Pr(A) Pr(B)        (1-23)

In many physical situations, independence of events is assumed because there is no apparent physical mechanism by which one event can depend upon the other. In other cases, the assumed probabilities of the elementary events lead to independence of other events defined from these. In such cases, independence may not be obvious, but can be established from (1-23).

The concept of independence can also be extended to more than two events. For example, with three events, the conditions for independence are

Pr(A₁ ∩ A₂) = Pr(A₁) Pr(A₂)
Pr(A₂ ∩ A₃) = Pr(A₂) Pr(A₃)
Pr(A₁ ∩ A₃) = Pr(A₁) Pr(A₃)
Pr(A₁ ∩ A₂ ∩ A₃) = Pr(A₁) Pr(A₂) Pr(A₃)

Note that four conditions must be satisfied, and that pairwise independence is not sufficient for the entire set of events to be mutually independent. In general, if there are n events, it is necessary that

Pr(Aᵢ ∩ Aⱼ ∩ ··· ∩ Aₖ) = Pr(Aᵢ) Pr(Aⱼ) ··· Pr(Aₖ)        (1-24)

for every set of integers i, j, ..., k less than or equal to n. This implies that 2ⁿ − (n + 1) equations of the form (1-24) are required to establish the independence of n events.
One important consequence of independence is a special form of (1-16), which stated

Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B)

If A and B are independent events, this becomes

Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A) Pr(B)        (1-25)

Another result of independence is

Pr(A₁ ∩ A₂ ∩ A₃) = Pr(A₁) Pr(A₂) Pr(A₃)        (1-26)

if A₁, A₂, and A₃ are all independent. This is not true if they are independent only in pairs. In general, if A₁, A₂, ..., Aₙ are independent events, then any one of them is independent of any event formed by sums, products, and complements of the others.

Examples of physical situations that illustrate independence are most often associated with two or more trials of an experiment. However, for purposes of illustration, consider two events associated with a single experiment. Let the experiment be that of rolling a pair of dice and define event A as that of obtaining a 7 and event B as that of obtaining an 11. Are these events independent? The answer is that they cannot be independent because they are mutually exclusive: if one occurs the other one cannot. Mutually exclusive events can never be statistically independent.

As a second example consider two events that are not mutually exclusive. For the pair of dice above, define event A as that of obtaining an odd number and event B as that of obtaining an 11. The event A ∩ B is just B, since B is a subset of A. Hence, Pr(A ∩ B) = Pr(B) = Pr(11) = 2/36 = 1/18, since there are two ways an 11 can be obtained (that is, a 5 and a 6 or a 6 and a 5). Also, Pr(A) = 1/2 since half of all outcomes are odd. It follows then that

Pr(A ∩ B) = 1/18 ≠ Pr(A) Pr(B) = (1/2)(1/18) = 1/36

Thus, events A and B are not statistically independent. That this must be the case is obvious since if B occurs then A must also occur, although the converse is not true.

It is also possible to define events associated with a single trial that are independent, but these sets may not represent any physical situation. For example, consider throwing a single die and define two events as A = {1, 2, 3} and B = {3, 4}. From previous results it is clear that Pr(A) = 1/2 and Pr(B) = 1/3. The event (A ∩ B) contains a single element, {3}; hence, Pr(A ∩ B) = 1/6. Thus, it follows that

Pr(A ∩ B) = 1/6 = Pr(A) Pr(B) = (1/2)(1/3) = 1/6

and events A and B are independent, although the physical significance of this is not intuitively clear. The next section considers situations in which there is more than one experiment, or more than one trial of a given experiment, and that discussion will help clarify the matter.
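The single-die example is easy to confirm directly from the definition (1-23); a minimal MATLAB sketch (the tolerance test simply guards against floating-point rounding):

   S = 1:6;
   A = [1 2 3];  B = [3 4];
   PrA  = numel(A)/numel(S);                  % 1/2
   PrB  = numel(B)/numel(S);                  % 1/3
   PrAB = numel(intersect(A, B))/numel(S);    % 1/6
   abs(PrAB - PrA*PrB) < 1e-12                % logical 1: A and B are independent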
Exercise 1-8.1

A card is selected at random from a standard deck of 52 cards. Let A be the event of selecting an ace, and let B be the event of selecting a red card. Are these events statistically independent? Prove your answer.

Answer: Yes

Exercise 1-8.2

In the switching circuit shown below, the switches are assumed to operate randomly and independently.

[Circuit diagram showing switches A, B, C, and D; not reproduced.]

The probabilities of the switches being closed are Pr(A) = 0.1, Pr(B) = Pr(C) = 0.5, and Pr(D) = 0.2. Find the probability that there is a complete path through the circuit.

Answer: 0.0400

1-9 Combined Experiments

In the discussion of probability presented thus far, the probability space, S, was associated with a single experiment. This concept is too restrictive to deal with many realistic situations, so it is necessary to generalize it somewhat. Consider a situation in which two experiments are performed. For example, one experiment might be throwing a die and the other one tossing a coin. It is then desired to find the probability that the outcome is, say, a "3" on the die and a "tail" on the coin. In other situations the second experiment might be simply a repeated trial of the first experiment. The two experiments, taken together, form a combined experiment, and it is now necessary to find the appropriate probability space for it.

Let one experiment have a space S₁ and the other experiment a space S₂. Designate the elements of S₁ as

α₁, α₂, ..., αₙ
and those of S₂ as

β₁, β₂, ..., βₘ

Then form a new space, called the cartesian product space, whose elements are all the ordered pairs (α₁, β₁), (α₁, β₂), ..., (αᵢ, βⱼ), ..., (αₙ, βₘ). Thus, if S₁ has n elements and S₂ has m elements, the cartesian product space has mn elements. The cartesian product space may be denoted as

S = S₁ × S₂

to distinguish it from the previous product or intersection discussed in Section 1-5.

As an illustration of the cartesian product space for combined experiments, consider the die and the coin discussed above. For the die the space is

S₁ = {1, 2, 3, 4, 5, 6}

while for the coin it is

S₂ = {H, T}

Thus, the cartesian product space has 12 elements and is

S = S₁ × S₂ = {(1, H), (1, T), (2, H), (2, T), (3, H), (3, T), (4, H), (4, T), (5, H), (5, T), (6, H), (6, T)}

It is now necessary to define the events of the new probability space. If A₁ is a subset considered to be an event in S₁, and A₂ is a subset considered to be an event in S₂, then A = A₁ × A₂ is an event in S. For example, in the above illustration let A₁ = {1, 3, 5} and A₂ = {H}. The event A corresponding to these is

A = A₁ × A₂ = {(1, H), (3, H), (5, H)}

To specify the probability of event A, it is necessary to consider whether the two experiments are independent; the only cases discussed here are those in which they are independent. In such cases the probability in the product space is simply the product of the probabilities in the original spaces. Thus, if Pr(A₁) is the probability of event A₁ in space S₁, and Pr(A₂) is the probability of A₂ in space S₂, then the probability of event A in space S is

Pr(A) = Pr(A₁) Pr(A₂)        (1-27)
This result may be illustrated by data from the above example. From previous results, Pr(A₁) = 1/6 + 1/6 + 1/6 = 1/2 when A₁ = {1, 3, 5} and Pr(A₂) = 1/2 when A₂ = {H}. Thus, the probability of getting an odd number on the die and a head on the coin is

Pr(A) = (1/2)(1/2) = 1/4

It is possible to generalize the above ideas in a straightforward manner to situations in which there are more than two experiments. However, this will be done only for the more specialized situation of repeating the same experiment an arbitrary number of times.
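Product-space probabilities of the form (1-27) can be generated mechanically. A short MATLAB sketch of the die-and-coin illustration above (all names are ours; the outer product builds every pair probability at once):

   pDie  = ones(1, 6)/6;           % probabilities for S1 = {1, ..., 6}
   pCoin = [0.5 0.5];              % probabilities for S2 = {H, T}
   pProd = pDie.' * pCoin;         % 6-by-2 matrix of pair probabilities, per (1-27)
   PrA = sum(pProd([1 3 5], 1))    % Pr(A1 x A2), A1 = {1,3,5}, A2 = {H}; = 1/4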
Exercise 1-9.1

A combined experiment is performed by flipping a coin three times. The elements of the product space are HHH, HHT, HTH, etc.

a) Write all the elements of the cartesian product space.

b) Find the probability of obtaining exactly one head.

c) Find the probability of obtaining at least two tails.

Answers: 1/2, 1/4

Exercise 1-9.2

A combined experiment is performed in which two coins are flipped and a single die is rolled. The outcomes from flipping the coins are taken to be HH, TT, and HT (which is taken to be a single outcome regardless of which coin is heads and which coin is tails). The outcomes from rolling the die are the integers from one to six.

a) Write all the elements in the cartesian product space.

b) Let A be the event of obtaining two heads and a number of 3 or less. Find the probability of A.

Answer: 1/8

1-10 Bernoulli Trials

The situation considered here is one in which the same experiment is repeated n times and it is desired to find the probability that a particular event occurs exactly k of these times. For example, what is the probability that exactly four heads will be observed when a coin is tossed 10 times? Such repeated experiments are referred to as Bernoulli trials.

Consider some experiment for which the event A has a probability Pr(A) = p. Hence, the probability that the event does not occur is Pr(Ā) = q, where p + q = 1.² Then repeat this experiment n times and assume that the trials are independent; that is, that the outcome of any one trial does not depend in any way upon the outcomes of any previous (or future) trials. Next determine the probability that event A occurs exactly k times in some specific order, say in the first k trials and none thereafter. Because the trials are independent, the probability of this event is

Pr(A) Pr(A) ··· Pr(A) Pr(Ā) Pr(Ā) ··· Pr(Ā) = pᵏqⁿ⁻ᵏ

where the first k factors are Pr(A) and the remaining n − k factors are Pr(Ā). However, there are many other ways in which exactly k events could occur, because they can arise in any order. Furthermore, because of the independence, all of these other orders have exactly the same probability as the one specified above. Hence, the event that A occurs k times in any order is the sum of the mutually exclusive events that A occurs k times in some specific order, and thus, the probability that A occurs k times is simply the above probability for a particular order multiplied by the number of different orders that can occur.

It is necessary to digress at this point and briefly discuss the theory of combinations in order to be able to determine the number of different orders in which the event A can occur exactly k times in n trials. It is apparent that when one forms a sequence of length n, the first A can go in any one of the n places, the second A can go into any one of the remaining n − 1 places, and so on, leaving n − k + 1 places for the kth A. The total number of different arrangements is the product of these various possibilities; but since the k! orderings of the k As among themselves are indistinguishable, this product must be divided by k!. Thus,

(1/k!)[n(n − 1)(n − 2) ··· (n − k + 1)] = n! / [(n − k)! k!] = (ⁿₖ)        (1-28)

The quantity on the right is simply the binomial coefficient, which is usually denoted either as nCk or as (ⁿₖ).³ The latter notation is employed here.

As an example of binomial coefficients, let n = 4 and k = 2. Then

(⁴₂) = 4! / (2! 2!) = 6

and there are six different sequences in which the event A occurs exactly twice. These can be enumerated easily as

AAĀĀ, AĀAĀ, AĀĀA, ĀAAĀ, ĀAĀA, ĀĀAA

² The only justification for changing the notation from Pr(A) to p and from Pr(Ā) to q is that the p and q notation is traditional in discussing Bernoulli trials and most of the literature uses it.
³ A table of binomial coefficients is given in Appendix C.
It is now possible to write the desired probability of A occurring k times as

pₙ(k) = Pr{A occurs k times} = (ⁿₖ) pᵏqⁿ⁻ᵏ        (1-29)

As an illustration of a possible application of this result, consider a digital computer in which the binary digits (0 or 1) are organized into "words" of 32 digits each. If there is a probability of 10⁻³ that any one binary digit is incorrectly read, what is the probability that there is one error in an entire word? For this case, n = 32, k = 1, and p = 10⁻³. Hence,

Pr{one error in a word} = p₃₂(1) = (³²₁)(10⁻³)¹(0.999)³¹ = 32(0.999)³¹(10⁻³) ≈ 0.031

It is also possible to use (1-29) to find the probability that there will be no error in a word. For this, k = 0 and (³²₀) = 1. Thus,

Pr{no error in a word} = p₃₂(0) = (³²₀)(10⁻³)⁰(0.999)³² = (0.999)³² ≈ 0.9685

There are many other practical applications of Bernoulli trials. For example, if a system has n components and there is a probability p that any one of them will fail, the probability that one and only one component will fail is

Pr{one failure} = pₙ(1) = (ⁿ₁) pqⁿ⁻¹
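Expression (1-29) is straightforward to evaluate numerically. The sketch below recomputes the word-error probabilities of the example above in MATLAB; the function handle pn is our own shorthand for (1-29):

   pn = @(n, k, p) nchoosek(n, k) * p^k * (1-p)^(n-k);   % pn(k) of (1-29)
   p = 1e-3;                     % probability of a single bit error
   pn(32, 1, p)                  % Pr{one error in a word};  approx 0.031
   pn(32, 0, p)                  % Pr{no error in a word};   approx 0.9685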
In some cases, one may be interested in determining the probability that event A occurs at least k times, or the probability that it occurs no more than k times. These probabilities may be obtained by simply adding the probabilities of all the outcomes that are included in the desired event. For example, if a coin is tossed four times, what is the probability of obtaining at least two heads? For this case, p = q = 1/2 and n = 4. From (1-29) the probability of getting two heads (that is, k = 2) is

p₄(2) = (⁴₂)(1/2)²(1/2)² = 6(1/4)(1/4) = 3/8

Similarly, the probability of three heads is

p₄(3) = (⁴₃)(1/2)³(1/2)¹ = 4(1/16) = 1/4

and the probability of four heads is

p₄(4) = (⁴₄)(1/2)⁴(1/2)⁰ = 1/16

Hence, the probability of getting at least two heads is

Pr{at least two heads} = p₄(2) + p₄(3) + p₄(4) = 3/8 + 1/4 + 1/16 = 11/16

The general formulation of problems of this kind can be expressed quite easily, but there are several different situations that arise. These may be tabulated as follows:

Pr{A occurs less than k times in n trials} = pₙ(0) + pₙ(1) + ··· + pₙ(k − 1)
Pr{A occurs more than k times in n trials} = pₙ(k + 1) + pₙ(k + 2) + ··· + pₙ(n)
Pr{A occurs no more than k times in n trials} = pₙ(0) + pₙ(1) + ··· + pₙ(k)
Pr{A occurs at least k times in n trials} = pₙ(k) + pₙ(k + 1) + ··· + pₙ(n)

A final comment in regard to Bernoulli trials has to do with evaluating pₙ(k) when n is large. Since the binomial coefficients and the large powers of p and q become difficult to evaluate in such cases, it is often necessary to seek simpler, but approximate, ways of carrying out the calculation. One such approximation, known as the DeMoivre-Laplace theorem, is useful if npq ≫ 1 and if |k − np| is on the order of, or less than, √(npq). This approximation is

pₙ(k) = (ⁿₖ) pᵏqⁿ⁻ᵏ ≈ (1/√(2πnpq)) exp[−(k − np)²/(2npq)]        (1-30)

The DeMoivre-Laplace theorem has additional significance when continuous probability is considered in a subsequent chapter. However, a simple illustration of its utility in discrete probability is worthwhile. Suppose a coin is tossed 100 times and it is desired to find the probability of k heads, where k is in the vicinity of 50. Since p = q = 1/2 and n = 100, (1-30) yields

pₙ(k) ≈ (1/√(50π)) exp[−(k − 50)²/50]

for k values ranging (roughly) from 40 to 60. This is obviously much easier to evaluate than trying to find the binomial coefficient (¹⁰⁰ₖ) for the same range of k values.
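The quality of the approximation is easy to examine numerically. The following MATLAB sketch compares the exact binomial probabilities with (1-30) for the coin example (n = 100, p = 1/2); note that nchoosek may warn that such large results are not exact to the last digit, but the comparison is unaffected at this accuracy:

   n = 100;  p = 0.5;  q = 1 - p;
   k = 40:60;
   exact  = arrayfun(@(kk) nchoosek(n, kk), k) .* p.^k .* q.^(n-k);
   approx = exp(-(k - n*p).^2/(2*n*p*q)) / sqrt(2*pi*n*p*q);
   [k.' exact.' approx.']       % columns 2 and 3 agree to within about 1%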
1-11 Applications of Bernoulli Trials

Because of the extensive use of Bernoulli trials in many engineering applications, it is useful to examine a few of these applications in more detail. Three such applications are considered here. The first application pertains to digital communication systems in which special types of coding are used in order to reduce errors in the received signal. This is usually referred to as error-correction coding. The second considers a radar system that employs a type of target detection known as binary integration or double threshold detection. Finally, the third example is one that arises in connection with system reliability.

Digital communication systems transmit messages that have been converted into sequences of binary digits (bits) that have values of either 0 or 1. For practical implementation reasons it is convenient to separate these sequences into blocks, each containing the same number of bits. Each block is usually referred to as a word.

Any transmitted word is received correctly only if all the bits in that word are detected correctly. Because of noise, interference, or multipath in the communication channel, one or more of the bits in any given word may be received incorrectly and, thus, suggest that a different word was transmitted. To avoid errors of this type it is common to increase the length of the word by adding additional bits (known as check digits) that are uniquely related to the actual message bits. Appropriate processing at the receiver then makes it possible to correctly decode the word provided that the number of bits received in error is not greater than some specified value. For example, a double-error-correcting code will produce the correct message word if no more than two bits are received in error in each code word.

To illustrate the effectiveness of such an approach, assume that each message word contains five bits and is transmitted, without error-correction coding, in a channel in which the probability of any one bit being received in error is 0.01. Because there is no error-correction coding, the probability that a given word is received correctly is just the probability that no bits are received in error. The probability of this event, from (1-29), is

Pr(Correct Word) = p₅(0) = (⁵₀)(0.01)⁰(1 − 0.01)⁵ = 0.951

Next assume that a double-error-correcting code exists in which 5 check digits are added to the 5 message digits, so that each transmitted word is now 10 bits long. The message word will be correctly decoded now if there are no bits received in error, one bit received in error, or two bits received in error. The sum of the probabilities of these three events is the probability that a given message word is correctly decoded. Hence,

Pr(Correct Word) = (¹⁰₀)(0.01)⁰(1 − 0.01)¹⁰ + (¹⁰₁)(0.01)¹(1 − 0.01)⁹ + (¹⁰₂)(0.01)²(1 − 0.01)⁸ = 0.9999

It is clear that the probability of correctly receiving this message word has been greatly increased.
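A numerical check of these two word probabilities, again using our pn shorthand for (1-29) and the bit-error probability of 0.01 assumed above:

   pn = @(n, k, p) nchoosek(n, k) * p^k * (1-p)^(n-k);
   p = 0.01;
   uncoded = pn(5, 0, p)                                   % approx 0.951
   coded   = pn(10, 0, p) + pn(10, 1, p) + pn(10, 2, p)    % approx 0.9999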
A radar system transmits short pulses of RF energy and receives the reflected pulses, along with noise, in a suitable receiver. To improve the probability of detecting the reflected pulses, it is customary to base the detection on a number of pulses rather than just one. Although there are optimum techniques for processing such a sequence of received pulses, a simple suboptimum technique involves the use of two thresholds. If the received signal pulse, or noise, or both, exceed the first threshold, the observation is declared to result in a 1. If the first threshold is not exceeded, the observation is declared to result in a 0. After n observations (i.e., Bernoulli trials), if the number of 1s is equal to or greater than some value m ≤ n, a detection is declared. The value of m is the second threshold and is selected on the basis of some criterion of performance. Because we are adding 1s and 0s, this procedure is referred to as binary integration.

The two aspects of performance that are usually of greatest importance are the probability of detection and the probability of false alarm. The probability of detection is the probability that a real target will actually be detected and is desired to be as close to one as possible. The probability of false alarm is the probability that a detection will be declared when there is only noise into the receiver and is desired to be as close to zero as possible. Using the results in the previous section, the probability of detection can be written as

Pr(Detection) = Σₖ₌ₘⁿ (ⁿₖ) pₛᵏ (1 − pₛ)ⁿ⁻ᵏ

where pₛ is the probability that any one signal pulse will exceed the first threshold. Similarly, the probability of false alarm becomes

Pr(False alarm) = Σₖ₌ₘⁿ (ⁿₖ) pₙᵏ (1 − pₙ)ⁿ⁻ᵏ

where pₙ is the probability that noise alone will exceed the threshold in any one observation. Note that these two expressions are the same except for the value of the first threshold probabilities that are used.

To illustrate this technique, assume that pₛ = 0.4 and pₙ = 0.1. (Methods for determining these values are considered in subsequent chapters.) Although there are methods for determining the best value of m to use for any given value of n, arbitrarily select m to be the nearest integer to n/4. The resulting probabilities of detection and false alarm are shown in Figure 1-8 as a function of n, the number of Bernoulli trials. (The ragged nature of these curves is a consequence of requiring m to be an integer.) Note that the probability of detection increases and the probability of false alarm decreases as the number of pulses integrated, n, is made larger. Thus, larger n improves the radar performance. The disadvantage of this, of course, is that it takes longer to make a detection.
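Points on the curves of Figure 1-8 can be reproduced from the two summations above. A MATLAB sketch that evaluates a few of them (pₛ, pₙ, and the rule m = round(n/4) are taken from the example; the helper binTail is our own):

   ps = 0.4;  pn = 0.1;           % single-pulse threshold probabilities
   binTail = @(n, m, p) sum(arrayfun(@(k) nchoosek(n, k)*p^k*(1-p)^(n-k), m:n));
   for n = [10 50 100]
       m = round(n/4);            % second threshold
       fprintf('n = %3d: Pr(detection) = %.4f, Pr(false alarm) = %.2e\n', ...
           n, binTail(n, m, ps), binTail(n, m, pn));
   end

As expected, the detection probability climbs toward one and the false-alarm probability falls toward zero as n grows.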
[Figure 1-8 Result of binary integration in a radar system: Pr(detection) and Pr(false alarm), on a probability scale from 0.1 to 99.9 percent, plotted against the number of Bernoulli trials, n = 0 to 100.]

The third application of Bernoulli trials to be discussed involves the use of redundancy to improve system reliability. Components in a complex and expensive system that are essential to its operation, and difficult or impossible to replace, are often replicated in the system so that if one component fails another one may continue to function. A good example of this is found in communication satellites, in which each satellite carries a number of amplifiers that can be switched into various configurations as required. These amplifiers are usually traveling wave tubes (TWT) at frequencies above 6 GHz, although solid-state amplifiers are sometimes used at lower frequencies. As amplifiers die through the years, the amount of traffic that can be carried by the satellite is reduced until there is at last no useful transmission capability. Clearly, replacing dead amplifiers in a satellite is not an easy task.

To illustrate how redundancy can extend the useful life of the communication satellite, assume that a given satellite contains 24 amplifiers, with 12 being used for transmission in one direction and 12 for transmission in the reverse direction, and they are always used in pairs to accommodate two-way traffic on every channel. Assume further that the probability that any one amplifier will fail within the first 5 years is 0.6, and that the two amplifiers that make up a pair are always the same. Hence, the probability that both amplifiers in a given pair are still functioning after 5 years is

Pr(Good Pair) = (1 − 0.6)² = 0.16

The probability that one or more of the 12 amplifier pairs are still functioning after 5 years is simply 1 minus the probability that all pairs have failed. From the previous equation, the probability that any one pair has failed is 0.84. Thus,

Pr(One or More Good Pairs) = 1 − 0.84¹² = 0.877

This result assumes that the two amplifiers that make up a pair are always the same and that it is not possible to switch amplifiers to make pairs with different combinations. In actuality, such switching is possible, so that the last good pair of amplifiers can be any two of the original 24 amplifiers. Now the probability that there are one or more good pairs is simply 1 minus the probability that 23 or more of the 24 amplifiers have failed. This is

Pr(One or More Good Pairs) = 1 − [(²⁴₂₃)(0.6)²³(0.4) + (0.6)²⁴] = 0.9999
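Both reliability figures follow in a few lines of MATLAB, using the failure probability of 0.6 assumed above (the second line follows the switching calculation just given):

   pFail = 0.6;
   1 - (1 - (1 - pFail)^2)^12                               % fixed pairs; approx 0.877
   1 - (nchoosek(24, 23)*pFail^23*(1 - pFail) + pFail^24)   % any-pair switching; approx 0.9999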
Notice the significant improvement in reliability that has resulted from adding the amplifier-switching capability to the communications satellite. Note also that the above calculation is much easier than trying to calculate the probability that two or more amplifiers are good.

Exercise 1-10.1

A pair of dice are tossed 10 times.

a) Find the probability that a 6 will occur exactly 4 times.

b) Find the probability that a 10 will occur 2 times.

c) Find the probability that a 12 will occur more than once. Hint: Subtract the probability of a 12 occurring once or not at all from 1.0.

Answers: 0.1558, 0.0299, 0.0430

Exercise 1-10.2

A manufacturer of electronic equipment buys 1000 ICs for which the probability of one IC being bad is 0.01. Using the DeMoivre-Laplace theorem determine

a) What is the probability that exactly 10 of the ICs are bad?

b) What is the probability that none of the ICs is bad?

c) What is the probability that exactly one of the ICs is bad?

Answers: 0.1268, 4.36 × 10⁻⁴, 4.32 × 10⁻⁵

PROBLEMS

Note that the first two digits of each problem number correspond to the section number in which the appropriate material is discussed.

1-1.1 A six-cell storage battery having a nominal terminal voltage of 12 V is connected in series with an ammeter and a resistor labeled 6 Ω.
a) List as many random quantities as you can for this circuit.

b) If the battery voltage can have any value between 10.5 and 12.5, the resistor can have any value within 5% of its marked value, and the ammeter reads within 2% of the true current, find the range of possible ammeter readings. Neglect ammeter resistance.

c) List any nonrandom quantities you can for this circuit.

1-1.2 In determining the probability characteristics of printed English, it is common to consider a 27-letter alphabet in which the space between words is counted as a letter. Punctuation is usually ignored.

a) Count the number of times each of the 27 letters appears in this problem.

b) On the basis of this count, deduce the most probable letter, the next most probable letter, and the least probable letter (or letters).

1-2.1 For each of the following random experiments, list all of the possible outcomes and state whether these outcomes are equally likely.

a) Flipping two coins.

b) Observing the last digit of a telephone number selected at random from the directory.

c) Observing the sum of the last two digits of a telephone number selected at random from the directory.

1-2.2 State whether each of the following defined events is an elementary event.

a) Obtaining a seven when a pair of dice are rolled.

b) Obtaining two heads when three coins are flipped.

c) Obtaining an ace when a card is selected at random from a deck of cards.

d) Obtaining a two of spades when a card is selected at random from a deck of cards.

e) Obtaining a two when a pair of dice are rolled.

f) Obtaining three heads when three coins are flipped.

g) Observing a value less than ten when a random voltage is observed.
h) Observing the letter e sixteen times in a piece of text.

1-4.1 If a die is rolled, determine the probability of each of the following events.

a) Obtaining the number 5.

b) Obtaining a number greater than 3.

c) Obtaining an even number.

1-4.2 If a pair of dice are rolled, determine the probability of each of the following events.

a) Obtaining a sum of 11.

b) Obtaining a sum less than 5.

c) Obtaining a sum that is an even number.

1-4.3 A box of unmarked ICs contains 200 hex inverters, 100 dual 4-input positive-AND gates, 50 dual J-K flip flops, 25 decade counters, and 25 4-bit shift registers.

a) If an IC is selected at random, what is the probability that it is a dual J-K flip flop?

b) What is the probability that an IC selected at random is not a hex inverter?

c) If the first IC selected is found to be a 4-bit shift register, what is the probability that the second IC selected will also be a 4-bit shift register?

1-4.4 In the IC box of Problem 1-4.3 it is known that 10% of the hex inverters are bad, 15% of the dual 4-input positive-AND gates are bad, 18% of the dual J-K flip flops are bad, and 20% of the decade counters and 4-bit shift registers are bad.

a) If an IC is selected at random, what is the probability that it is both a decade counter and good?

b) If an IC is selected at random and found to be a J-K flip flop, what is the probability that it is good?

c) If an IC is selected at random and found to be good, what is the probability that it is a decade counter?

1-4.5 A company manufactures small electric motors having horsepower ratings of 0.1, 0.5, or 1.0 horsepower and designed for operation with 120 V single-phase ac, 240 V single-phase ac, or 240 V three-phase ac. The motor types can be distinguished only
by their nameplates. A distributor has on hand 3000 motors in the quantities shown in the table below.

Horsepower   120 V ac   240 V ac   240 V 3φ
0.1               900        400          0
0.5               200        500        100
1.0               100        200        600

One motor is discovered without a nameplate. For this motor determine the probability of each of the following events.

a) The motor has a horsepower rating of 0.5 hp.

b) The motor is designed for 240 V single-phase operation.

c) The motor is 1.0 hp and is designed for 240 V three-phase operation.

d) The motor is 0.1 hp and is designed for 120 V operation.

1-4.6 In Problem 1-4.5, assume that 10% of the motors labeled 120 V single-phase are mismarked and that 5% of the motors marked 240 V single-phase are mismarked.

a) If a motor is selected at random, what is the probability that it is mismarked?

b) If a motor is picked at random from those marked 240 V single-phase, what is the probability that it is mismarked?

c) What is the probability that a motor selected at random is 0.5 hp and mismarked?

1-4.7 A box contains 25 transistors, of which 4 are known to be bad. A transistor is selected at random and tested.

a) What is the probability that it is bad?

b) If the first transistor tests bad, what is the probability that a second transistor selected at random will also be bad?

c) If the first transistor tested is good, what is the probability that the second transistor selected at random will be bad?

1-4.8 A traffic survey on a busy highway reveals that one of every four vehicles is a truck. This survey also established that one-eighth of all automobiles are unsafe to drive and one-twentieth of all trucks are unsafe to drive.
