Statistics for business and financial economics


Chapter 21
Statistical Decision Theory: Methods and Applications

Chapter Outline
21.1 Introduction
21.2 Four Key Elements of a Decision
21.3 Decisions Based on Extreme Values
21.4 Expected Monetary Value and Utility Analysis
21.5 Bayes' Strategies
21.6 Decision Trees and Expected Monetary Values
21.7 Mean and Variance Trade-Off Analysis
21.8 The Mean and Variance Method for Capital Budgeting Decisions
21.9 Summary
Questions and Problems
Appendix 1: Using the Spreadsheet in Decision-Tree Analysis
Appendix 2: Graphical Derivation of the Capital Market Line
Appendix 3: Present Value and Net Present Value
Appendix 4: Derivation of Standard Deviation for NPV

Key Terms
Statistical decision theory; Bayesian decision statistics; decision theory; actions; alternatives; states of nature; event; outcomes; payoff; probability; prior probability; posterior probability; maximin criterion; worst outcome; expected monetary value; utility analysis; total utility; marginal utility; utility function; risk-averse; risk-neutral; risk lover; expected utility; decision node; event node; systematic risk

C.-F. Lee et al., Statistics for Business and Financial Economics, DOI 10.1007/978-1-4614-5897-5_21, © Springer Science+Business Media New York 2013
Key Terms (continued)
Minimax regret criterion; market risk; capital market line; market portfolio; Sharpe investment performance measure; lending portfolio; borrowing portfolio; capital asset pricing model; market risk premium; Treynor investment performance measure; present value; net present value; statistical distribution method; Jensen investment performance measure

21.1 Introduction

In business, decision making is at the heart of management. Using statistics as a guide, this chapter introduces and examines decision making in business and economics in terms of statistical decision theory. The branch of statistics called statistical decision theory is sometimes termed Bayesian decision statistics, in honor of research presented over 200 years ago by the English philosopher the Reverend Thomas Bayes (1702–1761). Nevertheless, statistical decision theory is a new branch of statistics. Propelled by research by Howard Raiffa, John Pratt, and Leonard Savage (among others), it developed rapidly in the 1950s, and it now occupies an important place in the statistical literature. In contrast to classical statistics, where the focus is on estimation, constructing intervals, and hypothesis testing, statistical decision theory focuses on the process of making a decision. In other words, it is concerned with the situation in which an individual, group, or corporation has several feasible alternative courses of action in an uncertain environment.

This chapter discusses methods for selecting the best management alternatives by using statistical decision theory. Here are a few examples of statistical decision problems associated with business decision making:
1. Manufacturers must decide what products to produce.
2. Portfolio managers must decide what investments to purchase while maintaining a portfolio consistent with investors' risk-return preference.
3. Oil company managers must determine, with the help of geologists, where to drill.
4. Corporate managers must choose from among alternative investment projects under conditions of uncertainty.

In this final chapter of the book, we will discuss a variety of statistical methods, collectively referred to as decision theory, for dealing with such decision-making problems. First, we present the four key elements of making a decision on the basis of extreme values, expected monetary value, and utility analysis. Then we explore Bayes' strategies and decision trees of expected monetary values in terms of statistical concepts and methodology. We also propose a mean and variance trade-off analysis to replace expected utility analysis in business decision making. Finally, the mean and variance method is applied in the context of a capital budgeting decision. Appendix 1 discusses how the spreadsheet can be used to do decision-tree analysis. Appendix 2 presents the graphical derivation of the capital market line; Appendix 3 discusses present value and the net present value (NPV) decision rule; and the derivation of the standard deviation for NPV is presented in Appendix 4.

21.2 Four Key Elements of a Decision

Four elements are needed to analyze a decision-making problem: actions, states of nature, outcomes, and probabilities.
1. The choices available to the decision maker are called actions (or sometimes alternatives). For instance, in a person's decision whether to carry an umbrella, the possible actions are to carry the umbrella and not to carry the umbrella. Although our approach assumes that the decision maker can specify a finite number of mutually exclusive and exhaustive actions, it is also possible to analyze problems with an infinite number of actions.
2. The uncertain elements in a problem are referred to as states of nature. The states of nature are simply events. Like actions, states of nature can be either finite or infinite in number. The states of nature (events) in our umbrella decision are rain and no rain.
3. An outcome is a consequence of each combination of an action and an event (state of nature). The possible outcomes in our umbrella decision are stay dry, be burdened unnecessarily, get wet, and be dry and free. The reward or penalty attached to each outcome is termed the payoff, which in business decisions is usually expressed in monetary terms. The relationships among the actions, events, and outcomes involved in a decision process can be presented in a decision tree (see Fig. 21.1).¹
4. Probability, which we discussed in Chap. 5, is the chance that an event will occur. The probability of each event may be set by referring to historical data, expert opinion, or any other factor (including personal judgment) the decision maker wishes to use.
These probabilities are initial probabilities, so they are called prior probabilities. (Revised probabilities based on the prior probabilities are called posterior probabilities; they will be discussed in detail in Sect. 21.5.) In our umbrella example, the probability is the chance of rain as assigned in accordance with the weather forecast.

Armed with all the foregoing information, the decision maker can decide whether to carry an umbrella.

In the next section, we first discuss alternative business decision rules without probabilities and then introduce the probability variable into the decision process.

¹ Decision trees were introduced in Sect. 5.3 of Chap. 5. They will be discussed in terms of expected monetary values in Sect. 21.6.
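The four elements of the umbrella decision can be written down as plain data; a small sketch (the action and outcome labels are taken from the text above):

```python
# Actions x states of nature -> outcomes, for the umbrella decision
actions = ["carry umbrella", "don't carry"]
states = ["rain", "no rain"]

outcome = {
    ("carry umbrella", "rain"):    "stay dry",
    ("carry umbrella", "no rain"): "burdened unnecessarily",
    ("don't carry", "rain"):       "get wet",
    ("don't carry", "no rain"):    "dry and free",
}

# Every action-state pair has exactly one outcome
print(outcome[("don't carry", "rain")])   # get wet
```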
Fig. 21.1 Decision tree for the umbrella decision

Table 21.1 Data for Example 21.1

                      State of nature
Product   Recession   Flat   Boom   Minimum payoff
1         -$10        -$5    $20    -$10
2         -$15        $0     $15    -$15
3         -$7         -$1    $25    -$7
4         -$3         $2     $17    -$3

21.3 Decisions Based on Extreme Values

Most of the decision rules discussed in this chapter require the specification of probabilities for the states of nature. However, two strategies do not. The maximin criterion maximizes the minimum payoff; the minimax regret criterion minimizes the difference between the optimal payoff and the actual outcome.

21.3.1 Maximin Criterion

Using this criterion, we first consider the worst possible outcome for each action and then choose the action with the highest of the minimum payoffs (hence "maximin"). The worst outcome is simply the smallest payoff that could conceivably result whatever state of nature happens. Thus, the maximin criterion is a decision rule for born pessimists. Pessimists tend to assume nature is against them no matter what activity they choose. (Some even go so far as to be convinced that they can guarantee sunny weather by carrying an umbrella!)

Example 21.1 Applying the Maximin Criterion to Different Possible Economic Conditions. Suppose a firm can produce any one of four products, 1, 2, 3, and 4. The four products, then, are the possible actions. The state of nature (which in this case is the state of the economy) has three possibilities: recession, flat, and boom. The payoff, or outcome, is whatever profit results. This information is presented in Table 21.1.

In this example, the minimum payoff is -$10 for product 1, -$15 for product 2, -$7 for product 3, and -$3 for product 4. Product 4 has the highest of these minimum payoffs (the smallest loss) in terms of the maximin criterion. Therefore, if it subscribes to the maximin criterion, this pessimistic firm will choose to produce product 4.
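The maximin choice in Table 21.1 can be reproduced with a short script (a sketch; the payoffs are those given above):

```python
# Maximin criterion: find the worst payoff for each action,
# then pick the action whose worst payoff is largest.
payoffs = {                  # product -> (recession, flat, boom), from Table 21.1
    1: (-10, -5, 20),
    2: (-15, 0, 15),
    3: (-7, -1, 25),
    4: (-3, 2, 17),
}

worst = {product: min(row) for product, row in payoffs.items()}
best_product = max(worst, key=worst.get)

print(worst)         # {1: -10, 2: -15, 3: -7, 4: -3}
print(best_product)  # 4, matching the choice in the text
```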
Table 21.2 Data for Example 21.2

            State of nature
            Rain       No rain      Minimum payoff
Indoors     $100,000   $100,000     $100,000
Outdoors    $0         $1,000,000   $0

Table 21.3 Choice of two stocks in terms of rates of return by the maximin criterion

                  State of nature
Stock             Recession (%)   Flat (%)   Boom (%)   Minimum payoff (%)
A (growth type)   -5              8          15         -5
B (income type)   2               4          7          2

Example 21.2 Applying the Maximin Criterion in Rock Music Promotion. Suppose a rock music promoter has the option of booking a major group at the local indoor stadium, which has a seating capacity of 17,000, or at a 100,000-seat outdoor stadium. Obviously, rain is a major concern for the promoter. The possible actions are holding the concert indoors and holding it outdoors; the states of nature are rain and no rain. The payoffs are the profits. These factors are summarized in Table 21.2. By the maximin criterion, the concert should be held indoors, because the minimum profit for that strategy is $100,000, whereas it is $0 for holding it outdoors.

The main problem with the maximin criterion is that it does not take into account the probabilities of the states of nature. In the concert example, suppose the concert is to be held in Arizona, where the likelihood of rainfall is quite low. Because the probability of rain is small, it may be better to hold the concert outside.

Another problem with this method is that it ignores every payoff except the worst one. For example, assume that an investor must choose between two stocks and that the states of nature are a recession, a flat economy, and a boom (see Table 21.3). Rates of return are the payoffs. The maximin criterion would dictate choosing stock B, because its minimum payoff is 2 %. However, although stock A has a lower rate of return (-5 %) in the recession, its rates of return are much higher when the economy is flat or booming. Of course, the chances of recession, flat economy, and boom are not necessarily equal. Hence, the decision method described in Table 21.3 is relatively restrictive. Later in this chapter, we will explicitly take into account the probability of occurrence of different states of nature.

21.3.2 Minimax Regret Criterion

Under the minimax regret criterion, the best action is the one that minimizes the maximum regret across decisions. When the decision maker aims to maximize the benefit, the regret equals the difference between the optimal payoff and the actual payoff. In Example 21.1, the best outcome in the recession is a loss of $3 for product 4, so the regret for product 1 if a recession occurs is -$3 - (-$10) = $7, and for product 2 the regret is -$3 - (-$15) = $12. In the flat state of nature, the regret for product 1 is $2 - (-$5) = $7, and in the boom state of nature, the regret for product 1 is $25 - $20 = $5. The regrets for all four products under the three economic conditions are presented in Table 21.4.

Table 21.4 Regret table

          State of nature
Product   Recession   Flat   Boom   Maximum regret
1         7           7      5      7
2         12          2      10     12
3         4           3      0      4
4         0           0      8      8

The best product under this criterion is the one that minimizes the maximum regret (hence "minimax"). Thus, product 3 is the best choice because its maximum regret, 4, is the smallest.

Example 21.3 Applying the Minimax Regret Criterion. The following table gives the regrets for the concert example:

           State of nature
           Rain      No rain   Maximum regret
Indoor     0         900,000   900,000
Outdoor    100,000   0         100,000

By this criterion the best choice is the outdoor concert, because it has the minimum maximum regret ($100,000). Different methods, it seems, can lead to different answers.

The methods presented in this chapter are only aids in decision making. The decision maker must rely partly on personal judgment when making decisions. In our concert example, for instance, the promoter must also take into consideration such factors as the probability of rain, the expected attendance, and ticket prices.

In the following sections, we will discuss adding probability information to statistical decision theory.

21.4 Expected Monetary Value and Utility Analysis

In this section, we discuss both the expected monetary value criterion and utility analysis through probability information. We also examine the application of these techniques in decision making.
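Looking back at Sect. 21.3.2, the regret table (Table 21.4) and the minimax regret choice can be verified with a few lines of code (a sketch; payoffs from Table 21.1):

```python
# Minimax regret: regret = best payoff in that state - actual payoff;
# choose the action with the smallest maximum regret.
payoffs = {                  # product -> (recession, flat, boom), from Table 21.1
    1: (-10, -5, 20),
    2: (-15, 0, 15),
    3: (-7, -1, 25),
    4: (-3, 2, 17),
}

best_by_state = [max(col) for col in zip(*payoffs.values())]   # [-3, 2, 25]
regrets = {p: [best - pay for best, pay in zip(best_by_state, row)]
           for p, row in payoffs.items()}
max_regret = {p: max(r) for p, r in regrets.items()}
choice = min(max_regret, key=max_regret.get)

print(regrets[1])   # [7, 7, 5], the first row of Table 21.4
print(choice)       # 3, as in the text
```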
21.4.1 The Expected Monetary Value Criterion

The monetary value contributed by a particular state of nature is calculated by multiplying the probability of that state by the payoff of the action under that state. If a decision maker has H possible actions, A1, A2, . . ., AH, and is faced with M states of nature, then the expected monetary value associated with the ith action, EMV(Ai), can be obtained by summing the monetary values over all states of nature:

    EMV(Ai) = Σ(j=1 to M) Pj Mij,   i = 1, 2, . . ., H                (21.1)

where
    Pj = probability associated with state of nature j, with Σ(j=1 to M) Pj = 1
    Mij = payoff corresponding to the ith action and the jth state of nature

For example, suppose economists estimate the probability of a recession to be .2, that of a flat economy to be .5, and that of a boom to be .3, as shown in Table 21.5.

Table 21.5 State of economy and payoff

                           State of nature
                           Recession   Flat   Boom   EMV
Probability of occurring   .2          .5     .3
Product 1                  -$20        $0     $15    $.5
Product 2                  -$5         -$2    $5     -$.5
Product 3                  -$10        $1     $25    $6.0
Product 4                  -$1         $5     $0     $2.3

The payoff (profit) and EMV related to each product, listed in the last column of Table 21.5, can be calculated as

    EMV1 = (.2)(-20) + (.5)(0) + (.3)(15) = $.5
    EMV2 = (.2)(-5) + (.5)(-2) + (.3)(5) = -$.5
    EMV3 = (.2)(-10) + (.5)(1) + (.3)(25) = $6.0
    EMV4 = (.2)(-1) + (.5)(5) + (.3)(0) = $2.3

The best alternative is the one that maximizes the EMV. In this example, the product with the highest EMV is product 3, with an EMV of $6.0.

Example 21.4 Applying the EMV Criterion to Pricing a New Product. A marketing manager who is responsible for pricing a new product must decide which of the following three alternative pricing strategies to use:

    A1 (Skim-pricing strategy)   $15.50/unit
    A2 (Intermediate price)      $12.00/unit
    A3 (Penetration strategy)    $.50/unit
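As a check, the product EMVs in Table 21.5 can be reproduced with a short script (a sketch; probabilities and payoffs as given above):

```python
# Expected monetary value (Eq. 21.1): EMV_i = sum over j of P_j * M_ij
probs = [0.2, 0.5, 0.3]      # P(recession), P(flat), P(boom)
payoffs = {                   # product -> payoffs by state, from Table 21.5
    1: (-20, 0, 15),
    2: (-5, -2, 5),
    3: (-10, 1, 25),
    4: (-1, 5, 0),
}

emv = {p: sum(pj * m for pj, m in zip(probs, row)) for p, row in payoffs.items()}
best = max(emv, key=emv.get)

print(emv[3])  # 6.0
print(best)    # 3, the product with the highest EMV
```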
The payoff results given in Table 21.6 are the total net income associated with each state of nature. In addition, the probability that each state of nature will occur is given at the bottom of the table.

Table 21.6 State of nature and payoff (thousands of dollars)

                           State of nature
Alternative                Light demand, S1   Moderate demand, S2   Heavy demand, S3
A1                         100                60                    -60
A2                         60                 110                   -30
A3                         -50                0                     90
Probability of occurring   .70                .20                   .10

The EMV (net income) for each alternative is calculated as follows:

    EMV(A1) = (.7)(100) + (.2)(60) + (.1)(-60) = $76
    EMV(A2) = (.7)(60) + (.2)(110) + (.1)(-30) = $61
    EMV(A3) = (.7)(-50) + (.2)(0) + (.1)(90) = -$26

Alternative A1 offers the highest EMV. It is the best choice if the decision maker's goal is to maximize expected return. However, if the decision maker wants to minimize the potential loss, then alternative A2, with a maximum loss of $30, is the best choice. In Sect. 21.6, after we discuss Bayes' strategies, this example will be extended to allow for sample information.

Example 21.5 Applying the EMV Criterion to Selecting a Stock. A portfolio manager predicts the probabilities (Pj) shown in Table 21.7 for the rates of return on four different stocks associated with three different economic conditions. The rates of return for the ith stock (i = 1, 2, 3, 4) are the payoffs. Using Eq. 21.1, we can calculate the EMV for each stock; all are listed in the last column of Table 21.7. By the EMV criterion, the best choice is stock 4, with an expected rate of return of

    (.3)(.20) + (.5)(.03) + (.2)(-.25) = 2.5 %

The problem with the EMV criterion is that it does not take the element of risk into consideration. The following example illustrates this. Assume that a coin is flipped. In game 1, a head pays $2 and a tail pays $0. In game 2, a head pays $1 million and a tail pays -$750,000:

        State of nature
Game    Heads        Tails       EMV
1       $2           $0          $1
2       $1 million   -$750,000   $125,000

In the first game, the EMV is ($2)(.5) + ($0)(.5) = $1; in the second game, it is ($1,000,000)(.5) + (-$750,000)(.5) = $125,000. By the EMV criterion, the best choice is game 2. However, a very high degree of risk is associated with game 2:
$750,000 will be lost if a tail results. In contrast, there is no possibility of loss in game 1. Anyone who chooses game 2 must be prepared to accept a large negative payoff as the price of the chance of earning a large payoff. In doing so, he is expressing a preference for risk. By contrast, anyone who chooses game 1 accepts a lower expected payoff in order to eliminate the chance of experiencing a large loss, and thus expresses an aversion to risk. The next section shows how we can employ utility analysis to take an individual's attitude toward risk into account. In Sect. 21.7, we will use mean and variance trade-off analysis to take this kind of investigation further.

Table 21.7 States of market and payoffs

      State of nature
      Boom   Flat   Recession   EMV
Pj    .3     .5     .2
R1    10 %   0 %    -5 %        2 %
R2    15 %   -2 %   -10 %       1.5 %
R3    7 %    2 %    -5 %        2.1 %
R4    20 %   3 %    -25 %       2.5 %

21.4.2 Utility Analysis

In all the decisions we have looked at, the decision criterion of choice was the maximization of expected monetary value. That is, an individual or corporation believes that the action offering the highest expected monetary value is the preferred course. However, this kind of decision rule does not allow for risk. For example, investors who, in spreading their investments over a portfolio of stocks, accept a lower expected return in order to reduce the chance of a large loss are expressing an aversion to risk. Hence, investors' or managers' attitudes toward risk are important in their decision making.

Utility analysis gives us information on the decision maker's attitude toward risk. Utility measures the satisfaction a consumer or decision maker derives from consumption or from the income associated with an investment. For example, Table 21.8 gives the utility function for a consumer who consumes ice cream. In this case, the utility function relates the scoops of ice cream consumed to the utility generated from this consumption.

Table 21.8 Utility function

Scoops   Utility   Marginal utility
1        10        10
2        14        4
3        17        3
4        19        2
5        20        1
6        20.5      .5
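The marginal utilities in Table 21.8 are simply first differences of the total-utility column; a quick sketch:

```python
# Marginal utility = change in total utility from one more scoop (Table 21.8)
total_utility = [10, 14, 17, 19, 20, 20.5]   # scoops 1 through 6

marginal = [total_utility[0]] + [
    b - a for a, b in zip(total_utility, total_utility[1:])
]
print(marginal)   # [10, 4, 3, 2, 1, 0.5] -- diminishing marginal utility
```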
Fig. 21.2 Various types of utility functions

It does not matter what units are used to measure utility. The total utility for the first scoop of ice cream for this consumer could be 100, 140 for the second, and so on. The total utility increases as the number of scoops increases; however, it increases at a decreasing rate. This implies that marginal utility decreases as the number of scoops of ice cream increases. That is, as a person eats more and more ice cream, the extra satisfaction received from each additional scoop decreases. In Table 21.8, the marginal utility in any row is equal to the difference between the utility in that row and the utility in the previous row; for example, 4 = 14 - 10.

Utility functions can also be used to do utility analysis in business decisions. In this case the utility function, a curve relating utility to payoff, can be used to determine whether an investor is risk-averse, risk-neutral, or a risk lover. Various types of utility functions are depicted in Fig. 21.2. Here the payoff presented on the horizontal axis can take either positive or negative values. A risk-averse investor has a utility function wherein utility increases at a decreasing rate as payoff increases. In other words, a risk avoider prefers a small but certain monetary gain to a gamble that has a higher expected monetary value but may involve a large but unlikely gain or a large but unlikely loss. A risk-neutral investor has a utility function wherein utility increases at a constant rate. For an individual neutral to risk, every increase of, say, $100 has an associated constant increase in utility. This type of individual uses the criterion of maximizing expected monetary value in decision making, because doing so maximizes expected utility. A risk lover's utility function has utility increasing at an increasing rate. This type of person willingly accepts gambles having a smaller expected monetary value than an alternative payoff that is a "sure thing."

We will use the following example to analyze how the utility function operates within the decision-making process and how it affects the decision.

Table 21.9 Alternative investment payoffs

Investment opportunities   Payoffs   Probability   Utility
Risky                      -$100     .5            0
                           $200      .5            1
Risk-free                  $50       1.0           ?

Suppose an investor faces investment opportunities with the payoffs -$100, $200, and $50, as indicated in Table 21.9. We are interested in the investor's utility level for each situation. The different utility levels the investor can reach lead to different decisions. For simplicity, we attach a utility of 0 to the payoff of -$100 and a utility of 1 to the $200 payoff, leaving us to determine the utility for the $50 payoff. In order to link the utility for the $50 payoff with the information on the decision maker's preference for risk, we then ask, "Would the investor prefer to receive $50 with certainty or to gamble, possibly gaining $200 but just as likely losing $100?"

Drawing on the information given in Table 21.9, we can calculate the expected utility of the gamble as

    .5U(-$100) + .5U($200) = .5(0) + .5(1) = .5

Case I: Risk-averse      U($50) > .5
Case II: Risk-neutral    U($50) = .5
Case III: Risk lover     U($50) < .5

Thus, we know that if the utility for the $50 payoff is greater than .5, the investor is risk-averse. By similar reasoning, if the utility for the $50 payoff is less than (equal to) .5, the investor is a risk lover (is risk-neutral). We graph these three alternative utility curves in Fig. 21.3, which is a more detailed version of Fig. 21.2.
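The three cases above amount to a one-line comparison of U($50) with the gamble's expected utility of .5; a tiny sketch (the sample utility values fed to the function are illustrative, not from the text):

```python
# Classify risk attitude by comparing the utility of the certain payoff
# with the expected utility of the gamble (.5, from Table 21.9)
def risk_attitude(u_certain, expected_utility=0.5):
    if u_certain > expected_utility:
        return "risk-averse"      # Case I
    if u_certain == expected_utility:
        return "risk-neutral"     # Case II
    return "risk lover"           # Case III

print(risk_attitude(0.7))   # risk-averse
print(risk_attitude(0.5))   # risk-neutral
print(risk_attitude(0.3))   # risk lover
```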
Fig. 21.3 Alternative utility curves

The expected monetary value (EMV) of the risky investment opportunity with the two possible uncertain payoffs indicated in Table 21.9 is

    (-$100)(.5) + ($200)(.5) = $50

If we use the EMV approach to analyze this problem, we implicitly assume that the investor is risk-neutral. If we employ utility analysis, we consider not only the risk-neutral case but also the cases of the risk avoider and the risk lover. In short, the EMV approach uses expected objective dollar value as the decision criterion, whereas utility analysis uses expected subjective utility.

The process we went through in this example tells us several things:
1. The utility function affects the decision-making process.
2. If the utility of the expected payoff is greater than, equal to, or less than the expected utility of the payoff, the decision maker is risk-averse, risk-neutral, or risk-loving, respectively.²
3. If the utility function is strictly concave, linear, or convex, the decision maker is risk-averse, risk-neutral, or risk-loving, respectively.
4. If marginal utility is decreasing, constant, or increasing, the decision maker is risk-averse, risk-neutral, or risk-loving, respectively.

Let's look at another example of the use of utility analysis in decision making.

Example 21.6 Different Attitudes Toward Risk. Assume that an investor's utility function can be defined by U(x) = √x, where x = return (payoff). Now, he faces the choice of x = 14.5 with certainty or a gamble paying x = 25 with probability .5 and x = 4 with probability .5. We want to determine whether this is a risk-averse person.

From the foregoing example, we know that if the investor's utility of the expected payoff is greater than the expected utility of the payoff, if the utility function is strictly concave, or if the marginal utility is diminishing, then the investor is said to be risk-averse. In fact, the last two criteria are the same. To determine whether this individual is risk-averse, we simply take the derivative of U(x) with respect to x twice; then we can show mathematically that the individual is risk-averse if x is larger than zero.³ Such an investor will take the certain payoff of x = 14.5.

Another approach to determining an individual's risk preference is to calculate the expected utility of the payoff and the utility of the expected payoff, as indicated in Table 21.10. In the table, U(x), the utility given x, is calculated by substituting the values 4 and 25, respectively, for x in the utility function U(x) = √x. E[U(x)], the expected utility given x, is equal to P × U(x).

² In our case, the expected utility of the payoff is .5. The utility of the expected payoff for a risk-averse investor is larger than .5; the utility of the expected payoff for a risk-neutral investor is .5; and the utility of the expected payoff for a risk lover is smaller than .5.
Finally, E(x), the expected value of x, and U[E(x)], the utility from the expected value of x, are calculated as follows:

    E(x) = (.5)(4) + (.5)(25) = 14.5
    U[E(x)] = √14.5 = 3.808

From this example, we can see that the utility of the expected payoff, 3.808, is greater than the expected utility of the payoff, 3.5 (= 1 + 2.5). Again, we can see that the investor is risk-averse; the utility received from the expected value of x is greater than the expected value of the utility of x. What this means is that the investor will get greater utility from receiving the expected value of x with certainty than he would get if he took the gamble.

³ dU(x)/dx = d(x^(1/2))/dx = (1/2)x^(-1/2), and d²U(x)/dx² = d((1/2)x^(-1/2))/dx = -(1/4)x^(-3/2). If x > 0, the second derivative of the utility function is negative. This condition means that the curve is concave, as shown in Fig. 21.3a.
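The comparison in Example 21.6 can be checked numerically (a sketch using the square-root utility function above):

```python
import math

# U(x) = sqrt(x); the gamble pays 4 or 25, each with probability .5
outcomes = [(0.5, 4), (0.5, 25)]

expected_utility = sum(p * math.sqrt(x) for p, x in outcomes)   # E[U(x)]
expected_value = sum(p * x for p, x in outcomes)                # E(x) = 14.5
utility_of_expected = math.sqrt(expected_value)                 # U[E(x)]

print(expected_utility)                # 3.5
print(round(utility_of_expected, 3))   # 3.808
# U[E(x)] > E[U(x)]: the investor is risk-averse (a concave utility function)
print(utility_of_expected > expected_utility)   # True
```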
Table 21.10 Expected utility of the payoff and utility of the expected payoff

x    U(x) = √x   P     E[U(x)]   E(x)   U[E(x)]
4    2           .5    1         2
25   5           .5    2.5       12.5
–    –           1.0   3.5       14.5   3.808

An investor is risk-averse if he prefers a safe investment to a risky investment with a higher expected value. For example, an investor who prefers risk-free Treasury bills to a stock with a higher expected return is considered risk-averse.

Having determined the appropriate utilities, we need only solve the decision-making problem by finding the course of action with the highest expected utility. Employing utility analysis concepts, we can modify the expected monetary value criterion defined in Eq. 21.1 to obtain the expected utility criterion:

    E[U(Ai)] = Σ(j=1 to m) Pj Uij,   i = 1, 2, . . ., n                (21.2)

where
    E[U(Ai)] = expected utility of action i
    Pj = probability associated with state of nature j
    Uij = utility corresponding to the ith action and the jth state of nature

In Eq. 21.2, we again assume that Σ(j=1 to m) Pj = 1. If the decision maker is risk-neutral (indifferent to risk), the expected utility criterion and the expected monetary value criterion are equivalent.

21.5 Bayes' Strategies

We studied Bayes' theorem in Chap. 5. This theorem enables us to work out the probability of one event conditional on another event. Bayesian analysis can be used in the decision-making process. The difference between Bayes' analysis and the maximin and minimax regret criteria is that in Bayes' analysis, probabilities of the states of nature must be specified. Recall Bayes' theorem:

    P(E2|E1) = P(E1|E2)P(E2) / P(E1)                (21.3)

where P(E2|E1) and P(E1|E2) are, respectively, the conditional probabilities of event 2 (E2) given event 1 (E1) and of E1 given E2, and P(E1) and P(E2) are the unconditional probabilities of E1 and E2. In terms of Bayesian statistics, P(E2) is the initial or prior probability of E2, and it is modified into the posterior probability, P(E2|E1), given the sample information that event E1 has occurred. We can incorporate different states of nature into Eq. 21.3 to obtain the generalized Bayes model defined in Eq. 21.4:

    P(Si|I) = P(I ∩ Si) / P(I) = P(I|Si)P(Si) / Σ(i=1 to m) P(I|Si)P(Si)                (21.4)

where P(Si|I) is the probability of state of nature Si given sample information I, and P(Si) is the probability of state of nature Si not incorporating the sample information I; it is called the prior probability of Si. We also assume that there exist m states of nature.

Example 21.7 Bayesian Approach in Forecasting Interest Rates. Suppose that macroeconomists are hired to predict interest rates. Past results for their economic prognostications are presented in the following table:

                        Interest rate outcome
Belief                  Up     Down
Strong credit market    .60    .30
Weak credit market      .40    .70

When interest rates went up, the economists had called for a strong credit market 60 % of the time; when rates went down, they had called for a strong market only 30 % of the time. Thus,

    P(strong market | up) = .60
    P(strong market | down) = .30

Now suppose the economists believe that the probability that rates will rise is .7 and that the probability of lower rates is .3:

    P(up) = .7
    P(down) = .3

Following Eq. 21.4, we find that in the case of two states of nature, the probability that interest rates will rise, given a strong credit market assessment by the economists, is

    P(up | strong market) = P(strong market | up)P(up) / {P(strong market | up)P(up) + P(strong market | down)P(down)}
                          = (.60)(.70) / [(.6)(.7) + (.3)(.3)] = .42/.51 = .82

The probability that interest rates will rise, given a weak credit market assessment, is
    P(up | weak market) = (.4)(.7) / [(.4)(.7) + (.7)(.3)] = .28/.49 = .57

Corporate executives can use these probabilities to assess future interest rates.

21.6 Decision Trees and Expected Monetary Values

In this section we use the expected monetary value (EMV) criterion in decision-tree form to select the best alternative in business decision making. As a general approach to structuring complex decisions, a decision tree helps direct the user to a solution. It is a graphical tool that describes the types of actions available to the decision maker and the resulting events.

The decision-tree approach to capital budgeting decision making is used to analyze investment opportunities involving a sequence of investment decisions over time. To best illustrate the use of the decision tree, we will develop a problem involving numerous decisions.

First, we must enumerate some of the basic rules for implementing this method. The decision maker should try to include only important decisions or events. The decision-tree model requires the decision maker to make subjective estimates when assessing probabilities. And it is important to develop the tree in chronological order to ensure the proper sequence of events and decisions.

A decision point, or decision node, is represented by a box. The available alternatives are represented by branches out of this node. A circle represents an event node, and branches from this type of node represent possible events.

The expected monetary value (EMV) is calculated for each event node by multiplying probabilities by conditional profits and summing them. The EMV is then placed in the event node and represents the expected value of all branches arising from that node.

A decision tree is shown in Fig. 21.4. The states of nature are high, medium, and low levels of GNP, and their probabilities are .2, .5, and .3, respectively.
For product 1, the expected value is (.2 × 100) + (.5 × 5) + (.3 × −30) = 13.5. The highest EMV (14) is that of product 3.

Each square on the decision tree denotes a decision that must be made. The circles indicate the states of nature. The square represents the decision to produce product 1, 2, or 3. As we have said, the states of nature that can occur are high, medium, and low levels of economic performance. Now let's look at two examples of how decision trees using objective (dollar payoff) utility instead of subjective utility are employed in business decision making.

Example 21.8 A Decision Tree for Testing a Drilling Site. An oil company is trying to decide whether to test for the presence of oil or to drill for oil (see Fig. 21.5). If oil is struck, the revenues are $1 million, with a cost of $100,000 to drill. The firm has to decide whether to test first for the presence of oil. Without testing, the probability of striking oil is .1. Thus, the firm's expected value without testing is 0 ($1 million times .1, less drilling fees of $100,000).

Fig. 21.4 Decision tree for determining expected profit

Fig. 21.5 Decision tree for Example 21.8

The cost of the oil test is $50,000. If the test is positive, there is a .6 probability that oil will be struck; if the test is negative, the probability of striking oil is .05.
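The expected monetary values for this drilling decision can be checked directly; a minimal sketch using only the figures given in Example 21.8:

```python
# EMV sketch for the drilling decision in Example 21.8 (dollar figures
# and probabilities from the text).

p_oil_no_test = 0.1
revenue, drill_cost, test_cost = 1_000_000, 100_000, 50_000

# Without testing: expected revenue less the drilling cost.
emv_no_test = p_oil_no_test * revenue - drill_cost

# With testing: the test fee is sunk either way; drill only if the
# expected value of drilling is positive.
emv_drill_pos = 0.6 * revenue - drill_cost - test_cost    # test positive
emv_drill_neg = 0.05 * revenue - drill_cost - test_cost   # test negative

print(round(emv_no_test), round(emv_drill_pos), round(emv_drill_neg))
# -> 0, 450000, -100000, matching the figures in the text
```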
The expected gain for a negative test result is .05(1,000,000) − 100,000 − 50,000 = −$100,000. If the test is positive, the expected profit is (1,000,000)(.6) − 100,000 − 50,000 = $450,000. If the firm's test is negative, the firm should not drill; however, if the test is positive, it should drill. This analysis makes it clear that the test should be done. In Appendix 1 we show how the spreadsheet can be used for decision-tree analysis for drilling oil.

Example 21.9 A Decision Tree for Capital Budgeting. A firm currently sells paper and paperboard packaging materials. Company planners predict that, with the advent of plastic shrink-film packaging, their line of products may be obsolete within a decade. They must quickly decide on a short-term plan of action from among four alternatives: (1) do nothing, (2) establish a tie-in with a machinery company that manufactures plastic packaging, (3) acquire such a company, or (4) take on the research and development of plastic packaging. These four alternatives are the first four branches arising out of the event node in Fig. 21.6. If the company planners do nothing, the firm's short-term profits will be about the same as in the previous year. If they decide to establish a tie-in with another firm, they foresee one of two events occurring: there is a 90% chance of successful introduction of their new plastics line and a 10% possibility of failure. If they decide on acquisition, they foresee a 10% chance of problems with antitrust laws, a 30% possibility of an unsuccessful introduction of the plastics line, and a 60% chance of success.
If they decide to manufacture a whole plastics line on their own, they foresee many more problems. They anticipate a 10% chance of having trouble developing their own machines, a 10% chance of having problems with suppliers in developing a total packaging system for their customers, a 30% chance that customers will not purchase their systems, and a 50% chance of success in the development and introduction of the plastics line.

Conditional profit is the amount of profit the firm can expect to make by adopting each of the preceding sets of alternatives and consequent events.

In Fig. 21.6 the expected monetary values are shown in the event nodes. The firm's financial planner can use EMV to decide which action to take, selecting the decision node with the highest EMV (in this case, establishing a tie-in, which has an EMV of 76.5). The slash marks indicate elimination of nonoptimal decision branches from consideration. If the probabilities associated with events change, then the EMVs associated with the alternatives may change—and with them the selection of the optimal alternative.

For Example 21.9, we greatly simplified the number of possible alternatives and events. In fact, decision trees are more useful for more complex problems—that is, for problems containing more possibilities or problems in which management must make a sequence of decisions rather than a single decision.

Example 21.10 Utilizing Sample Information to Improve the Determination of Pricing Policy. In Example 21.4 we determined the price of a new product without
Fig. 21.6 Decision tree for Example 21.9 (Source: Cheng F. Lee and Joseph E. Finnerty (1990), Corporate Finance: Theory, Method, and Application (San Diego: Harcourt Brace Jovanovich))

using sample information.4 Such sample information as test market results, however, can be very helpful. Suppose past product performances can give some indication about the relationship between test market results and product performance nationally. Let

4 This example is similar to an example given in Gilbert A. Churchill, Jr. (1983), Marketing Research: Methodological Foundations, 3d ed. (Chicago: Dryden), pp. 37-42.
Table 21.11 Conditional probabilities of each test market result, given each state of nature

                          State of nature
    Test market result    Light demand S1    Moderate demand S2    Heavy demand S3
    Z1                    .5                 .2                    .2
    Z2                    .3                 .7                    .6
    Z3                    .2                 .1                    .2
                          1.0                1.0                   1.0

Table 21.12 Revision of prior probabilities in light of possible test market result

    (1)   (2) State of   (3) Prior           (4) Conditional       (5) = (3) × (4)     (6) = (5) ÷ sum of (5)
    Zk    nature Sj      probability P(Sj)   probability P(Zk|Sj)  Joint probability   Posterior probability
                                                                   P(Sj)P(Zk|Sj)       P(Sj|Zk)
    Z1    S1             .7                  .5                    .35                 .854
          S2             .2                  .2                    .04                 .097
          S3             .1                  .2                    .02                 .049
                                                                   .41                 1.000
    Z2    S1             .7                  .3                    .21                 .512
          S2             .2                  .7                    .14                 .342
          S3             .1                  .6                    .06                 .146
                                                                   .41                 1.000
    Z3    S1             .7                  .2                    .14                 .778
          S2             .2                  .1                    .02                 .111
          S3             .1                  .2                    .02                 .111
                                                                   .18                 1.000

    Z1 = disappointing or only slightly successful test market
    Z2 = moderately successful test market
    Z3 = highly successful test market

By using Bayes' theorem (Eq. 21.4) and supposing that past experience provided the estimates of conditional probabilities given in Table 21.11, we revise the prior probabilities P(S1) = .7, P(S2) = .2, and P(S3) = .1, as presented in Table 21.12. Conditional probabilities from Table 21.11 are presented in column (4) of Table 21.12. Using Eq. 21.4, we calculate the posterior probabilities P(Sj|Zk) presented in column (6) of Table 21.12. Using the information on states of nature and payoffs listed in Table 21.6 and the posterior probability information listed in Table 21.12, we find the expected value of each alternative, given each research outcome (see Table 21.13).

The probability of obtaining each test market result—that is, the probability of each Zk—is given as

    P(Zk) = Σ(j=1 to n) P(Sj)P(Zk|Sj)

and for k = 1, for example, the probability is
Table 21.13 Expected value of each alternative, given each research outcome

    Z1, disappointing or only slightly successful test market
        EV(A1) = (100)(.854) + (60)(.097) + (−60)(.049) = 88.28
        EV(A2) = (60)(.854) + (110)(.097) + (−30)(.049) = 60.44
        EV(A3) = (−50)(.854) + (0)(.097) + (90)(.049) = −38.29
    Z2, moderately successful test market
        EV(A1) = (100)(.512) + (60)(.342) + (−60)(.146) = 62.96
        EV(A2) = (60)(.512) + (110)(.342) + (−30)(.146) = 63.96
        EV(A3) = (−50)(.512) + (0)(.342) + (90)(.146) = −12.46
    Z3, highly successful test market
        EV(A1) = (100)(.778) + (60)(.111) + (−60)(.111) = 77.80
        EV(A2) = (60)(.778) + (110)(.111) + (−30)(.111) = 55.56
        EV(A3) = (−50)(.778) + (0)(.111) + (90)(.111) = −28.91

    P(Z1) = P(S1)P(Z1|S1) + P(S2)P(Z1|S2) + P(S3)P(Z1|S3)
          = (.7)(.5) + (.2)(.2) + (.1)(.2) = .41

The probability is given as the sum of the elements in column (5) of Table 21.12. Table 21.12 thus indicates that the probabilities associated with these test markets are P(Z1) = .41, P(Z2) = .41, and P(Z3) = .18. (The probabilities sum to 1, as they should, because one of the three test market outcomes must result.) The expected value of the test-marketing procedure is found by weighting each expected value of the optimal action in Table 21.13, given each research result, by the probability of receiving that expected value. The expected value of the proposed research is thus found to be

    EV(research) = (88.28)(.41) + (63.96)(.41) + (77.80)(.18) = 76.42

This value is $0.42 over the expected value of the optimal action without research, which, as indicated in Example 21.4, is $76. Hence, the market research should be undertaken if the research cost is less than $0.42.

21.7 Mean and Variance Trade-Off Analysis

21.7.1 The Mean-Variance Rule and the Dominance Principle

The expected utility rule we discussed in Sect. 21.4 is theoretically the best criterion available, but sometimes it is very hard to implement.
We frequently do not know the investor's utility function, and furthermore, the decision maker, as in the case of a manager, must act on behalf of many stockholders with different utility functions.
Fig. 21.7 Trade-off between mean and standard deviation for investment projects

Hence, the expected utility rule is often replaced by a more practical mean-variance decision criterion that assumes that the decision maker has a risk-aversion utility function.

According to the mean-variance rule, the expected return (mean) measures an investment's profitability, whereas the variance (or standard deviation) of returns measures its risk. Consider the following four alternative projects, with the means x̄ and standard deviations sx specified.

    Investment project    x̄     sx
    A                     $9    $90
    B                     8     90
    C                     8     100
    D                     10    120

To discuss the implications of the trade-off between risk and return and of the dominance principle, we plot this set of data in Fig. 21.7. A pairwise comparison of the investment projects shows that project A dominates projects B and C; it has the highest return, and its risk is equal to that of project B and lower than that of project C. However, there is no clear-cut decision between projects A and D, projects B and D, or projects C and D. Here the investor needs to consider the trade-off between profit and risk in terms of his or her attitude toward risk.

In the analysis of stock investments, average rates of return and their variance (or standard deviation) are used to represent investments' profitability and risk, respectively. The variance of rates of return can be decomposed into two components by the market model5 defined in Eq. 21.5.

5 We discussed the market model in Chap. 14 and elsewhere.
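The pairwise dominance comparison above can be sketched as a small program; a project dominates another when its mean is at least as high and its risk no higher, with at least one strict inequality (means and standard deviations from the table):

```python
# Mean-variance dominance check for the four projects in Sect. 21.7.1.
projects = {"A": (9, 90), "B": (8, 90), "C": (8, 100), "D": (10, 120)}

def dominates(p, q):
    """True if project p dominates project q by the mean-variance rule."""
    (mp, sp), (mq, sq) = projects[p], projects[q]
    return mp >= mq and sp <= sq and (mp > mq or sp < sq)

dominated = {q for p in projects for q in projects if p != q and dominates(p, q)}
print(sorted(dominated))    # B and C are dominated; A and D are not
```

Note that no pair drawn from {A, D} dominates the other, which is exactly the trade-off region where the investor's risk attitude must decide.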
    Ri,t = ai + bi Rm,t + ei,t    (21.5)

where Ri,t and Rm,t are the rates of return for the ith security (portfolio) and the market rates of return, respectively. Following Eq. 13.19 of Chap. 13,

    si² = bi²sm² + sei²    (21.6)

where

    si² = variance of Ri,t
    sm² = variance of market rates of return
    sei² = residual variance of rates of return for the ith security

In investment analysis, we define si², bi²sm², and sei² as total risk, systematic risk, and unsystematic risk, respectively.

Systematic risk is the part of total risk that results from the basic variability of stock prices. It accounts for the tendency of stock prices to move together with the general market. The other portion of total risk is unsystematic risk, which is the result of variations peculiar to the firm or industry—for example, a labor strike or resource shortage.

Systematic risk, also referred to as market risk, reflects the fluctuations and changes in general market conditions. Some stocks and portfolios are very sensitive to movements in the market; others exhibit more independence and stability. A measure of a stock's or a portfolio's relative sensitivity to the market, assigned on the basis of its past record, is designated by the Greek letter beta (b).

Example 21.11 Market Model and Risk Decomposition for JNJ. The annual rates of return for JNJ and the market rates of return for 1990-2009 are used to estimate the market model in accordance with Eq. 21.5 in terms of MINITAB. The results are shown in Fig. 21.8. From Fig. 21.8, we find that the beta coefficient for the market model is .639. From the analysis of variance data in Fig. 21.8, we obtain the total risk (si²), systematic risk (bi²sm²), and unsystematic risk (sei²) as follows:

    si² = 1.74158/19 = .09166
    bi²sm² = .17720/19 = .00933
    sei² = 1.56437/19 = .08234
    bi²sm² + sei² = .00933 + .08234 = .09167 ≈ .09166
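The decomposition in Example 21.11 can be verified directly from the ANOVA sums of squares; a minimal sketch (sums of squares from Fig. 21.8, divisor n − 1 = 19 as in the text):

```python
# Risk decomposition for JNJ (Example 21.11) from the MINITAB ANOVA table.
ss_regression, ss_residual, ss_total, df = 0.17720, 1.56437, 1.74158, 19

total_risk = ss_total / df          # s_i^2:  total risk
systematic = ss_regression / df     # b_i^2 s_m^2:  systematic risk
unsystematic = ss_residual / df     # s_ei^2:  unsystematic risk

print(round(total_risk, 5), round(systematic, 5), round(unsystematic, 5))
# The two components sum to total risk, up to rounding in the printed SS values.
assert abs(total_risk - (systematic + unsystematic)) < 1e-5
```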
Fig. 21.8 MINITAB output of the market model for JNJ

    Data Display

    Row        JNJ        SP
      1   0.230108  0.036396
      2   0.616842  0.124301
      3  -0.551293  0.105162
      4  -0.091578  0.085799
      5   0.244915  0.019960
      6   0.584558  0.176578
      7  -0.409758  0.237724
      8   0.340804  0.302655
      9   0.287688  0.242801
     10   0.124207  0.222782
     11   0.139719  0.075256
     12  -0.431191 -0.163282
     13  -0.078010 -0.167680
     14  -0.021172 -0.028885
     15   0.248595  0.171379
     16  -0.032496  0.067731
     17   0.122480  0.085510
     18   0.034602  0.127230
     19  -0.076435 -0.174081
     20   0.108473 -0.222935

    Regression Analysis: JNJ versus SP

    The regression equation is
    JNJ = 0.0273 + 0.639 SP

    Predictor      Coef   SE Coef     T      P
    Constant    0.02726   0.07227  0.38  0.710
    SP           0.6386    0.4472  1.43  0.170

    S = 0.294804   R-Sq = 10.2%   R-Sq(adj) = 5.2%

    Analysis of Variance

    Source          DF       SS       MS     F      P
    Regression       1  0.17720  0.17720  2.04  0.170
    Residual Error  18  1.56437  0.08691
    Total           19  1.74158

    Durbin-Watson statistic = 2.51280
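The MINITAB slope in Fig. 21.8 can be reproduced with the ordinary least-squares formulas; a minimal sketch using the same 20 annual observations:

```python
# OLS estimate of the market-model beta (Eq. 21.5) for JNJ, from the
# data displayed in Fig. 21.8 (JNJ and S&P annual returns, 1990-2009).
jnj = [0.230108, 0.616842, -0.551293, -0.091578, 0.244915, 0.584558,
       -0.409758, 0.340804, 0.287688, 0.124207, 0.139719, -0.431191,
       -0.078010, -0.021172, 0.248595, -0.032496, 0.122480, 0.034602,
       -0.076435, 0.108473]
sp = [0.036396, 0.124301, 0.105162, 0.085799, 0.019960, 0.176578,
      0.237724, 0.302655, 0.242801, 0.222782, 0.075256, -0.163282,
      -0.167680, -0.028885, 0.171379, 0.067731, 0.085510, 0.127230,
      -0.174081, -0.222935]

n = len(sp)
mean_x, mean_y = sum(sp) / n, sum(jnj) / n
s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(sp, jnj))
s_xx = sum((x - mean_x) ** 2 for x in sp)

beta = s_xy / s_xx                  # slope: the systematic-risk measure
alpha = mean_y - beta * mean_x      # intercept
print(round(beta, 3), round(alpha, 4))
```

The slope should agree with the MINITAB coefficient of about 0.639 and the intercept with about 0.0273.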
21.7.2 The Capital Market Line

From the rates of return and alternative measurements of risk, we can derive either the trade-off between expected return and total risk or the trade-off between expected return and systematic risk. In these cases, the utility function used to measure a decision maker's attitude toward risk is of the von Neumann and Morgenstern (VNM) type. The use of the term utility in the VNM type of utility function differs from its use by traditional economists. The VNM type of utility function is applied in situations where money payoffs are inappropriate as a measuring device. In traditional economics, utility reflects the inherent satisfaction delivered by a commodity and is measured in terms of psychic gains and losses. Von Neumann and Morgenstern, on the other hand, conceived of utility as a measure of value that provides a basis for making choices in the assessment of situations involving risk. This approach integrates the EMV and utility analyses discussed in Sect. 21.4.6

The capital market line used to describe the trade-off between expected return and total risk is7

    E(Ri) = Rf + [E(Rm) − Rf](si/sm)    (21.7)

where

    Rf = risk-free rate
    E(Rm) = expected return on the market portfolio
    E(Ri) = expected return on the ith portfolio
    si, sm = standard deviations of the portfolio and the market, respectively

The capital market line (CML) defined in Eq. 21.7 implies that the expected rate of return for portfolio i equals the risk-free rate plus the total portfolio risk premium,

    [E(Rm) − Rf](si/sm).

The total portfolio risk premium is equal to the price per unit of market risk,

    [E(Rm) − Rf]/sm,

6 In other words, the utility function can be defined as U[E(R), s], where E(R) and s represent expected rates of return and the standard deviation of rates of return, respectively. By assuming that investors are risk avoiders, we have ∂U/∂E(R) > 0 and ∂U/∂s < 0.
In other words, investors prefer return and dislike risk.

7 The graphical derivation of this model appears in Appendix 2.
Table 21.14 Return and standard deviation for mutual funds

                              Mutual fund A (%)    Mutual fund B (%)
    Average return, R̄i        20                   15
    Standard deviation, si     8                    5

times the total risk associated with portfolio i—that is, si. By using the concept of the CML, we can define the Sharpe investment performance measure for the ith portfolio as

    SMi = (R̄i − Rf)/si    (21.8)

Example 21.12 Using the Sharpe Investment Performance Measure to Determine Investment Performance. An investor is considering investing in either mutual fund A or mutual fund B. For past performance, he calculates for both funds the average returns and standard deviations listed in Table 21.14. It is assumed that the T-bill rate is 8%, which the firm uses as the risk-free rate. The Sharpe performance measure, then, gives

    SMA = (.20 − .08)/.08 = 1.5
    SMB = (.15 − .08)/.05 = 1.4

These calculations reveal that mutual fund A gives a slightly better performance and thus is the better alternative of the two investments.

21.7.3 The Capital Asset Pricing Model

The capital market line (Eq. 21.7) is used to describe the trade-off between expected rate of return and total risk. The trade-off between expected rate of return and systematic risk defined in Eq. 21.9 is called the capital asset pricing model (CAPM):

    E(Ri) = Rf + bi[E(Rm) − Rf]    (21.9)

where

    E(Ri) = expected rate of return for asset i
    Rf = risk-free rate
    bi = measure of systematic risk (beta) for asset i
    E(Rm) = expected return on the market portfolio
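Equations 21.8 and 21.9 are both one-line computations; a minimal sketch, using the Table 21.14 figures for the Sharpe measures. The CAPM call at the end is purely illustrative: the beta of 1.2 and the 15% market return are assumed inputs, not figures from the text.

```python
# Sharpe measure (Eq. 21.8) for Example 21.12, and CAPM expected return (Eq. 21.9).

def sharpe(mean_return, risk_free, std):
    return (mean_return - risk_free) / std

sm_a = sharpe(0.20, 0.08, 0.08)     # fund A
sm_b = sharpe(0.15, 0.08, 0.05)     # fund B: slightly lower, so A is preferred

def capm(rf, beta, market_return):
    """Expected return under Eq. 21.9."""
    return rf + beta * (market_return - rf)

print(round(sm_a, 2), round(sm_b, 2))            # 1.5 and 1.4, as in the text
print(round(capm(0.08, 1.2, 0.15), 4))           # hypothetical beta and E(Rm)
```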
Fig. 21.9 The capital asset pricing model (CAPM)

Equation 21.9 implies that bi is the measure of systematic risk for determining the price of the individual asset and the portfolio. Figure 21.9 illustrates graphically the relationship between E(Ri) and bi that is defined in Eq. 21.9. Professor William Sharpe won the Nobel Prize in Economics mainly because he derived this model.

The reason why the CAPM can be regarded as part of decision theory is that it is based on a utility function in terms of expected rates of return and the standard deviation of rates of return. Expected rates of return and the standard deviation of rates of return are essentially based on the monetary value of the investment. In Sect. 21.4, we used expected monetary values and utility analysis to make decisions. Here we treat risk (the standard deviation of rates of return) as an explicit factor, whereas in Sect. 21.4 we treated it as an implicit factor in determining the value of an investment.

With the capital asset pricing model, we must assume that all utility-maximizing investors will attempt to position themselves somewhere along the CML and will attempt to put some portion of their wealth into the market portfolio of risky assets.

The CAPM implies that the market portfolio is the only relevant portfolio of risky assets. Hence, the relevant risk measurement of any individual security is its covariance with the market portfolio—that is, the systematic risk of the security.

The relationship between the capital market line (CML) and the CAPM can be shown by starting with the definition of the beta coefficient:

    bi = Cov(Ri, Rm)/Var(Rm) = ri,m si sm/sm² = ri,m si/sm

where

    si = standard deviation of the ith security's rate of return
    sm = standard deviation of the market rate of return
    ri,m = correlation coefficient of Ri and Rm

If ri,m = 1, then Eq. 21.9 reduces to

    E(Ri) = Rf + [E(Rm) − Rf](si/sm)    (21.10)

If ri,m = 1, then this implies that the portfolio in question is the efficient portfolio, or, for an individual security, it implies that the returns and risks associated with the asset are similar to those associated with the market as a whole. The implications of this comparison, in turn, are that

1. Equation 21.9 is a generalized case of Eq. 21.10, because Eq. 21.9 includes the correlation coefficient, whereas Eq. 21.10 assumes that the correlation coefficient is equal to 1.
2. The capital asset pricing model (CAPM), instead of the capital market line (CML), should be used to price an individual security or an inefficient portfolio. To use the CML to price an inefficient portfolio would be to price unsystematic risk.
3. The CML prices the risk premium in terms of total risk, and the security market line (SML) prices the risk premium in terms of systematic risk.

In order to apply the CAPM, we need to estimate the beta coefficient, the risk-free rate, and the market risk premium. Estimates of these quantities can be obtained from time-series data as shown in Example 21.11 (see also Chap. 14).

The capital asset pricing model (CAPM) defined in Eq. 21.9 implies that the rate of return for the ith security (or portfolio) equals the risk-free rate plus the security's (or portfolio's) risk premium, bi[E(Rm) − Rf]. This risk premium is equal to systematic risk bi times the expected market risk premium, E(Rm) − Rf. By using the concept of the CAPM, we can define the Treynor investment performance measure8 for the ith security (or portfolio) as follows:

    TMi = (R̄i − Rf)/bi    (21.11)

If for Example 21.12 we also know that the beta coefficients for mutual funds A and B are bA = 1.8 and bB = 1.2, respectively, then the Treynor investment measures for these two mutual funds are

8 The derivation and justification of this investment performance measure can be found in J. Treynor (1965), "How to Rate Management of Investment Funds," Harvard Business Review 43, 63-75.
    TMA = (.20 − .08)/1.8 = .0667
    TMB = (.15 − .08)/1.2 = .0583

Like the Sharpe performance measure, the Treynor measure indicates that mutual fund A is the better alternative of the two investments.

By using the specification of Eq. 21.9, we can define the CAPM version of the market model as

    Ri,t − Rf,t = ai + bi(Rm,t − Rf,t) + ei,t    (21.12)

where Ri,t is the rate of return for the ith security (or portfolio) in period t; Rm,t is the market rate of return in period t; and Rf,t is the return on a risk-free asset (such as the T-bill rate) in period t. Jensen (1968) has shown that ai can be used to evaluate the investment performance of either a security or a portfolio.9 It is therefore called the Jensen investment performance measure, which can be explicitly defined as

    JMi = (R̄i − R̄f) − bi(R̄m − R̄f)    (21.13)

where R̄i, R̄m, and R̄f represent the average rates of return for the ith security (portfolio), the market, and the risk-free asset, respectively.

Regression results in terms of Eq. 21.12 for both JNJ and MRK are presented in Fig. 21.10. From the estimated ai, we conclude that JNJ performed worse than MRK during the period 1990-2009, since the JM of JNJ is smaller than that of MRK.

Interrelationship Among the Three Performance Measures. It should be noted that all three performance measures are interrelated. For instance, if rim = sim/(si sm) = 1, then the Jensen measure divided by si becomes equivalent to the Sharpe measure. Since

    bi = sim/sm²    and    rim = sim/(si sm),

the Jensen measure (JM) must be multiplied by 1/si in order to derive the equivalent Sharpe measure:

9 The derivation and justification of the Jensen investment performance measure can be found in Michael C. Jensen (1968), "The Performance of Mutual Funds in the Period 1945-1964," Journal of Finance 23, 389-416.
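The Treynor computations above, together with a Jensen-style alpha per Eq. 21.13, can be sketched as follows. Fund returns are from Table 21.14 and the betas (1.8 and 1.2) are those given above; the 12% average market return used in the Jensen illustration is an assumption for demonstration, not a figure from the text.

```python
# Treynor measure (Eq. 21.11) and Jensen measure (Eq. 21.13).

def treynor(mean_return, risk_free, beta):
    return (mean_return - risk_free) / beta

def jensen(mean_return, risk_free, beta, market_return):
    """Jensen alpha: excess return beyond the CAPM-required premium."""
    return (mean_return - risk_free) - beta * (market_return - risk_free)

tm_a = treynor(0.20, 0.08, 1.8)
tm_b = treynor(0.15, 0.08, 1.2)
print(round(tm_a, 4), round(tm_b, 4))            # .0667 and .0583, as in the text

# Hypothetical 12% average market return for the Jensen illustration.
print(round(jensen(0.20, 0.08, 1.8, 0.12), 4))
```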
    Data Display

    Row        JNJ       MRK        SP   T-Bill rate
      1   0.230108  0.185441  0.036396   0.0749
      2   0.616842  0.878595  0.124301   0.0538
      3  -0.551293 -0.733803  0.105162   0.0343
      4  -0.091578 -0.182990  0.085799   0.0300
      5   0.244915  0.142442  0.019960   0.0425
      6   0.584558  0.753915  0.176578   0.0549
      7  -0.409758  0.235280  0.237724   0.0501
      8   0.340804  0.352548  0.302655   0.0506
      9   0.287688  0.409696  0.242801   0.0478
     10   0.124207 -0.537078  0.222782   0.0464
     11   0.139719  0.411867  0.075256   0.0582
     12  -0.431191 -0.357447 -0.163282   0.0339
     13  -0.078010 -0.013313 -0.167680   0.0160
     14  -0.021172 -0.158294 -0.028885   0.0101
     15   0.248595 -0.271964  0.171379   0.0137
     16  -0.032496  0.036942  0.067731   0.0315
     17   0.122480  0.418327  0.085510   0.0473
     18   0.034602  0.367425  0.127230   0.0435
     19  -0.076435 -0.450781 -0.174081   0.0137
     20   0.108473  0.254065 -0.222935   0.0015

    Regression Analysis: JNJ-TBill versus SP-TBill

    The regression equation is
    JNJ-TBill = 0.0155 + 0.574 SP-TBill

    Predictor      Coef   SE Coef     T      P
    Constant    0.01547   0.06708  0.23  0.820
    SP-TBill     0.5739    0.4788  1.20  0.246

    S = 0.293711   R-Sq = 7.4%   R-Sq(adj) = 2.2%

Fig. 21.10 Regression results of Eq. 21.12 for JNJ and MRK
    Analysis of Variance

    Source          DF       SS       MS     F      P
    Regression       1  0.12390  0.12390  1.44  0.246
    Residual Error  18  1.55279  0.08627
    Total           19  1.67670

    Unusual Observations

    Obs  SP-TBill  JNJ-TBill     Fit  SE Fit  Residual  St Resid
      3     0.071    -0.5856  0.0561  0.0687   -0.6417    -2.25R
      7     0.188    -0.4599  0.1231  0.1006   -0.5830    -2.11R

    R denotes an observation with a large standardized residual.

    Durbin-Watson statistic = 2.52893

    Regression Analysis: MRK-TBill versus SP-TBill

    The regression equation is
    MRK-TBill = 0.0309 + 0.646 SP-TBill

    Predictor      Coef   SE Coef     T      P
    Constant    0.03092   0.09522  0.32  0.749
    SP-TBill     0.6457    0.6797  0.95  0.355

    S = 0.416938   R-Sq = 4.8%   R-Sq(adj) = 0.0%

    Analysis of Variance

    Source          DF       SS       MS     F      P
    Regression       1   0.1569   0.1569  0.90  0.355
    Residual Error  18   3.1291   0.1738
    Total           19   3.2859

    Unusual Observations

    Obs  SP-TBill  MRK-TBill     Fit  SE Fit  Residual  St Resid
      3     0.071    -0.7681  0.0767  0.0976   -0.8448    -2.08R

    R denotes an observation with a large standardized residual.

    Durbin-Watson statistic = 2.47470

Fig. 21.10 (continued)
Table 21.15 Means and standard deviations of returns for projects X and Y

    State of economy    Probability Pi    Return Ri (%)    RiPi (%)
    Project X
      Prosperity        .25               25               6.25
      Normal            .50               15               7.50
      Recession         .25               5                1.25
                        1.00                               15.00
      Standard deviation = sX = 7.07%
    Project Y
      Prosperity        .25               40               10.0
      Normal            .50               15               7.5
      Recession         .25               −10              −2.5
                        1.00                               15.00
      Standard deviation = sY = 17.68%

    JMi/si = (R̄i − R̄f)/si − (R̄m − R̄f)(sim)/(sm sm si)
           = (R̄i − R̄f)/si − (R̄m − R̄f)/sm    (when ri,m = 1)
           = SMi − SMm, where SMm is a constant common to all portfolios

If the Jensen measure (JM) is divided by bi, it is equivalent to the Treynor measure (TM) plus a constant common to all portfolios:

    JMi/bi = (R̄i − R̄f)/bi − (R̄m − R̄f)
           = TMi − (R̄m − R̄f)

21.8 The Mean and Variance Method for Capital Budgeting Decisions

The capital budgeting decision is the manager's decision to undertake a certain project instead of other projects.10

Capital budgeting frequently incorporates the concept of probability theory. Consider two projects (project X and project Y) and three states of the economy for any given time (prosperity, normal, and recession). For each of these states, a probability of occurrence can be calculated and an estimate made of its return; see Table 21.15. We can calculate the expected returns R̄ for projects X and Y as follows:

10 This section discusses how we can use the statistical distribution method to perform capital budgeting under uncertainty. Other methods for capital budgeting under uncertainty can be found in Lee, A. C., J. C. Lee, and C. F. Lee, Financial Analysis, Planning and Forecasting: Theory and Application, 2nd ed. Singapore: World Scientific Publishing Company, 2009.
    R̄ = Σ(i=1 to m) Ri Pi    (21.14)

where Ri is the return for the ith state of nature and Pi is the probability associated with the ith state of nature. Substituting the information given in Table 21.15 into Eq. 21.14, we obtain

    R̄X = 6.25% + 7.50% + 1.25% = 15.00%
    R̄Y = 10% + 7.50% − 2.50% = 15.00%

The standard deviation of these returns can be found by using

    s = [Σ(i=1 to n) (Ri − R̄)² Pi]^(1/2)    (21.15)

Substituting into Eq. 21.15 the information from Table 21.15, with R̄X = .15 and R̄Y = .15, we obtain

    sX = [(.25 − .15)²(.25) + (.15 − .15)²(.50) + (.05 − .15)²(.25)]^(1/2) = 7.07%
    sY = [(.40 − .15)²(.25) + (.15 − .15)²(.50) + (−.10 − .15)²(.25)]^(1/2) = 17.68%

The data given in Table 21.15 can be used to draw histograms of both projects (see Fig. 21.11a). If we assume that rates of return R are continuously and normally distributed, then Fig. 21.11a can be drawn approximately as Fig. 21.11b.

The concept of a statistical probability distribution can be combined with capital budgeting to derive the statistical distribution method for selecting risky investment projects. The expected return for both projects is 15%, but because project Y has a normal distribution with a wider range of values, it is the riskier project. Project X has a normal distribution with a larger collection of values closer to the 15% expected rate of return and is therefore more stable.

21.8.1 Statistical Distribution of Cash Flow

Accounting concepts make it possible to define the net cash flow (Ct) as

    Ct = (CFt − dt − It)(1 − tc) + dt    (21.16)
Fig. 21.11 Histograms and probability distributions of projects X and Y

where

    CFt = Qt(Pt − Vt)
    (CFt − dt − It)(1 − tc) = net income
    Q = quantity produced and sold
    P = price per unit
    V = variable costs per unit
    dt = depreciation
    tc = tax rate
    It = interest expense

For this equation, net cash flow is a random variable because Q, P, and V are not known with certainty. We can assume that Ct has a normal distribution with mean C̄t and variance st², similar to that defined in Eq. 7.17.

If two projects have the same expected cash flow, or return, as determined by the expected value defined in Eq. 21.14, we might be indifferent between the projects if we were to make our choice on the basis of return alone. If, however, we also take risk (variance) into account, we will get a more accurate picture of what type of cash flow or return distribution to expect.

With the introduction of risk, a firm is not necessarily indifferent between two investment proposals that are equal in net present value (NPV).11 We should estimate

11 See Appendix 3 for the definition of NPV.
both NPV and its standard deviation sNPV when we perform capital budgeting analysis under uncertainty. Net present value under uncertainty can be defined as

    NPV = Σ(t=1 to N) C̃t/(1 + Rf)^t + St/(1 + Rf)^N − I0    (21.17)

where

    C̃t = uncertain net cash flow in period t
    Rf = risk-free discount rate
    St = salvage value of facilities
    I0 = initial outlay (investment)

The mean of the NPV distribution and its standard deviation can be defined as follows for mutually independent cash flows:

    NPV = Σ(t=1 to N) C̄t/(1 + Rf)^t + St/(1 + Rf)^N − I0    (21.18)

    sNPV = [Σ(t=1 to n) st²/(1 + Rf)^(2t)]^(1/2)    (21.19)

The generalized case of Eq. 21.19 is explored in Appendix 4.

Example 21.13 The Mean and Variance Approach for Capital Budgeting Decisions. A firm is considering the introduction of two new product lines, A and B, that have the same life and have the cash flows, standard deviations of cash flows, and salvage values shown in Table 21.16. Assume a discount rate of 10%. Both projects have the same expected NPV:

    NPVA = NPVB = Σ(t=1 to 5) C̄t/(1 + Rf)^t + St/(1 + Rf)^5 − I0
         = 20(PVIF10%,1) + 20(PVIF10%,2) + 20(PVIF10%,3) + 20(PVIF10%,4) + 20(PVIF10%,5) − 60 + 5(.6209)
         = 20(.9091) + 20(.8264) + 20(.7513) + 20(.6830) + 20(.6209) − 60 + 5(.6209)
         = 18.92

where PVIF is the present value interest factor (see Appendix 3 for the calculation).

However, because the standard deviation of A's cash flows is greater than that of B's, project A is riskier than project B. This difference can be explicitly evaluated
Table 21.16 Data for Example 21.13 (in thousands)

                              Project A    Project B
    Initial investment        $60          $60
    Cash flows
      Year 1                  $20          $20
        Standard deviation    $4           $2
      Year 2                  $20          $20
        Standard deviation    $4           $2
      Year 3                  $20          $20
        Standard deviation    $4           $2
      Year 4                  $20          $20
        Standard deviation    $4           $2
      Year 5                  $20          $20
        Standard deviation    $4           $2
    Salvage value             $5           $5

only by using the statistical distribution method. To compare the riskiness of the two projects, we calculate the standard deviations of their NPVs. We will assume that cash flows between different periods are perfectly positively correlated. The sNPV can then be defined (see Appendix 4) as

    sNPV = Σ(t=1 to N) st/(1 + Rf)^t    (21.20)

    sNPVA = ($4)(PVIF10%,1) + ($4)(PVIF10%,2) + ··· + ($4)(PVIF10%,5)
          = 4(.9091) + 4(.8264) + 4(.7513) + 4(.6830) + 4(.6209)
          = 15.16, or $15,160

    sNPVB = ($2)(PVIF10%,1) + ($2)(PVIF10%,2) + ··· + ($2)(PVIF10%,5)
          = 2(.9091) + 2(.8264) + 2(.7513) + 2(.6830) + 2(.6209)
          = 7.58, or $7,580

With the same NPV, project B's cash flows would fluctuate $7,580 per year, and project A's $15,160. Therefore, given the same returns, B is to be preferred because it is less risky.

21.9 Summary

In this chapter, we examined the concepts and applications of statistical decision theory and saw that it is different from the classical statistics we have worked with in the last 20 chapters. In the context of statistical decision theory, we discussed the elements of decision making under uncertainty. Decisions based on extreme values, expected monetary values and utility measurement, Bayes' strategies, and decision trees were explored. In addition, we developed the von Neumann and Morgenstern
utility and risk-aversion concepts in order to discuss trade-offs between risk and return. The capital asset pricing model and the statistical distribution method for project selection were also investigated.

Questions and Problems

1. What are the basic elements of decision making? Define each element separately. If we do not know the probabilities for the states of nature, can we still make a decision?

2. John faces the following decision problem:

       Study hours per day   High confidence   Average confidence   Low confidence
       0                     60                40                   30
       5                     80                60                   50
       10                    90                80                   60

   With 5 hours of study per day, John estimates that there are three different numbers of points he can get on the midterm: 80 with high confidence, 60 with average confidence, and 50 with low confidence. He also has two other possible actions: studying 0 hours per day and studying 10 hours per day. Estimate the points he can get on the midterm in each of the three states: high, average, and low. Use the maximin criterion to choose the best action and specify the most points he can get on the midterm.

3. In question 2, rebuild the table by using the minimax criterion and specify the best action and the best points. Is the best action the same as that of question 2? If yes, is this by chance, or is it always true?

4. Reconsider the table in question 2 in the following way:

       Study hours per day   High   Average   Low
       0                     .2     .5        .3
       5                     .2     .6        .2
       10                    .4     .5        .1

   If John studies 5 hours per day, he estimates the probabilities of the three confidence levels as .2 for high, .6 for average, and .2 for low. The same interpretation applies to the other two actions.
   (a) What are the probabilities for the high level, given 0, 5, and 10 hours of study per day?
   (b) Suppose the probability of each action is 1/3. What are the probabilities for 0, 5, and 10 hours of study per day, given the high level?

5. (a) Using the expected monetary value (EMV) criterion, construct a table to determine which action is best.
   (b)
Does applying the EMV criterion yield the same best action as applying the maximin criterion or the minimax regret criterion?
6. Assume the following utilities for the different midterm points in question 2: the point scores 30, 40, 50, 60, 80, and 90 have utilities of 2, 5, 7.5, 9.5, 11, and 12 units, respectively. Is this a risk-averse, risk-neutral, or risk-loving type of utility function?

7. Define the terms risk-averse, risk-neutral, and risk-taking.

8. (a) Reconstruct an expected utility table by using the table in question 4 and the utility assumptions in question 6. If John makes his decision in accordance with the criterion of largest expected utility, which action will he choose?
   (b) Redefine risk-averse, risk-neutral, and risk-taking in terms of the expected utility concept.

9. Given the utility function U(W) = 10W^{1/2}, where W = payoff,
   (a) Graph the function.
   (b) Does the function exhibit risk aversion? What is your criterion?
   (c) How will changing the constant term 10 to an arbitrary number a affect the answers to parts (a) and (b)? (That is, assuming a is greater than or less than 0, what different results do we get?)

10. Mr. Clark has $100 and would like to try his luck in an Atlantic City casino. Suppose he is faced with a 60/40 chance of losing $20 or winning $15. Further suppose that for a fee of $10, he can buy insurance that completely removes the risk.
    (a) If Mr. Clark's utility function is the logarithmic function U(W) = ln W, is he a risk-averse person? How do you know?
    (b) Will Mr. Clark buy the insurance or take the gamble?
    (c) Say the risk increases to a 70/30 chance of losing $20 or winning $15. How much of a premium will Mr. Clark now pay for insurance to remove the risk completely (assuming he remains risk-averse)?
    (d) Say Mr. Clark's initial wealth increases to $150. What change does this bring about in the risk premium he pays to remove the risk completely (assuming he remains risk-averse)?

11.
Lottery A offers a 70% chance of winning $45 and a 30% chance of losing $100. Lottery B offers a 60% chance of winning $55 and a 40% chance of losing $85. Lottery C offers an 80% chance of winning $30 and a 20% chance of losing $110. Without knowing any additional information, such as the utility function, which lottery will you choose? By what criterion?

12. Use MINITAB and the Ri and Rm information given in the table to calculate systematic risk, which was defined in Eq. 21.5 in the text.
    Quarterly rates of return for IBM (Ri) and market rates (Rm), second quarter 1981 to second quarter 1991

    Year  Quarter   Market return   IBM return
    1981  2         -.03522         -.05835
          3         -.11454         -.04993
          4          .054828         .066697
    1982  1         -.08641          .065670
          2         -.02098          .029037
          3          .098622         .224494
          4          .167912         .323475
    1983  1          .087599         .066077
          2          .099045         .191154
          3         -.01213          .062993
          4         -.00686         -.03093
    1984  1         -.03486         -.05778
          2         -.03769         -.06403
          3          .084345         .185342
          4          .006863        -.00020
    1985  1          .080243         .040406
          2          .061939        -.01692
          3         -.05092          .009898
          4          .160369         .264177
    1986  1          .130726        -.01864
          2          .049979        -.02574
          3         -.07781         -.07440
          4          .046904        -.09962
    1987  1          .204525         .260208
          2          .042166         .089758
          3          .058651        -.06553
          4         -.23232         -.22653
    1988  1          .047883        -.05865
          2          .056433         .193728
          3         -
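Question 12 calls for MINITAB, but the same systematic-risk estimate can be sketched in any language with basic arithmetic. The sketch below (our illustrative choice of Python, not part of the text) computes the slope of the market-model regression, Cov(Ri, Rm) / Var(Rm), which is the standard way systematic risk (beta) is estimated; check it against the exact definition in Eq. 21.5. Only the first six quarters from the table are entered here, so extend the lists with the remaining rows before interpreting the result.

```python
# Systematic risk (beta) estimated as Cov(Ri, Rm) / Var(Rm).
# Only the first six quarterly observations from the table are entered;
# add the remaining rows for the full exercise.
market = [-0.03522, -0.11454, 0.054828, -0.08641, -0.02098, 0.098622]
ibm    = [-0.05835, -0.04993, 0.066697, 0.065670, 0.029037, 0.224494]

def beta(r_i, r_m):
    """Slope of the regression of r_i on r_m: Cov(Ri, Rm) / Var(Rm)."""
    n = len(r_m)
    mean_i = sum(r_i) / n
    mean_m = sum(r_m) / n
    # Sample covariance and variance (divisor n - 1 cancels in the ratio).
    cov = sum((x - mean_i) * (y - mean_m) for x, y in zip(r_i, r_m)) / (n - 1)
    var_m = sum((y - mean_m) ** 2 for y in r_m) / (n - 1)
    return cov / var_m

print(f"estimated beta (first six quarters only): {beta(ibm, market):.4f}")
```

The same slope can be obtained from any least-squares routine (for example, regressing the IBM column on the market column in MINITAB or with numpy.polyfit), since beta is simply the OLS slope coefficient of the market model.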