4 Improving Organizational Risk Management

Stock exchanges, financial options and derivatives, securitization and bundling of collateralized debt obligations, and networks of cooperative and reciprocal risk-underwriting agreements are among the developments in business and financial risk management that helped to shape and make possible the modern world. From the Age of Discovery through the scientific and industrial revolutions and into modern times, the ability to coordinate the activities of speculative investors to fund risky ventures and business enterprises, in return for shares in resulting gains or losses, has enabled large-scale profitable risk-taking (Bernstein 1998). Large-scale risk-taking, in turn, has helped to power risky but enormously beneficial explorations, discoveries, innovations, and developments in a variety of industries.

Risk-taking in modern business and finance exploits a key principle: risk sharing among investors allows mutually beneficial acceptance of larger-scale risks than any investor alone would accept. In financial risk analysis, a risky prospect is an investment opportunity that offers different sizes of potential gains or losses, with different corresponding probabilities. A risky prospect that each investor in a group would be unwilling to accept, because its potential losses are too large to justify its potential gains (for a given degree of individual risk aversion), might nonetheless be acceptable to all of them if they take shares in it.

Example: Sharing Large-Scale Risks Can Make Them Acceptable to Risk-Averse Investors

A risk-averse decision-maker who would refuse to accept a 50-50 chance of gaining $2,000 or losing $1,000 might nonetheless want to accept a 50-50 chance of gaining $20 or losing $10. If so, then 100 such individuals could all benefit by taking equal shares in the large risk that returns either $2,000 or -$1,000.
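This arithmetic can be checked numerically under an assumed utility function. The example itself does not specify one; the sketch below uses an exponential utility u(x) = 1 - exp(-x/c) with a hypothetical risk tolerance c = $1,000:

```python
import math

def eu_exponential(outcomes, c):
    """Expected utility of a gamble under u(x) = 1 - exp(-x/c).

    `outcomes` is a list of (probability, payoff) pairs."""
    return sum(p * (1 - math.exp(-x / c)) for p, x in outcomes)

c = 1000  # hypothetical risk-tolerance parameter, in dollars
full_risk = [(0.5, 2000), (0.5, -1000)]
share = [(0.5, 20), (0.5, -10)]  # a 1/100 share of the same prospect

print(eu_exponential(full_risk, c))  # negative: the whole gamble is rejected
print(eu_exponential(share, c))      # positive: the 1/100 share is accepted
```

With this risk tolerance, the whole prospect has negative expected utility relative to doing nothing, while a 1/100 share has positive expected utility, so all 100 investors would accept their shares.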
This illustrates one of the basic principles that enabled investors in early joint stock companies to fund risky exploration, exploitation, and colonization ventures: shares in a risky prospect may be acceptable, even if the prospect as a whole would not be. (The economic theory of syndicates (Wilson 1968) extends this insight by showing that a group of investors with exponential individual utility functions and different degrees of risk aversion, ui(x) = 1 - exp(-x/ci) for individual i, should act like one individual with utility function u(x) = 1 - exp[-x/(c1 + c2 + … + cn)] in deciding what risky investments to accept. Each individual member maximizes expected utility by taking a share ci/(c1 + c2 + … + cn) in each accepted investment and perhaps participating in side bets with other individuals. The inequality 1/(c1 + c2 + … + cn) < 1/ci implies that the group as a whole should be less risk-averse than its members.) Such arrangements for sharing risks, together with diversification of investments across multiple independent prospects, creation and management of investment portfolios of prospects (possibly with correlated returns), and hedging of bets over time (by exploiting negatively correlated assets to reduce the variance in returns), have become staples of financial risk management.

However, there is a widespread perception that additional principles are needed for enterprise risk management (ERM) in today's world, as novel risks are created by increasingly interlinked and interdependent enterprises, new financial instruments for packaging and distributing risky prospects, changing social and moral mores and standards for acceptable (and legal) risk-taking behavior, and new
risk-taking incentives created by modern compensation, liability, corporate governance, and institutional structures. The resulting nontraditional risks can threaten the stability and viability of even the largest organizations and institutions. Initiating events, from unanticipated droughts in remote locations, to factory fires, to loss of reputation and public confidence in a previously trusted organization or institution, can send repercussions spreading through networks of tightly coupled supply chains, contractual obligations, and contingent claims, sometimes causing unexpectedly large and systemic cascades of losses or failures in enterprises far from the original source.

Unrecognized correlations or interdependencies can also create hidden systemic risks in networks of tightly coupled enterprises, making them vulnerable to swiftly cascading failures. As discussed in Chap. 3, and as emphasized in the literature on black swan risks, the resulting heavy-tailed loss distributions, in which unprecedentedly large losses occur too often to be ignored, do not satisfy traditional statistical risk modeling assumptions. Such risks make previous experience an inadequate basis for assessing, planning for, or underwriting future risks of loss. Instead, it becomes necessary to try to anticipate and prepare for risks which, by their very nature, are unlikely to have been seen before. Even within a single enterprise, incomplete and private information, costly communication and transaction costs, and organizational incentives too often undermine effective cooperation and risk management. Many commentators on enterprise risk management (ERM) have concluded that traditional risk management principles need to be augmented with new ideas for managing such nontraditional risks.

New business and financial risks arise largely from uncertainty about the trustworthiness of partners and of agreed-to plans and commitments.
Can supply chain partners be relied on to fulfill their contractual agreements, or are they subject to unexpected interruptions due to strikes, factory fires, unanticipated shortages, or other causes? Can fellow employees in other divisions of a company, or within a single division, be trusted to deliver what they have committed to, or are they likely to be overwhelmed by unforeseen changes in market demand, competition, or regulation? Will poorly aligned incentives cause business partners or fellow employees to take less care than we might want or expect? Uncertainties about whether agreements and internal operational procedures and systems can be trusted, together with high transaction costs for creating, monitoring, and enforcing formal contracts, increase the costs of starting and operating profitable businesses.

Questions about trust and trustworthiness also arise in many economic transactions, for example, between employers and employees, producers and consumers, and insurers and insured, as well as among business partners. Similar questions affect domestic political risks at multiple levels (e.g., how far can union members trust union bosses, or voters trust those they have voted for?) and international relations (e.g., how far can countries trust each other to abide by agreements on disarmament, free trade, environmental emissions, or fair work practices?). A few examples follow, to emphasize and illustrate the types of political, economic, and organizational risks that spring from limited or uncertain trustworthiness of other individual agents.
Example: Individual Versus Social Rationality in Games of Trust

Game theory illuminates many challenges for creating and maintaining high-trust relations in organizations. Principles of individual rationality often conflict strongly with requirements for collective rationality, especially when the incentives of a game undermine trustworthy behavior. Perhaps most famously, temptations to free ride, or to succumb to tragedies of the commons, can lead players to make individually rational choices that leave them all worse off than different choices would. In Prisoner's Dilemma (often used as a model for international arms races or local free riding) and similar games, playing "always defect" is a dominant strategy for every player, even though it leads to Pareto-dominated outcomes.

Prisoner's Dilemma

                      Player 2 cooperates   Player 2 defects
Player 1 cooperates   2, 2                  0, 3
Player 1 defects      3, 0                  1, 1

Thus, the social rationality principle "Don't choose Pareto-dominated outcomes" conflicts with the individual rationality principle "Don't choose dominated strategies." The Centipede Game and Chain Store Paradox (discussed in most modern game theory texts and expositions, e.g., Gintis (2000) and Rosenthal (2011)) show that social rationality also conflicts with other foundations of individual rationality, such as backward induction (used in decision tree analysis and dynamic programming) and dynamic consistency (or its multi-person extension, subgame perfection), respectively. In each of these games, if players could trust each other to cooperate despite the incentives to defect, all would end up better off (with higher individual payoffs) than when each applies principles of individual rationality to the incentives provided by these games (i.e., choosing dominant strategies in Prisoner's Dilemma, using backward induction in the Centipede Game, and selecting the subgame perfect equilibrium in the Chain Store Paradox) (Gintis 2000).
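The two conflicting principles can be verified mechanically from the Prisoner's Dilemma payoffs. The sketch below checks that defection strictly dominates cooperation for each player, yet that mutual defection is Pareto-dominated by mutual cooperation:

```python
# Payoffs (row player, column player) for the Prisoner's Dilemma above.
payoffs = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def defect_dominates(player):
    """True if 'D' strictly beats 'C' for `player` against every opponent move."""
    idx = 0 if player == "row" else 1
    for opp in ("C", "D"):
        key_d = ("D", opp) if player == "row" else (opp, "D")
        key_c = ("C", opp) if player == "row" else (opp, "C")
        if payoffs[key_d][idx] <= payoffs[key_c][idx]:
            return False
    return True

print(defect_dominates("row"), defect_dominates("col"))  # True True
# Mutual defection (1, 1) is Pareto-dominated by mutual cooperation (2, 2):
print(all(d < c for d, c in zip(payoffs[("D", "D")], payoffs[("C", "C")])))  # True
```

So individually rational play (defect) leads both players to the (1, 1) outcome, even though both prefer (2, 2).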
In reality, both laboratory experiments (such as the ultimatum, trust, and dictator games) and real-world evidence (e.g., from labor markets, participation in voting, honest payment of taxes, and so forth), as well as neuroeconomic studies of oxytocin levels and reward pathways in the brain during decisions about whether to trust and cooperate, show that people are predisposed to cooperate more than game theory would predict (Rosenthal 2011; Gintis et al. 2003). Yet, with repeated play, the incentives of these games start to prevail, and defection, rather than cooperation, increases unless some form of retaliatory punishment is allowed (Gintis 2000).

Example: Incentives and Trust in Principal-Agent Relations

In organizations, employees must repeatedly decide how trustworthy to be (e.g., how hard to work each day to achieve their employer's goals, if level of effort is private information and not easily monitored) and also how much to trust each other, for example, in creating shared plans whose success requires multiple divisions to keep commitments. Economists and management scientists have studied how to design compensation rules and other organizational incentives to avoid providing constant temptations to free ride, cheat, lie, or otherwise defect, so that the benefits of mutual cooperation can be more fully achieved. In simple principal-agent models, a single agent chooses a level of effort and produces an outcome for the principal. The outcome depends on the agent's level of effort, and also on chance, so that higher levels of effort are associated with more valuable outcomes but do not guarantee them. The agent receives
compensation from the principal, typically according to a compensation rule or contract to which both agree in advance. The principal can observe the outcome, but not the agent's effort; hence, the agent's compensation can depend only on the outcome, not on his level of effort. Analysis of such models shows that private information (here, the agent's level of effort), coupled with the assumption of purely rational play, leads to Pareto-inefficient levels of effort and probability distributions for outcomes. That is, under any contract that can be designed when only the outcome, but not the agent's effort, is common knowledge (called a second-best contract), the agent typically provides less effort and receives less compensation than if his level of effort could be freely observed by the principal. Both the principal and the agent have lower expected utility than could be achieved by a first-best contract based on common knowledge of effort as well as outcome (Gintis 2000; Rosenthal 2011). Both parties could gain, if only the principal could trust the agent to put in a first-best level of effort and compensate him accordingly. But it would be strategically irrational for them to cooperate this way, in the sense that the principal trusting the agent and the agent being trustworthy do not constitute a Nash equilibrium pair of mutual best (expected utility maximizing) responses to each other's choices.
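The claim that mutual trust fails to be a Nash equilibrium can be illustrated with a small payoff matrix. The numbers below are hypothetical, chosen only to reproduce the qualitative incentives just described (trust is mutually beneficial, but a trusted agent gains by defecting):

```python
# Hypothetical payoffs (principal, agent) for a one-shot trust interaction.
# Principal: "T" = trust, "D" = don't trust. Agent: "T" = trustworthy, "D" = defect.
payoffs = {
    ("T", "T"): (2, 2),  # mutual gain from trust
    ("T", "D"): (0, 3),  # agent shirks on a trusting principal
    ("D", "T"): (1, 1),  # no trust: outcome independent of agent's intent
    ("D", "D"): (1, 1),
}

def is_nash(p_move, a_move):
    """Check that neither side gains by a unilateral deviation."""
    p_pay, a_pay = payoffs[(p_move, a_move)]
    p_dev = payoffs[("D" if p_move == "T" else "T", a_move)][0]
    a_dev = payoffs[(p_move, "D" if a_move == "T" else "T")][1]
    return p_pay >= p_dev and a_pay >= a_dev

print(is_nash("T", "T"))  # False: the agent would deviate to defect (3 > 2)
print(is_nash("D", "D"))  # True: mutual distrust is an equilibrium, though Pareto-inferior
```

The trusting, trustworthy pair (2, 2) Pareto-dominates the equilibrium outcome (1, 1), yet it is not self-enforcing.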
However, when multiple agents repeatedly compete to serve one or more principals, the rewards to favorable reputation, together with improved opportunities for the principal to gauge each agent's effort by comparing results across agents and over time, can induce more trustworthy, and hence more valuable and better-rewarded, agent performance.

Example: Incentives, Trust, and Risk in Market Transactions

Similar principles hold for insurance contracts and for consumer product quality and liability, as well as for employment contracts (Rosenthal 2011; Gintis 2000). In each case, Pareto efficiency of enforceable agreements or contracts is reduced by the existence of private information (or asymmetric information) that creates incentives for one or both parties to defect, compared to what they would do if the private information could be credibly and freely shared. Both parties could gain if each could trust the other to provide a first-best level of effort or due care (i.e., the level that would be achieved if private information were common knowledge), but such trust would not be strategically rational.

In insurance markets, two well-known incentive effects reduce the ability of insurer and insured to agree on mutually beneficial contracts if the insured's true risk level and care level are private information that cannot be freely observed or verified by the insurer. Adverse selection occurs when only people with above-average risks (who expect to benefit from having policies) are willing to pay the premiums for insurance coverage. This self-selection makes the insurance contract less attractive and more expensive for the insurer. If insurer solvency or regulatory constraints require higher premiums to cover the expected higher payouts, then rates may increase, so that only even riskier subsets of buyers are willing to pay the high premiums.
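This unraveling dynamic is easy to simulate. The sketch below uses a hypothetical pool of buyers with known expected losses and assumes each buyer will pay at most 20% above his or her own expected loss, while the insurer sets the premium to break even on whoever remains in the pool:

```python
# Hypothetical pool: each buyer's expected annual loss, in dollars.
expected_losses = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
RISK_PREMIUM = 1.2  # assumed: buyers pay up to 20% above their expected loss

pool = list(expected_losses)
premium = sum(pool) / len(pool)  # insurer breaks even on the current pool
while True:
    stay = [x for x in pool if RISK_PREMIUM * x >= premium]
    if stay == pool:  # no one else drops out: the spiral has stabilized
        break
    pool = stay
    if not pool:      # everyone has dropped out: the market has collapsed
        break
    premium = sum(pool) / len(pool)  # re-price for the (riskier) remaining pool

print(pool)     # [800, 900, 1000]: only the riskiest buyers remain
print(premium)  # 900.0
```

Starting from a break-even premium of $550 for the whole pool, each round of re-pricing drives out the safest remaining buyers, until only the three riskiest are left, even though every buyer was initially willing to pay an actuarially fair premium plus a 20% loading.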
In extreme cases, this cycle of escalating costs and increasing self-selection of the riskiest individuals continues until the market collapses, and no insurance is offered, even though many people would have been willing to buy insurance at rates that would have benefited both themselves and the insurer. Moral hazard arises because those who are insured have less incentive to take care than if they were not insured. Again, both parties could gain if the insurer could trust the insured to take more care despite having insurance. Likewise, in product markets, both manufacturers and consumers might gain if consumers could trust manufacturers to deliver high-quality products at market prices and if manufacturers could trust consumers to exercise care in the use of products.

Enterprise risk management (ERM) and related practices help organizations to think about and manage nontraditional risks. In addition to financial risks, these
include legal, reputational, and brand image risks. They include the many risks arising from complex interdependencies and networks of obligations and commitments, and from uncertainty about the willingness or ability of employees, partners, and customers to deliver on commitments and to maintain trustworthy behavior in the face of temptations to defect. Successful ERM reduces the costs of uncertainty and its adverse impacts on organizational performance. ERM typically focuses on identifying, documenting, sharing, tracking, and managing risks that could disrupt a business or jeopardize its commitments and operations. At least in principle, making such risk information explicit and available for scrutiny – often with the help of periodic audits and reports – can reduce the adverse incentive effects of private information about risks. Maintaining trust in business (and other) relations may be less difficult when risk information is tracked and disclosed. In practice, however, those assessing the risks may not have a very precise understanding of how to assess or express them. Efforts to assess and share risk information and risk management plans responsibly may degenerate into compliance exercises in which boxes are checked off and vague descriptions or summaries are produced, with little real insight into the extent of remaining risks or what to do about them. The following sections provide examples.
A worthwhile challenge for risk analysts is therefore to develop and apply more useful technical methods for enterprise risk analysis, bearing in mind the substantial business and economic advantages of improving risk assessment, communication, and management so that the adverse incentives created when such information remains private can be overcome.

Top-Down ERM Risk Scoring, Rating, and Ranking

A popular current approach to ERM involves employees, from the boardroom level down, in trying to think through what might go wrong, how frequent or likely these failures are, how severe their consequences are likely to be, and what, if anything, should be done about them, both now and later. Such ERM exercises and processes emphasize anticipation and prevention. They have the virtue of bringing together and sharing information among employees from different parts of a company (and sometimes among partners in a supply network), perhaps helping to align organizational understanding of different risks and of plans to deal with them. Sharing information on risks, uncertainties, and measures to manage their effects can help participants more fully achieve the potential gains from well-coordinated cooperation (both inside and outside an organization). The results of ERM processes typically include priority lists, risk matrices, and similar devices to focus management attention and to inform deliberation and decisions about which risks to accept and which risk management interventions to allocate attention and resources to first.

Despite their advantages, such popular approaches to risk management in organizations can inadvertently increase the very risks that they seek to manage, and they too often recommend risk management interventions that could easily be
improved upon (Hubbard 2009). The remainder of this chapter explains why. It also considers how to modify existing ERM systems to improve their performance. The key issues are not restricted to ERM but apply to all uses of risk ranking, scoring, and comparison systems to inform risk management deliberations and resource allocations, whether in a corporation, a regulatory agency, the military, or the Department of Homeland Security. The potential returns from improving risk management practices based on these methods are enormous.

Limitations of Risk Scoring and Ranking Systems

Many organizations practice risk management by regularly scoring, rating, or ranking different hazards (sources of risk) or risk-reducing opportunities to identify the top-ranked opportunities to be addressed in the current budget cycle. Use of priority scoring and rating systems is becoming ever more widespread as they are incorporated into commercial software offerings designed to support compliance with national and international standards (such as the ISO 31000 risk management standard), regulations, and laws (such as Section 404 of the Sarbanes-Oxley Act of 2002 in the United States). It is therefore useful to understand, and where possible overcome, some intrinsic limitations in the performance of all possible priority-setting rules and scoring systems, evaluated as guides to rational action (Hubbard 2009). Although many of these limitations are already well recognized among specialists in decision analysis and financial risk analysis, they are of great practical importance to users seeking to understand what can and cannot be achieved using current risk-scoring methods, or seeking to develop improved approaches to risk management. In general, risk-scoring methods are not appropriate for correlated risks.
Indeed, as we will demonstrate, they are not necessarily better than (or even as good as) purely random selection of which risk management activities to fund.

More constructively, when risk-reducing opportunities have correlated consequences, due to uncertainties about common elements (such as carcinogenic or toxic potencies of chemicals used in manufacturing, effectiveness of counterterrorism or cybersecurity countermeasures used in IT systems, and stability of currency or solvency of banks and insurers used in financing), then methods for optimizing selection of a portfolio (subset) of risk-reducing opportunities can often achieve significantly greater risk reductions for resources spent than can priority-scoring rules. In general, the best choice of a subset of risk-reducing activities cannot be expressed by priority scores. Instead, optimization techniques that consider interdependencies among the consequences of different risk-reducing activities are essential. Fortunately, such methods are easy to develop and implement. They can substantially improve the risk-reduction return on investments in risk-reducing activities.
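A small constructed example makes the point. Below, three hypothetical risk-reduction opportunities are available but only two can be funded; opportunities A and B depend on the same uncertain factor (perfectly correlated consequences), while C delivers a certain reduction. Ranking by individual expected reduction funds the correlated pair, whereas optimizing expected utility over subsets funds a diversified pair (the utility function and all numbers are assumptions for illustration):

```python
import itertools
import math

def eu(outcomes, c=5.0):
    """Expected utility of total risk reduction under u(x) = 1 - exp(-x/c)."""
    return sum(p * (1 - math.exp(-x / c)) for p, x in outcomes)

# Reductions delivered in two equally likely scenarios. A and B hinge on the
# SAME uncertain factor (e.g., one chemical's potency), so they pay off together.
reductions = {
    "A": (10.4, 0.0),
    "B": (10.4, 0.0),
    "C": (5.0, 5.0),  # independent, certain reduction
}

# Priority scoring: rank by individual expected reduction, fund the top two.
scores = {k: 0.5 * v[0] + 0.5 * v[1] for k, v in reductions.items()}
by_score = sorted(scores, key=scores.get, reverse=True)[:2]

# Portfolio optimization: choose the funded pair with the highest expected utility.
def pair_eu(pair):
    s1 = sum(reductions[k][0] for k in pair)
    s2 = sum(reductions[k][1] for k in pair)
    return eu([(0.5, s1), (0.5, s2)])

best = max(itertools.combinations(reductions, 2), key=pair_eu)

print(sorted(by_score))  # ['A', 'B']: the priority rule funds the correlated pair
print(sorted(best))      # a diversified pair including 'C', with higher expected utility
```

The priority rule cannot express "fund A or B, but not both," because it scores each opportunity in isolation; only a subset-level comparison captures the correlation.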
The Need for Improvement: Some Motivating Examples

Examples of important applications of priority-scoring systems in diverse areas of applied risk analysis include the following.

Example: Scoring Information Technology Vulnerabilities

The Common Vulnerability Scoring System (CVSS) for rating information technology (IT) system vulnerabilities uses scoring formulas such as the following to help organizations set priorities for investing in security risk reductions:

BaseScore = (0.6*Impact + 0.4*Exploitability - 1.5)*f(Impact)
Impact = 10.41*(1 - (1 - ConfImpact)*(1 - IntegImpact)*(1 - AvailImpact))
Exploitability = 20*AccessComplexity*Authentication*AccessVector
f(Impact) = 0 if Impact = 0; 1.176 otherwise
AccessComplexity = case AccessComplexity of
  High: 0.35
  Medium: 0.61
  Low: 0.71
Authentication = case Authentication of
  Requires no authentication: 0.704
  Requires single instance of authentication: 0.56
  Requires multiple instances of authentication: 0.45
AccessVector = case AccessVector of
  Requires local access: 0.395
  Local network accessible: 0.646
  Network accessible: 1.0

(Source: http://nvd.nist.gov/cvsseq2.htm)

Such a rule base, no matter how complex, can be viewed as an algorithm that maps categorized judgments and descriptions (such as that access complexity is high and that local access is required) into corresponding numbers on a standard scale. Higher numbers indicate greater vulnerability and need for remedial action. Proponents envision that "As a part of the U.S. government's SCAP (Security Content Automation Protocol) CVSS v2 will be used in standardizing and automating vulnerability management for many millions of computers, eventually rising to hundreds of millions" (http://www.first.org/cvss/).

Example: Scoring Consumer Credit Risks

The practice of rank-ordering consumers based on credit scores is ubiquitous in business today.
A recent description states that "FICO® risk scores rank-order consumers according to the likelihood that their credit obligations will be paid as expected. The recognized industry standard in consumer credit risk assessment, FICO® risk scores play a pivotal role in billions of business decisions each year. … [They] are widely regarded as essential building blocks for devising successful, precisely targeted marketing, origination and customer management strategies by credit grantors, insurance providers and telecommunications companies." Examples include BEACON® at Equifax US and Canada; FICO® Risk Score, Classic at TransUnion US; and Experian/Fair Isaac Risk Model at Experian. (Source: www.fairisaac.com/fic/en/product-service/product-index/fico-score/)
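The CVSS v2 formulas quoted earlier can be transcribed directly into code. The sketch below does so; the impact sub-score value 0.660 used in the illustrative call is the standard CVSS v2 value for a "Complete" impact, which is not stated in the excerpt above and should be checked against the specification:

```python
# Direct transcription of the CVSS v2 base-score formulas quoted earlier.
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"none": 0.704, "single": 0.56, "multiple": 0.45}
ACCESS_VECTOR = {"local": 0.395, "local network": 0.646, "network": 1.0}

def base_score(conf, integ, avail, complexity, auth, vector):
    """CVSS v2 base score from impact sub-scores and categorical ratings."""
    impact = 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))
    exploitability = (20 * ACCESS_COMPLEXITY[complexity]
                      * AUTHENTICATION[auth] * ACCESS_VECTOR[vector])
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Worst case on every axis: complete impacts (0.660 each), low complexity,
# no authentication required, network accessible:
print(base_score(0.660, 0.660, 0.660, "low", "none", "network"))  # 10.0
```

Running the formulas on the worst-case inputs reproduces the familiar maximum score of 10.0, confirming that the rule base is just a deterministic mapping from categorical judgments to a number on a fixed scale.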
Example: Scoring Superfund Sites to Determine Funding Priorities

The State of Connecticut (www.ct.gov/dep/lib/dep/regulations/22a/22a-133f-1.pdf) published a Superfund Priority Score method to be used in determining funding priorities for remediation of Superfund sites. Users must score each of many factors (reflecting exposure potential; groundwater impact; surface water impact; toxicity, persistence, mobility, and quantity of hazardous substances; impact to the environment, including Species of Special Concern; and potential air release and fire hazards) using ordered categories. Each category carries a certain number of points. For example, an area that contains a rare species gets a score of 4 on this factor. If it has a declining or infrequent species, the score is 3; for a habitat-limited species, the score is 2. If this factor (species of concern) is not applicable, the score for this factor is zero. The scores for all factors are summed. The resulting total score determines the priority for funding of remedial action at sites on the SPL [the State of Connecticut Superfund Priority List].

Example: Priority Scoring of Bioterrorism Agents

MacIntyre et al. (2006) proposed a risk priority-scoring system for bioterrorism agents.
They described their approach as follows:

"• Disease impact criteria were as follows: infectivity of the agent (person-to-person transmission potential), case fatality rate, stability in the environment and ease of decontamination, incidence of disease per 100,000 exposed persons in the worst-case release scenario, and reports of genetic modification of the agent for increased virulence.
• Probability of attack criteria was [sic] designated as: global availability and ease of procurement of the agent, ease of weaponization, and historical examples of use of the agent for an attack.
• Prevention/intervention criteria were categorized as: lack of preventability of the disease (such as by vaccination) and lack of treatability of the disease (such as by antibiotics).
• For each of the scoring categories, a score of 0–2 was assigned for each category A agent as follows: 0 = no, 1 = some/low, and 2 = yes/high. The sum of these scores (of a total possible score of 20) was used to rank priority."

This is similar to the Superfund scoring system, in that categorical ratings for various factors are assigned numerical scores, and the sum of the scores is used to set priorities. In neither case did the authors verify whether additive independence conditions hold, which are required in multiattribute value and utility theory to justify additive representations of preferences (Keeney and Raiffa 1976). For example, an agent with a score of 2 for lack of preventability of disease and 0 for lack of treatability would have the same sum for these two factors (2 + 0 = 2) as an agent with lack of preventability of disease = 0 and lack of treatability = 2, or as an agent with lack of preventability of disease = 1 and lack of treatability = 1. Yet, risk managers who can completely prevent a disease (lack of preventability of disease = 0) might not care as much about whether it is treatable as they would if the disease could not be prevented.
Likewise, in Superfund site scoring, many decision-makers might care less about the presence of a declining species near a site that creates no exposure than near a site that creates a large, toxic exposure. Such interactions among factor scores are ignored in purely additive scoring systems.
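The problem can be made concrete with the two bioterrorism agents from the example above. An additive rule gives them identical sums; a scoring rule that models the interaction (the hypothetical one below treats untreatability as mattering only to the extent that the disease cannot be prevented) separates them sharply:

```python
# Two hypothetical agents from the text's example. Each sums to 2 on the
# preventability/treatability factors, so an additive rule cannot separate them.
agents = {
    "agent_1": {"lack_prevent": 2, "lack_treat": 0},  # unpreventable but treatable
    "agent_2": {"lack_prevent": 0, "lack_treat": 2},  # preventable: treatability is moot
}

def additive_score(a):
    """The additive rule used by MacIntyre et al.: sum the factor scores."""
    return a["lack_prevent"] + a["lack_treat"]

def interaction_score(a):
    """One possible interaction-aware alternative (an illustrative assumption):
    untreatability adds concern only insofar as the disease is unpreventable."""
    return a["lack_prevent"] * (1 + a["lack_treat"])

for name, a in agents.items():
    print(name, additive_score(a), interaction_score(a))
# additive scores tie at 2; the interaction-aware scores are 2 and 0
```

An unpreventable agent outranks a fully preventable one under the interaction-aware rule, matching the risk managers' intuition described above, while the additive rule is blind to the difference.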
Example: Larger Input Uncertainties May Create Smaller Output Uncertainties

Occasionally, users of risk-scoring systems are asked to rate or rank their uncertainties about different inputs, with the idea being that larger uncertainties in inputs drive greater uncertainty about outputs and therefore might benefit most from further information. It may be worth noting that the assumption that greater uncertainty in an input cannot produce smaller uncertainty in the output of a model is not necessarily mathematically valid. Consider a model Y = f(X), where X is an uncertain input and Y is the model's output. For concreteness, suppose that X is a scalar input, uniformly distributed over some interval, and that f is a known, deterministic function. Now, is it true that the uncertainty about Y corresponding to an uncertain value of X should necessarily be a non-decreasing function of the level of uncertainty in X? The following example suggests not. Presumably, most analysts (and all who use variance or entropy to define and measure the uncertainty of a probability distribution) would agree that X has smaller uncertainty if it is uniformly distributed between 98 and 100 than if it is uniformly distributed between 0 and 198. Yet, if f is the threshold function f(X) = 1 for 99 ≤ X ≤ 100, else f(X) = 0, then the uncertainty (e.g., variance or entropy) of Y = f(X) is greatest when X is uniformly distributed between 98 and 100 (since there are then equal probabilities of 50% each that Y will be 0 or 1) and is much smaller when X is uniformly distributed between 0 and 198 (since there is then a 99.5% probability that Y = 0). So, larger uncertainty about X induces smaller uncertainty about the value of the output Y caused by X.
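This is easy to verify exactly: since Y is a 0/1 (Bernoulli) variable, its variance is p(1 - p), where p is the probability that the uniform input lands in the threshold interval [99, 100]:

```python
def output_variance(lo, hi):
    """Variance of Y = f(X) for X ~ Uniform(lo, hi), where f is the 0/1
    threshold function f(x) = 1 if 99 <= x <= 100 else 0."""
    overlap = max(0.0, min(hi, 100.0) - max(lo, 99.0))
    p = overlap / (hi - lo)  # P(Y = 1)
    return p * (1 - p)       # variance of a Bernoulli(p) variable

print(output_variance(98, 100))  # 0.25: the "smaller" input uncertainty
print(output_variance(0, 198))   # ~0.005: the "larger" input uncertainty
```

The narrow input interval yields the maximum possible variance for a 0/1 output (p = 0.5), while the wide interval yields a variance roughly fifty times smaller.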
Thus, uncertainty about the output should not necessarily be assumed to be an increasing function of input uncertainty.

Example: Threat–Vulnerability–Consequence (TVC) Risk Scores and Risk Matrices

Many organizations use numerical priority-scoring formulas such as Risk = Threat × Vulnerability × Consequence, Risk = Threat × Vulnerability × Criticality, or Risk = Threat × Vulnerability × Impact. The Department of Homeland Security, the Department of Defense, and the armed services all use this approach to prioritize anti-terrorism risk-reduction efforts (Jones and Edmonds 2008; Mitchell and Decker 2004; www.ncjrs.gov/pdffiles1/bja/210680.pdf). The formula Risk = Threat × Vulnerability × Consequence also provides the conceptual and mathematical basis for the RAMCAP™ (Risk Analysis and Management for Critical Asset Protection) standard and related compliance training and software (www.ramcapplus.com/). Law enforcement officers have been trained to use Risk = Threat × Vulnerability × Impact scoring systems to set priorities for managing security risks at major special events (www.cops.usdoj.gov/files/ric/CDROMs/PlanningSecurity/modules/3/module%203%20ppt.ppt). Unfortunately, when the components on the right-hand side (e.g., Threat, Vulnerability, and Consequence) are correlated random variables – for example, because attackers are more likely to attack facilities with high Vulnerability and Consequence, or because larger storage facilities have higher Vulnerability and Consequence than small ones – then the product of their means differs from the mean of their product, and it is not clear what either one has to do with risk. Correct expressions require additional terms to adjust for non-zero covariances (Cox 2008b). Similar comments apply to widely used risk matrices based on formulas such as Risk = Frequency × Severity, with the right-hand-side variables assessed using ordered categories (such as high, medium, and low) and Risk ratings or priorities then being determined from these component ratings.
In general, such risk matrices order some pairs of risks incorrectly and, in some cases, can perform even worse than setting priorities randomly (Cox 2008a).
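The gap between the product of means and the mean of the product is easy to demonstrate by simulation. The facility model below is entirely hypothetical: a single "size" factor drives Threat, Vulnerability, and Consequence upward together, inducing positive correlation:

```python
import random

random.seed(1)

# Hypothetical facilities: Threat, Vulnerability, and Consequence are
# positively correlated because all three grow with facility "size".
n = 100_000
samples = []
for _ in range(n):
    size = random.random()   # common driver, uniform on [0, 1]
    t = 0.2 + 0.8 * size     # threat rises with size
    v = 0.2 + 0.8 * size     # so does vulnerability
    c = 100 * size           # and consequence
    samples.append((t, v, c))

def mean(xs):
    return sum(xs) / len(xs)

e_t = mean([s[0] for s in samples])
e_v = mean([s[1] for s in samples])
e_c = mean([s[2] for s in samples])
e_tvc = mean([t * v * c for t, v, c in samples])

print(e_t * e_v * e_c)  # product of means: about 18
print(e_tvc)            # mean of the product: about 29, substantially larger
```

Scoring each facility with the product of its mean component ratings would understate the expected product by roughly 60% in this example; with negative correlations, the bias runs the other way.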
Setting Priorities for Known Risk-Reducing Investment Opportunities

To enable formal analysis of the properties of priority-scoring systems in a reasonably general framework, we define a priority-setting process as consisting of the following elements:

1. A set of items to be ranked or scored. The items may be hazards, threats, customers, interventions, assets, frequency–severity pairs, threat–vulnerability–consequence triples, threat–vulnerability–consequence–remediation cost quadruples, Superfund sites, construction projects, or other objects. We will refer to them generically as items, hazards, prospects, or opportunities.
2. An ordered set of priority scores that are used to compare hazards. These may be ordered categorical grades, such as high, medium, and low; nonnegative integers indicating relative priority or ranking; or nonnegative real numbers representing values of a quantitative priority index, such as Risk = Threat × Vulnerability × Consequence or priority index = expected benefit of remediation/expected cost of remediation, where the italicized variables are nonnegative numbers.
3. A priority-scoring rule. A scoring rule is a mathematical function (or a procedure or algorithm implementing it) that assigns to each hazard a unique corresponding priority score. (This implies that any two hazards having identical attribute values, or identical joint distributions of attribute values, must have the same priority score.)

The priority-scoring rule determines a priority order in which hazards are to be addressed (possibly with some ties). Addressing a hazard is assumed to reduce risk and hence to be valuable to the decision-maker: it increases expected utility.
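For concreteness, the three elements just defined can be sketched in a few lines of code; the hazards, attribute values, and benefit/cost scoring rule below are all hypothetical:

```python
# Element 1: a set of items (hazards) with their attributes (hypothetical data).
hazards = [
    {"name": "site_A", "expected_benefit": 40.0, "remediation_cost": 10.0},
    {"name": "site_B", "expected_benefit": 90.0, "remediation_cost": 45.0},
    {"name": "site_C", "expected_benefit": 12.0, "remediation_cost": 4.0},
]

# Element 3: a priority-scoring rule mapping each hazard to a priority score
# (element 2: here, a nonnegative real benefit/cost index).
def priority_score(h):
    return h["expected_benefit"] / h["remediation_cost"]

# The rule induces the priority order in which hazards are to be addressed:
ranked = sorted(hazards, key=priority_score, reverse=True)
print([h["name"] for h in ranked])  # ['site_A', 'site_C', 'site_B']
```

Note that the rule depends only on each hazard's own attributes, as the definition requires; this is precisely the restriction that later turns out to be limiting when consequences are correlated across hazards.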
For example, it may stochastically reduce the flow of illnesses, injuries, or fatalities resulting from a hazardous process, activity, or environment.

Although items might have multiple attributes, and value trade-offs might make preferences among them difficult to define clearly in practice, we shall assume that the decision-maker has perfectly clear, consistent preferences for the consequences of addressing different hazards. For example, suppose that addressing hazard j reduces loss, measured on a scale such as dollars (for financial risks) or quality-adjusted life years (QALYs) (Doctor et al. 2004) for health risks, by an amount, x_j, defined as the difference between the loss if hazard j is left unaddressed and the loss if hazard j is addressed. Suppose that all value units (e.g., dollars or QALYs) are considered equally intrinsically valuable, with twice as many being worth twice as much to the decision-maker. More generally, we assume that addressing hazards creates gains on a measurable value scale satisfying standard axioms (Dyer and Sarin 1979) that allow preferences for changes in or differences between situations, from before a hazard is addressed to after it is addressed, to be coherently ranked and compared. Let x_j be the measurable value from addressing hazard j. We assume that the value of addressing a hazard, expressed on such a measurable value scale, depends only on its attributes, and we work directly with the measurable values,
rather than the underlying attributes. (The value scale need not be measured in QALYs, but thinking of such a concrete example may aid intuition.) If it costs the same amount to address any hazard, and if the resulting increases in value are known with certainty, then, for any budget, total benefits are maximized by addressing the hazards in order of their decreasing values, x_j. This provides one useful model for priority-based risk management decision-making.

Priorities for Independent, Normally Distributed Risk Reductions

Next, suppose that the value achieved by addressing hazard j is uncertain. This might happen, for example, if the quantities or potencies of hazardous chemicals stored at different waste sites are uncertain, or if the sizes of exposed populations and their susceptibilities to exposure are not known, or if the effectiveness of interventions in reducing risks is in doubt. To model priority-based risk management decisions with uncertainty about the sizes of risk-reduction opportunities, we assume that their values are random variables and that the decision-maker is risk-averse. For a risk-averse decision-maker with a smooth (twice-differentiable) increasing von Neumann–Morgenstern utility function for the value attribute, the conditions in Table 4.1 are all mutually equivalent, and all imply that the utility

Table 4.1 Equivalent characterizations of exponential utility functions

Let X and Y be any two risky prospects (random variables) measured on the intrinsic value scale. They represent the uncertain values (e.g., QALYs saved) of addressing two different hazards.

• Strong Risk Independence: Adding the same constant to both X and Y leaves their preference ordering unchanged.
Thus, if X + w is preferred to Y + w for some value of the constant w, then X + w is preferred to Y + w for all values of w.
• Risk Premium Independence: The decision-maker's risk premium (the amount she is willing to pay to replace a prospect with its expected value) for any risky prospect depends only on the prospect. (Thus, it is independent of background levels of the value attribute.)
• Certainty Equivalent Independence: If a constant, w, is added to every possible outcome of a prospect X, then the certainty equivalent of the new prospect thus formed is CE(X) + w, where CE(X) denotes the certainty equivalent (or selling price on the intrinsic value scale) of prospect X. (This is sometimes called the delta property, due to Pfanzagl, 1959.) Thus, for any constant, w, CE(w + X) = CE(X) + w.
• Equal Buying and Selling Prices: For any prospect X and any constant w, the decision-maker is indifferent between w + CE(X) – X and w + X – CE(X).
• No Buying Price/Selling Price Reversals: The ranking of prospects based on their certainty equivalents (i.e., selling prices, e.g., how many QALYs would have to be saved with certainty to offset the loss from abandoning the opportunity to save X QALYs) never disagrees with their ranking based on buying prices (e.g., how many QALYs a decision-maker would give up with certainty to save X QALYs). (This assumes the decision-maker is risk-averse; otherwise, the linear risk-neutral utility function u(x) = x would also work.)
• Exponential Utility: u(x) = 1 – e^(–kx)

Dyer and Jia (1998), Hazen and Sounderpandian (1999)
function is exponential. If one or more of these conditions is considered normatively compelling, then an exponential utility function should be used to choose among prospects with uncertain values.

The expected value of an exponential utility function for any random variable corresponds to its moment-generating function. For example, let X_j represent the uncertain measurable value of addressing hazard j, modeled as a random variable on the value axis. Let CE(X_j) denote the certainty equivalent of X_j, that is, the value (such as QALYs saved) received with certainty that would have the same expected utility as (or be indifferent to) the random variable X_j. Then, if X_j is normally distributed with mean E(X_j) and variance Var(X_j), it follows (from inspection of the moment-generating function for normal distributions) that its certainty equivalent is

CE(X_j) = E(X_j) – (k/2)Var(X_j),

where k is the coefficient of risk aversion in the exponential utility function (Infanger 2006, p. 208).

A set of equally costly risk-reducing measures with independent, normally distributed values can be prioritized in order of decreasing CE(X_j) values. For any budget, total expected utility is maximized by funding risk-reduction opportunities in order of decreasing priority until no more can be purchased. Moreover, even if the risk-reducing measures do not have identical costs, an optimal (expected-utility-maximizing, given the budget) policy maximizes the sum of certainty equivalents, subject to the budget constraint. (This follows from the additivity of means and of variances for independent risks. Finding an optimal subset in this case is a well-studied combinatorial optimization problem, the knapsack problem.) Thus, for any two feasible portfolios of risk-reducing measures, the one with the greater sum of certainty equivalents is preferred.
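These relationships can be checked directly. The sketch below (with illustrative numbers only; the opportunity names and values are hypothetical) computes the certainty equivalent of a normally distributed risk reduction under exponential utility, verifies it against the expected utility of the prospect by simulation, and ranks three equally costly opportunities by decreasing CE:

```python
import numpy as np

def certainty_equivalent(mean, var, k):
    """CE of a Normal(mean, var) prospect under u(x) = 1 - exp(-k*x)."""
    return mean - 0.5 * k * var

# Sanity check by simulation: the sure amount CE should have the same
# expected utility as the normal prospect itself.
rng = np.random.default_rng(seed=1)
k, mean, var = 0.5, 10.0, 8.0
x = rng.normal(mean, np.sqrt(var), 1_000_000)
eu_prospect = np.mean(1 - np.exp(-k * x))
ce = certainty_equivalent(mean, var, k)       # 10 - 0.25*8 = 8.0
eu_sure = 1 - np.exp(-k * ce)
print(ce, eu_prospect, eu_sure)

# Equally costly, independent, normally distributed risk reductions
# (hypothetical (mean, variance) pairs) are prioritized by decreasing CE.
opportunities = {"A": (5.0, 1.0), "B": (5.5, 6.0), "C": (4.0, 0.5)}
ranked = sorted(opportunities,
                key=lambda j: -certainty_equivalent(*opportunities[j], k))
print(ranked)
```

Note that opportunity B has the largest mean but only the second-largest certainty equivalent: its extra variance costs more, to this risk-averse decision-maker, than its extra mean is worth.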
Certainty equivalents therefore serve as satisfactory priority indices for identifying optimal risk-reducing investments in this case.

Priority Ratings Yield Poor Risk Management Strategies for Correlated Risks

Priority-based risk management successfully maximizes the risk-reduction value (the expected utility or certainty equivalent value of risk-reducing activities) of defensive investments in the special cases discussed in the preceding two sections. However, it fails to do so more generally. Selecting a best portfolio of hazards to address (or of risk-reducing measures to implement) cannot in general be accomplished by priority-setting if uncertainties about the sizes of risks (or of risk-reduction opportunities) are correlated. Unfortunately, this is the case in many applications of practical interest. No priority rule can recommend the best portfolio (subset) of risk-reducing opportunities when the optimal strategy requires diversifying risk-reducing investments across two or more types of opportunities, or when it requires coordinating correlated risk reductions from opportunities of different types (having different priority scores).
Example: Priority Rules Overlook Opportunities for Risk-Free Gains

A priority-setting rule that rates each uncertain hazard based on its own attributes only, as all the real priority-scoring systems previously mentioned do, will in general be unable to recommend an optimal subset of correlated risk-reducing opportunities. For example, any risk-averse decision-maker prefers a single random draw from a normal distribution with mean 1 and variance 1, denoted N(1, 1), to a single draw from the normal distribution N(1, 2), having mean 1 but variance 2. Therefore, a scoring rule would assign a higher priority to draws from N(1, 1) than to draws from N(1, 2). But suppose that X and Y are two N(1, 2) random variables that are perfectly negatively correlated, with Y = 2 – X. (This might happen, for example, if effects depend only on the sum of X and Y, which has a known value of 2, but the relative contributions of X and Y to their sum are uncertain.) Then, drawing once from X and once from Y (each of which is N(1, 2)) would yield a sure gain of 2. Any risk-averse decision-maker prefers this sure gain to two draws from N(1, 1). Unfortunately, any priority rule that ignores correlations among opportunities would miss this possibility of constructing a risk-free gain by putting X and Y in the same portfolio, as it would always assign draws from N(1, 1) higher priority than draws from N(1, 2).

This example shows that priority-setting rules can recommend dominated portfolios, such as allocating all resources to risk reductions drawn from N(1, 1) instead of pairing negatively correlated N(1, 2) risk reductions, because they cannot describe optimal portfolios that depend on correlations among risk-reducing opportunities, rather than on the attributes of the individual opportunities.
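A few lines of simulation make the dominance concrete. The sketch below compares, under an exponential utility with an arbitrarily chosen risk-aversion coefficient k = 1, the portfolio a per-item scoring rule would choose (two independent N(1, 1) draws) with the hedged portfolio it would reject (X and Y = 2 − X, each marginally N(1, 2)):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n = 200_000

# Two independent draws from N(1, 1): the portfolio a per-item scoring
# rule prefers, since each draw individually has the smaller variance.
portfolio_scored = rng.normal(1, 1, n) + rng.normal(1, 1, n)

# X ~ N(1, 2) with Y = 2 - X: each draw is individually riskier, but the
# pair is jointly riskless.
x = rng.normal(1, np.sqrt(2), n)
portfolio_hedged = x + (2 - x)                 # identically 2

# Exponential utility with risk-aversion coefficient k (any k > 0 gives
# the same conclusion): the sure gain of 2 beats the mean-2, variance-2
# scored portfolio.
k = 1.0
eu_scored = np.mean(1 - np.exp(-k * portfolio_scored))
eu_hedged = np.mean(1 - np.exp(-k * portfolio_hedged))
print(eu_scored, eu_hedged)
```

The hedged portfolio's expected utility is exactly u(2) = 1 − e^(−2) ≈ 0.865, while the scored portfolio's is 1 − e^(−1) ≈ 0.632 in the limit: the "higher-priority" items form the strictly worse portfolio.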
The next example shows that priority rules can, in principle, not only recommend a dominated decision but in some cases can even recommend the worst possible decision.

Example: Priority-Setting Can Recommend the Worst Possible Resource Allocation

Setting: Suppose that an environmental risk manager must decide how to allocate scarce resources to remediate a large number of potentially hazardous sites. There are two main types of sites. Hazards at type A sites arise primarily from relatively long, thin chrysotile asbestos fibers. Hazards at type B sites arise from somewhat shorter and thicker amphibole asbestos fibers. The risk manager is uncertain about their relative potencies but knows that removing mixtures of approximately equal parts of the chrysotile and amphibole fibers significantly reduces risks of lung cancer and mesothelioma in surrounding populations. She believes that the following two hypotheses are plausible, but is uncertain about their respective probabilities. (This is intended as a simple illustration only, not as a realistic risk model.)

• H1: Relative risk from a type A site is 0; relative risk from a type B site is 2 (compared to the risk from a hypothetical site with equal mixtures of chrysotile and amphibole fibers, which we define as 1). This hypothesis implies that all risk is from amphibole fibers.
• H2: Relative risk from a type A site is 2; relative risk from a type B site is 0. This hypothesis implies that all risk is from the chrysotile fibers.

For purposes of illustration only, we assume that only these two hypotheses are considered plausible, although clearly others (especially that the two types of fiber are equally potent) would be considered in reality.

Problem: If the risk manager can afford to clean N = 10 sites, then how should she allocate them between type A and type B sites? Assume that she is risk-averse and that more than 10 sites of each type are available.
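One way to answer the question is brute-force enumeration of all feasible allocations. A minimal sketch, assuming for illustration the concave utility u(x) = x^0.5 and p = 0.5 (the values used in the solution that follows):

```python
import math

def expected_utility(x, n=10, p=0.5, u=math.sqrt):
    """EU of cleaning x type A sites and n - x type B sites.

    With probability p, hypothesis H1 holds (only type B sites are risky),
    so value is earned from the n - x type B sites cleaned; with probability
    1 - p, H2 holds and value is earned from the x type A sites cleaned.
    """
    return p * u(n - x) + (1 - p) * u(x)

# Enumerate every feasible split of the 10 cleanups between site types.
eus = {x: expected_utility(x) for x in range(11)}
best = max(eus, key=eus.get)
worst = min(eus, key=eus.get)
print(best, worst, round(eus[best], 3), round(eus[worst], 3))
```

The enumeration confirms that the diversified split x = 5 maximizes expected utility, while the all-one-type allocations (x = 0 or x = 10) – the only allocations a strict priority order can recommend – tie for worst.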
Solution: If the risk manager cleans x type A sites and (N – x) type B sites, then the total expected utility from cleaned sites is pu(N – x) + (1 – p)u(x). Here, p denotes the probability that hypothesis H1 is correct, 1 – p is the probability that H2 is correct, N = 10 is the total number of sites that can be cleaned, and u(x) is the utility of cleaning x sites with relative risk of 2 per site cleaned. For any risk-averse (concave) utility function u(x), and for any value of p between 0 and 1, Jensen's inequality implies that expected utility is maximized for some x strictly between 0 and N. For example, if u(x) = x^0.5 and p = 0.5, then x = 5 maximizes expected utility. The worst possible decision (minimizing expected utility) is to allocate all resources to only one type of site (either type A or type B). Yet this is precisely what a priority system that assigns one type a higher priority than the other must recommend. Hence, in this case, any possible priority order (either giving type A sites precedence over type B sites or vice versa, perhaps depending on whether p < 0.5) will recommend a subset of sites that has lower expected utility than even a randomly selected subset of sites. The best subset (e.g., 5 type A sites and 5 type B sites, if p = 0.5) can easily be constructed by optimization if p is known. But even if both p and u(x) are unknown, it is clear that a priority order is the worst possible decision rule.

Example: Priority-Setting Ignores Opportunities for Coordinated Defenses

Setting: Suppose that an information security risk manager can purchase either of two types of security upgrades for each of 100 web servers. Type A prevents undetected unauthorized access to a web server, and type B prevents unauthorized execution of arbitrary code with the privileges of the web server, even if the web server is accessed.
(For examples of real-world historical vulnerabilities in an Apache web server, see http://www.first.org/cvss/cvss-guide.html#i1.2.) For simplicity, suppose that installing a type A upgrade reduces the annual incidence of successful attacks via a web server from 0.03 to 0.02 per web-server-year and that installing a type B upgrade reduces it from 0.03 to 0.025. Installing both reduces the average annual rate of successful attacks via these machines from 0.03 to 0.

Problem: If the security risk manager can afford 100 security upgrades (of either type), what investment strategy for reducing the average annual frequency of successful attacks would be recommended based on (a) priority ranking of options A and B and (b) minimization of remaining risk? (Assume that the frequency of attempted attacks remains constant, because hackers only discover the defenses of a web server when they attempt to compromise it.)

Solution: (a) A vulnerability-scoring system could assign top priority to installing a type A upgrade on each of the 100 web servers, because a type A upgrade achieves a larger reduction in the vulnerability score of each server than a type B upgrade. Following this recommendation would leave a residual risk of 0.02 × 100 = 2 expected successful attacks per year. (b) By contrast, a risk-minimizing budget allocation installs both A and B upgrades on each of 50 machines, leaving 50 machines unprotected. The residual risk is then 0.03 × 50 = 1.5 expected successful attacks per year, less than that from giving A priority over B.

Comment: In this example, a scoring system that considers interactions among vulnerability-reducing activities could give "install A and B" a higher priority for each server than either "install A" or "install B." But most deployed scoring systems do not encourage considering interactions among vulnerabilities or among vulnerability-reducing countermeasures. In many applications, doing so could lead to combinatorial explosion.
(For example, the guidance for Common Vulnerability Scoring System 2.0 offers this advice: “SCORING TIP #1: Vulnerability scoring should not take into account any interaction with other vulnerabilities. That is, each vulnerability should be scored independently.” http://www.first.org/cvss/cvss-guide.html#i1.2)
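The arithmetic of the two allocations in this example can be laid out explicitly, using the attack rates and budget given above:

```python
# Annual successful-attack rates per web server under each upgrade choice
# (illustrative numbers from the example).
RATE_NONE, RATE_A, RATE_B, RATE_BOTH = 0.03, 0.02, 0.025, 0.0
N_SERVERS, BUDGET = 100, 100          # 100 upgrades of either type

# (a) Priority ranking: type A scores better per server, so all 100
# upgrades go to type A installs, one per server.
residual_priority = N_SERVERS * RATE_A                    # 0.02 * 100

# (b) Risk-minimizing allocation: install both A and B on 50 servers
# (2 upgrades each), leaving 50 servers unprotected.
residual_optimal = 50 * RATE_BOTH + 50 * RATE_NONE        # 0.03 * 50

print(residual_priority, residual_optimal)
```

The coordinated allocation leaves 1.5 expected successful attacks per year against 2 for the priority-ranked one, a 25% reduction in residual risk from the same budget.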
Example: Priority Rules Ignore Aversion to Large-Scale Uncertainties

Setting: A bioterrorism risk manager must choose which of two defensive programs to implement this year: (A) a prevention program (e.g., vaccination) that, if it works, will reduce the risk of fatal infection from 10% to 0% for each affected person in the event of a bioterrorism attack with a certain agent; or (B) a treatment program (e.g., stockpiling an antibiotic) that will reduce the risk of mortality from 10% to 5% for each affected individual in the event of such an attack. For simplicity, suppose that program A will prevent either N expected deaths (if it works) or none (if it does not) following an attack and that its success probability is p. Program B prevents 0.5N expected deaths with certainty, leaving 0.5N remaining expected deaths in the event of an attack.

Problem: (a) For a risk-averse decision-maker with utility function u(x) = 1 – e^(–kx), where x is the number of expected deaths prevented, which risk-reduction measure, A or B, is preferable? (Express the answer as a function of p, k, and N.) (b) How does this compare to the results of a priority ranking system, for p = 0.8 and k = 1?

Solution: (a) The expected utility of risk reduction is pu(N) = p(1 – e^(–kN)) for program A and u(0.5N) = 1 – e^(–0.5kN) for program B. Program A is preferable to program B if and only if p(1 – e^(–kN)) > 1 – e^(–0.5kN) or, equivalently, p > (1 – e^(–0.5kN))/(1 – e^(–kN)). For example, if kN = 1, then p must be at least 62.2% to make A preferable to B. If kN = 10, then p must be at least 99.3% to make A preferable to B. (b) If the probability that program A will work is p = 0.8 and the coefficient of absolute risk aversion is k = 1, then A is preferred to B for N = 1 or 2, and B is preferred to A for N ≥ 3. In this case, diversification is not an issue (i.e., either A or B is definitely preferable, depending on the value of N).
However, no priority ranking of interventions A and B is best for both N = 2 and N = 3. The reason is that a risk-averse decision-maker who prefers A to B for small N prefers B to A for larger N. Any priority-scoring system that ranks one of A or B above the other, and that is not sensitive to N, will recommend the less valuable decision for some values of N. In practice, most scoring systems use qualitative or ordered categorical descriptions that are not sensitive to quantitative details such as N. (For example, the Common Vulnerability Scoring System rates “Collateral Damage Potential,” which scores “potential for loss of life, physical assets, productivity or revenue,” as high if “A successful exploit of this vulnerability may result in catastrophic physical or property damage and loss. Or, there may be a catastrophic loss of revenue or productivity.” http://www.first.org/cvss/cvss-guide.html#i1.2 Such a qualitative description does not discriminate between N = 2 and N = 3.)

Discussion: Precisely analogous examples hold for consumer credit risk-reducing interventions, information security, homeland security, and other applications in which the success of some proposed interventions is uncertain. Suppose that intervention A reduces the average rate of successful attacks per target (e.g., secure facility or web server) per year from 10% to 0% if it works, while intervention B reduces the rate from 10% to 5% with certainty. The probability that A will work (i.e., that an attacker cannot circumvent it) is p. If the choice between A and B affects N similar targets, then, by analogy to the above example, a risk-averse risk manager should prefer A to B for sufficiently small N and B to A for larger values of N. Any priority system that is applied to a small number of targets at a time (possibly only 1, by the target's owner, operator, or security manager) will then consistently recommend A, even though B should be preferred when the complete set of N targets is considered.
Because scoring systems are blind to the total number of similar targets to which they are applied (i.e., to the scale of application), large-scale application of priorities that hold for small numbers of targets – but that should be reversed for larger numbers of targets – can lead to excessively high risk exposures.
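The threshold probability and the preference reversal in this example are easy to reproduce. A short sketch, using the formulas derived in the solution above:

```python
import math

def prefer_A(p, k, N):
    """True if prevention program A beats treatment program B.

    A prevents N expected deaths with probability p (else none);
    B prevents N/2 expected deaths with certainty; u(x) = 1 - exp(-k*x).
    """
    return p * (1 - math.exp(-k * N)) > 1 - math.exp(-0.5 * k * N)

def threshold_p(kN):
    """Minimum success probability making A preferable, as a function of k*N."""
    return (1 - math.exp(-0.5 * kN)) / (1 - math.exp(-kN))

print(round(threshold_p(1), 3))    # about 0.622
print(round(threshold_p(10), 3))   # about 0.993
# With p = 0.8 and k = 1, the preferred program flips between N = 2 and N = 3,
# so no fixed priority ranking of A over B (or B over A) is right for both.
print(prefer_A(0.8, 1, 2), prefer_A(0.8, 1, 3))
```

Since threshold_p is increasing in kN, the all-or-nothing program A must clear an ever-higher bar as the scale of application (or the risk aversion) grows, which is exactly the scale sensitivity that fixed priority scores cannot express.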
Opportunities for Improvement

Applied risk analysis is in a curious state today. Highly effective optimization methods for selecting subsets of risk-reducing investments to maximize the value of risk reductions achieved for a given budget are readily available. They can draw on a rich and deep set of technical methods developed in financial risk analysis and operations research over the past half century. Yet these methods are having little or no impact on the management of some of the world's most critical risks. Instead, extremely simplistic priority-setting rules and scoring systems are being widely used to set priorities and to allocate resources in important practical risk management applications. Scoring systems are being used in important real-world applications as diverse as Superfund site cleanups, computer and IT security vulnerability assessment, counterterrorism, military asset protection, and risk matrix systems (used in everything from designing and defending federal buildings and facilities, to managing construction project and infrastructure risks, to regulating risks of financial and business enterprises). Yet these risk-scoring systems achieve less value of risk reduction than could easily be obtained if resources were allocated by other methods (including randomized decision-making, in extreme cases).

The requirements that scoring systems must meet before being adopted and recommended in standards are not very stringent. In the applications examined in earlier sections, there appears to be no requirement that risk-scoring systems should produce effective risk management decisions (or even that they should not produce the lowest-value decision possible) before they are standardized for widespread use. In all of the applications mentioned, common elements found in multiple risky systems create correlated vulnerabilities, criticalities, consequences, or threats.
Priority lists do not generally produce effective risk management decisions in such settings. Applying investment portfolio optimization principles (such as optimal diversification, consideration of risk aversion, and exploitation of correlations among risk reductions from different activities) can create better portfolios of risk-reducing activities in these situations than any that can be expressed by priority scores.

In summary, risk priority-scoring systems, although widely used (and even required in many current regulations and standards), ignore essential information about correlations among risks. This information typically consists of noting common elements across multiple targets (e.g., common vulnerabilities). These common features induce common, or strongly positively correlated, uncertainties about the effectiveness of different risk-reducing measures. It is easy to use this information, in conjunction with well-known decision analysis and optimization techniques, to develop more valuable risk-reduction strategies, for any given risk management budget, than can be expressed by a priority list. Thus, there appears to be abundant opportunity to improve the productivity of current risk-reducing efforts in many important applications using already well-understood optimization methods.

This observation will not be new or surprising to experts in decision and risk analysis (Hubbard 2009). Techniques for optimizing investments in risk-reducing (and/or benefit-producing) interventions have been extensively developed in operations research and management science for decades. What is perhaps startling is that
these methods are so little exploited in current risk assessment and risk management systems. Risk priority scores can never do better (and often do much worse) than optimization methods in identifying valuable risk-reducing strategies. Perhaps it is time to stop using risk priority scores to manage correlated risks, recognizing that they often produce simple but wrong answers. Optimization techniques that consider dependencies among risk-reducing interventions for multiple targets should be used instead. The following sections consider how to apply this advice in a simple but important case where many different such interventions are available, but budget constraints make it impossible to pursue all of them simultaneously.

Risk Management Software Based on Risk Indices

Despite the limitations and deficiencies of priority-setting rules and scoring systems for managing risks (Hubbard 2009), they are widely used in ERM and other areas of applied risk analysis. This is not only because of their simplicity and intuitive appeal, but also because they are already embedded in risk management software initiatives and tools used around the world to help companies follow international risk management standards and recommendations, such as ISO 31000. For better or worse, risk priority-scoring systems are being used to support organizational risk management tasks ranging from ERM at Walmart (Atkinson 2003) to terrorism risk assessment programs (Mitchell and Decker 2004). This magnifies the benefits from any simple changes that can improve their practical value.

As previously mentioned, many deployed risk management software tools use the following simple conceptual framework.
Users estimate the values or qualitative ratings of a few (typically, two or three) components of risk, such as probability and impact in ERM applications; threat, vulnerability, and consequence in terrorism applications; or exposure, probability, and consequences in occupational health and safety risk management applications. They enter these inputs for each event or condition of concern that they want to prioritize for purposes of risk management. The software combines these inputs using simple (typically, multiplicative) formulas or look-up tables to produce corresponding risk numbers or ratings for each event or condition of concern. We will refer to the resulting risk numbers (or scores or ratings), in the rest of this chapter, as risk indices, since they are typically interpreted as indicating the relative sizes, importances, or priorities of the different risks that an organization faces.

Most risk management software products display risk index outputs as risk matrices (tables), with frequency and severity categories for rows and columns, or as colorful heat maps, with cell colors indicating priorities for action or remediation of the estimated risks. Other popular displays include bar charts comparing risk indices and scatter plots (e.g., showing impact versus probability) showing their components. These methods are widely employed in diverse organizations and ERM products.
Example: Simple Risk Formulas in Commercial Risk Management Systems

Vendors now offer many risk index systems used by large organizations. For example, the STARSYS® System (www.starys.com/html/products.html) is offered as “an Integrated Risk Management and Document Control system developed specifically to enable organisations to implement sound practices that comply with Occupational Health and Safety and Environmental and Quality control requirements.” It uses three risk components, called consequences, exposure, and probability, and provides a Risk Calculator for assigning numbers (e.g., between 0 and 6) to each of these components. From these user-supplied ratings, it then calculates a corresponding risk priority class.

Similarly, the SAP BusinessObjects Risk Management 3.0 software documentation (http://scn.sap.com/docs/DOC-8488) states that “Impact levels (and if use[d] Benefit Levels) are an important building block of any risk management model. All risks are described in terms of Likelihood and Impact. Impact levels are used to give a real-world description to the magnitude of a risk event. Benefit Levels give a real-world description to the magnitude of a benefit.” The documentation also explains that “Impact Levels combined with Probability Levels are used to create a Risk Heat Map.” More explicitly, documentation of the “Risk and Opportunity Level Matrix” explains that “The combination of impact level × probability level should correspond to the defined risk level.”

Example: A More Sophisticated Commercial Risk Management System

The GuardianERM system (www.guardianerm.com/RiskManagement.htm) notes that “Users evaluate and categorise each risk, record the possible causes, rate the likelihood and consequences, record Value at Risk and assign a financial statement assertion if required.
Users attach any number of controls to a risk and evaluate each control as to its effectiveness, record cost of control, update control status (agreed, proposed, implemented), control type (treat, transfer, correct), key control indicator, execution frequency, action and control responsibility.” Although the system displays conventional-looking heat maps and bar charts as outputs to summarize and interpret the data it records, the information that it collects, specifically on control costs and effectiveness, can potentially be used to improve upon conventional risk indices. This possibility is explored below.

In light of the theoretical limitations of risk indices described in previous sections, it is important to understand how well real-world risk management recommendations or priorities based on the conceptual framework of risk indices actually perform. If an organization uses risk indices, risk matrices, or risk heat maps to set priorities and allocate resources, then how much better or worse off will it be than if it used different approaches? To better understand the objective performance characteristics of these widely deployed, but not yet well-understood, systems, the following sections compare the relative performances of several different risk indices to each other, and to an optimal approach, using simple models with easily derived correct answers.

Simulation–Evaluation of Methods for Selecting Risks to Address

To clearly compare different risk management approaches, this section constructs a simple example with detailed data, for which it can be determined how resources should be allocated. This makes it possible to quantify how well two different risk indices
perform, compared to this ideal answer. Finally, a large, randomly generated data set will be used to further analyze the performances of these alternative approaches.

Consider a risk manager or decision-maker constrained by a limited budget to allocate among a large number of opportunities to reduce risks. She wishes to use risk management software, based on the risk index framework, to decide which ones to address with this limited budget. Table 4.2 shows an example with five risks (or opportunities for risk reduction), each represented by one row of the table.

Each risk is characterized by three attributes, here called Threat, Vulnerability, and Consequence, shown in the left columns. Their product gives the index called Risk (4th column). Many risk management software products stop at this point, color-code or rank or categorize the resulting risk index values, and display the results, with the top-ranked risks (here, the top two) displayed in a color such as red and assigned top priority for risk management interventions.

One criticism of this method recognizes that the true values of the inputs (such as Threat, Vulnerability, and Consequence in Table 4.2) are typically uncertain, and their uncertain values may be correlated. Considering the correlations can completely change the values of the risk index and can even reverse their relative sizes (Cox 2008a). Risk management software tools that omit correlation information from the inputs – as most do – produce risk rankings (and implied or explicit recommendations) that might be changed or reversed if correlations were accounted for.

To avoid this difficulty, for purposes of understanding performance driven by other factors, the input columns in Table 4.2 are populated by independent random variables (i.e., all correlations among variables are assumed to be 0). Specifically, each input value in Table 4.2 is independently randomly sampled from a unit uniform distribution, U[0, 1].
This case of statistically independent input values may artificially improve the performance of risk indices, compared to real performance, if real performance is deteriorated by the presence of negative correlations between input values. It has previously been found that negatively correlated input values can cause risk indices to systematically assign higher estimated values (or levels, ratings, etc.) of risk to smaller risks than to larger ones, making the index approach worse than useless (i.e., worse than random selection) as a guide to effective risk management (Cox 2008a; Hubbard 2009).

Table 4.2 Example of resource allocation problem data

  (1)      (2)             (3)           (4) = (1)×(2)×(3)     (5)                        (6) = (4)×(5)   (7)       (8) = (6)/(7)
  Threat   Vulnerability   Consequence   Risk (e.g., average   Fraction of risk           Risk            Cost ($)  Risk reduction
                                         loss per year)        eliminated if addressed    reduction                 per unit cost
  0.64     0.44            0.22          0.063                 0.55                       0.034           0.83      0.04
  0.28     0.92            0.90          0.231                 0.42                       0.097           0.40      0.25
  0.07     0.73            0.15          0.008                 0.80                       0.006           0.35      0.02
  0.44     0.75            0.04          0.014                 0.82                       0.012           0.37      0.03
  0.70     0.01            0.34          0.003                 0.76                       0.003           0.16      0.02

However, to understand the relative
performance and limitations of different indices, even under favorable conditions, we will make the assumption that the inputs are statistically independent.

A second criticism of index methods based on combining inputs (e.g., Threat×Vulnerability×Consequence, Frequency×Severity, and Probability×Impact) without considering costs or budgets or risk reductions achieved by alternative interventions is that they leave out information that is crucial for rational risk management decision-making. Knowing which risks are largest does not necessarily reveal which risk management interventions will achieve the greatest risk reduction for a given amount spent, and thus such indices may prove deceptive as screening and prioritization tools. (Some risk index software products do consider costs and risk reductions for different potential interventions and are not subject to this criticism.)

To evaluate the significance of this criticism for tools that omit cost considerations when prioritizing risks, Table 4.2 includes four additional columns that deal with costs and risk reductions. Fraction of risk eliminated if addressed gives the fraction of the Risk number in the fourth column that could be removed by spending available budget on the most cost-effective available risk-reducing measure for the risk in that row. Risk reduction is the product of the two columns to its left, Risk and Fraction of risk eliminated if addressed. Risk reduction shows the risk-reduction benefit (measured in units such as average prevented loss per year) that would be achieved if the risk in that row were selected to be addressed.
This is another possible index that could be used to set priorities for risk management, corresponding to changing the decision rule from “Address the largest risks first” to “Address the largest opportunities for risk reduction first.”

The Cost column shows the assumed cost to address each risk, which would reduce it by the factor shown in the Fraction of risk eliminated if addressed column. The last column, Risk reduction per unit cost, shows the ratio of the Risk reduction to Cost columns, indicating the amount of risk reduction achieved per dollar spent if selected (i.e., if there are several alternatives for reducing a risk, we assume that the one with the greatest value of this ratio is selected). To evaluate the performance limitations of risk index methods under assumptions favorable for their use, we assume that each risk (i.e., row) can be addressed independently, so that the risk manager’s only problem is to decide which risks (i.e., which rows) to address. Such additive independence could be realistic if the risk manager is trying to decide how to allocate risk-reduction resources among separate, non-interacting geographic areas or facilities, based on attributes such as those in Table 4.2. Given the choice of a feasible subset of rows (meaning any subset with total costs summing to no more than the available budget), the total risk-reduction benefit achieved is assumed to be the sum of the benefits achieved (i.e., the Risk reduction numbers) from the selected rows.

The last column, Risk reduction per unit cost (column 8), provides a possible alternative index to the Risk and Risk reduction indices in columns 4 and 6 for setting priorities and selecting a subset of risks to address. (Note that, in general, costs and risks may be measured in different units.
Costs might be measured in units such as dollars spent or person-years of expert time allocated to problem remediation. Benefits might be measured as lives saved or loss of critical facilities or infrastructure prevented. No effort has been made to monetize these impacts or to place them
on a common scale. Although Table 4.2 shows values less than 1 for the Risk reduction per unit cost column, due to the simple arithmetic that Risk reduction comes from a product of several U[0, 1] variables and cost comes from a single U[0, 1] variable, this does not imply that the benefits of risk reductions are not worth the costs.)

In Table 4.2, with only five risks (rows), one can easily identify the subset of interventions that should be addressed to maximize the risk reduction achieved for any given budget spent. For example, if the budget is less than 0.35 (on a scale normalized so that 1 represents the maximum possible cost for any intervention), then the only affordable intervention would be to select the bottommost row, which has a cost of 0.16 and yields a risk-reduction benefit of 0.003 (on a scale normalized so that the mean risk-reduction benefit is the mean of the product of four independent U[0, 1] random variables, i.e., (0.5)^4 = 0.0625). If the budget is 0.37, then a larger benefit, of 0.012, can be obtained. For budgets greater than 0.51, multiple risks can be addressed. As the budget increases further, one must search for the feasible (i.e., affordable) subset of risks that maximizes the risk reduction achieved. This combinatorial optimization problem can be solved approximately or exactly using operations research algorithms (Senju and Toyoda 1968; Martello and Toth 1990). Either specialized knapsack algorithms (Senju and Toyoda 1968) or general-purpose branch-and-bound algorithms (such as those implemented in the Excel Solver add-in) can solve such problems in minutes, if the number of risks is at most a few dozen. For larger-scale problems (e.g., with thousands or tens of thousands of risks), special-purpose heuristics provide nearly optimal solutions within seconds (Martello and Toth 1990); thus, there is no practical reason to use significantly less-than-optimal approaches.
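For an instance as small as Table 4.2, the feasible-subset search can even be done by brute force. The following hypothetical sketch (not one of the cited knapsack algorithms, which would be needed at larger scale) enumerates all subsets, using the Cost and Risk reduction columns of Table 4.2:

```python
from itertools import combinations

def best_subset(costs, benefits, budget):
    """Exhaustively search all subsets of risks; return (best_benefit, best_rows).

    Exponential in len(costs), so only suitable for small examples like Table 4.2;
    larger problems call for knapsack or branch-and-bound algorithms.
    """
    n = len(costs)
    best_value, best = 0.0, ()
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if sum(costs[i] for i in subset) <= budget:
                value = sum(benefits[i] for i in subset)
                if value > best_value:
                    best_value, best = value, subset
    return best_value, best

# Cost (column 7) and Risk reduction (column 6) values from Table 4.2, row by row
costs = [0.83, 0.40, 0.35, 0.37, 0.16]
benefits = [0.034, 0.097, 0.006, 0.012, 0.003]

# Budget below 0.35: only the bottommost row (cost 0.16) is affordable
value, chosen = best_subset(costs, benefits, budget=0.34)  # -> 0.003, row 5
```

With the budget raised to 0.37, the search instead selects the fourth row (cost 0.37, benefit 0.012), matching the worked example in the text.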
This optimization identifies the maximum risk-reduction benefit that can be achieved for each level of budget.

In summary, we consider the following increasingly demanding indices:

• Risk: This is column 4 (i.e., Risk=Threat×Vulnerability×Consequence). It is the most basic index that we consider. Using this index to set priorities for addressing risks corresponds to the decision rule, “Address the largest risks first.”
• Risk reduction: This (column 6) is the product Risk reduction=Risk×Fraction of risk eliminated if addressed. Using it to set priorities for addressing risks corresponds to the decision rule, “Address the largest risk reductions first.”
• Risk reduction/cost ratio (column 8) takes the preceding index (Risk reduction) and divides it by the cost needed to achieve it. The corresponding decision rule is “Address the largest risk reductions per unit cost first.”

Each of these indices is derived by refining its predecessor with additional information – from risk, to risk reduction, to risk reduction per unit cost. We will compare the performance of these indices to each other and also to the optimal solution (obtained by solving a knapsack problem) on a simple test set of randomly generated budget allocation problems. Our goal is to answer the following research questions in a simple simulation setting for which one can obtain answers easily:
1. How do the risk-reduction benefits achieved by using the Risk index in Table 4.2 to select risks to address compare to the risk-reduction benefits achieved by using the other two indices? Is the Risk index (the product of the three inputs called Threat, Vulnerability, and Consequence in Table 4.2) a useful surrogate for the more refined indices that include bang-for-the-buck (i.e., risk reduction and cost) information? Or is the Risk index significantly less useful than these more refined ratios in setting priorities that achieve large risk-reduction benefits for dollars spent?
2. How do the benefits achieved by using these different indices to set priorities compare to the benefits from optimal selection of which risks to address?

In short, for this simple setting, we can investigate the value of using a more demanding index instead of a simpler one and explore how much additional benefit (if any) could be achieved by using optimization, instead of either index, to decide which risks to address for a given budget. Comparing these alternatives on simple random data suggests the potential sizes of gains in risk-reduction benefits from collecting and using more information or more sophisticated algorithms to try to improve upon the risk management priorities suggested by the simpler Risk index. We carry out the comparisons using a table analogous to Table 4.2 but with 100 risks instead of 5.

Results: Comparing Index Policies to Optimal Portfolios

Figure 4.1 shows the amounts of risk reduction (y-axis) that can be purchased for different costs, if each of the three different indices – Risk, Risk reduction, or Risk reduction per unit cost – is used to set priorities and allocate resources in the test set of randomly generated problems. Table 4.3 shows numerical comparisons of the risk reductions achieved by each index, for several different budget levels.
The rightmost column of Table 4.3 shows the maximum possible risk reduction that can be achieved for each budget level (as determined by solving the combinatorial optimization problem (knapsack problem) of selecting a subset of risks to address that will maximize the total risk reduction obtained for the specified budget). With 100 randomly generated risks from which to choose, the solution times are on the order of about 10 min on a modern PC, using the Excel Solver’s branch-and-bound algorithm for binary integer programs. Since no specific units have been selected for costs and benefits, Table 4.4 presents the information from Table 4.3 normalized to make the maximum risk reduction possible equal to 1 (from addressing all risks) and similarly normalized to make the smallest cost needed to achieve this equal to 1.

The results exhibit the following conspicuous patterns:

• All three indices are useful. Compared to a completely uninformed (random) approach to priority-setting for resource allocation (for which the corresponding cumulative risk reduction versus cumulative cost curve in Fig. 4.1 appears as the straight line shown from the origin to the leftmost point where all projects are funded), all three curves in Fig. 4.1 show a useful degree of lift (i.e., improvement, visually seen as the difference between each curve and the straight line).
Thus, in this test set of problems, even an index that does not consider cost is valuable compared to uninformed selection (i.e., the lowest curve in Fig. 4.1 compared to the straight line).

Table 4.3 Risk reductions achieved by using different indices to allocate budgets

  Budget    Risk reduction using    Risk reduction using    Risk reduction using Risk       Optimal risk reduction
            Risk index              Risk reduction index    reduction per unit cost index   for given budget
  0.5       0                       0                       0.19                            0.52
  1         0.65                    0.65                    0.83                            0.94
  2         0.91                    1.05                    1.48                            1.61
  4         1.66                    2.01                    2.56                            2.64
  8         3.25                    3.35                    3.86                            3.88
  16        4.6                     4.94                    5.07                            5.09
  32        5.73                    5.84                    5.86                            5.86
  Infinite  5.95                    5.95                    5.95                            5.95

Fig. 4.1 Comparison of risk reductions achieved using three different indices (plot of cumulative risk reduction versus cumulative cost for the Risk, Risk reduction, and Risk reduction/Cost indices)

• In this test set of randomly generated problems, the Risk reduction per unit cost index outperforms the other two indices. The Risk index performs less well than
the other indices. For example, for the same cost, the priority order generated by the Risk index reduces risk by only 15% of the maximum possible amount, compared to 25% for the Risk reduction per unit cost index. Thus, at this budget level, the Risk index is only about 60% as efficient as the Risk reduction per unit cost index in obtaining risk reductions for cost spent. Similarly, the Risk index reduces risk by only 28% of the maximum possible amount, for the same cost at which the Risk reduction per unit cost index reduces risk by 43%. This gap between the lowest-performing (Risk) and highest-performing (Risk reduction per unit cost) indices diminishes at budget levels high enough so that most or all risk-reduction opportunities are taken.
• The best index (Risk reduction per unit cost) provides nearly optimal decisions for almost all budget levels. Although this index can fail to recommend the best subset of risks to address when the budget is too small to address more than a very few risks (e.g., one or two), it yields decisions that are optimal or nearly so (i.e., within about 2% of optimal, in terms of risk reduction obtained for resources spent for this simple simulation), for all budget levels greater than about 0.02 (on a scale where 1 denotes the smallest budget needed to address all risks).
• Diminishing returns. The risk reductions achieved by different budgets show steeply diminishing returns, for each index. For example, more than half of the maximum possible risk reduction can be achieved (via any of the indices) for less than 1/6 of the budget needed to eliminate all risk; and more than 80% of the total risk can be removed (unless the simplest index, Risk, is used) for about 1/3 of the budget needed to remove all risk.
Conversely, the best index (with cost considerations) achieves significantly higher lift than the worst index (with no cost considerations) only in situations where budget restrictions make careful allocation of resources essential for achieving close-to-maximum risk-reduction benefits, as shown in Table 4.4.

Table 4.4 Normalized risk reductions achieved by using different indices

  Budget    Risk reduction using    Risk reduction using    Risk reduction using Risk       Optimal risk reduction
            Risk index              Risk reduction index    reduction per unit cost index   for given budget
  0.01      0                       0                       0.03                            0.09
  0.02      0.11                    0.11                    0.14                            0.16
  0.04      0.15                    0.18                    0.25                            0.27
  0.08      0.28                    0.34                    0.43                            0.44
  0.17      0.55                    0.56                    0.65                            0.65
  0.33      0.77                    0.83                    0.85                            0.86
  0.67      0.96                    0.98                    0.98                            0.98
  1         1                       1                       1                               1

These findings for the simple test set considered indicate that, for resource-constrained organizations faced with a large number of opportunities to invest in costly risk reductions, using simple risk indices, such as Risk=Threat×Vulnerability×Consequence or Risk=Frequency×Severity, to allocate risk management resources
may be relatively inefficient. For some budget levels, these simple indices (and, a fortiori, risk matrices or risk heat maps based on them) yield no more than about 60–65% of the risk-reduction benefits achieved by using indices that consider risk reduction per unit cost, at least in this simple test set of randomly generated problems. Thus, organizations may gain substantial improvements (e.g., more than a third, in this simple setting) in risk reductions achieved for dollars spent, by using better indices.

However, investing in more sophisticated optimization algorithms produces little further gain (except at the lowest budget levels) beyond what can be achieved by moving from Risk to Risk reduction per unit cost. That is, the best index yields nearly optimal decisions for these problems, leaving very little room for further improvement by using more sophisticated (non-index) decision rules.

Discussion and Conclusions

In a simple, idealized setting, with statistically independent values for the components of risk, multiplicative formulas for combining them into risk indices, additively independent costs and benefits (i.e., risk reductions) across risks, and known values for all costs, risks, and risk reductions, each of the three indices examined has some value. The best of them, the Risk reduction per unit cost ratio, provides nearly optimal resource allocations for almost all budget levels considered in the simple simulation exercise reported here (Table 4.4). The other two indices, Risk and Risk reduction, are significantly correlated with Risk reduction per unit cost and with each other, so it is not surprising that they provide some information useful for setting priorities and allocating resources.
Specifically, Risk reduction is proportional to Risk (with a random coefficient of proportionality, corresponding to the U[0, 1] random variable Fraction of risk eliminated if addressed), and Risk reduction per unit cost is derived from Risk reduction by multiplying it by a random variable, 1/Cost, where Cost is an independent U[0, 1] random variable. Conversely, Risk may be viewed as being derived from the high-performing index Risk reduction per unit cost by multiplying it by the random variable Cost and dividing the result by the random variable Fraction of risk eliminated if addressed. These transformations distort the information in Risk reduction per unit cost, making Risk less useful than Risk reduction per unit cost; the result is that Risk may achieve only a fraction (e.g., 60%) of the risk-reduction benefits of Risk reduction per unit cost, for the same cost.

If similar results hold in practice – an “if” that depends on the empirical joint distributions of risk sizes, risk-reduction opportunities, and costs to reduce risks – then they provide both good news and bad news for providers and customers of current risk management software systems. The bad news is that risk management software packages that implement simple indices, such as Risk=Probability×Impact or Risk=Threat×Vulnerability×Consequence, are probably supporting relatively inefficient risk management priorities and resource allocations, unless cost information is added after the risk indices have been computed and displayed.
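Adding cost information after the indices have been computed amounts, computationally, to one extra multiplication and a re-sort. A hypothetical sketch (the function name and data layout are illustrative, not any vendor's API):

```python
def refine_priorities(risks, fractions, costs):
    """Re-rank risks by Risk reduction per unit cost = Risk * fraction / cost.

    risks[i]     -- the already-computed Risk index value for risk i
    fractions[i] -- estimated fraction of risk i eliminated if addressed
    costs[i]     -- estimated cost of addressing risk i
    Returns row indices ordered from highest to lowest refined priority.
    """
    ratio = [r * f / c for r, f, c in zip(risks, fractions, costs)]
    return sorted(range(len(risks)), key=ratio.__getitem__, reverse=True)

# Table 4.2 data: rows 2 and 1 top the plain Risk ranking; the refined
# ranking also happens to start 2, 1 here, but the orderings diverge in general.
order = refine_priorities(risks=[0.063, 0.231, 0.008, 0.014, 0.003],
                          fractions=[0.55, 0.42, 0.80, 0.82, 0.76],
                          costs=[0.83, 0.40, 0.35, 0.37, 0.16])
```

Even rough estimates of the fraction and cost inputs can be used this way, since only the resulting rank order, not the exact ratio values, drives the selection.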
The heat maps that they typically provide suggest that high-ranked (e.g., red or high) risks should be prioritized ahead of low-ranked (e.g., green or low) risks for risk management attention and remediation. Unfortunately, following these recommendations may achieve only a fraction (e.g., 60%, depending on the number and costs of risk-reduction opportunities and the budget available to address them) of the risk-reduction benefits that could be achieved by more effective indices.

The good news is that data already being collected in some systems as part of risk management documentation can be used to substantially improve upon the above indices, at least in the simple random test bed demonstrated here. The improvement method is simple: as illustrated in Table 4.4, multiplying each value of a Risk index by a (Risk reduction fraction per unit Cost) factor to obtain a Risk reduction per unit cost index can lead to revised priorities that capture almost 100% of the maximum possible risk reduction. (As already discussed, this gain is possible for almost any given budget level, as long as it allows for funding a sizable portfolio of risk-reduction opportunities.) Even if this new factor can only be estimated imprecisely, the potential gains from using it to refine current Risk indices may be substantial enough to warrant adding it as a post-processing step to current methods that stop with Risk indices.

Figure 4.1 makes clear that the simulation test bed conditions are favorable, compared to the case of zero or negative lift, which previous work has established can arise when index procedures are applied to situations with negatively correlated input values (e.g., low frequencies of high-consequence events, high frequencies of low-consequence events) (Cox 2008a). Such situations are common in practice, including ERM application domains.

Some other important complexities that might arise in practice include:

• Allow risk-averse or risk-seeking utility functions.
Rather than simple expected value (e.g., Probability×Impact) formulas for risk, exponential or other utility functions would allow greater flexibility in expressing risk attitudes.
• Consider uncertain ability to reduce risk by taking expensive actions. Rather than spending a known cost to achieve a known risk reduction, it may be necessary to make some investments that return only uncertain reductions in risk.
• Model interactions among risk-reducing investment opportunities. For example, some risk-reducing investments (e.g., upgrading an alarm system) may only be possible when others (e.g., installing an alarm system) have already been successfully completed; or some investments may only be valuable if others that attempt to protect the same assets in different ways fail.
• Generalize to arbitrary joint distributions of costs and risk reductions, rather than statistically independent uniform distributions, as in this chapter.
• Consider randomly deteriorating or changing situations, where a risk may randomly increase (e.g., as more supports for a bridge fail) during the time that no risk management interventions (e.g., inspection and replacement of failing supports) are funded.

Although no general results are yet available for situations involving all these complexities, some important advances have been made recently on each of these
dimensions by showing that index policies are optimal in broad classes of models (e.g., random forest models) that allow for precedence relations and other constraints among activities, arbitrary costs of activities and probability distributions for rewards (e.g., risk reductions), and exponential utility functions that allow for risk aversion (Denardo et al. 2004).

In addition, the theory of Gittins indices in operations research (Denardo et al. 2004; Sethuraman and Tsitsiklis 2007; Glazebrook and Minty 2009) has recently been shown to provide excellent heuristics for allocating resources in large classes of risky restless bandit problems that greatly generalize the resource allocation task considered here, by letting risk-reduction opportunities (or other projects) evolve randomly while not being worked on and by allowing uncertainty about the true value of each project. Many such indices are generalizations of the bang-for-the-buck ratio (i.e., the risk reduction per unit cost) index considered in this chapter. These results suggest that using relatively easily computed indices to set priorities for resource allocation can provide nearly optimal risk management decisions in many interesting settings beyond the idealized setting considered here. However, even in these more general cases, high-performing indices are usually generalizations of the benefit-per-unit-cost criterion that has proved to be so effective in our simple context.

Many risk analysts already recognize that including costs in risk ranking efforts can significantly improve budget allocations, with high-level committees making this point over 2 decades ago in the context of risk ranking activities performed by the US Environmental Protection Agency (EPA SAB 1990; Davies 1996).
In this context, the results reported here will seem hardly surprising to some readers. However, as a practical matter, many computer-aided risk analysis software products, formulas (e.g., Risk=Threat×Vulnerability×Consequences), and consulting tools (e.g., risk matrices) do not yet include bang-for-the-buck information or show estimates of risk reduction achieved per dollar spent as an option. Thus, the many organizational risk management initiatives and software products that now use simple risk indices with the aim of ranking (i.e., suggesting priorities and supporting risk management resource allocation decisions) might be significantly improved simply by multiplying current risk indices by the estimated ratio of the risk-reduction fraction to the cost of a risk-reducing intervention. This would make a useful start toward improving their performance in increasing the risk-reduction benefits achieved for resources spent.

This chapter has only provided quantitative results for the special case of independent, uniformly distributed, random inputs, illustrated in a simple test bed of randomly generated budget allocation problems. At least in this idealized setting, the results suggest that a better choice of risk index can lead to significantly more effective resource allocation decisions for constrained risk management budgets. Generalizing to more complex, realistic, and interesting settings, such as those for which Gittins indices provide useful decision rules, represents a potentially valuable next step for understanding how far simple changes in the indices used to rank and compare risk-reducing investments can improve the current generation of risk management software and practices.
References

Atkinson W (2003) Enterprise risk management at Walmart. Risk Manag. http://www.rmmag.com/Magazine/PrintTemplate.cfm?AID=2209
Bernstein PL (1998) Against the Gods: the remarkable story of risk. Wiley, New York
Cox LA Jr (2008a) What’s wrong with risk matrices? Risk Anal 28(2):497–512
Cox LA Jr (2008b) Some limitations of “Risk=Threat×Vulnerability×Consequence” for risk analysis of terrorist attacks. Risk Anal 28(6):1749–1762
Davies JC (1996) Comparing environmental risks: tools for setting government priorities. Resources for the Future, Washington, DC
Denardo EV, Rothblum UG, van der Heyden L (2004) Index policies for stochastic search in a forest with an application to R&D project management. Math Oper Res 29(1):162–181
Doctor JN, Bleichrodt H, Miyamoto J, Temkin NR, Dikmen S (2004) A new and more robust test of QALYs. J Health Econ 23(2):353–367
Dyer JS, Jia J (1998) Preference conditions for utility models: a risk-value perspective. Ann Oper Res 80(1):167–182
Dyer JS, Sarin RK (1979) Measurable multiattribute value functions. Oper Res 27(4):810–822
EPA SAB (U.S. Environmental Protection Agency Science Advisory Board) (1990) Reducing risk: setting priorities and strategies for environmental protection. SAB-EC-90-021. U.S. Environmental Protection Agency Science Advisory Board, Washington, DC. http://yosemite.epa.gov/sab/sabproduct.nsf/28704D9C420FCBC1852573360053C692/$File/REDUCING+RISK++++++++++EC-90-021_90021_5-11-1995_204.pdf. Accessed 14 Sept 2012
Gintis H, Bowles S, Boyd R, Fehr E (2003) Explaining altruistic behavior in humans. Evol Hum Behav 24:153–172
Gintis H (2000) Game theory evolving: a problem-centered introduction to modeling strategic interaction. Princeton University Press, Princeton, NJ
Glazebrook KD, Minty R (2009) A generalized Gittins index for a class of multiarmed bandits with general resource requirements. Math Oper Res 34(1):26–44
Harford T (2011) Adapt: why success always starts with failure.
Farrar, Straus and Giroux, New York
Hazen G, Sounderpandian J (1999) Lottery acquisition versus information acquisition: price and preference reversals. J Risk Uncertainty 18(2):125–136
Hubbard DW (2009) The failure of risk management: why it’s broken and how to fix it. Wiley, New York
Infanger G (2006) Dynamic asset allocation strategies using a stochastic dynamic programming approach. Chapter 5. In: Zenios SA, Ziemba WT (eds) Handbook of asset and liability management, vol 1. North Holland, New York
ISO 31000. http://www.iso.org/iso/catalogue_detail?csnumber=43170. Accessed 8 July 2011
Jones P, Edmonds Y (2008) Risk-based strategies for allocating resources in a constrained environment. J Homeland Security. www.homelandsecurity.org/newjournal/Articles/displayArticle2.asp?article=171
Keeney RL, Raiffa H (1976) Decisions with multiple objectives: preferences and value trade-offs. Wiley, New York
MacIntyre CR, Seccull A, Lane JM, Plant A (2006) Development of a risk-priority score for category A bioterrorism agents as an aid for public health policy. Mil Med 171(7):589–594
Martello S, Toth P (1990) Knapsack problems: algorithms and computer implementations. Wiley-Interscience, New York
Mitchell C, Decker C (2004) Applying risk-based decision-making methods and tools to U.S. Navy antiterrorism capabilities. J Homeland Security. http://www.au.af.mil/au/awc/awcgate/ndia/mitchell_rbdm_terr_hls_conf_may04.pdf. Accessed 14 Sept 2012
Pfanzagl J (1959) A general theory of measurement. Applications to utility. Naval Res Logist Q 6:283–294
Rosenthal EC (2011) The complete idiot’s guide to game theory. Alpha Books/Penguin Group, New York
Senju S, Toyoda Y (1968) An approach to linear programming with 0–1 variables. Manag Sci 15(5):B-196–B-207
Sethuraman J, Tsitsiklis JN (2007) Stochastic search in a forest revisited. Math Oper Res 589–593. http://www.columbia.edu/~js1353/pubs/search.pdf
Wilson R (1968) The theory of syndicates. Econometrica 36(1):119–132