BOOK REVIEW
Douglas W. Hubbard (2009). The Failure of Risk Management: Why It’s Broken and How
to Fix It (Hoboken, NJ: John Wiley & Sons).
INTRODUCTION
What is risk management? Why is it difficult to measure risk? Why is the problem of
measuring risk related to the larger question of developing effective metrics in
management? What, if anything, can be done to rectify the situation? These are the
questions that Hubbard addresses in this book, which was written during the height
of the financial crisis of 2008. Hubbard is not only an expert on risk management but
has spent a lot of time developing the metrics required to measure it. His previous
book was titled How to Measure Anything. The element that links the previous effort
with this book is the notion of the ‘tangible’ versus the notion of the ‘intangible’ in
the context of the theory and practice of management. Hubbard argues that the lack
of metrics makes it difficult to make substantial progress in developing a theory of
risk management. His intention in writing this book is to survey the literature on the
area of risk management to find out what sort of models are available and to assess
their value, if any, in thinking through the theoretical challenges in this area.
While Hubbard argues that rigorous quantitative frameworks are required to model
risk, his goal is not to go the whole hog in that direction since he hopes to reach a
wide audience of analysts in the financial community. He therefore discusses the
theoretical implications of the different models without necessarily going into the
mathematical intricacies. But irrespective of whether the approach to risk analysis is
qualitative, or quantitative, it is important for a firm to know what model, if any, is
in place, and the consequences of failure. And, as mentioned previously, the absence
of effective metrics only complicates the problem further. It is therefore important to
at least start with effective definitions and demarcate the scope of risk management.
The book therefore sets out to explain the area of risk management in three parts by
asking what risk management is, why it fell apart, and what must be done to repair
it.
RISK MANAGEMENT
Hubbard sets out a brief account of how and why the notion of risk management
entered the consciousness of management in ancient times and the different forms it
has taken in recent years. This is then followed by an account of how risk is assessed
and mitigated in firms. The professional matrix in which such exercises are
conducted is then evaluated. Hubbard comes to the conclusion that the modes of
risk management in place have not yet attained the standards that are necessary to
provide investors and stakeholders with the ‘quality assurance’ that they are asking for.
Also required, then, are the training processes and certification programs that
will raise standards of performance in risk analysis, evaluation, and
mitigation. An important problem in the attempts made by these professionals is the
lack of completeness and thoroughness in risk assessments. The area works more
through component testing than through a comprehensive model which can account
for the ‘internal, external, historical and combinatorial’ forms of completeness.
Working out such a model can also be attempted by business academics and
consultants who want to move into this area. Those who can bring together a
sophisticated understanding of the metrics necessary along with the quantitative
and qualitative expertise required to interpret and situate the results will be able to
make an important contribution to the area of risk management. A dash of
philosophical skepticism will also help since the main challenge will be to find out
how such models can be evaluated in terms of their ‘effectiveness’ in actually
helping decision-makers to make decisions.
The professionals who are already working in this area can be classified as the ‘four
horsemen’ and include ‘actuaries, war quants, economists, and management
consultants.’ Hubbard compares the approaches used by these four types of
professionals systematically before listing the seven challenges of risk management
that are yet to be addressed by the profession. These include a range of both
theoretical problems and operational difficulties including problems of method and
the institutional constraints that are experienced by risk management professionals.
Hubbard also discusses the semantic field within which the problems of risk analysis
are addressed. Some of the key terms in this field include volatility and variance in
addition to the term risk itself. It is also important to situate the problem of risk within the
context of uncertainty, which not only accentuates it but also makes it necessary to
understand the ‘risk-reward’ ratio in the context of decision-making. Hubbard
argues that an understanding of the cognitive heuristics in the work of the
behavioral economists like Amos Tversky and Daniel Kahneman can help to set
right the misconceptions that are generated by doing ‘mental math’ as opposed to
quantitative models that are more rigorous. Hubbard therefore argues that it is
important to set up ‘calibration tests’ to determine the validity of the inferences and
conclusions generated by mental math, which seem self-evident (but actually are
not) during a decision analysis or a risk analysis.
TESTS OF CALIBRATION
These tests will help the reader to calibrate the relationship between his ability
to answer a set of questions and the level of confidence that he experiences in
setting out each answer. The results of these tests give the reader the range of
confidence levels within which he is wont to assess or evaluate his own knowledge.
What is at stake, then, is not just coming up with the right answer per se, but also
measuring the affective range within which the reader arrives at that answer
through a process of effective calibration. While the calibration
tests discussed above pertain to how a reader can gain clues about his cognitive
patterns, it is also important to understand the flaws built into the conventional
scales and scoring patterns that measure risk and take corrective action whenever
possible. The anecdotal style in which much risk analysis is done reflects the fact
that people working in this area lack perfect quantitative models for thinking
through its challenges, and therefore either do not pursue quantitative approaches
at all or do not know how to contextualize the results such models generate. The
expectations placed on these models are also unrealistic, since they are supposed to
be able to anticipate highly improbable events.
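The mechanics of such a calibration test can be sketched in a few lines of Python. This is an illustrative sketch, not Hubbard's own instrument: an estimator supplies a 90% confidence interval for each trivia question, and the score is simply the fraction of intervals that capture the true answer. A well-calibrated estimator should land near 90%; overconfident estimators, who give intervals that are too narrow, land well below it.

```python
# Illustrative scoring of a calibration test: each answer is a stated
# 90% confidence interval (low, high) plus the true value. The numbers
# below are invented purely for the example.

def calibration_score(answers):
    """answers: list of (low, high, true_value) tuples.
    Returns the fraction of intervals that contain the true value."""
    hits = sum(1 for low, high, true in answers if low <= true <= high)
    return hits / len(answers)

# Ten stated 90% intervals checked against the true values.
responses = [
    (100, 300, 250),
    (10, 20, 25),       # missed: interval too narrow (overconfidence)
    (1900, 1950, 1912),
    (5, 50, 30),
    (0, 10, 12),        # missed
    (200, 400, 350),
    (1, 3, 2),
    (40, 60, 55),
    (1000, 5000, 3000),
    (70, 90, 95),       # missed
]

rate = calibration_score(responses)
print(f"hit rate: {rate:.0%}")  # 70% here, versus the 90% a calibrated estimator targets
```

The gap between the stated confidence level and the observed hit rate is exactly what the ‘mental math’ of an uncalibrated estimator conceals.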
So while Hubbard agrees that critics like Nassim Taleb have an important point
about the difficulty of predicting highly improbable events, and about the fact that
many models in place are wrong, it does not follow from their valuable critique
that the profession can get along without quantitative analysis. Hubbard, needless
to say, has a professional stake in taking such a position, since he not only works full-
time in this area but is also the inventor of a method called Applied Information
Economics. He therefore takes great pains to spell out exactly the points on which
he agrees or disagrees with his critics. Quantitative approaches in turn have to be
situated, since simulations produce results in a range and the ROI on a specific
investment cannot be calculated exactly. Using simulations therefore involves
subjective estimates, and the results do not have the objective validity that they
appear to have. It is not always the case that forecasts are compared after a period of time to
determine their levels of accuracy; they are for all practical purposes more an
attempt to manage expectations in the present rather than a ‘prediction’ in the
scientific sense of the term.
MONTE CARLO SIMULATIONS
The value of a simulation then depends directly on the quality of the input data and
there is nothing in a simulation per se that can make up for deficiencies in its
empirical foundations. Hubbard argues that almost all the models that he has
worked with ‘required further measurement.’ Furthermore, risk analysis is mainly
deployed in operational contexts rather than to calculate strategic risk, which is
where the largest pay-off or savings are possible for the firm. This is a problem that
Hubbard terms ‘the measurement inversion’ and it stems from the lack of an
empirical basis to ‘risk models.’ Hubbard’s takeaway on this problem, put simply, is
this: ‘Everybody, everywhere, is focusing on the least valuable measurements at the
expense of the most valuable measurements.’ While Hubbard slightly exaggerates
here, the spirit of his conclusion must be taken seriously if firms want to
rework the methodologies they have in place in order to generate simulations that make more
sense and can therefore provide decision-makers with a more accurate set of
estimates to work with.
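The point that simulations yield ranges rather than exact figures can be shown with a minimal Monte Carlo sketch. The cost and benefit distributions below are assumed purely for illustration and are not drawn from the book; the sketch simply shows that the output is a distribution of ROI outcomes whose quality depends entirely on the input estimates fed into it.

```python
# Minimal Monte Carlo sketch of ROI on an uncertain project. The input
# distributions (triangular cost, normal benefit) are assumptions made
# for this example, not empirical estimates.
import random

random.seed(42)  # fixed seed so the run is reproducible

def simulate_roi(trials=10_000):
    outcomes = []
    for _ in range(trials):
        cost = random.triangular(80_000, 140_000, 100_000)  # low, high, mode
        benefit = random.normalvariate(150_000, 40_000)     # mean, std dev
        outcomes.append((benefit - cost) / cost)
    return sorted(outcomes)

roi = simulate_roi()
p5, p50, p95 = (roi[int(len(roi) * q)] for q in (0.05, 0.50, 0.95))
loss_prob = sum(1 for r in roi if r < 0) / len(roi)
print(f"ROI 90% range: {p5:.0%} to {p95:.0%}, median {p50:.0%}")
print(f"probability of negative ROI: {loss_prob:.0%}")
```

Note that the decision-maker receives an interval and a loss probability, not a point estimate; garbage input distributions would produce an equally confident-looking but worthless range, which is precisely Hubbard's warning about the empirical foundations of simulations.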
And, finally, Hubbard’s solution involves three important steps: the first step is to
learn from the structure of uncertainty systems which work with ‘calibrated
probabilities’; the second is to develop empirical approaches to modeling risk
analysis in simulations; and the third is to set up high standards of risk analysis
in the firm and in the community of practitioners from which risk analysts are drawn
by establishing high-quality certification programs. Hubbard also includes an
exhaustive list of ‘PC-based Monte Carlo tools’ along with a list of the respective
manufacturers, and a description of their significant features for those who wish to
incorporate these tools into their ‘arsenal’ of approaches, not only for modeling
risk, but also for incorporating the insights generated thereby into how the firm
formulates and implements strategy. Only then will a firm be able to generate a
‘calibrated culture’ of risk analysis, diagnose quickly if and when the system is
broken, and fix it in order to minimize the down time that results when crises afflict
the system as a whole.
SHIVA KUMAR SRINIVASAN