WRITING A CRITICAL REVIEW
What is a critical review?
A critical review is much more than a simple summary; it is an
analysis and evaluation of a book, article,
or other medium. Writing a good critical review requires that
you understand the material, and that you
know how to analyze and evaluate that material using
appropriate criteria.
Steps to writing an effective critical review:
Reading
Skim the whole text to determine the overall thesis, structure
and methodology. This will help you better
understand how the different elements fit together once you
begin reading carefully.
Read critically. It is not enough to simply understand what the
author is saying; it is essential to
challenge it. Examine how the article is structured, the types of
reasons or evidence used to support the
conclusions, and whether the author is reliant on underlying
assumptions or theoretical frameworks. Take
copious notes that reflect what the text means AND what you
think about it.
Analyzing
Examine all elements. All aspects of the text—the structure,
the methods, the reasons and evidence, the
conclusions, and, especially, the logical connections between all
of these—should be considered.
The types of questions asked will vary depending on the discipline in which you are writing, but the following samples will provide a good starting point:
Structure:
• What type of text is it? (For example: Is it a primary source or a secondary source? Is it original research or a comment on original research?)
• What are the different sections, and how do they fit together?
• Are any of the sections particularly effective (or ineffective)?

Methodology:
• Is the research quantitative or qualitative?
• Does the methodology have any weaknesses?
• How does the design of the study address the hypothesis?

Reasons/Evidence:
• What sources does the author use (interviews, peer-reviewed journals, government reports, journal entries, newspaper accounts, etc.)?
• What types of reasoning are employed (inductive, deductive, abductive)?
• What type of evidence is provided (empirical, statistical, logical, etc.)?
• Are there any gaps in the evidence (or reasoning)?

Conclusions:
• Does the data adequately support the conclusion drawn by the researcher(s)?
• Are other interpretations plausible?
• Are the conclusions dependent on a particular theoretical formulation?
• What does the work contribute to the field?

Logic:
• What assumptions does the author make?
• Does the author account for all of the data, or are portions left out?
• What alternative perspectives remain unconsidered?
The first example below works well with shorter assignments, but the risk is that too much time will be spent developing the overview and too little on the evaluation. The second example works better for longer reviews because it provides the relevant description alongside the analysis and evaluation, allowing the reader to follow the argument easily.
Two common structures used for critical reviews:

Example 1:
Introduction
Overview of the text
Evaluation of the text
• Point 1
• Point 2
• Point 3
• Point 4 … (continue as necessary)
Conclusion

Example 2:
Introduction (with thesis)
Point 1: Explanation and evaluation
Point 2: Explanation and evaluation
Point 3: Explanation and evaluation
(continue elaborating as many points as necessary)
Conclusion
Important: Avoid presenting your points in a laundry-list style.
Synthesize the information as much as
possible.
“Laundry-List” Style of Presentation:

The article cites several different studies in support of the argument that playing violent video games can have a positive impact on student achievement. These studies refer to educational games and other types of computer use. The argument is not logically well constructed. Educational games are not the same as violent video games. The article also ignores data indicating that people with the highest GPAs are those who reported low computer use. Also, different types of computer use could include things like researching or word processing, and these activities are very different from playing violent video games.

Synthesized Argument:

The evidence cited in the article does not support the
overall conclusion that playing violent games improves
GPA. One study only examines educational games in
relation to GPA, so it is questionable whether the same
findings will hold true for other types of games. Another
study does not distinguish between different types of
computer use, making it difficult to assess whether it is
game playing or activities such as research and writing
that contributed to improvements in GPA. Further, the
author disregards relevant data that indicates that students
with the highest GPAs are those who report low
computer use, which means that a direct correlation
between game playing and GPA cannot be supported.
and practice in information security management. In theory, we know how to manage information security and design and implement an information security management system (ISMS) according to, for example, the ISO/IEC 27000 family of standards [1]. In particular, we know that risk assessment and risk treatment should govern all activities related to information security management. Risk assessment involves identifying, analyzing, and evaluating the relevant risks in a given situation or environment, whereas risk treatment is about planning, implementing, and managing appropriate countermeasures (or security controls) according to, for example, the plan-do-check-act model. The goal is to institute countermeasures that are economically reasonable and can collectively reduce risk exposure (according to the risk assessment results) as much as possible, or at least to a level acceptable by the organization in question. This is a highly involved optimization problem that results in a statement of applicability for the various countermeasures.
In practice, we don’t know how to properly assess the risks. This is true for risk identification and risk evaluation, but it’s particularly true for risk analysis—especially a quantitative risk analysis. An immediate consequence of this lack of knowledge in risk assessment is that we can’t solve the optimization problem, and hence we can’t properly treat risks either.
The gap and lack of knowledge in risk assessment—and, as a consequence, in risk treatment—is a fundamental problem that makes us either routinely fail in information security management or head toward solutions that are ad hoc and overly pragmatic. But, instead of solving the problem—that is, finding a risk assessment method that works in practice—we maintain the status quo. Most information security management textbooks, standards, recommendations, and documents referring
to best practices unanimously agree that risks are key, and therefore must be properly identified, analyzed, and evaluated before anything else meaningful can be done.

www.computer.org/security 19

But, instead of providing a workable method
for (quantitative) risk analysis, most references outline vague principles and frameworks or abstract methodologies that can’t be applied in the field. For example, with regard to the ISO/IEC 27000 family of standards, ISO/IEC 27005 [2] supposedly provides guidelines for information security risk management that align with ISO 31000 [3] and IEC 31010 [4], as well as the vocabulary defined in ISO Guide 73:2009 [5]. All these standards and documents mention and even emphasize the fact that they aren’t written in a constructive spirit, but rather provide complementary reading for anybody charged with managing risks—a gentle way of saying that they aren’t directly applicable in the field.
Quantitative Risk Analysis
In daily life, we often use the term “risk,” but we don’t
have a commonly agreed on and precise definition for
it. Intuitively, a risk refers to a situation in which a threat
matches a vulnerability such that something valuable
might get lost. To more precisely describe the situation,
we have to introduce the following terms:
■ vulnerability: a weakness or flaw in a system that might
be exploited;
■ attack: an exploit against such a vulnerability;
■ threat: the possibility that such an attack might take
place; and
■ risk: the situation that quantifies this possibility.
We can use these terms to argue about risk management. But the above-described intuition still applies and is sufficient in many situations. It means that a risk exists only if a threat matches a vulnerability; conversely, if a threat doesn’t match a vulnerability, then there’s no risk to consider for this threat.
Consider a pickpocket who wants to steal your wallet. If you carry a wallet with you, then you’re vulnerable
and susceptible to theft. But if you don’t have a wallet,
then you aren’t vulnerable, and hence the respective
risk of being robbed doesn’t exist in the first place. Now
consider an information security example. Password
guessing is a major threat to almost all current computer systems. But this threat results in a risk if and only
if the computer system has user accounts to exploit. If
the system operates autonomously and doesn’t require
user interaction—for example, with an intelligent sensor or agent—then the system might not be susceptible to password guessing, and hence the respective risk
might not exist in the first place. The vulnerabilities and
risks scale in this example: the more users a system has,
the more vulnerable it is and the higher the risks tend to
be (simply because it’s more likely that some users have
weak passwords).
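The matching intuition above can be sketched in a few lines of Python. This is a toy illustration, not a real risk identification tool; the threat and vulnerability names are invented for the example.

```python
def matched_risks(threats, vulnerabilities):
    """Return, sorted, the threats that match an existing vulnerability.

    A threat only gives rise to a risk when the system actually
    exhibits the weakness that the threat would exploit.
    """
    return sorted(threats & vulnerabilities)

threats = {"password-guessing", "timing-attack"}

# A system with user accounts is exposed to password guessing...
server = {"password-guessing", "phishing"}
print(matched_risks(threats, server))   # a match -> a risk exists

# ...while an autonomous sensor without user accounts is not.
sensor = {"firmware-tampering"}
print(matched_risks(threats, sensor))   # no match -> no risk to consider
```

The set intersection makes the article's point mechanical: removing a vulnerability (shrinking one set) removes the corresponding risk, exactly as the pickpocket and password-guessing examples describe.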
With this notion of a risk in mind, I now address
risk assessment. The first step is to identify all relevant
risks. This, in turn, requires comprehensive threat and
vulnerability lists. Some lists already exist, such as the
threat catalogs of the German Federal Office for Information Security or the list of Common Vulnerabilities and Exposures maintained by the MITRE Corporation (https://cve.mitre.org). But these lists are only showcases and can’t be comprehensive. For example, before Paul Kocher developed and published the first timing attack in the 1990s, the respective threats and vulnerabilities were unknown and weren’t present on any list.
The same is true for the Heartbeat extension of the
Transport Layer Security (TLS) and Datagram TLS
protocols. Before the Heartbleed bug hit the world in
2014, we didn’t even know that a threat and a respective
risk existed.
This isn’t by chance but rather a general pattern: whenever a security problem (that is going to represent a threat) pops up, it doesn’t grow gradually but rather occurs suddenly. This refers to the nonmonotonous property mentioned in “Common Misconceptions in Computer and Information Security” [6]. Consequently, threat or vulnerability lists must be taken with a grain of salt and assumed to not be comprehensive. But if these lists aren’t comprehensive, then any list of relevant risks can’t be comprehensive either, and hence the ability to identify all relevant risks is illusory for all nontrivial settings. We are simply unable to identify all relevant risks in a given situation or environment. The best we can do is produce a list of risks that is reasonable and meaningful. These risks can then be analyzed and evaluated, but we should keep in mind that there will be unaddressed risks.
If risk identification is difficult, then risk analysis and
evaluation are even more so. At first, this seems counterintuitive, because the academic literature has a long tradition of quantitatively analyzing risks [7], and because we
even have a nice mathematical formula to compute and
quantify them. According to this formula, a risk is the product of the probability of occurrence and the expected damage of the respective threat:
risk = probability of occurrence × expected damage.
Under laboratory conditions, this formula can be
applied easily. For example, if I know that an attack
occurs once per decade and might cause US$10 million
in damages, then I can apply the formula to conclude
that the associated risk that results from the threat is $1
million per year. I then know that a countermeasure to
the threat is economically reasonable if and only if its yearly costs are less than $1 million.

20 IEEE Security & Privacy November/December 2015
LESSONS LEARNED FROM THE EDITORIAL BOARD

This determination is independent of whether the countermeasure
is technical, organizational, or legal. If the costs exceed
$1 million per year, then it’s economically more reasonable to invoke other countermeasures or to not invoke any countermeasure at all (in which case the respective risk must be carried). However, this is just an economic viewpoint; other reasons might exist to implement the countermeasure, for instance, compliance with best practices or legislation.
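The laboratory arithmetic above can be made concrete in a short Python sketch. The figures are the article's own example (one attack per decade, US$10 million per event); the function names are mine.

```python
def annualized_risk(occurrences_per_year, expected_damage):
    """risk = probability of occurrence x expected damage (per year)."""
    return occurrences_per_year * expected_damage

def economically_reasonable(yearly_cost, risk):
    """A countermeasure pays off, in purely economic terms, only if it
    costs less per year than the risk it removes. Compliance or legal
    reasons may still justify a more expensive control."""
    return yearly_cost < risk

risk = annualized_risk(occurrences_per_year=1 / 10,  # once per decade
                       expected_damage=10_000_000)   # US$10M per event
print(f"annualized risk: ${risk:,.0f}")              # $1,000,000

print(economically_reasonable(800_000, risk))    # True: cheaper than the risk
print(economically_reasonable(1_200_000, risk))  # False: carry the risk instead
```

The sketch also shows why the formula is so fragile in practice: both inputs are single point estimates, and everything downstream, including the accept/reject decision on a countermeasure, inherits their uncertainty.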
But under normal (that is, nonlaboratory) conditions, the formula is very difficult, if not impossible, to apply. The problem is that we don’t have sufficient information to estimate the probability of occurrence and the expected damage in a meaningful way. For example, what’s the probability of occurrence for a timing attack, or—more generally—a side-channel attack? Is 0.2 an appropriate value for the probability of occurrence, or is 0.3 better? Similarly, what’s the probability of occurrence for the next Heartbleed-like bug? As these threats are recent, we can’t refer to any statistics. The best we can do is guess, so any value we assign to the probability of occurrence is as good as the next.
The same is true for the expected damage. What’s the expected damage of a timing attack or any related side-channel attack? What’s the expected damage of the next Heartbleed-like bug? Again, we can’t estimate these values or even argue about a particular value’s appropriateness. We’re completely in the dark. This sometimes leads to artificially large values being assigned to the probability of occurrence and expected damage of a particular risk, just to ensure that the respective countermeasures are selected and implemented despite their high costs. In this case, a quantitative risk analysis is applied backward, which is arguably the wrong direction and completely defeats the analysis’s original purpose.
Try it yourself: take a threat other than a natural disaster such as an earthquake or flooding (for which we have statistics), and try to estimate the probability of occurrence and the expected damage of this particular threat. After going through this exercise, you’ll agree that you can argue for or against any possible value. This is already true for material damages, but it’s particularly true for immaterial damages, such as reputation loss. How do we quantify reputation loss? We know that it’s bigger for a financial institution, such as a bank, than for a local tradesman. But beyond that, it’s very difficult to estimate how much bigger. Again, any value is acceptable, and it’s hard to tell whether a particular value is accurate.
Given this situation and our inability to quantify the values necessary to compute a risk—that is, the probability of occurrence and the expected damage—we must admit that we’ve reached a dead end and that our nice mathematical formula for quantifying risks hardly works in practice and is therefore useless. This means that all the quantitative risk analyses written in the past, including mine, must be taken with a grain of salt. This doesn’t mean that the risk quantification formula per se is wrong; it’s just not applicable to the problem we’re facing.

The situation is comparable to hammering in a nail with a screwdriver. The problem is not the screwdriver, but rather the worker who chose the wrong tool to start with. If the worker had chosen a hammer, the task could have been performed easily.
Applied to our problem, we’ve chosen the risk quantification formula to start with but have miserably failed in applying it. Instead of further trying to apply the wrong tool, we could search for another tool—one more appropriate for information security management. We’re fully aware that this search isn’t trivial and that the underlying question is in line with the more general question about measuring security in the first place [8], [9].
Alternative Approaches
From a bird’s-eye perspective, there are at least three tools and hence three alternative approaches for information security management.

First, in a baseline requirements approach, we don’t assess risks. Instead, we implement and enforce the use of countermeasures that have a good cost–benefit ratio. This means that the countermeasures don’t cost too much but have a strong and mostly positive effect on information security. Antivirus software, firewall systems, and maybe even intrusion detection and prevention systems (IDS/IPS) are examples. There’s little reason not to use these products today; they’re implemented and enforced independently of the current risk exposure. Interestingly, the ISO/IEC 27000 family of standards has its roots in this approach—it evolved from a simple code of practice into a comprehensive methodology for risk-based information security management.
Second, in a vulnerability management approach, we focus on vulnerabilities and try to eliminate them to the extent possible. This approach is based on the observation that a risk only exists if there’s a vulnerability to exploit. Put in other words: if a vulnerability doesn’t exist, then the respective risk doesn’t exist either. So, vulnerability management is about identifying and removing vulnerabilities in an automated or semiautomated way using tools, such as those from Qualys, Tripwire, Rapid7, or First Security Technology.
Finally, in a qualitative risk analysis approach, we simplify risk analysis to the point of feasibility. Instead of using exact values to estimate the probability of occurrence and the expected damage, we use a very simple ordinal scale, such as one that distinguishes only a few values (for instance, low, medium, and high). In this
case, we can argue that the probability of occurrence is
low, medium, or high and that the expected damage is
also low, medium, or high. This results in nine possible
values for a particular risk, but assignment of risk to any
of these nine possibilities remains a gut feeling. This fact
should be made explicit and not hidden behind formu-
las and accurate-looking numbers.
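A minimal Python sketch of such a three-by-three scheme follows. The "take the worse of the two levels" aggregation rule is one common convention, not something the article prescribes, and the level assignments themselves remain the gut feeling the article warns about.

```python
# Ordinal three-value scale for both likelihood and impact.
LEVELS = {"low": 0, "medium": 1, "high": 2}
NAMES = {0: "low", 1: "medium", 2: "high"}

def qualitative_risk(likelihood, impact):
    """Rate a risk from ordinal likelihood and impact estimates.

    Nine (likelihood, impact) combinations are possible; this
    convention rates the risk at the worse of the two levels.
    """
    worst = max(LEVELS[likelihood], LEVELS[impact])
    return NAMES[worst]

print(qualitative_risk("low", "high"))    # high
print(qualitative_risk("medium", "low"))  # medium
```

Keeping the scale this coarse is the point: it makes explicit that the inputs are rough judgments, rather than hiding them behind formulas and accurate-looking numbers.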
All three approaches are reasonable and might provide
a partial solution. Because they’re not mutually exclusive,
they can be combined and complemented by other yet-
to-be-defined approaches. The combined approach for
information security management is more appropriate
than any approach based on a quantitative risk analysis.
We might even negotiate insurance policies that cover
cyberrisks. Note that there’s an emerging market for
such policies and that the mere existence of such policies
doesn’t mean that quantitative risk analysis works in practice. On one hand, insurance companies have a much better starting position for assessing risks. Instead of looking at a single organization, they can examine many organizations simultaneously and apply statistical means to an
entire population. On the other hand, today’s insurance
policies are very restrictive and focus on very specific
cyberrisks, such as data loss and lawsuits. The policies
don’t cover more interesting cyberrisks, such as the risks
of being hacked and blackmailed. Unfortunately, these
are the risks that are relevant in practice.
Quantitative risk analysis in information security management is a modern fairy tale. We know that
the story is wrong, but we continue telling it—maybe
because it sounds nice, or maybe because we have no
other story to tell. In either case, we want to comply with
international standards and best practices and therefore
put a good face on the matter, sometimes knowing that
we’re wrong.
When building a house, we’d like it to be as secure
as possible against burglaries. But instead of quantifying risks by estimating probabilities of occurrence and expected damages for the threats we can identify, we implement baselines (for instance, install a lock at the front door), manage vulnerabilities (for instance, periodically check whether the doors are closed and locked),
and qualitatively analyze a few risks (for instance, put
more valuable goods into a safe). Sometimes, we even
negotiate insurance policies to ensure that our valuables
are refunded in case of emergency.
In daily life, we’re accustomed to these approaches
and combine them intuitively and routinely. So why not
also apply them to cyberspace? This might be a better
starting point for information security management than
any quantitative risk analysis can ever be, and hence it
might provide a viable solution for our problem. Similar
to the Apollo 13 crew, we should be able to say one day
that we have had a problem, and that we have solved it.
References
1. E. Humphreys, Implementing the ISO/IEC 27000 Infor-
mation Security Management System Standard, Artech
House, 2007.
2. “ISO/IEC 27005:2011: Information Technology—
Security Techniques—Information Security Risk
Management,” Int’l Org. for Standardization/Int’l
Electrotechnical Commission, 2011; www.iso.org/iso
/catalogue_detail?csnumber=56742.
3. “ISO 31000:2009: Risk Management—Principles and
Guidelines,” Int’l Org. for Standardization, 2009; www
.iso.org/iso/catalogue_detail?csnumber=43170.
4. “IEC 31010:2009, Risk Management—Risk Assess-
ment Techniques,” Int’l Org. for Standardization/Int’l
Electrotechnical Commission, 2009; www.iso.org/iso
/catalogue_detail?csnumber=51073.
5. “ISO Guide 73:2009: Risk Management—Vocabulary,”
Int’l Org. for Standardization, 2009; www.iso.org/iso
/catalogue_detail?csnumber=44651.
6. R. Oppliger and B. Wildhaber, “Common Misconceptions in Computer and Information Security,” Computer, vol. 45, no. 6, 2012, pp. 102–104.
7. S. Kaplan and B.J. Garrick, “On the Quantitative Definition of Risk,” Risk Analysis, vol. 1, no. 1, 1981, pp. 11–27.
8. C.P. Pfleeger, “The Fundamentals of Information Security,” IEEE Software, vol. 14, no. 1, 1997, pp. 15–16, 60.
9. S.L. Pfleeger and R.K. Cunningham, “Why Measuring Security Is Hard,” IEEE Security & Privacy, vol. 8, no. 4, 2010, pp. 46–54.
Rolf Oppliger is the founder and owner of eSECURITY
Technologies, works for the Swiss federal administra-
tion, teaches at the University of Zurich, and is the
editor of Artech House’s book series on information
security and privacy. Since 2010, he has been on the
editorial board of IEEE Security & Privacy. Oppliger
received a PhD in computer science from the Univer-
sity of Berne and the venia legendi from the University
of Zurich. Contact him at [email protected]