Telecommunications Policy 33 (2009) 706–719
doi:10.1016/j.telpol.2009.09.001
Cybersecurity: Stakeholder incentives, externalities,
and policy options
Johannes M. Bauer a,*, Michel J.G. van Eeten b
a Department of Telecommunication, Information Studies, and Media; Quello Center for Telecommunication Management and Law, Michigan State University, East Lansing, Michigan, USA
b Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
Article info
Keywords:
Cybersecurity
Cybercrime
Security incentives
Externalities
Information security policy
Regulation
Abstract
Information security breaches are increasingly driven by fraudulent and criminal motives. Reducing their considerable costs has become a
pressing issue. Although
cybersecurity has strong public good characteristics, most
information security
decisions are made by individual stakeholders. Due to the
interconnectedness of
cyberspace, these decentralized decisions are afflicted with
externalities that can
result in sub-optimal security levels. Devising effective
solutions to this problem is
complicated by the global nature of cyberspace, the
interdependence of stakeholders, as
well as the diversity and heterogeneity of players. The paper
develops a framework for
studying the co-evolution of the markets for cybercrime and
cybersecurity. It examines
the incentives of stakeholders to provide for security and their
implications for the ICT
ecosystem. The findings show that market and non-market
relations in the information
infrastructure generate many security-enhancing incentives.
However, pervasive
externalities remain that can only be corrected by voluntary or
government-led
collective measures.
© 2009 Elsevier Ltd. All rights reserved.
1. Introduction
Malicious software ("malware") has become a serious security
threat for users of the Internet. Estimates of the total cost
to society of information security breaches vary, but data published by private security firms, non-profit organizations, and governments all indicate that the cost is non-negligible and
increasing. From a societal point of view, not only the direct
cost (e.g., repair cost, losses due to fraud) but also indirect
costs (e.g., costs of preventative measures) and implicit costs
(e.g., slower productivity increases due to reduced trust in
electronic transactions) have to be attributed to information
security breaches. Bauer, Van Eeten, Chattopadhyay, and Wu
(2008) in a meta-study of a broad range of research conclude
that a conservative estimate of these costs may fall between
0.2% and 0.4% of global GDP. A catastrophic security failure in
the world information and communication system with
potentially much higher impact on the global economy is
possible.
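To give a sense of the magnitude, a back-of-the-envelope conversion (not in the original; the figure of roughly USD 60 trillion for 2008 world GDP is an outside approximation):

0.002 \times 60\,\text{trillion USD} \approx \text{USD } 120\,\text{billion}, \qquad 0.004 \times 60\,\text{trillion USD} \approx \text{USD } 240\,\text{billion},

that is, direct, indirect, and implicit costs on the order of USD 120–240 billion per year.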
Viruses, worms, and the many other variants of malware have
developed from a costly nuisance to sophisticated tools
for criminals. By some estimates, between one in ten and one in five computers worldwide is infected with malware, often without the knowledge of the machine's owner. While hacking
continues to play a role, other infection strategies have
become dominant. They include email attachments that spread malware, downloads from legitimate websites that are themselves infected, and distribution via social
networks and games. Infected machines are often connected in
botnets: flexible remote-controlled networks of computers that
operate collectively to provide a platform for fraudulent
and criminal purposes. Individuals, organizations, and nation
states are targets. Attack vectors directed toward individuals
and organizations include spam (most originating from botnets),
variations of socially engineered fraud such as phishing
and whaling, identity theft, attacks on websites, "click fraud", "malvertising", and corporate espionage. The DDoS attacks on Estonia in 2007¹ and the spread of the cryptic Conficker worm that, early in 2009, paralyzed parts of the British and French military as well as government and health institutions in other countries, are recent examples of attacks on nations and their civil and military infrastructures.²
The pervasive presence of security threats has created concern
among scholars (e.g., Zittrain, 2008) and policy-makers.
The OECD Ministerial Summit in Seoul in 2008 dedicated
considerable attention to the issue (OECD, 2009) and the
International Telecommunication Union (ITU) has developed an
active cybersecurity agenda. Regional initiatives, including
efforts by the European Union and APEC, have elevated the
visibility of the issue (e.g., Anderson, Böhme, Clayton, &
Moore,
2008) and an increasing number of public and private
partnerships target information security at a national and supra-national level.³ These measures are complemented by national
initiatives, as illustrated by the comprehensive hearing that
the British House of Lords dedicated to the problem (House of
Lords, 2007) and the announcement by the US
administration of a renewed push to improve cybersecurity
(White House, 2009). Given the dynamic and multifaceted
nature of the problem, a broad spectrum of simultaneous
measures is being considered as the best initial approach.
Cybersecurity policy needs to start from a clear and empirically
grounded understanding of the nature of the problems
before possible solutions can be devised. The analytical lenses
of scholars and practitioners on malware have changed
significantly in recent years. Security threats were initially
viewed as mostly technological problems; in recent years, the
perspective has broadened to include economic issues and
aspects of user behavior (Anderson & Moore, 2006).
This research has clarified that, as information security comes
at a cost, tolerating some level of insecurity is
economically rational from an individual and social point of
view. Although it is mostly provided by private players,
cybersecurity has strong public good characteristics. Therefore,
from a societal perspective, a crucial question is whether
the costs and benefits taken into account by market players
reflect the social costs and benefits. If that is the case,
decentralized individual decisions also result in an overall
desirable outcome (e.g., a tolerable level of cybercrime, a
desirable level of security). However, if some of the costs are
borne by other stakeholders or some of the benefits accrue to
other players (i.e., they are ‘‘externalized’’), individual security
decisions do not properly reflect social benefits and costs.
This is the more likely scenario in highly interdependent
information and communication systems such as the Internet.
Whereas the security decisions of a market player regarding
malware will be rational for that player, given the costs and
benefits it perceives, the resulting course of action
inadvertently or deliberately imposes costs on other market
players and
on society at large. Decentralized individual decisions will
therefore not result in a socially optimal level of security. The
presence of externalities can result in internet-based services
that are less secure than is socially desirable.
This paper aims at a comprehensive economic analysis of
cybersecurity. In particular, given that most security decisions
are made by organizations and individuals, it is interested in
how well the incentives shaping these decentralized decisions
are aligned with achieving a socially optimal level of security.
This overarching objective is pursued in three interrelated
steps. First, an economic framework is proposed to examine the
co-evolution of cybercrime and cybersecurity (Section 2).
The overall level of cybersecurity is the outcome of many
decentralized decisions by individual stakeholders. Therefore,
building on a detailed field study in seven countries, the paper,
second, discusses the incentives influencing security
decisions of important players such as ISPs, software vendors
and users (Sections 3 and 4). This unique and rich data set
permits identification and analysis of the security-enhancing
and security-reducing incentives perceived by and relevant
for key players. The implications of the relevant incentives for
the level of security attained by individual players and the
sector as a whole are investigated, with special emphasis on the
emerging externalities. Whereas many security-enhancing
incentives could be detected, significant externalities remain
that cannot easily be overcome by private action. The paper
therefore examines, third, policy options to enhance
cybersecurity (Section 5). Main insights are summarized in the
final
section.
2. The co-evolution of cybercrime and information security
Information security breaches have become increasingly driven
by financial motives. The particular challenges to
achieve a desirable level of information security can be better
understood by analyzing the interplay of two sets of activities
whose players respond to very different incentives: the realm of
cybercrime and the realm of information security. These
two areas are tightly linked; developments in one directly affect
conditions in the other. Decisions and effects of changes in
the incentives of players in both realms can be analyzed using a
market model. It may seem frivolous to some to look at
1 See N. Anderson, "Massive DDoS attacks target Estonia; Russia accused," Ars Technica, May 14, 2007. Retrieved September 3, 2009 from http://arstechnica.com/security/news/2007/05/massive-ddos-attacks-target-estonia-russia-accused.ars#.
2 See M.E. Soper, "Conficker worm shuts down French and UK Air Forces," Maximum PC, February 10, 2009. Retrieved September 3, 2009 from http://www.maximumpc.com/article/news/conficker_worm_shuts_down_french_and_uk_air_forces.
3 The first Computer Emergency Response Team (CERT) was established in 1988 in response to the first internet worm. By 2009, the Forum of Incident Response and Security Teams (FIRST), founded in 1990 and one of many such collaborative partnerships, had more than 200 members from all continents (see http://www.first.org).
Fig. 1. Important interdependencies in the ICT ecosystem.
cybercrime as a "market"⁴ but many players in this increasingly differentiated realm respond to price signals and other economic incentives.⁵ Some players, whether inspired by essentially laudable goals (such as "white hat" hackers) or destructive motives (such as cyberterrorists), do not primarily pursue financial gain, but their decisions may be modelled as an optimization over non-financial goals. If the logic and
forms of these transactions were better known, it might be
possible to interrupt and manipulate price and other signals in
ways that quench illegal activity. Little is known about the
underground economy organized around cybercrime (Schipka,
2007; Jakobsson & Ramzan, 2008; Kanich et al., 2008). More
information is available on the market for information security
(e.g., CSI, 2008) but detailed information is often kept
proprietary. Attack and defence strategies can be analyzed at
the level of individual players (e.g., a cybercriminal, a firm
investing in information security, a home user deciding on the
purchase of a firewall) or at an aggregate (sector) level. In
the latter case the interrelationships between players and the
effects of their decisions on the working of the sector as a
whole are of primary interest (see Fig. 1).
Information and communication services require inputs from
many players. They include internet service providers
(ISPs), application and service providers (App/Svc), hardware
and software vendors, users, security providers, and national
and international organizations involved in the governance of
these activities. Two main types of interdependencies exist
among these participants: they are, first, physically connected
via the nodes and links of the networks and second,
connected in commercial and non-commercial transactions. This
highly interconnected system has been described as a
"value net" (e.g., Bovet & Martha, 2000; Brandenburger & Nalebuff, 1996; Li & Whalley, 2002) or as an "ecosystem"
(Fransman, 2007; Peppard & Rylander, 2006). Whereas the first
term emphasizes the joint nature of value creation, the
second one emphasizes interdependence among the participants
and their co-evolution. Players (and institutions)
co-evolve if one player’s actions affect but do not fully
determine the choice options and strategies of others (e.g.,
Nelson,
1994; Murmann, 2003; Gilsing, 2005). As interdependence and
co-evolution are important aspects of the cybersecurity
problem, the second notion will be used throughout the paper.
Each type of player in this ICT ecosystem is composed of
differentiated and heterogeneous players, which might be
grouped into relatively similar classes (indicated by the various
indices). For example, users could be differentiated into large
industrial, small and medium-sized enterprises, and
residential. The ICT ecosystem extends across national
boundaries and jurisdictions. It is under relentless attack from
cybercriminals (and some benign hackers). These criminals
are often located in countries with a weak rule of law but they
typically launch attacks from nations with a highly developed
information infrastructure (such as the US and South Korea)
or a large number of users whose machines are poorly protected
(such as Brazil, Turkey, and China) (Symantec, 2009, p. 8).
4 For the promises and challenges of studying criminal activity
in a market framework see Becker (1968) and Ehrlich (1996).
For an application to
cybersecurity see Kshetri (2006) and Franklin, Paxson, Perrig,
and Savage (2007).
5 Economic incentives are the factors that influence decisions
by individuals and individuals in organizations. They are rooted
in economic, formal-
legal, and informal mechanisms, including the specific
economic conditions of the market, the interdependence with
other players, laws as well as tacit
social norms. Players respond to the incentives that are
perceived as relevant for their decisions. Anderson and Moore
(2006) have argued that "over the past 6 years, people have realized that security failure is caused at least as often by bad incentives as by bad design." In their
view, many of the problems
of information security can be explained more clearly and
convincingly using the language of microeconomics: network
effects, externalities, asymmetric
information, moral hazard, adverse selection, liability dumping,
and the tragedy of the commons. Within this literature, the
designing of incentives that
stimulate efficient behavior is central.
Fig. 2. Markets for cybercrime and cybersecurity.
With the growth of the underground cybercrime economy,
different tasks in this network of transactions apparently are
increasingly carried out by specialists. One individual or
organization is rarely involved in the whole range of activities
necessary. The division of labor has spawned writers of
malicious code, distributors of code, herders ("owners") of botnets, spammers, identity collectors, drop services, drop site developers, and drops (Schipka, 2007). Specialization has not only increased the sophistication and virulence of malware but also
increased the productivity of the cybercrime economy and
hence reduced the cost of attack tools. Despite the heterogeneity
of these actors, it is reasonable to assume that they act
in a purposefully rational manner. That is, given their information about
relevant opportunities and constraints they will pursue and
expand their activities as long as incremental benefits exceed
incremental costs. Too little information is typically available
to determine the exact shape of cost and benefit relations of the
players empirically. However, it is possible to derive some
insights from a conceptual analysis. Other things equal
("ceteris paribus") it is likely that each actor can only expand his or her activity level at higher cost. For example, once the most vulnerable information systems have been attacked, it becomes increasingly costly to penetrate the more secure systems that remain.
Likewise, it will generally be more time-consuming and hence
costly to write code to attack software and devices that have
been fortified against attacks. Hence, the short-run
incremental cost curve faced by criminals will most likely be
upward sloping.
The shape of the incremental benefit relationship is less
obvious. If for any given type of attack the bigger rewards are
easier to reap the incremental benefit curve will be downward
sloping. In principle, if more effort were ‘‘rewarded’’ with
higher spoils it could also be upward sloping. Whereas this is
possible in some cases, it will probably not hold at an
aggregate level, at least in societies that adhere to the rule of
law. For the purposes of our further discussion we will
therefore focus on the more plausible scenario in which the
slope of the incremental cost curve is higher than that of the
incremental benefits curve. Individual decisions can be
aggregated to represent the activities in a specific "market segment", for example, the market for malicious code, botnets, or stolen credit cards. There is no reason to believe that these market segments do not exhibit upward sloping supply
and downward sloping demand schedules. Cybercriminal
activity could be aggregated even further to generate a
representation of the overall market for cybercrime. To this end,
it is
necessary to define the traded services in a more abstract way.
In this aggregate market, supply is shaped by the cost of
breaching information systems. Demand is shaped by the
benefits of such breaches and the corresponding willingness to
pay for them. The intersection of these two schedules will
determine the overall level of fraudulent and criminal security
breaches (see Fig. 2). As long as the incremental benefits in this
market exceed the incremental costs, cybercriminal activity
will expand (and vice versa).
The shape and position of the incremental cost and benefit
schedules is influenced by changes in the technological basis
of information systems, the technology of attacks, and the
institutional and legal framework governing information
security and its violations. For example, a deeper division of
labor in the underground economy, improvements in attack
technologies, or less diligent law enforcement will reduce the
cost of cybercrime, and therefore shift the supply curve to the
lower right (as illustrated in Fig. 2). Accordingly, for a given
demand curve for cybercrime (i.e., ceteris paribus)
cybercriminal activity will increase. In contrast, higher
penalties for cybercrime, stricter law enforcement that elevates
the
risk of being caught, and measures that improve information
security all increase the cost of criminal activity and hence
shift the supply curve to the upper left. For a given demand
curve for cybercrime (i.e., ceteris paribus) this implies that
cybercriminal activity will diminish. The demand schedule for
cybercrime is also influenced by forces external to the
market for cybercrime. It shifts to the upper right as the net
benefits of criminal activity increase, for example, if the
number of users of electronic transactions (and hence the potential rewards from cybercrime) grows, global connectivity increases, mobile devices are used more widely, and ICT use in the private and public sectors becomes more widespread.
Ceteris paribus this will lead to increased cybercriminal
activity. On the other hand, if the net rewards of cybercrime
shrink,
for example, because of a lower user base, tighter security, and
more stringent law enforcement, the schedule is shifted to
the lower left, reducing cybercrime. Presently, the factors
contributing to an intensification of cybercrime seem to
outweigh
the others, putting upward pressure on the overall level of
cybercrime.
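The comparative statics sketched above can be illustrated numerically. The following Python fragment is a deliberately stylized illustration, not part of the authors' framework; the linear schedules and all parameter values are hypothetical and only show the direction of the effects.

```python
# Deliberately stylized linear model of the aggregate "market for cybercrime".
# All functional forms and parameter values are hypothetical; they only serve
# to show the direction of the comparative-statics effects described above.

def equilibrium(c0, c1, b0, b1):
    """Activity level q* where incremental cost c0 + c1*q equals
    incremental benefit b0 - b1*q (zero if benefits never cover costs)."""
    return max((b0 - c0) / (c1 + b1), 0.0)

# Baseline schedules (arbitrary units of "breach activity").
c0, c1 = 4.0, 1.0    # intercept and slope of the incremental cost (supply) curve
b0, b1 = 10.0, 0.5   # intercept and slope of the incremental benefit (demand) curve

q_baseline = equilibrium(c0, c1, b0, b1)

# A deeper division of labor or cheaper attack tools lowers the cost of
# cybercrime: the supply schedule shifts to the lower right.
q_cheaper_attacks = equilibrium(c0 - 2.0, c1, b0, b1)

# Stricter enforcement or better victim-side security raises the cost of
# cybercrime: the supply schedule shifts to the upper left.
q_costlier_attacks = equilibrium(c0 + 2.0, c1, b0, b1)

print(f"baseline activity:          {q_baseline:.2f}")
print(f"after cheaper attack tools: {q_cheaper_attacks:.2f} (higher)")
print(f"after costlier attacks:     {q_costlier_attacks:.2f} (lower)")
```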
Like the realm of cybercrime, information security can be
analyzed in a market framework. It is best seen as an
aggregate made up of individual sub-markets in which vendors
supply and heterogeneous types of users demand
information security products and services. Such services are
offered to different users by a range of specialized vendors,
including security service providers (anti-malware software
providers, suppliers of network monitoring hardware and
software such as Cloudmark) and Internet Service Providers, or
they may be provided in-house by a specialized department.
These players are supported by non-profit groups like
Spamhaus, the Messaging Anti-Abuse Working Group
(MAAWG), and
many blacklisting services that monitor spam and other
malicious activity. Security services are ultimately demanded
by
different types of users including residential users, for-profit
and not-for-profit organizations of different sizes, and
government agencies. Within its specific context, incentives,
and available information, each decentralized player will
again make purposefully rational decisions. Once all
adjustments are made, it is reasonable to assume that each
decision-
maker will strive for a situation in which the perceived
incremental benefits of additional security measures are
approximately equal to the incremental costs of such measures.
Given the state of security technology, the incremental cost
of improved security will most likely increase. At the same
time, with very few if any exceptions, the benefits of additional
security measures will decrease. Consequently, the optimal
level of security is found where the incremental cost and
benefits of security are equated. In a dynamic perspective,
under conditions of risk and uncertainty, although the analysis
is
somewhat more cumbersome, the same principal decision rule
applies: that the optimal level of security is found where
the adjusted and discounted incremental benefits equal the
incremental costs (Gordon & Loeb, 2004).
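This decision rule can be written compactly. The following is a sketch in the spirit of the Gordon and Loeb framework; the notation is introduced here for illustration and is not taken from their text. Let z denote security spending, L the monetary loss if a breach occurs, v the probability of a breach without additional investment, and S(z, v) the breach probability after investing z, with S decreasing and convex in z. The decision-maker solves

\max_{z \ge 0}\; \bigl[v - S(z,v)\bigr]\,L - z, \qquad \text{with first-order condition}\qquad -\frac{\partial S(z^{*},v)}{\partial z}\,L = 1,

that is, security spending is expanded until the marginal reduction in expected loss from one more unit of investment just equals that unit's cost. Under risk and uncertainty the same condition applies to suitably adjusted and discounted expected values.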
Many of the challenges of reaching an optimal level of
information security at the aggregate level are rooted in a
potential mismatch between the perceived individual and social
benefits and costs of security. In information systems,
positive and negative externalities are closely intertwined
aspects of security decisions. Additional security investments of
end users or of an intermediate service provider such as an ISP
also benefit others in the ICT system and are thus associated
with positive externalities. Similarly, insufficient security
investment by a player exerts a negative externality on others in
the system. The externality problem is complicated by the
economic features of information and security markets. Many
information products are produced under conditions of high
fixed but low incremental costs. Unfettered competition will
drive prices quickly to incremental costs. Moreover, many
products and services are characterized by positive network
effects. Firms will respond to such conditions with product and
price differentiation, attempts to capitalize on first mover
advantages, versioning, and other strategies to overcome these
challenges (Illing & Peitz, 2006; Shapiro & Varian, 1999;
Varian, Farrell & Shapiro, 2005). Given the current legal
framework of information industries, in particular the current
liability regime, security is a quasi-public good. Suppliers will
typically not be able to mark-up their products in accordance
with the social value of their security features but only with the
private value, leading to an under-provision of security. If
suppliers face a trade-off between speed to market and security,
given positive network effects, speed to market may take
precedence over security testing (Anderson, 2001; Anderson &
Moore, 2006).
Cybercriminal activity and decisions in information security
markets co-evolve, mutually influencing but not fully
determining each other’s course. An increased level of
cybercrime (determined in the market for cybercrime) will
increase
the effort and cost of maintaining a given level of information
security. At the same time, the benefits of increased security
may increase. Consequently, the overall cost of security will
increase for all participants in the ICT ecosystem even if the
level of security remains unchanged. In contrast, if effective
measures reduce the level of cybercrime, the costs of security
will decline as well. However, the overall level of security may
not increase as users may decide to maintain a given level
with lower expenditures. There is an asymmetry in the relation
between the markets for cybercrime and cybersecurity.
Whereas higher or lower levels of cybercrime will increase or
decrease the overall cost of security, the effect of such
changes on the level of security remains ambiguous. In contrast,
increased security will unequivocally increase the cost of
cybercrime and/or reduce the benefits of cybercrime. As these
two effects work in the same direction, increased security
will reduce the level of cybercrime (Van Eeten & Bauer, 2008).
However, it may trigger a new round in the technological
race between cybercriminals and defenders.
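The following Python fragment is a toy illustration of this mutual adjustment; all functional forms and numbers are hypothetical assumptions, not estimates from the study. Each realm reacts to the state of the other, and the joint outcome emerges from the interaction rather than from either side alone.

```python
# Toy illustration of co-evolution: each realm responds to the other without
# fully determining it. Functional forms and numbers are hypothetical.

def crime_level(security):
    """Cybercrime falls as the overall level of security rises."""
    return max(10.0 - 4.0 * security, 0.0)

def security_level(crime):
    """Defenders respond to more crime, but only partially, because
    maintaining a given security level also becomes more costly."""
    return 0.5 + 0.1 * crime

crime, security = 8.0, 0.5          # arbitrary starting point
for _ in range(20):                 # iterate the mutual adjustment
    security = security_level(crime)
    crime = crime_level(security)

print(f"crime settles near {crime:.2f}, security effort near {security:.2f}")
```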
3. Security incentives of stakeholders
The overall level of cybersecurity emerges from the
decentralized decisions of the players in the ICT ecosystem. It
is
therefore important to understand how these decisions are made
and what factors ("incentives") influence them. Very
little detailed empirical evidence is available on the security
incentives of stakeholders and the externalities that emanate
from them. This section reports findings of recent qualitative
field research designed to explore these incentives. In the
course of 2007, a team of researchers from the Delft University
of Technology and Michigan State University conducted 41
in-depth structured interviews with 57 professionals from
organizations operating in networked computer environments
that
are confronted with malware. Interviewees represented a
stratified sample of professionals from different industry
segments (e.g., hardware, software, service providers, and
users) in seven countries (Australia, Canada, Germany, the
Netherlands, United Kingdom, France, and the United States).

Table 1
Security incentives of players in the ICT value chain

Internet service providers (ISPs)
Security-enhancing: cost of customer support; cost of abuse management; cost of blacklisting; loss of reputation, brand damage; cost of infrastructure expansion; legal provisions requiring security.
Security-reducing: cost of security measures; cost of customer acquisition; legal provisions that shield ISPs.

Software vendors
Security-enhancing: cost of vulnerability patching; loss of reputation, brand damage.
Security-reducing: cost of software development and testing (time to market); benefits of functionality; benefits of compatibility; benefits of user discretion; licensing agreements with hold-harmless clauses.

E-commerce providers (banks)
Security-enhancing: benefits of online transaction growth; trust in online transactions; loss of reputation, brand damage.
Security-reducing: cost of security measures; benefits of usability of the service.

Users
Security-enhancing: awareness of security risks, realistic self-efficacy, exposure to cybercrime.
Security-reducing: poor understanding of risks, overconfidence, cost of security products and services.
Moreover, experts involved in the governance of information
security issues such as Computer Emergency Response Teams
(CERTs) and regulatory agencies were interviewed. Here we
focus on the incentives of ISPs, software vendors, E-commerce
providers (most notably banks and other financial service
providers), and users.⁶ The aim is to identify if and how these
incentives give rise to negative security externalities, which
require some form of voluntary or government-led collective
action.
The interdependence between stakeholders has, to a certain
degree, contributed to a blame game, with individual players accusing each other of insufficient security efforts.
Whereas Microsoft, due to its pervasive presence, is a frequent
target (e.g., Perrow, 2007; Schneier, 2004), the phenomenon is
broader: security-conscious ISPs blame rogue ISPs, ISPs
blame software vendors, countries with higher security
standards blame those lacking law enforcement, and so forth.
Although these claims are sometimes correct, they do not
represent an accurate picture of the complicated relations. The
field research into the incentives and the way externalities
percolate through the system reveals a more nuanced picture.
Several admittedly imperfect mechanisms, such as reputation
effects and reciprocal sanctions among players, exist that
help align private and social costs and benefits. Moreover, the
high interconnectedness may result in the internalization of
externalities at other stages of the value chain; the associated
costs are therefore not externalized to society at large but to
other players and may indirectly be charged back to the
originators. For every player interviewed in the study, multiple
and
sometimes conflicting incentives were at work, leaving the net
effect contingent upon the relative influence of the specific
drivers. Table 1 summarizes important incentives that shape the security decisions of participants in the ICT
ecosystem.
Internet Service Providers (ISPs) are key market players and
often criticized for poor security investment levels.
Nonetheless, ISPs operate under several security-enhancing
incentives (although their effectiveness is mediated by the
ISP’s business model). Commercial ISPs will take security into
account when it affects their revenue stream and bottom
line. This is vividly illustrated by the example of viruses and
spam. Early on, ISPs argued that emails were the personal
property of recipients and that an inspection of the content of
mails was a violation of privacy. Consequently, the
responsibility for protecting their own machines and for dealing
with spam was attributed to end users. With the
exorbitant growth of spam, currently constituting upwards of
80% of all emails, i.e., in the order of 200 billion messages per
day, the financial implications for ISPs also changed
fundamentally. Not only did the flood of spam become a burden on the network infrastructure that would have required additional investment, but the malware imported onto the network also indirectly affected the ISP's costs. Users of infected machines
started to call the help desk or customer service at a fairly high
cost per call to the ISP. Malicious traffic sent from infected
machines triggers abuse notifications from other ISPs and
requests to fix the problem, typically requiring even more
expensive outgoing calls to customers. In extreme cases, the
whole ISP could be blacklisted (see below), causing potentially
serious customer relations and reputation problems. Facing
this altered economic reality, ISPs reversed their stance with
little fanfare and started to filter incoming mail and to manage
their customers’ security more proactively.
In the present environment, ISPs operate under several more or
less potent incentives. The strength of these incentives
is mediated by the business model adopted by an ISP. "Rogue"
ISPs whose business model is based on the hosting of shady
activities will respond differently than commercial ISPs seeking
legitimate business. Costs of customer support and abuse
6 See Van Eeten and Bauer (2008) for the full project report and
OECD (2009), in particular Part II. A full list of the
organizations that participated in
the field study is provided in Van Eeten and Bauer (2008, pp.
67–68).
management as well as the cost of additional infrastructure that
might be required to handle malicious traffic all have an
immediate effect on the bottom line and will increase the
incentives to undertake security-enhancing measures. Loss of
reputation and brand damage work indirectly (and probably
slower) but exert pressure in the same direction. ISPs are
embedded in an interdependent system of service providers. If
contacts via the abuse management desk are ineffective,
other ISPs have a range of escalating options to retaliate for
poor security practices with regard to outgoing malicious
traffic, even if the origin is an individual user.
Blacklists (or "blocklists"), inventories of IP addresses and email addresses reported to have sent spam and other forms of malicious code, are regularly used by ISPs to filter and block incoming traffic. Lists are typically maintained by non-profit organizations such as SpamCop, Spamhaus, or the Spam and Open Relay Blocking System (SORBS).⁷ Other lists, such as those maintained by RFC-Ignorant, detail IP networks that have not implemented certain RFCs ("requests for comments", memoranda published by the Internet Engineering Task Force, IETF). This includes, for example, ISPs that do not operate a working "abuse@" contact address.⁸ These lists are not uncontested and their promoters are sometimes seen as "vigilantes." But they are widely used and provide real-time
countermeasures for network administrators. Each blacklist
organization has procedures for delisting an ISP once the
originating IP address ceases the malicious activity. Substantial
presence on one or more of these lists, reflected in the listing of
many IP addresses belonging to an ISP for an extended time
period, will drive up customer support and abuse management
costs because, for example, emails originating from
blacklisted ISPs may not be delivered, resulting in customer
dissatisfaction and complaints. It may also trigger subsequent
reputation and revenue losses for an ISP. Both effects create an
incentive to improve security measures, at least to respond
in a timely fashion to abuse requests. In extreme cases, an entire
ISP (and not just IP addresses or address ranges) may be
blocked by blacklists and de-peered by other ISPs, raising the
costs of this ISP significantly, possibly to the point where its
business model becomes unsustainable.
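To make the blacklisting mechanism concrete, the sketch below shows how a mail server or abuse desk might check an address against a DNS-based blacklist. The reversed-octet query convention is standard for DNSBLs, and zen.spamhaus.org is one of the zones operated by Spamhaus, but return codes and usage policies differ per list; treat the fragment as an illustration only, not as a description of any particular ISP's practice.

```python
import socket

def is_listed(ip: str, dnsbl_zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNS-based blacklist (DNSBL).

    DNSBLs are queried by reversing the octets of the address and
    prepending them to the list's zone; an answer to the A-record
    lookup (conventionally in 127.0.0.0/8) means the IP is listed,
    while NXDOMAIN means it is not.
    """
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{dnsbl_zone}"
    try:
        socket.gethostbyname(query)      # resolves only if the IP is listed
        return True
    except socket.gaierror:              # NXDOMAIN or lookup failure: treat as not listed
        return False

if __name__ == "__main__":
    # Hypothetical address; an abuse desk would feed in the source addresses
    # reported in incoming abuse notifications.
    print(is_listed("192.0.2.1"))
```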
All these incentives work in favor of enhanced security
measures. On the other hand, the costs of increasing security,
legal provisions that shield ISPs from legal liability, and the
costs of customer acquisition all work in the opposite direction.
Other things equal, they constitute incentives to adopt a lower
level of information security. The net effect on ISPs is hence
dependent on the relative strength of the components of this
web of incentives. The high cost of customer calls (estimated
by some ISPs to be as high as $12 per incoming and $18 per
outgoing call), while providing an incentive to find alternative
solutions to enhance security of end users, also may provide an
incentive to ignore individual cases that have not triggered
notifications to blacklisting services or abuse requests from
other ISPs. Most ISPs estimated that only a small percentage of
the infected machines on their network show up in abuse
notifications. Considerable advances and lower cost of security
technology, on the other hand, have enabled ISPs to move their
intervention points closer to the edges of their network and
thus automate many functions in a cost-effective way. In an
escalating response, machines on the network may initially be
quarantined with instructions to the user to undertake certain
measures to fix a problem. Only in cases that cannot be
solved in this fashion may customer calls become necessary.
Overall, while the interdependencies among ISPs result in the
internalization of some of the external effects, this
internalization is partial at best. Hence, it is likely that the
highly
decentralized system of decision-making yields effort levels
that, while higher than often assumed, nevertheless fall short
of the social optimum.
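A back-of-the-envelope calculation illustrates why only a fraction of infections triggers action. In the Python fragment below, the per-call cost is taken from the range reported by interviewed ISPs; the subscriber base, infection rate, and notification rate are hypothetical assumptions chosen only to show the order-of-magnitude gap.

```python
# Back-of-the-envelope arithmetic on ISP remediation costs. The per-call cost
# reflects the figure reported by interviewed ISPs; all other numbers are
# hypothetical and purely illustrative.

subscribers    = 1_000_000    # hypothetical subscriber base
infection_rate = 0.05         # hypothetical share of machines infected
notified_share = 0.10         # hypothetical share of infections that surface
                              # in abuse notifications or blacklist entries
cost_per_call  = 18.0         # USD per outgoing customer call (reported upper estimate)

infected = subscribers * infection_rate
notified = infected * notified_share

cost_reactive  = notified * cost_per_call   # contact only the reported cases
cost_proactive = infected * cost_per_call   # contact every infected customer

print(f"infected machines:       {infected:,.0f}")
print(f"reactive cleanup cost:   ${cost_reactive:,.0f}")
print(f"proactive cleanup cost:  ${cost_proactive:,.0f}")
# The gap between the two figures illustrates the incentive to focus on
# reported cases and to automate remediation (e.g., quarantining) rather
# than placing outgoing calls for every infection.
```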
Software is a critical component in the ICT value chain.
Exploitation of vulnerabilities is one important attack vector.
The
market for exploits has become increasingly sophisticated as
seen, for example, in the steady increase in zero-day exploits.
Like ISPs, software vendors work in a complex set of
potentially conflicting incentives whose overall net effect is
difficult to
determine. One strong factor working to enhance security
efforts is the cost of vulnerability patching, which comprises
the
cost of patch development, testing, and deployment. As
software is typically installed in many versions, configurations,
and contexts, the cost of patch development can be very
significant. From the perspective of software vendors it may
therefore be advantageous to invest in security upfront rather
than have the follow-up costs of patching. However, given
the complexity of modern software packages and the benefits of
short time-to-market, it will not be economical to invest to
the point where all potential flaws are eliminated. Loss of
reputation and brand damage also strengthen the incentive to
invest in security. However, reputation effects may work only
slowly and are further weakened if the vendor has a strong
market position, either because of a large installed base or
because of limitations in the number of available substitute
products. Nonetheless, as the security enhancements in the
transition from Windows XP to Windows XP Service Pack 2 to
Windows Vista illustrate, reputation effects do work.
At the same time, there are several incentive mechanisms that,
other things equal, weaken the effort to provide secure
software. Time-to-market is lengthened by software testing and
the cost of software development is increased. In
industries with network effects and first-mover advantages,
such delays may result in significantly reduced revenues over
the product life cycle (Anderson & Moore, 2006). Thus, the
more competitive the software market segment, the lower the
incentive to improve security beyond a minimum threshold.
Moreover, in software design complicated trade-offs have to
be made between different design aspects. Security often is in
conflict with functionality, compatibility, and user discretion
over the software’s configuration (a benefit that many users
covet). Lastly, hold-harmless provisions in software licenses
and shrink-wrap license agreements largely relieve software
vendors from liability for financial damages stemming from
7 See http://www.spamcop.net, http://www.spamhaus.org/zen,
and http://www.de.sorbs.net/.
8 See http://www.rfc-ignorant.org/.
software flaws. The diffusion of software for mobile devices
and the expansion of open source software also increase
vulnerabilities. The fear that increased liability of software
vendors might reduce innovation rates is not unfounded but the
strength of this effect is unknown.
E-commerce providers form another important class of market
players. One particularly interesting sector is financial
service providers, not least because they are a high priority
target for attackers. This is a rather diverse sector,
encompassing different types of banks, credit card companies,
mutual funds, insurance companies, and so forth. The rules
for each of these players differ in detail. Focusing
predominantly on merchant banks, Van Eeten and Bauer (2008)
concluded that these financial service providers are to a
considerable degree able to manage risks emanating from their
customer relations. However, they need to make choices
balancing enhanced security and the growth of their electronic
business. In principle, they could use highly secure platforms to
conduct e-commerce transactions. Such an approach
would likely have detrimental effects on users as it decreases
the convenience of conducting business. Financial
organizations thus face a trade-off between higher security and
migrating transactions to cost-saving electronic platforms.
Many financial service providers offer compensation for losses
incurred by their customers from phishing or other
fraudulent actions as part of this overall security decision. This
practice aligns the incentives of the financial service
provider with the goal of improved security (but not the
incentives of individual users). Businesses other than financial
service providers may often not be in a position to manage
externalities associated with their clients. Therefore, more
significant deviations between private incentives and social
effects may exist, resulting in a sub-optimally low level of
security investment by these firms.
Large businesses (firms with 250 or more employees) are a
heterogeneous group. Many large business users have
adopted risk assessment tools to make security decisions
(Gordon & Loeb, 2004). Their diligence will vary with size and
possibly other factors such as the specific products and services
provided. Two other groups of players that deserve
mention are small and medium enterprises (SMEs, typically
defined as enterprises with fewer than 250 employees,
including microenterprises) and residential users. Although this
is a large and diverse group, these players also are in
several respects similar. Like other participants, they work
under multiple and potentially conflicting incentives. Unlike
larger businesses that may be able to employ information
security specialists, either in-house or via outsourced services,
many SMEs and residential users have insufficient resources to
prevent or respond to sophisticated types of attacks.
Whereas awareness of security threats has increased, there is
mounting information that many residential users
underestimate their exposure and overestimate their efficacy in
dealing with risks (LaRose, Rifon, & Enbody, 2008).
Although these constitute similarities between SMEs and
residential users there are also differences. In general, one can
assume that businesses employ a more deliberate, instrumentally
rational form of reasoning when making security
decisions. In both cases, however, the benefits of security
expenses will to a large degree flow to other users.
Individual businesses and users may suffer from the perception
that their own risk exposure is low, especially if others
protect their machines, the well-known free rider phenomenon.
On the other hand, given increased information, a growing
number of users in this category is aware of the risks of
information security breaches. Thus, they realize to a certain
extent
that they are the recipients of "incoming" externalities.
Overall, one can expect that on average these classes of users
will
not be full free riders. Whereas some individuals and SMEs may
over-invest, there is evidence that most will not invest in
security at the level required by the social costs of information
security breaches (Kunreuther & Heal, 2003). This
conclusion is corroborated by the observation that many
individual users do not purchase security services, do not even
use
them when offered for free by an ISP or a software vendor,⁹ and
turn off their firewalls and virus scanners regularly if they
slow down certain uses, such as gaming.
The ICT ecosystem also has other security loopholes. Some
have to do with the engineering conventions of the Internet.
TCP/IP was designed as a very open and transparent protocol
but not with security in mind. This openness has facilitated a
broad variety of applications and services but is also at the
heart of security problems. Some of these problems could be
remedied with overlay network infrastructure and some will be
mitigated by moving to IPv6. Moreover, the long-time
convention of registrars to grant five-day grace periods to allow
the "tasting" of domain names has been abused; changes
to this practice have been approved by the Internet Corporation
for Assigned Names and Numbers (ICANN) but have not yet
been implemented.
4. Analysis of the emerging patterns
What patterns can be observed in the empirical findings on the
incentives of market players and externalities that
emerge from them? Pointing to a variety of reports that show
increases in malicious attack trends, one might conclude that
markets are not responding adequately. Our field work and
analysis revealed a more diverse, graded picture, indicating a
number of market-based incentive mechanisms that enhance
security but also other instances in which decentralized
actions are afflicted with externalities and possibly sub-optimal
outcomes. The findings suggest that all players work under
some incentives that work in the correct direction. Furthermore,
all market players studied experienced at least some
consequences of their security trade-offs on others. In other words, feedback loops (e.g., reputation effects or threats to the revenues of a company from poor security practices) were at work that brought some of the costs imposed on others back to the agent that caused them.

9 For example, XS4All, a Dutch ISP, offered security services to its customers, but fewer than 10% signed up. Eventually, the ISP decided to offer security software as part of the subscription, but even then many users would not install it. Many users also do not use automatic updates to their software.

However, in many cases these
feedbacks were too weak, localized, or too slow to move
agents’ behavior swiftly towards more efficient social
outcomes. In all cases, moreover, incentive mechanisms exist
that
undermine security. Overall, across the ecosystem of all the
different market players, three typical situations emerge:
(1) No externalities: This concerns instances in which a market
player, be it an individual user or an organization,
correctly assesses security risks, bears all the costs of
protecting against security threats (including those associated
with
these risks) and adopts appropriate countermeasures. Private
and social costs and benefits of security decisions are aligned.
There may still be significant damage caused by malware, but
this damage is borne by the market player itself. This
situation would be economically efficient but, due to the high
degree of interdependency in the Internet, it is relatively rare.
Essentially, this scenario only applies to closed networks and
user groups.
(2) Externalities that are borne by agents in the value net that
can manage them: This concerns instances in which a market
player assesses the security risks based on the available
information but, due to the existence of (positive or negative)
externalities, the resulting decision deviates from the social
optimum. Such deviations may be based on lack of incentives
to take costs imposed on others into account, but it can also
result from a lack of information, insufficient skills
to cope with security risks, or financial constraints faced by an
individual or organization. If somebody in the value net
internalizes these costs and this agent can influence the
magnitude of these costs – i.e., it can influence the incentives
and
security trade-offs of the agents generating the externality –
then the security level achieved in the ICT ecosystem will be
closer to the social optimum than without such internalization.
This scenario depicts a relatively frequent case, typically
situations in which a commercial relation exists between
players. Thus, numerous examples were found where
externalities generated in one segment of the value net were
internalized by other market players. ISPs and financial
service providers that internalize costs originating from end
users are two examples that were discussed in more
detail above.
(3) Externalities that are borne by agents who cannot manage
them or by society at large: An individual unit may correctly
assess the security risks given its perceived incentives but, due
to the existence of externalities, this decision deviates from
the social optimum. Alternatively, an individual unit may not
fully understand the externalities it generates for other
actors. Unlike in scenario two, agents in the ICT ecosystem that
absorb the cost are not in a position to influence them – i.e.,
influence the security trade-offs of the agents generating the
externality. Hence, costs are generated for the whole sector
and society at large. These are the costs of illegal activity or
crime associated with malware, the costs of restitution of crime
victims, the costs of e-commerce companies buying security
services to fight off botnet attacks, the cost of law enforcement
associated with these activities, and so forth. Furthermore, they
may take on the more indirect form of slower growth of
e-commerce and other activities. Slower growth of ICT use may
entail a significant opportunity cost for society at large if
the delayed activities would have contributed to economic
efficiency gains and accelerated growth.
Among the most poignant cases in this category are the
externalities caused by lax security practices of end users. Some
of these externalities are internalized by other market players
that can mitigate them, most notably ISPs that can
quarantine infected end users. ISPs have incentives to deal with
these problems but only in so far as they themselves suffer
consequences from the end user behavior, e.g., by facing the
threat that a significant part of their network gets blacklisted.
Estimates mentioned in the interviews suggested that the
number of abuse notifications received by ISPs represents only
a
fraction of the overall number of infected machines in their
network. This observation suggests that a considerable share of
the externalities originating from users may not be mitigated.
Consequently, a large share of these costs of poor security
practices of end users is borne by the sector as a whole and
society at large, typically in the form of higher direct, indirect
and implicit costs (see Van Eeten, Bauer, & Tabatabaie, 2009).
These externalities are typically explained by the absence of
incentives for end users to secure their machines. It would be
more precise, however, to argue that end users do not perceive
any incentives to secure their machines, in part due to
insufficient information. While malware writers have
purposefully chosen to minimize their impact on the infected
host
and to direct their attacks at other targets, there is also a
plethora of malware which does in fact attack the infected
host—most notably to scour personal information that can be
used for financial gain. In that sense, end users should have a
strong incentive to secure their machines. Unsecured machines
cannot differentiate between malware that does or does
not affect the owner of the machine. If the machine is not
sufficiently secured, then one has to assume that all forms of
malware can be present. The fact that this is not perceived by
the end user is an issue of incomplete information rather than
a principal failure of the respective incentive mechanism.
The discussion so far has assumed that the threat level does not
fully undermine the operations of the ICT ecosystem.
However, there is a risk of not just security failures or even a
disaster but of catastrophic failure. Given the dependence of
all aspects of global society on the Internet (and electronic
communications in general), widespread and extended failure
could certainly have catastrophic consequences. That such a
pervasive failure or technological terrorism has not yet
happened and has a low probability complicates the formulation
of a response (Posner, 2004). Like other events with a low
but non-trivial probability, it could be considered a "black swan" event (Taleb, 2007). Cost-benefit analysis of such
catastrophic events would help in shaping more rational
responses but it is extremely difficult. Complications include
the
choice of an appropriate time horizon, the quantification of the
risk in question, problems of monetizing a wide range of
qualitative impacts, and the determination of social discount
rates applied to monetized future events. Nonetheless,
assessing the potential costs of internet security breaches would
greatly benefit from such an exercise.
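As a sketch of what such an exercise involves (the notation below is ours, introduced only for illustration): let p_t denote the probability of a catastrophic failure in year t, L_t the monetized loss if it occurs, r the social discount rate, and T the time horizon. The present value of the expected loss, which is the amount against which the cost of preventive collective measures would have to be weighed, is

\mathrm{PV} \;=\; \sum_{t=1}^{T} \frac{p_t\, L_t}{(1+r)^{t}}.

Each ingredient (T, p_t, L_t, and r) corresponds to one of the complications listed above, which is why such estimates are highly sensitive to the analyst's assumptions.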
Table 2
Principal policy instruments to enhance information security

Legal and regulatory measures
Cybercrime: national legislation; bi- and multi-lateral treaties; forms and severity of punishment; law enforcement.
Information security: national legislation/regulation of information security; legislation/regulation of best practices to enhance information security; liability in case of failure to meet required standards; tax credits and subsidies.

Economic measures
Cybercrime: measures that increase the direct costs of committing fraud and crime; measures that increase the opportunity costs of committing fraud and crime; measures that reduce the benefits of crime.
Information security: level of financial penalties for violations of legal/regulatory provisions (compensatory, punitive); payments for access to valuable information; markets for vulnerabilities; insurance markets.

Technical measures
Cybercrime: redesign of physical and logical internet infrastructure.
Information security: information security standards; mandated security testing; peer-based information security.

Informational and behavioral measures
Cybercrime: national and international information sharing on cybercrime.
Information security: national and international information sharing on information security; educational measures.
5. Policy implications
This section explores the role of policy and regulation in
overcoming cybersecurity problems. The analysis in Sections 3
and 4 suggests that decentralized decision-making often
generates correct incentives for stakeholders and that deviations
from a desired security level often trigger the appropriate
feedbacks. However, many instances were also identified where
the resulting incentives were weak, too slow, or the net effects
of the security-enhancing and security-reducing incentive
mechanisms could be either positive or negative. Moreover, the
ICT ecosystem is under relentless attack from organized
gangs and reckless individuals who command increasingly
powerful and sophisticated hacking tools and malicious code.
Many of these activities are organized in countries where the
costs of engaging in cybercrime are low: law enforcement is
weak or non-existent; due to dire overall economic conditions,
the opportunity costs of participating in cybercrime rather
than pursuing other gainful employment are low; and
technological means enable criminals to operate swiftly across
many
national boundaries. At the same time, numerous incremental
measures to improve security beyond the status quo are
known but may not be undertaken by decentralized players
because they are afflicted with positive externalities. The
benefits of such measures potentially help all stakeholders but
their costs often need to be borne by one particular group.
Participants in the ICT ecosystem suffer from a prisoner’s
dilemma problem: everybody is worse off if decisions are
made in a non-cooperative fashion. Where interactions are repeated, this dilemma is partially overcome, as is illustrated by the
cooperation among ISPs. Enhancing cybersecurity at a broader
level will have to overcome this coordination and
cooperation issue: it is a collective action problem. Whether
such collective measures can be identified and successfully
implemented without disadvantages that outweigh the potential
improvements needs further examination. The
information economics literature has begun to examine these
issues during the past few years (Anderson et al., 2008;
Anderson & Moore, 2006; Camp, 2006, 2007). Many of the
aspects of information markets that aggravate security issues
can be explored more systematically from an economic
perspective, including network effects, pervasive externalities,
the
assignment and evasion of liability rules, and moral hazard.
Often, the response is to design a broadside of policy
instruments, targeting multiple goals with multiple means
simultaneously. Whereas this may eventually turn out to be a
rational approach, there is also a risk that individual measures
counteract and neutralize each other.
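To make the prisoner's dilemma structure described above concrete, the following toy payoff computation (a sketch with illustrative numbers, not estimates from the paper) shows why mutual investment in security can be collectively best and yet fail to emerge from non-cooperative choices when part of the benefit of investing spills over to the other player.

```python
from itertools import product

COST = 4          # private cost of investing in security
PRIVATE_GAIN = 3  # part of the benefit the investor captures itself
SPILLOVER = 3     # part of the benefit that accrues to the other player (externality)

def payoff(own_invest: bool, other_invest: bool) -> int:
    """Payoff of one player, given both investment choices."""
    value = 0
    if own_invest:
        value += PRIVATE_GAIN - COST   # investing is a net private loss of 1
    if other_invest:
        value += SPILLOVER             # free benefit from the other player's investment
    return value

for mine, theirs in product([True, False], repeat=2):
    print(f"I invest={mine!s:<5} other invests={theirs!s:<5} -> my payoff: {payoff(mine, theirs)}")

# Mutual investment yields 2 for each player and mutual non-investment yields 0,
# yet not investing is better for each player regardless of what the other does
# (3 > 2 and 0 > -1), so the non-cooperative equilibrium is that nobody invests.
```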
There are two principal vectors for implementing such
collective measures. They can target the realm of cybercrime or
the information security decisions of stakeholders in the ICT
ecosystem. Both areas may be addressed with measures in
four categories (or with a mix of such measures): legal and
regulatory measures, economic means, technical solutions, and
informational/behavioral approaches (see Table 2). Cybercrime
can, in principle, be reduced by increasing its costs and by
reducing its benefits. Strengthening law enforcement via
national legislation and multi-national and international
treaties,
such as the European Convention on Cybercrime, which has
been ratified by 23 countries and signed by 23 others,10 is one
important precondition to credible enforcement as it defines a
legal basis for intervention.

10 See http://conventions.coe.int/Treaty/Commun/ChercheSig.asp?NT=185&CM=&DF=&CL=ENG (last visited February 28, 2009).

Potentially equally important for
the preventative effect are the forms and severity of punishment as well as the effectiveness and expediency of law enforcement. Achieving international collaboration is a particularly difficult hurdle in this area. Wang and Kim (2009) found that joining the Convention had a deterrent effect on cyberattacks. On the other hand, Png, Wang, and Wang (2008) found an insignificant effect of domestic enforcement. Even so, the shutting down of McColo in November 2008 caused a significant temporary drop in spam activity.11

11 See B. Krebs, "Major source of online scams and spams knocked offline", Washington Post, November 11, 2008. Retrieved September 3, 2009 from http://voices.washingtonpost.com/securityfix/2008/11/major_source_of_online_scams_a.html.
Given that much criminal activity is organized (but not necessarily executed) in countries with a relatively weak rule of law, other measures might be needed in addition to legislative and regulatory initiatives. A number of interesting proposals have been made to reduce spam via technical-economic mechanisms, such as requiring senders to present a token or make a payment before a message is accepted for delivery (e.g., Loder, Van Alstyne, & Wash, 2004; a stylized token check is sketched below). Although such measures are effective in principle, they only address malware disseminated via email. Moreover, their effectiveness will depend on a critical mass of users adopting the method. General measures that increase the opportunity costs of cybercrime, such as the creation of attractive employment opportunities for skilled programmers, will contribute to a reduction of criminal activity in the long term. Many of the architects of the internet have pointed out that it was not built for the current onslaught of legal and illegal activity. Several initiatives to remedy this are underway, such as the DNS Security Extensions (DNSSEC), which are designed to make the domain name system more secure. Other security protocols are being deployed on the web, and further ones may be rolled out as underlay networks. Whereas all of these measures have already triggered responses by attackers, they do make cybercrime more expensive, and one would expect this to have a dampening effect, at least in the short term and other things equal. Furthermore, information sharing at the national level in CERTs and CSIRTs, as well as at the international level, for example among the more than 200 organizations collaborating in the Forum of Incident Response and Security Teams (FIRST), is an important step in the right direction.
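The sketch below illustrates the token idea with a minimal hashcash-style proof-of-work stamp, under which a sender must spend computation before a message is accepted while verification stays cheap. This is only one member of the family of token/payment mechanisms referred to above, not the specific attention-bond scheme of Loder, Van Alstyne, and Wash (2004); the parameters and function names are illustrative assumptions.

```python
import hashlib
import itertools

DIFFICULTY_BITS = 16  # required leading zero bits; raising this makes sending costlier

def _leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def mint_stamp(recipient: str, date: str) -> str:
    """Sender side: search for a nonce that makes the stamp valid (computationally costly)."""
    for nonce in itertools.count():
        stamp = f"{recipient}:{date}:{nonce}"
        if _leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= DIFFICULTY_BITS:
            return stamp

def accept_message(stamp: str, recipient: str, date: str) -> bool:
    """Receiver side: verification requires only a single hash."""
    parts = stamp.split(":", 2)
    if len(parts) != 3 or parts[0] != recipient or parts[1] != date:
        return False
    return _leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= DIFFICULTY_BITS

# A legitimate sender pays roughly 2**16 hash evaluations once per message; a spammer
# sending millions of messages pays that cost millions of times over, which is the
# economic deterrent such token schemes rely on.
stamp = mint_stamp("alice@example.com", "2009-02-28")
assert accept_message(stamp, "alice@example.com", "2009-02-28")
```

As noted above, the deterrent only bites once a critical mass of receivers actually requires such stamps.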
Whereas measures devised to fight cybercrime are relatively
uncontested, this is not true for policies directed towards
information security markets. Nonetheless, there is an emerging
discussion in many countries as to whether such measures
are appropriate, whether they should be targeted at key players
in the ICT ecosystem, and which organizations should be in
charge of such initiatives (Anderson et al., 2008; OECD, 2009).
Several countries have adopted laws against spam. Although
their effectiveness is sometimes questioned, legislation such as
the US CAN-SPAM Act of 2003 and subsequent
implementation measures by the Federal Communications
Commission (FCC) and the Federal Trade Commission (FTC)
have provided a legal basis to prosecute several spammers with
deterrent effects. The recent discussion goes beyond such
measures to explore specific regulatory intervention. A critical
weakness of any attempt to legislate or regulate security is
that specific measures may quickly be outsmarted by new attack technologies. However, the notion of "best practices" could be developed on an ongoing, adaptive basis. This is, for example, the approach taken by the Australian Communications and Media Authority (ACMA) in the Australian
Internet Security Initiative. In this collaborative effort, 59
ISPs (as of February 28, 2009) and the regulatory agency share
information to reduce the threat from botnets. The approach
is not without critics, many of whom point to the lack of
transparency in following up on security threat reports. Nonetheless, the regulatory authority's ability to threaten the imposition of formal regulation seems to have boosted participation, and the initiative is expanding steadily.
The Australian example raises the question of what role, if any,
regulatory agencies should play. Historically, apart from
law enforcement, cybersecurity was overwhelmingly organized
in voluntary and self-regulatory arrangements, giving
regulatory agencies only ancillary jurisdiction (Lewis, 2005). For example, in the US, the Federal Communications Commission helped implement the Communications Assistance for Law Enforcement Act (CALEA), 1994 legislation defining the obligations of telecommunications and information service providers in assisting law enforcement. In some ways, however, regulatory agencies are well-positioned to assume a stronger role in contributing to enhanced cybersecurity:
they have jurisdiction over much of the key infrastructure used to provide information services, which might be an efficient intervention point; they typically have powers to compel information from providers that would help identify better policies; they have administrative processes in place that could assist in developing such policies; and they have some experience with international collaboration (Lukasik, 2000). Regulatory agencies also have practice in designing financing models for the pursuit of public interest goals rather than imposing unfunded mandates on stakeholders. On the other hand, they also
have disadvantages: the administrative process is often slow and
beholden to special interest groups and regulatory
measures might be too rigid in formulating adaptive and
dynamic responses (Garcia & Horowitz, 2007). However, these
challenges might be overcome if adaptive regulatory tools are
used. In any case, regulatory agencies are an interesting
avenue to pursue cybersecurity but they will have to work in
concert with other agencies.
An even more contested issue is the modification of existing
liability rules. In principle, liability rules would facilitate a
desirable evolutionary learning process. By creating enforceable
rights and obligations, they would allow common and case
law institutions to gradually develop a body of best practices
and reshape incentives to reduce negative and strengthen
positive externalities. Compared to the status quo, such rules
will create advantages for some players and disadvantages for
others. Most likely, such changes will mobilize the prospective losers, who will attempt to block solutions even if the winners could, in principle, have compensated them. Changes in economic institutions and
incentives could achieve similar outcomes. Among the possible
measures discussed in the literature are increased penalties for security flaws, the establishment of markets for vulnerabilities, insurance markets, and the mandatory release of information about security violations (Rescorla, 2004; Choi, Fershtman, & Gandal, 2005; Shostack, 2005). All these
proposals have some appealing features but most also have
downsides. For example, insurance markets may not work well
because the risks of individual stakeholders, exposed to the
same threats, may be highly correlated. Releasing information
may assist consumers in avoiding firms with low security
standards but it may also trigger more devious attacks (see the
more detailed discussion in Van Eeten & Bauer, 2008). At a
technical level, information security standards, mandated
security testing, or peer-based information security approaches
are promising avenues. Many nations are actively engaged in
information sharing programs at the national level. Some
have established reporting requirements for security breaches.
Early observations suggest that such measures work in favor
of enhanced security. At least, they increase market
transparency for consumers.
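To illustrate why correlated exposures make cyber-insurance difficult, the toy Monte Carlo comparison below (illustrative parameters, not data from the paper) keeps the expected number of claims roughly constant but lets a common threat hit many policyholders in the same year; the insurer's bad-year losses then become far larger, which drives up capital requirements and premiums or makes cover unavailable.

```python
import random

random.seed(1)

N_FIRMS = 1000      # insured firms
YEARS = 2000        # simulated underwriting years
LOSS = 1.0          # normalized claim per breached firm

def annual_claims(correlated: bool) -> float:
    """Total claims in one simulated year; expected claims are ~50 in both regimes."""
    if not correlated:
        # independent risks: each firm breached with probability 5%
        return sum(LOSS for _ in range(N_FIRMS) if random.random() < 0.05)
    if random.random() < 0.05:
        # systemic attack year: a common threat breaches 80% of firms at once
        return sum(LOSS for _ in range(N_FIRMS) if random.random() < 0.80)
    # otherwise only a small idiosyncratic breach rate applies
    return sum(LOSS for _ in range(N_FIRMS) if random.random() < 0.0105)

def tail_loss(correlated: bool, quantile: float = 0.99) -> float:
    losses = sorted(annual_claims(correlated) for _ in range(YEARS))
    return losses[int(quantile * YEARS)]

print("99th-percentile annual claims, independent risks:", tail_loss(False))
print("99th-percentile annual claims, correlated risks: ", tail_loss(True))
# Independent portfolios stay close to the expected 50 claims even in bad years;
# correlated portfolios occasionally lose an order of magnitude more.
```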
The effectiveness of the available broad spectrum of measures
is not well established. Very few empirical studies
document the possible impact of specific measures. As a comprehensive information base that would allow such judgments is unlikely to be available any time soon, measures will have to be adopted on an experimental, trial-and-error basis.
Different options may complement each other (such as national
and international legislation) whereas others may be
partial or full substitutes for each other. More research and
practical policy experimentation is needed to move this
process along.
6. Conclusions
Information and communication technologies form a vast open
ecosystem. This openness supports the innovative
dynamics of the internet (Benkler, 2006) but it also renders
infrastructure and services vulnerable to attack. During
the past decade, information security breaches have become
increasingly motivated by fraudulent and criminal
motives and are threatening to undermine the open fabric of the
internet (Zittrain, 2008). Because it benefits all
participants in the ICT ecosystem, information security has
strong public good characteristics. However, defenses against
this relentless stream of attacks are predominantly the outcome
of security decisions by private market and non-market
players. As security is costly, accepting some level of insecurity
is economically rational. A critical question, which was
pursued in this paper, is, therefore, whether these decentralized
decisions result in a socially acceptable level of
information security.
The paper set out to pursue three interrelated aims. It first
analyzed cybercrime and information security as two
interdependent markets. Second, in this framework, the
incentives of key players in the ICT ecosystem (e.g., ISPs,
e-commerce service providers, and different types of users)
were examined in detail. The research found that all players in
the ICT ecosystem face incentives such as revenue effects,
reputation effects, and social relations that increase the level of
security. At the same time, due to physical network links, multi-
layered business transactions, and social relations, some of
the costs and benefits of security decisions are dispersed
throughout the entire ecosystem. Individual decision-makers
therefore generate externalities, i.e. they do not perceive all
relevant costs and benefits of their choices. Three typical
scenarios describe the extent and severity of externalities. In
rare instances all social costs and benefits are internalized and
no externalities are present. In a second scenario, a stakeholder in the value net, such as a financial institution or an ISP that has a business relationship with the player that is the source of the externality (often end users), is able to fully or partially control it. In this case, the security externality is internalized at
the sector level. In the third scenario, no such opportunities
exist and costs are imposed on the whole ICT ecosystem and
society at large.
Threats to cybersecurity therefore have two sources: a
continuous stream of attacks from the underworld of cybercrime
and the interconnected and distributed nature of the ICT
ecosystem. As decentralized decisions are afflicted with
externalities, some form of collective action, ranging from
voluntary self-regulation to government-prodded co-regulation and forms of legislative and regulatory intervention, may be the only way to overcome the problem. Thus, third, the paper explored alternative policy approaches. Such actions can
use two complementary levers: measures to quench
cybercrime and measures to enhance information security. Both
strategies will result in a lower level of attacks. Whereas
this may or may not increase the level of security, measures to
reduce cybercrime will unequivocally reduce the direct and
indirect cost of maintaining a given level of security. Measures
to enhance information security range from market-based
interventions such as the establishment of markets for
vulnerabilities to regulatory measures. Other things equal, they
will
increase the cost of cybercrime and therefore reduce the level of
attacks. However, the relation between attackers and
defenders resembles an arms race so that the effects of security
enhancements may only be temporary.
The complex nature of the problem suggests that a mix of policy
instruments may currently be the best approach.
Regulatory agencies are in some ways well-positioned to
assume responsibility in the area of information security. They
have jurisdiction over large parts of the ICT infrastructure, are
experienced in developing policies and funding models for
measures that might impose costs on certain players, and are
increasingly collaborating at an international level. However,
regulatory agencies will have to develop innovative, adaptive
approaches as existing administrative inertia and the
potential rigidity of ex ante regulation may render them
ineffective in addressing issues of cybercrime. Moreover,
regulation alone will be insufficient, as it cannot effectively
target the world of cybercrime. Concerted efforts across public
and private stakeholders, including legal measures, national and
international law enforcement, cooperation in CERTs and
CSIRTs, as well as technical measures at the level of the infrastructure, hardware, and software, will therefore be necessary if the problem of cybersecurity is to be tackled.
Acknowledgements
Background research for this article was supported by funding
from the OECD (Paris) and the Dutch Ministry of
Economics (The Hague). Yuehua Wu and Tithi Chattopadhyay
provided able research assistance during early stages of the
project. We would also like to thank the more than 50 experts from industry and government who generously contributed time and knowledge during and after the field interviews. An earlier
version of this paper was presented at WebSci09, Athens,
Greece, March 18–20, 2009.
References
Anderson, R. (2001). Why information security is hard: An
economic perspective. Paper presented at the annual computer
security applications conference,
New Orleans, LA. Retrieved September 3, 2009 from http://www.acsac.org/2001/papers/110.pdf (December 13).
Anderson, R., Böhme, R., Clayton, R., & Moore, T. (2008).
Security economics and European policy. Cambridge:
University of Cambridge Computer Laboratory.
Anderson, R., & Moore, T. (2006). The economics of
information security. Science, 314(5799), 610–613.
Bauer, J. M., Van Eeten, M., Chattopadhyay, T., & Wu, Y.
(2008). Financial implications of network security: Malware and
spam. Report for the International
Telecommunication Union (ITU), Geneva, Switzerland.
Becker, G. S. (1968). Crime and punishment: An economic
approach. Journal of Political Economy, 76(2), 169–217.
Benkler, Y. (2006). The wealth of networks: How social
production transforms markets and freedom. New Haven, CT:
Yale University Press.
Bovel, D., & Martha, J. (2000). Value nets: Breaking the supply
chain to unlock hidden profits. New York, NY: John Wiley.
Brandenburger, A. M., & Nalebuff, B. J. (1996). Co-opetition.
New York, NY: Doubleday.
Camp, L. J. (2006). The state of economics of information
security. I/S-A Journal of Law and Policy in the Information
Society, 2. Retrieved September 3, 2009
from http://www.is-journal.org/V02I02/2ISJLP189-Camp.pdf.
Camp, L. J. (2007). Economics of identity theft: Avoidance,
causes and possible cures. New York, Berlin: Springer.
Choi, J. P., Fershtman, C., & Gandal, N. (2005). Internet
security, vulnerability disclosure, and software provision. Paper
presented at the fourth workshop on
the economics of information security, Cambridge, MA.
Retrieved September 3, 2009 from http://infosecon.net/workshop/pdf/9.pdf (June 2).
CSI (2008). 2008 CSI computer crime and security survey. San
Francisco, CA: Computer Security Institute.
Ehrlich, I. (1996). Crime, punishment, and the market for
offenses. Journal of Economic Perspectives, 10(1), 43–67.
Franklin, J., Paxson, V., Perrig, A., & Savage, S. (2007). An
inquiry into the nature and causes of the wealth of internet
miscreants. Paper presented at the 14th
ACM conference on computer and communications security,
Alexandria, VA. Retrieved September 3, 2009 from http://www.icir.org/vern/papers/miscreant-wealth.ccs07.pdf (October 31).
Fransman, M. J. (2007). The new ICT ecosystem: Implications
for Europe. Edinburgh: Kokoro.
Garcia, A., & Horowitz, B. (2007). The potential for
underinvestment in internet security: Implications for regulatory
policy. Journal of Regulatory
Economics, 31(1), 37–55.
Gilsing, V. (2005). The dynamics of innovation and interfirm
networks: Exploration, exploitation and co-evolution.
Cheltenham, UK: Edward Elgar Publishing.
Gordon, L. A., & Loeb, M. P. (2004). The economics of
information security investment. In L. J. Camp, & S. Lewis
(Eds.), Economics of information
security (pp. 105–128). Dordrecht: Kluwer Academic
Publishers.
House of Lords (2007). Personal internet security. Science and
Technology Committee, 5th Report of Session 2006–07,
London. Retrieved September 3, 2009
from http://www.publications.parliament.uk/pa/ld/ldsctech.htm.
Illing, G., & Peitz, M. (Eds.). (2006). Industrial organization and the digital economy. Cambridge, MA: MIT Press.
Jakobsson, M., & Ramzan, Z. (Eds.). (2008). Crimeware:
Understanding new attacks and defenses. Upper Saddle River,
NJ: Addison-Wesley Professional.
Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G. M., Paxson, V., et al. (2008). Spamalytics: An empirical analysis of spam marketing conversion. Paper presented at the 15th ACM conference on computer and communications security, Alexandria, VA. Retrieved September 3, 2009 from http://www.icsi.berkeley.edu/pubs/networking/2008-ccs-spamalytics.pdf (October 28).
Kshetri, N. (2006). The simple economics of cybercrimes. IEEE
Security and Privacy, 4(1), 33–39.
Kunreuther, H., & Heal, G. (2003). Interdependent security.
Journal of Risk and Uncertainty, 26(2), 231–249.
LaRose, R., Rifon, N., & Enbody, R. (2008). Promoting
personal responsibility for internet safety. Communications of
the ACM, 51(3), 71–76.
Lewis, J. A. (2005). Aux armes, citoyens: Cyber security and
regulation in the United States. Telecommunications Policy,
29(11), 821–830.
Li, F., & Whalley, J. (2002). Deconstruction of the
telecommunications industry: From value chains to value
networks. Telecommunications Policy, 26(9–10),
451–472.
Loder, T., Van Alstyne, M., & Wash, R. (2004). An economic
answer to unsolicited communication. Paper presented at the 5th
ACM conference on electronic
commerce, New York. Retrieved September 3, 2009 from http://portal.acm.org/citation.cfm?id=988780 (May 17).
Lukasik, S. J. (2000). Protecting the global information
commons. Telecommunications Policy, 24(6–7), 519–531.
OECD (2009). Computer viruses and other malicious software.
Paris: Organisation for Economic Co-operation and
Development.
Murmann, J. P. (2003). Knowledge and competitive advantage:
The co-evolution of firms, technology, and national institutions.
Cambridge, UK: Cambridge
University Press.
Nelson, R. R. (1994). The co-evolution of technology, industrial
structure, and supporting institutions. Industrial and Corporate
Change, 3(1), 47–63.
Peppard, J., & Rylander, A. (2006). From value chain to value
network: Insights for mobile operators. European Management
Journal, 24(2–3), 128–141.
Perrow, C. (2007). The next catastrophe: Reducing our
vulnerabilities to natural, industrial, and terrorist disasters.
Princeton, NJ: Princeton University Press.
Png, I. P. L., Wang, C.-H., & Wang, Q.-H. (2008). The deterrent
and displacement effects of information security enforcement:
International evidence.
Journal of Management Information Systems, 25(2), 125–144.
Posner, R. A. (2004). Catastrophe: Risk and response. New
York: Oxford University Press.
Rescorla, E. (2004). Is finding security holes a good idea? Paper
presented at the third workshop on the economics of
information security, Minneapolis, MN.
Retrieved September 3, 2009 from http://www.rtfm.com/bugrate.pdf (May 13).
Schipka, M. (2007). The online shadow economy: A billion
dollar market for malware authors. White Paper. New York,
NY: MessageLabs Ltd. Retrieved
September 3, 2009 from http://www.fstc.org/docs/articles/messaglabs_online_shadow_economy.pdf.
Schneier, B. (2004). Secrets and lies: Digital security in a
networked society. New York, NY: Wiley.
Shapiro, C., & Varian, H. R. (1999). Information rules: A
strategic guide to the network economy. Boston, MA: Harvard
Business School Press.
Shostack, A. (2005). Avoiding liability: An alternative route to
more secure products. Paper presented at the fourth workshop
on the economics of information
security, Cambridge, MA. Retrieved September 3, 2009 from http://infosecon.net/workshop/pdf/44.pdf (June 3).
Symantec (2009). The state of spam. Report #32, August 2009.
Retrieved September 3, 2009 from http://eval.symantec.com/mktginfo/enterprise/other_resources/b-state_of_spam_report_08-2009.en-us.pdf.
Taleb, N. N. (2007). The black swan: The impact of the highly
improbable. New York: Random House.
Van Eeten, M., & Bauer, J. M. (2008). The economics of
malware: Security decisions, incentives and externalities.
Directorate for Science, Technology and
Industry, Committee for Information, Computer and
Communications Policy, DSTI/ICCP/REG(2007)27. Paris:
OECD.
Van Eeten, M., Bauer, J. M., & Tabatabaie, S. (2009). Damages
from internet security incidents: A framework and toolkit for
assessing the economic costs of
security breaches. Report for OPTA. Delft: Delft University of
Technology.
ARTICLE IN PRESS
J.M. Bauer, M.J.G. van Eeten / Telecommunications Policy 33
(2009) 706–719 719
Varian, H., Farrell, J., & Shapiro, C. (2005). The economics of
information technology: An introduction. Cambridge, UK:
Cambridge University Press.
Wang, Q.-H., & Kim, S.-H. (2009). Cyber-attacks: Cross-
country interdependencies and enforcement. Paper presented at the eighth workshop on the economics of information security, London. Retrieved September 3, 2009 from http://weis09.infosecon.net/files/153/paper153.pdf (June 24).
White House (2009). Cyberspace policy review: Assuring a
trusted and resilient information and communications
infrastructure. Washington, DC: The White House. Retrieved September 3, 2009 from http://www.whitehouse.gov/assets/documents/Cyberspace_Policy_Review_final.pdf.
Zittrain, J. (2008). The future of the internet and how to stop it.
New Haven, CT: Yale University Press.
The Economics of Information Security
Ross Anderson and Tyler Moore
University of Cambridge, Computer Laboratory
15 JJ Thomson Avenue, Cambridge CB3 0FD, United Kingdom
[email protected]
The economics of information security has recently become a
thriving and
fast-moving discipline. As distributed systems are assembled
from machines
belonging to principals with divergent interests, we find
incentives becoming
as important to dependability as technical design is. The new
field provides
valuable insights not just into ‘security’ topics such as bugs,
spam, phishing
and law-enforcement strategy, but into more general areas such
as the design
of peer-to-peer systems, the optimal balance of effort by
programmers and
testers, why privacy gets eroded, and the politics of DRM.
Introduction
Over the last six years, people have realized that security
failure is caused at least as often by bad
incentives as by bad design. Systems are particularly prone to
failure when the person guarding
them is not the person who suffers when they fail. The growing
use of security mechanisms to
enable one system user to exert power over another user, rather
than simply to exclude people
who should not be users at all, introduces many strategic and
policy issues. The tools and con-
cepts of game theory and microeconomic theory are becoming
just as important to the security
engineer as the mathematics of cryptography.
We review several recent results and live research challenges in
the economics of informa-
tion security. We first consider misaligned incentives, and
externalities: network insecurity is
somewhat like air pollution or traffic congestion, in that people
who connect insecure machines
to the Internet do not bear the full consequences of their
actions.
The difficulty in measuring information security risks presents
another challenge: these
risks cannot be managed better until they can be measured
better. Auctions and markets can
help reduce the information asymmetry prevalent in the
software industry. We also examine
the problem of insuring against attacks. The local and global
correlations exhibited by different
attack types largely determine what sort of insurance markets
are feasible. Information security
mechanisms or failures can create, destroy or distort other
markets: DRM provides a topical
example.
Economic factors also explain many challenges to personal
privacy. Discriminatory pricing
– which is economically efficient but socially controversial – is
simultaneously made more
attractive to merchants, and easier to implement, by
technological advance. We conclude by
discussing a fledgling research effort: examining the security
impact of network structure on
interactions, reliability and robustness.
Misaligned incentives
One of the observations that drove initial interest in security
economics came from banking.
In the USA, banks are generally liable for the costs of card
fraud; when a customer disputes
a transaction, the bank must either show that she is trying to
cheat, or refund her money. In
the UK, the banks had a much easier ride: they generally got
away with claiming that the
ATM system was ‘secure’, so a customer who complained must
be mistaken or lying. “Lucky
bankers,” you might think; yet UK banks spent more on security
and suffered more fraud. How
could this be? It appears to have been what economists call a
moral-hazard effect: UK bank
staff knew that customer complaints would not be taken
seriously, so they became lazy and
careless. This situation led to an avalanche of fraud [1].
In 2000, Varian made another key observation – about the anti-
virus software market. People
did not spend as much on protecting their computers as they
might have. Why not? Well, at that
time, a typical virus payload was a service-denial attack against
the website of a company like
Microsoft. While a rational consumer might well spend $20 to
prevent a virus from trashing his
hard disk, he might not do so just to prevent an attack on
someone else [2].
Legal theorists have long known that liability should be
assigned to the party that can best
manage the risk. Yet everywhere we look, we see online risks
allocated poorly, resulting in
privacy failures and protracted regulatory tussles. For instance,
medical IT systems are bought
by hospital directors and insurance companies, whose interests
in account management, cost
control and research are not well aligned with the patients’
interests in privacy.
Incentives can also influence attack and defense strategies. In
economic theory, a hidden-
action problem arises when two parties wish to transact, but one
party can take unobservable
actions that impact the transaction. The classic example comes
from insurance, where the
insured party may behave recklessly (increasing the likelihood
of a claim) because the insurance
company cannot observe his behavior.
Moore noted that we can use such economic concepts to classify
computer security prob-
lems [3]. Routers can quietly drop selected packets or falsify
responses to routing requests;
nodes can redirect network traffic to eavesdrop on
conversations; and players in file-sharing
systems can hide whether they have chosen to share with others,
so some may ‘free-ride’ rather
than to help sustain the system. In such hidden-action attacks,
some nodes can hide malicious or
antisocial behavior from others. Once the problem is seen in
this light, designers can structure
interactions to minimize the capacity for hidden action or to
make it easy to enforce suitable
contracts.
This helps explain the evolution of peer-to-peer systems over
the past ten years. Early sys-
tems, such as Eternity, Freenet, Chord, Pastry and OceanStore,
provided a ‘single pot’, with
widely and randomly distributed functionality. Later and more
successful systems, like the
popular Gnutella and Kazaa, allow peer nodes to serve content
they have downloaded for their
personal use, without burdening them with random files. The
comparison between these archi-
tectures originally focused on purely technical aspects: the cost
of search, retrieval, communi-
cations and storage. However, it turns out that incentives matter
here too.
First, a system structured as an association of clubs reduces the
potential for hidden action;
club members are more likely to be able to assess correctly
which members are contributing.
Second, clubs might have quite divergent interests. Early peer-
to-peer systems were oriented
towards censorship resistance rather than music file sharing,
and when they put all content into
one pot, quite different groups ended up protecting each others’
free speech – maybe Chinese
dissidents, critics of Scientology, or aficionados of sado-
masochistic imagery that is legal in
California but banned in Tennessee. The question then is
whether such groups might not fight
harder to defend their own kind, rather than people involved in
struggles in which they had no
interest and where they might even be disposed to side with the
censor.
Danezis and Anderson introduced the Red-Blue model to
analyze this [4]. Each node has a
preference among resource types, while a censor who attacks
the network will try to impose his
own preferences. His action will meet the approval of some
nodes but not others. The model
proceeds as a multi-round game in which nodes set defense
budgets which affect the probability
that they will defeat the censor or be overwhelmed by him.
Under reasonable assumptions, the
authors show that diversity (with each node storing its preferred
resource mix) performs better
under attack than solidarity (where each node stores the same
resource mix, which is not usually
its preference). Diversity increases node utility which in turn
makes nodes willing to allocate
higher defense budgets. This model sheds light on the more
general problem of the tradeoffs
between diversity and solidarity when conflict threatens, and
the related social policy issue of
the extent to which the growing diversity of modern societies is
in tension with the solidarity on
which modern welfare systems are founded [5].
Security as an externality
Information industries are characterized by many different types
of externalities, where indi-
viduals’ actions have side-effects on others. The software
industry tends toward dominant firms
thanks to the benefits of interoperability. Economists call this a
network externality: a network,
or a community of software users, is more valuable to its
members the larger it is. This not only
helps explain the rise and dominance of operating systems, from
System/360 through Windows
to Symbian, and of music platforms such as iTunes; it also helps
explain the typical pattern of
security flaws. Put simply, while a platform vendor is building
market dominance, he has to
appeal to vendors of complementary products as well as to his
direct customers; not only does
this divert energy that might be spent on securing the platform,
but security could get in the way
by making life harder for the complementers. So platform
vendors commonly ignore security
in the beginning, as they are building their market position;
later, once they have captured a
lucrative market, they add excessive security in order to lock
their customers in tightly [7].
Another instance of externalities can be found when we analyze
security investment, as
protection often depends on the efforts of many principals.
Budgets generally depend on the
manner in which individual investment translates to outcomes,
but this in turn depends not just
on the investor’s own decisions but also on the decisions of
others.
System reliability can depend on the sum of individual efforts, the minimum effort anyone makes, or the maximum effort anyone makes. Program correctness
can depend on the weakest
link (the most careless programmer introducing a vulnerability)
while software validation and
vulnerability testing might depend on the sum of everyone’s
efforts. There can also be cases
where the security depends on the best effort – the effort of an
individual champion. A simple
model by Varian provides interesting results when players
choose their effort levels indepen-
dently [8]. For the total-effort case, system reliability depends
on the agent with the highest
benefit-cost ratio, and all other agents free-ride. In the weakest-
link case, the agent with the
lowest benefit-cost ratio dominates. As more agents are added,
systems become increasingly
reliable in the total-effort case but increasingly unreliable in the
weakest-link case. What are
the implications? One is that software companies should hire
more software testers and fewer
(but more competent) programmers.
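The short simulation below gives a purely mechanical illustration of the three aggregation rules distinguished here; it holds the distribution of individual efforts fixed rather than solving for equilibrium effort, so the functional forms and parameters are assumptions of this sketch, not Varian's model itself. It only shows how the minimum, capped sum, and maximum of efforts behave as more agents are added.

```python
import random

random.seed(0)

def reliability(efforts, mode):
    """System reliability as a simple function of individual efforts in [0, 1]."""
    if mode == "total":    # total effort: reliability grows with the (capped) sum
        return min(1.0, sum(efforts))
    if mode == "weakest":  # weakest link: the least careful contributor dominates
        return min(efforts)
    if mode == "best":     # best shot: a single champion is enough
        return max(efforts)
    raise ValueError(mode)

def average_reliability(n_agents, mode, trials=5000):
    total = 0.0
    for _ in range(trials):
        efforts = [random.uniform(0.0, 0.3) for _ in range(n_agents)]
        total += reliability(efforts, mode)
    return total / trials

for n in (2, 5, 20):
    print(n,
          round(average_reliability(n, "total"), 2),    # rises toward 1 as agents are added
          round(average_reliability(n, "weakest"), 2),  # falls as agents are added
          round(average_reliability(n, "best"), 2))     # rises slowly toward the effort cap
```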
Work such as this has inspired other researchers to consider
interdependent risk. A recent
influential model by Kunreuther and Heal notes that the security
investments can be strategic
complements: an individual taking protective measures creates
positive externalities for others
that in turn may discourage their own investment [9]. This
result has implications far beyond
information security. The decision by one apartment owner to
install a sprinkler system that
minimizes the risk of fire damage will affect the decisions of
his neighbors; airlines may de-
cide not to screen luggage transferred from other carriers who
are believed to be careful with
security; and people thinking of vaccinating their children
against a contagious disease may
choose to free-ride off the herd immunity instead. In each case,
several widely varying Nash
equilibrium outcomes are possible, from complete adoption to
total refusal, depending on the
levels of coordination between principals.
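A stylized two-agent version of this interdependence (simplified payoffs in the spirit of Kunreuther and Heal's model, not their exact formulation) makes the multiplicity of equilibria explicit. Each agent can invest c in protection, which is assumed to eliminate its own direct risk but not contagion; an unprotected agent suffers a direct loss L with probability p and passes a contagion loss L to its partner with probability q. The expected costs are:

```latex
\[
\begin{aligned}
C_i(\text{invest} \mid j\ \text{invests}) &= c, &
C_i(\text{don't} \mid j\ \text{invests}) &= pL, \\
C_i(\text{invest} \mid j\ \text{doesn't}) &= c + qL, &
C_i(\text{don't} \mid j\ \text{doesn't}) &= pL + (1-p)\,qL.
\end{aligned}
\]
```

Investing is a best response to an investing partner iff c <= pL, but a best response to a non-investing partner only iff c <= pL(1-q). For pL(1-q) < c <= pL, both "both invest" and "neither invests" are equilibria: the contagion risk from an unprotected partner erodes the return to one's own investment, which is the discouragement effect described above.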
Katz and Shapiro famously noted how network externalities
affected the adoption of tech-
nology [10]. Network effects can also influence the deployment
of security technology. The
benefit a protection technology provides may depend on the
number of users that adopt it.
The cost may be greater than the benefit until a minimum
number of players adopt; so each
decision-maker might wait for others to go first, and the
technology never gets deployed. Re-
cently, Ozment and Schechter have analyzed different
approaches for overcoming bootstrapping
problems faced by those who would deploy security
technologies [11].
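In the simplest threshold formulation of this adoption problem (notation and assumptions of this sketch), each potential adopter pays a fixed cost c and receives a benefit b(n) that increases in the total number n of adopters, so adoption is worthwhile only if b(n) >= c. If

```latex
\[
b(1) < c \le b(N),
\]
```

then in a population of size N both "nobody adopts" and "everybody adopts" are equilibria: a lone adopter would not cover its cost, so without coordination the market can remain stuck at zero deployment even though universal adoption would pay for itself.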
This challenge is particularly topical. A number of core Internet
protocols, such as DNS
and routing, are considered insecure. More secure protocols
exist; the challenge is to get them
adopted. Two security protocols that have already been widely
deployed, SSH and IPsec, both
overcame the bootstrapping problem by providing significant
intra-organizational benefits. In
the successful cases, adoption could be done one organization at
a time, rather than needing
most organizations to move at once. The deployment of fax
machines also occurred through
this mechanism: companies initially bought fax machines to
connect their own offices.
Economics of vulnerabilities
A vigorous debate has ensued between software vendors and
security researchers over whether
actively seeking and disclosing vulnerabilities is socially
desirable. Rescorla has argued that for
software with many latent vulnerabilities (like Windows),
removing an individual bug makes
little difference to the likelihood of an attacker finding another
one later [12]. Since exploits are
often based on vulnerabilities inferred from patches or security
advisories, he argued against
disclosure and frequent patching if vulnerabilities are
correlated.
Ozment investigated vulnerabilities identified for FreeBSD; he
found that many vulnerabil-
ities are indeed likely to be rediscovered and are therefore often
correlated [13]. Arora, Telang
and Xu produced a model where disclosure is necessary to
incentivize vendors into fixing bugs
in subsequent product releases [14]. Arora, Krishnan, Nandkumar, Telang and Yang present a quantitative analysis complementing this model; they found that vendors respond more quickly to public disclosure than to private disclosure, and that following disclosure the number of attacks increases while the number of reported vulnerabilities declines over time [15].
This discussion raises a more fundamental question: why do so many vulnerabilities exist in
the first place? Surely, if companies desire secure products then
secure software will dominate
the marketplace? As we know from experience, this is not the
case: most commercial software
contains design and implementation flaws that could easily have
been prevented. Although
vendors are capable of creating more secure software, the
economics of the software industry
provide them with little incentive to do so [7]. In many markets,
the attitude of ‘ship it Tuesday
and get it right by version 3’ is perfectly rational behavior.
Consumers generally reward vendors
for adding features, for being first to market, or for being
dominant in an existing market – and
especially so in platform markets with network externalities.
These motivations clash with the
task of writing more secure software, which requires time-
consuming testing and a focus on
simplicity.
Another aspect of vendors’ lack of motivation is readily
explained by Anderson: the soft-
ware market is a ‘market for lemons.’ In a Nobel prize-winning
work, economist George
Akerlof employed the used car market as a metaphor for a
market with asymmetric informa-
tion [16]. His paper imagines a town in which 50 good used cars (worth $2000) are for sale, along with 50 'lemons' (worth $1000 each). The sellers know the difference but the buyers
do not. What will be the market-clearing price? One might
initially think $1500, but at that
price no-one with a good car will offer it for sale; so the market
price will quickly end up near
$1000. Because buyers are unwilling to pay a premium for
quality they cannot measure, only
low-quality used vehicles are available for sale.
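One way to formalize the example in the text (assuming owners of good cars will not sell below the $2000 the cars are worth to them): if buyers expect a fraction theta of the cars on offer to be good, their willingness to pay is

```latex
\[
P(\theta) = 2000\,\theta + 1000\,(1-\theta).
\]
```

At any price below $2000 no good car is offered, so theta = 0 and buyers will pay only P(0) = $1000; the initially plausible price of $1500 cannot survive, and the market clears near $1000 with only lemons traded.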
The software market suffers from the same information
asymmetry. Vendors may make
claims about the security of their products, but buyers have no
reason to trust them. In many
cases, even the vendor does not know how secure its software
is. So buyers have no reason to
pay more for more secure software, and vendors are disinclined
to invest in protection. How
Leaders face many hurdles when leading in multiple countries. There .docx
 
Last year Angelina Jolie had a double mastectomy because of re.docx
Last year Angelina Jolie had a double mastectomy because of re.docxLast year Angelina Jolie had a double mastectomy because of re.docx
Last year Angelina Jolie had a double mastectomy because of re.docx
 
Leaders face many hurdles when leading in multiple countries. Ther.docx
Leaders face many hurdles when leading in multiple countries. Ther.docxLeaders face many hurdles when leading in multiple countries. Ther.docx
Leaders face many hurdles when leading in multiple countries. Ther.docx
 
Leaders today must be able to create a compelling vision for the org.docx
Leaders today must be able to create a compelling vision for the org.docxLeaders today must be able to create a compelling vision for the org.docx
Leaders today must be able to create a compelling vision for the org.docx
 
Law enforcement professionals and investigators use digital fore.docx
Law enforcement professionals and investigators use digital fore.docxLaw enforcement professionals and investigators use digital fore.docx
Law enforcement professionals and investigators use digital fore.docx
 
LAW and Economics 4 questionsLaw And EconomicsTextsCoote.docx
LAW and Economics 4 questionsLaw And EconomicsTextsCoote.docxLAW and Economics 4 questionsLaw And EconomicsTextsCoote.docx
LAW and Economics 4 questionsLaw And EconomicsTextsCoote.docx
 

Recently uploaded

2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
timhan337
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
Jisc
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
MIRIAMSALINAS13
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
BhavyaRajput3
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
Peter Windle
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
Vivekanand Anglo Vedic Academy
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
Celine George
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
Tamralipta Mahavidyalaya
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
SACHIN R KONDAGURI
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
Celine George
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
Nguyen Thanh Tu Collection
 
Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
EduSkills OECD
 
Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
Anna Sz.
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
Jisc
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Atul Kumar Singh
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
beazzy04
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
Balvir Singh
 
Language Across the Curriculm LAC B.Ed.
Language Across the  Curriculm LAC B.Ed.Language Across the  Curriculm LAC B.Ed.
Language Across the Curriculm LAC B.Ed.
Atul Kumar Singh
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
joachimlavalley1
 

Recently uploaded (20)

2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
 
Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
 
Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
 
Language Across the Curriculm LAC B.Ed.
Language Across the  Curriculm LAC B.Ed.Language Across the  Curriculm LAC B.Ed.
Language Across the Curriculm LAC B.Ed.
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
 

ARTICLE IN PRESSContents lists available at ScienceDirect.docx

which are themselves infected, and distribution via social networks and games. Infected machines are often connected in botnets: flexible remote-controlled networks of computers that operate collectively to provide a platform for fraudulent and criminal purposes. Individuals, organizations, and nation states are targets. Attack vectors directed toward individuals and organizations include spam (most originating from botnets), variations of socially engineered fraud such as phishing and whaling, identity theft, attacks on websites, "click fraud", "malvertising", and corporate espionage. The DDoS attacks on Estonia in 2007 [1] and the spread of the cryptic Conficker worm that, early in 2009, paralyzed parts of the British and French military as well as government and health institutions in other countries, are recent examples of attacks on nations and their civil and military infrastructures. [2]

[1] See N. Anderson, "Massive DDoS attacks target Estonia; Russia accused," Ars Technica, May 14, 2007. Retrieved September 3, 2009 from http://arstechnica.com/security/news/2007/05/massive-ddos-attacks-target-estonia-russia-accused.ars#.
[2] See M.E. Soper, "Conficker worm shuts down French and UK Air Forces," Maximum PC, February 10, 2009. Retrieved September 3, 2009 from http://www.maximumpc.com/article/news/conficker_worm_shuts_down_french_and_uk_air_forces.

The pervasive presence of security threats has created concern among scholars (e.g., Zittrain, 2008) and policy-makers. The OECD Ministerial Summit in Seoul in 2008 dedicated considerable attention to the issue (OECD, 2009) and the International Telecommunication Union (ITU) has developed an active cybersecurity agenda. Regional initiatives, including efforts by the European Union and APEC, have elevated the visibility of the issue (e.g., Anderson, Böhme, Clayton, & Moore, 2008) and an increasing number of public and private partnerships target information security at a national and supra-national level. [3] These measures are complemented by national initiatives, as illustrated by the comprehensive hearing that the British House of Lords dedicated to the problem (House of Lords, 2007) and the announcement by the US administration of a renewed push to improve cybersecurity (White House, 2009). Given the dynamic and multifaceted nature of the problem, a broad spectrum of simultaneous measures is being considered as the best initial approach.

[3] The first Computer Emergency Response Team (CERT) was established in 1988 in response to the first internet worm. By 2009, the Forum of Incident Response and Security Teams (FIRST), founded in 1990, and one of the many collaborative partnerships, had more than 200 members from all continents (see http://www.first.org).

Cybersecurity policy needs to start from a clear and empirically grounded understanding of the nature of the problems before possible solutions can be devised. The analytical lenses of scholars and practitioners on malware have changed significantly in recent years. Security threats were initially viewed as mostly technological problems; in recent years, the perspective has broadened to include economic issues and aspects of user behavior (Anderson & Moore, 2006). This research has clarified that, as information security comes at a cost, tolerating some level of insecurity is economically rational from an individual and social point of view.

Although it is mostly provided by private players, cybersecurity has strong public good characteristics. Therefore, from a societal perspective, a crucial question is whether the costs and benefits taken into account by market players reflect the social costs and benefits. If that is the case, decentralized individual decisions also result in an overall desirable outcome (e.g., a tolerable level of cybercrime, a desirable level of security). However, if some of the costs are borne by other stakeholders or some of the benefits accrue to other players (i.e., they are "externalized"), individual security decisions do not properly reflect social benefits and costs. This is the more likely scenario in highly interdependent information and communication systems such as the Internet. Whereas the security decisions of a market player regarding malware will be rational for that player, given the costs and benefits it perceives, the resulting course of action inadvertently or deliberately imposes costs on other market players and on society at large. Decentralized individual decisions will therefore not result in a socially optimal level of security. The presence of externalities can result in internet-based services that are less secure than is socially desirable.

This paper aims at a comprehensive economic analysis of cybersecurity. In particular, given that most security decisions are made by organizations and individuals, it is interested in how well the incentives shaping these decentralized decisions are aligned with achieving a socially optimal level of security. This overarching objective is pursued in three interrelated steps. First, an economic framework is proposed to examine the co-evolution of cybercrime and cybersecurity (Section 2). The overall level of cybersecurity is the outcome of many decentralized decisions by individual stakeholders. Therefore, building on a detailed field study in seven countries, the paper, second, discusses the incentives influencing security decisions of important players such as ISPs, software vendors and users (Sections 3 and 4). This unique and rich data set permits identification and analysis of the security-enhancing and security-reducing incentives perceived by and relevant for key players. The implications of the relevant incentives for the level of security attained by individual players and the sector as a whole are investigated, with special emphasis on the emerging externalities. Whereas many security-enhancing incentives could be detected, significant externalities remain that cannot easily be overcome by private action. The paper therefore examines, third, policy options to enhance cybersecurity (Section 5). Main insights are summarized in the final section.

2. The co-evolution of cybercrime and information security

Information security breaches have become increasingly driven by financial motives. The particular challenges to achieve a desirable level of information security can be better understood by analyzing the interplay of two sets of activities whose players respond to very different incentives: the realm of cybercrime and the realm of information security. These two areas are tightly linked; developments in one directly affect conditions in the other. Decisions and effects of changes in the incentives of players in both realms can be analyzed using a market model. It may seem frivolous to some to look at cybercrime as a "market" [4] but many players in this
increasingly differentiated realm respond to price signals and other economic incentives. [5] Some players, whether they are inspired by essentially laudable goals (such as "white hat" hackers) or destructive motives (such as cyberterrorists), do not primarily follow financial gain, but their decisions may be modelled as an optimization over non-financial goals. If the logic and forms of these transactions were better known, it might be possible to interrupt and manipulate price and other signals in ways that quench illegal activity. Little is known about the underground economy organized around cybercrime (Schipka, 2007; Jakobsson & Ramzan, 2008; Kanich et al., 2008). More information is available on the market for information security (e.g., CSI, 2008) but detailed information is often kept proprietary.

[4] For the promises and challenges of studying criminal activity in a market framework see Becker (1968) and Ehrlich (1996). For an application to cybersecurity see Kshetri (2006) and Franklin, Paxson, Perrig, and Savage (2007).
[5] Economic incentives are the factors that influence decisions by individuals and individuals in organizations. They are rooted in economic, formal-legal, and informal mechanisms, including the specific economic conditions of the market, the interdependence with other players, laws as well as tacit social norms. Players respond to the incentives that are perceived as relevant for their decisions. Anderson and Moore (2006) have argued that "over the past 6 years, people have realized that security failure is caused at least as often by bad incentives as by bad design." In their view, many of the problems of information security can be explained more clearly and convincingly using the language of microeconomics: network effects, externalities, asymmetric information, moral hazard, adverse selection, liability dumping, and the tragedy of the commons. Within this literature, the designing of incentives that stimulate efficient behavior is central.

Attack and defence strategies can be analyzed at the level of individual players (e.g., a cybercriminal, a firm investing in information security, a home user deciding on the purchase of a firewall) or at an aggregate (sector) level. In the latter case the interrelationships between players and the effects of their decisions on the working of the sector as a whole are of primary interest (see Fig. 1). Information and communication services require inputs from many players. They include internet service providers (ISPs), application and service providers (App/Svc), hardware and software vendors, users, security providers, and national and international organizations involved in the governance of these activities. Two main types of interdependencies exist among these participants: they are, first, physically connected via the nodes and links of the networks and, second, connected in commercial and non-commercial transactions. This highly interconnected system has been described as a "value net" (e.g., Bovet & Martha, 2000; Brandenburger & Nalebuff, 1996; Li & Whalley, 2002) or as an "ecosystem" (Fransman, 2007; Peppard & Rylander, 2006). Whereas the first term emphasizes the joint nature of value creation, the second one emphasizes interdependence among the participants and their co-evolution. Players (and institutions) co-evolve if one player's actions affect but do not fully determine the choice options and strategies of others (e.g., Nelson, 1994; Murmann, 2003; Gilsing, 2005). As interdependence and co-evolution are important aspects of the cybersecurity problem, the second notion will be used throughout the paper.

[Fig. 1. Important interdependencies in the ICT ecosystem.]

Each type of player in this ICT ecosystem is composed of differentiated and heterogeneous players, which might be grouped into relatively similar classes (indicated by the various indices). For example, users could be differentiated into large industrial, small and medium-sized enterprises, and residential. The ICT ecosystem extends across national boundaries and jurisdictions. It is under relentless attack from cybercriminals (and some benign hackers). These criminals often are located in countries with weak rules of law but they typically launch attacks from nations with a highly developed information infrastructure (such as the US and South Korea) or a large number of users whose machines are poorly protected (such as Brazil, Turkey, and China) (Symantec, 2009, p. 8).

With the growth of the underground cybercrime economy, different tasks in this network of transactions apparently are increasingly carried out by specialists. One individual or organization is rarely involved in the whole range of activities necessary. The division of labor has spawned writers of malicious code, distributors of code, herders ("owners") of botnets, spammers, identity collectors, drop services, drop site developers, and drops (Schipka, 2007). Specialization has not only increased the sophistication and virulence of malware but also
increased the productivity of the cybercrime economy and hence reduced the cost of attack tools.

Despite the heterogeneity of these actors, it is reasonable to assume that they act in a purposefully rational manner. That is, given their information about relevant opportunities and constraints they will pursue and expand their activities as long as incremental benefits exceed incremental costs. Too little information is typically available to determine the exact shape of cost and benefit relations of the players empirically. However, it is possible to derive some insights from a conceptual analysis. Other things equal ("ceteris paribus"), it is likely that each actor can only expand his or her activity level at higher cost. For example, after highly vulnerable information systems are attacked, it will be exceedingly difficult to penetrate more secure systems. Likewise, it will generally be more time-consuming and hence costly to write code to attack software and devices that have been fortified against attacks. Hence, the short-run incremental cost curve faced by criminals will most likely be upward sloping. The shape of the incremental benefit relationship is less obvious. If for any given type of attack the bigger rewards are easier to reap, the incremental benefit curve will be downward sloping. In principle, if more effort were "rewarded" with higher spoils it could also be upward sloping. Whereas this is possible in some cases, it will probably not hold at an aggregate level, at least in societies that adhere to the rule of law. For the purposes of our further discussion we will therefore focus on the more plausible scenario in which the slope of the incremental cost curve is higher than that of the incremental benefits curve.

Individual decisions can be aggregated to represent the activities in a specific "market segment", for example, the market for malicious code, botnets, or stolen credit cards. There is no reason to believe that these market segments do not exhibit upward sloping supply and downward sloping demand schedules. Cybercriminal activity could be aggregated even further to generate a representation of the overall market for cybercrime. To this end, it is necessary to define the traded services in a more abstract way. In this aggregate market, supply is shaped by the cost of breaching information systems. Demand is shaped by the benefits of such breaches and the corresponding willingness to pay for them. The intersection of these two schedules will determine the overall level of fraudulent and criminal security breaches (see Fig. 2). As long as the incremental benefits in this market exceed the incremental costs, cybercriminal activity will expand (and vice versa).

[Fig. 2. Markets for cybercrime and cybersecurity.]

The shape and position of the incremental cost and benefit schedules are influenced by changes in the technological basis of information systems, the technology of attacks, and the institutional and legal framework governing information security and its violations. For example, a deeper division of labor in the underground economy, improvements in attack technologies, or less diligent law enforcement will reduce the cost of cybercrime, and therefore shift the supply curve to the lower right (as illustrated in Fig. 2). Accordingly, for a given demand curve for cybercrime (i.e., ceteris paribus) cybercriminal activity will increase. In contrast, higher penalties for cybercrime, stricter law enforcement that elevates the risk of being caught, and measures that improve information security all increase the cost of criminal activity and hence shift the supply curve to the upper left. For a given demand curve for cybercrime (i.e., ceteris paribus) this implies that cybercriminal activity will diminish. The demand schedule for cybercrime is also influenced by forces external to the market for cybercrime. It shifts to the upper right as the net benefits of criminal activity increase, for example, if the number of users of electronic transactions (and hence the potential rewards from cybercrime) grows, connectivity is increasing globally, mobile devices are used more, and ICT use in the private and public sectors becomes more widespread. Ceteris paribus, this will lead to increased cybercriminal activity. On the other hand, if the net rewards of cybercrime shrink, for example, because of a lower user base, tighter security, and more stringent law enforcement, the schedule is shifted to the lower left, reducing cybercrime. Presently, the factors contributing to an intensification of cybercrime seem to outweigh the others, putting upward pressure on the overall level of cybercrime.
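The comparative statics described above can be summarized compactly. The notation below is introduced for illustration only and does not appear in the article: q denotes the aggregate level of cybercriminal activity, MB and MC the incremental benefit and cost schedules, and θ a shift parameter capturing the cost of attacks (for example, the stringency of enforcement or the security level of targets).

\[
MB(q^{*}) = MC(q^{*};\theta), \qquad
\frac{dq^{*}}{d\theta} = \frac{\partial MC/\partial \theta}{MB'(q^{*}) - \partial MC/\partial q} < 0,
\]

since MB' < 0 and ∂MC/∂q > 0 under the assumed slopes, while ∂MC/∂θ > 0. Anything that raises θ (stricter enforcement, better-protected targets) lowers the equilibrium level of cybercrime; cheaper attack tools or laxer enforcement lower θ and raise it, which corresponds to the outward shift of the supply schedule just described.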
Like the realm of cybercrime, information security can be analyzed in a market framework. It is best seen as an aggregate made up of individual sub-markets in which vendors supply and heterogeneous types of users demand information security products and services. Such services are offered to different users by a range of specialized vendors, including security service providers (anti-malware software providers, suppliers of network monitoring hardware and software such as Cloudmark), Internet Service Providers, or they may be provided in-house by a specialized department. These players are supported by non-profit groups like Spamhaus, the Messaging Anti-Abuse Working Group (MAAWG), and many blacklisting services that monitor spam and other malicious activity. Security services are ultimately demanded by different types of users including residential users, for-profit and not-for-profit organizations of different sizes, and government agencies.

Within its specific context, incentives, and available information, each decentralized player will again make purposefully rational decisions. Once all adjustments are made, it is reasonable to assume that each decision-maker will strive for a situation in which the perceived incremental benefits of additional security measures are approximately equal to the incremental costs of such measures. Given the state of security technology, the incremental cost of improved security will most likely increase. At the same time, with very few if any exceptions, the benefits of additional security measures will decrease. Consequently, the optimal level of security is found where the incremental cost and benefits of security are equated. In a dynamic perspective, under conditions of risk and uncertainty, although the analysis is somewhat more cumbersome, the same principal decision rule applies: the optimal level of security is found where the adjusted and discounted incremental benefits equal the incremental costs (Gordon & Loeb, 2004).

Many of the challenges of reaching an optimal level of information security at the aggregate level are rooted in a potential mismatch between the perceived individual and social benefits and costs of security. In information systems, positive and negative externalities are closely intertwined aspects of security decisions. Additional security investments of end users or of an intermediate service provider such as an ISP also benefit others in the ICT system and are thus associated with positive externalities. Similarly, insufficient security investment by a player exerts a negative externality on others in the system.
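The decision rule and the externality problem just described can be illustrated with a simple investment condition in the spirit of the framework cited above; the notation is illustrative rather than taken from Gordon and Loeb. Let z be a player's security expenditure, B_p(z) the discounted, risk-adjusted private benefit of reduced expected losses, and B_e(z) the benefit accruing to other players, with diminishing incremental benefits and an incremental cost of one per unit of expenditure.

\[
\text{private optimum: } B_p'(z^{*}) = 1, \qquad
\text{social optimum: } B_p'(z^{**}) + B_e'(z^{**}) = 1 .
\]

With B_e'(z) > 0 and B_p'', B_e'' < 0, it follows that z** > z*: each player stops investing where its own incremental benefit equals the incremental cost, so security is under-provided relative to the social optimum, and part of the expected losses from under-investment falls on other participants.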
The externality problem is complicated by the economic features of information and security markets. Many information products are produced under conditions of high fixed but low incremental costs. Unfettered competition will drive prices quickly to incremental costs. Moreover, many products and services are characterized by positive network effects. Firms will respond to such conditions with product and price differentiation, attempts to capitalize on first mover advantages, versioning, and other strategies to overcome these challenges (Illing & Peitz, 2006; Shapiro & Varian, 1999; Varian, Farrell & Shapiro, 2005). Given the current legal framework of information industries, in particular the current liability regime, security is a quasi-public good. Suppliers will typically not be able to mark up their products in accordance with the social value of their security features but only with the private value, leading to an under-provision of security. If suppliers face a trade-off between speed to market and security, given positive network effects, speed to market may take precedence over security testing (Anderson, 2001; Anderson & Moore, 2006).

Cybercriminal activity and decisions in information security markets co-evolve, mutually influencing but not fully determining each other's course. An increased level of cybercrime (determined in the market for cybercrime) will increase the effort and cost of maintaining a given level of information security. At the same time, the benefits of increased security may increase. Consequently, the overall cost of security will increase for all participants in the ICT ecosystem even if the level of security remains unchanged. In contrast, if effective measures reduce the level of cybercrime, the costs of security will decline as well. However, the overall level of security may not increase as users may decide to maintain a given level with lower expenditures. There is an asymmetry in the relation between the markets for cybercrime and cybersecurity. Whereas higher or lower levels of cybercrime will increase or decrease the overall cost of security, the effect of such changes on the level of security remains ambiguous. In contrast, increased security will unequivocally increase the cost of cybercrime and/or reduce the benefits of cybercrime. As these two effects work in the same direction, increased security will reduce the level of cybercrime (Van Eeten & Bauer, 2008). However, it may trigger a new round in the technological race between cybercriminals and defenders.

3. Security incentives of stakeholders

The overall level of cybersecurity emerges from the decentralized decisions of the players in the ICT ecosystem. It is therefore important to understand how these decisions are made and what factors ("incentives") influence them. Very little detailed empirical evidence is available on the security incentives of stakeholders and the externalities that emanate from them. This section reports findings of recent qualitative field research designed to explore these incentives. In the course of 2007, a team of researchers from the Delft University of Technology and Michigan State University conducted 41 in-depth structured interviews with 57 professionals of organizations operating in networked computer environments that are confronted with malware. Interviewees represented a stratified sample of professionals from different industry segments (e.g., hardware, software, service providers, and users) in seven countries (Australia, Canada, Germany, the
  • 18. Internet service providers (ISPs) Cost of customer support Cost of security measures Cost of abuse management Cost of customer acquisition Cost of blacklisting Legal provisions that shield ISPs Loss of reputation, brand damage Cost of infrastructure expansion Legal provisions requiring security Software vendors Cost of vulnerability patching Cost of software development and testing (time to market) Loss of reputation, brand damage Benefits of functionality Benefits of compatibility Benefits of user discretion Licensing agreements with hold-harmless clauses E-commerce providers (banks) Benefits of online transaction growth Cost of security measures
  • 19. Trust in online transactions Benefits of usability of the service Loss of reputation, brand damage Users Awareness of security risks, realistic self-efficacy, exposure to cybercrime Poor understanding of risks, overconfidence, cost of security products and services J.M. Bauer, M.J.G. van Eeten / Telecommunications Policy 33 (2009) 706–719 711 Netherlands, United Kingdom, France, and the United States). Moreover, experts involved in the governance of information security issues such as Computer Emergency Response Teams (CERTs) and regulatory agencies were interviewed. Here we focus on the incentives of ISPs, software vendors, E-commerce providers (most notably banks and other financial service providers), and users.6 The aim is to identify if and how these incentives give rise to negative security externalities, which require some form of voluntary or government-led collective action. The interdependence between stakeholders has, to a certain degree, contributed to a blame game between individual players accusing each other of insufficient security efforts. Whereas Microsoft, due to its pervasive presence, is a frequent target (e.g., Perrow, 2007; Schneier, 2004), the phenomenon is broader: security-conscious ISPs blame rogue ISPs, ISPs blame software vendors, countries with higher security standards blame those lacking law enforcement, and so forth. Although these claims are sometimes correct, they do not represent an accurate picture of the complicated relations. The field research into the incentives and the way externalities
  • 20. percolate through the system reveals a more nuanced picture. Several admittedly imperfect mechanisms, such as reputation effects and reciprocal sanctions among players, exist that help align private and social costs and benefits. Moreover, the high interconnectedness may result in the internalization of externalities at other stages of the value chain; the associated costs are therefore not externalized to society at large but to other players and may indirectly be charged back to the originators. For every player interviewed in the study, multiple and sometimes conflicting incentives were at work, leaving the net effect contingent upon the relative influence of the specific drivers at work. Table 1 summarizes important incentives that shape the security decision of participants in the ICT ecosystem. Internet Service Providers (ISPs) are key market players and often criticized for poor security investment levels. Nonetheless, ISPs operate under several security-enhancing incentives (although their effectiveness is mediated by the ISP’s business model). Commercial ISPs will take security into account when it affects their revenue stream and bottom line. This is vividly illustrated by the example of viruses and spam. Early on, ISPs argued that emails were the personal property of recipients and that an inspection of the content of mails was a violation of privacy. Consequently, the responsibility for protecting their own machines and for dealing with spam was attributed to end users. With the exorbitant growth of spam, currently constituting upwards of 80% of all emails, i.e., in the order of 200 billion messages per day, the financial implications for ISPs also changed fundamentally. Not only did the flood of spam become a burden for network infrastructure that would have required additional investment, the malware imported onto the network did indirectly affect the ISP’s cost. Users of infected machines
  • 21. started to call the help desk or customer service at a fairly high cost per call to the ISP. Malicious traffic sent from infected machines triggers abuse notifications from other ISPs and requests to fix the problem, typically requiring even more expensive outgoing calls to customers. In extreme cases, the whole ISP could be blacklisted (see below), causing potentially serious customer relations and reputation problems. Facing this altered economic reality, ISPs reversed their stance with little fanfare and started to filter incoming mail and to manage their customers’ security more proactively. In the present environment, ISPs operate under several more or less potent incentives. The strength of these incentives is mediated by the business model adopted by an ISP. ‘‘Rogue’’ ISPs whose business model is based on the hosting of shady activities will respond differently than commercial ISPs seeking legitimate business. Costs of customer support and abuse 6 See Van Eeten and Bauer (2008) for the full project report and OECD (2009), in particular Part II. A full list of the organizations that participated in the field study is provided in Van Eeten and Bauer (2008, pp. 67–68). ARTICLE IN PRESS J.M. Bauer, M.J.G. van Eeten / Telecommunications Policy 33 (2009) 706–719712 management as well as the cost of additional infrastructure that might be required to handle malicious traffic all have an immediate effect on the bottom line and will increase the incentives to undertake security-enhancing measures. Loss of reputation and brand damage work indirectly (and probably slower) but exert pressure in the same direction. ISPs are
embedded in an interdependent system of service providers. If contacts via the abuse management desk are ineffective, other ISPs have a range of escalating options to retaliate for poor security practices with regard to outgoing malicious traffic, even if the origin is an individual user. Blacklists (or "blocklists"), inventories of IP addresses and email addresses reported to have sent spam and other forms of malicious code, are regularly used by ISPs to filter and block incoming traffic. Lists are typically maintained by non-profit organizations such as SpamCop, Spamhaus, or the Spam and Open Relay Blocking System (SORBS). [7] Other lists, such as those maintained by RFC-Ignorant, detail IP networks that have not implemented certain RFCs ("requests for comments", memoranda published by the Internet Engineering Task Force, IETF). This includes, for example, ISPs that do not have a working "abuse@" contact address. [8] These lists are not uncontested and their promoters are sometimes seen as "vigilantes." But they are widely used and provide real-time countermeasures for network administrators. Each blacklist organization has procedures for delisting an ISP once the originating IP address ceases the malicious activity. Substantial presence on one or more of these lists, reflected in the listing of many IP addresses belonging to an ISP for an extended time period, will drive up customer support and abuse management costs because, for example, emails originating from blacklisted ISPs may not be delivered, resulting in customer dissatisfaction and complaints. It may also trigger subsequent reputation and revenue losses for an ISP. Both effects create an incentive to improve security measures, or at least to respond in a timely fashion to abuse requests. In extreme cases, an entire ISP (and not just IP addresses or address ranges) may be blocked by blacklists and de-peered by other ISPs, raising the costs of this ISP significantly, possibly to the point where its business model becomes unsustainable.

[7] See http://www.spamcop.net, http://www.spamhaus.org/zen, and http://www.de.sorbs.net/.
[8] See http://www.rfc-ignorant.org/.
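Checks against such lists are typically automated. The sketch below illustrates, in Python, the common DNS-based lookup convention used by blocklists such as the Spamhaus zone cited in footnote 7: the octets of an IPv4 address are reversed and prepended to the list's zone, and any answer (conventionally in the 127.0.0.0/8 range) indicates a listing. The zone name and the interpretation of return codes are illustrative; operators should consult each list's own documentation, query limits, and usage policy.

import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNS-based blocklist (DNSBL).

    The query name is formed by reversing the address octets and appending
    the blocklist zone, e.g. 192.0.2.1 -> 1.2.0.192.zen.spamhaus.org.
    A successful A-record lookup means "listed"; NXDOMAIN means "not listed".
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # an answer was returned: listed
        return True
    except socket.gaierror:           # no record: not on this list
        return False

# Illustrative call with a reserved documentation address:
# print(is_listed("192.0.2.1"))

A mail server would typically run such a lookup against its own resolver for every incoming connection and combine the result with other signals before rejecting or flagging traffic.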
All these incentives work in favor of enhanced security measures. On the other hand, the costs of increasing security, legal provisions that shield ISPs from legal liability, and the costs of customer acquisition all work in the opposite direction. Other things equal, they constitute incentives to adopt a lower level of information security. The net effect on ISPs is hence dependent on the relative strength of the components of this web of incentives. The high cost of customer calls (estimated by some ISPs to be as high as $12 per incoming and $18 per outgoing call), while providing an incentive to find alternative solutions to enhance the security of end users, also may provide an incentive to ignore individual cases that have not triggered notifications to blacklisting services or abuse requests from other ISPs. Most ISPs estimated that only a small percentage of the infected machines on their network show up in abuse notifications. Considerable advances in security technology and its declining cost, on the other hand, have enabled ISPs to move their intervention points closer to the edges of their network and thus automate many functions in a cost-effective way. In an escalating response, machines on the network may initially be quarantined with instructions to the user to undertake certain measures to fix a problem. Only in cases that cannot be solved in this fashion may customer calls become necessary. Overall, while the interdependencies among ISPs result in the internalization of some of the external effects, this internalization is partial at best. Hence, it is likely that the highly decentralized system of decision-making yields effort levels that, while higher than often assumed, nevertheless fall short of the social optimum.
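The escalation logic just described can be read as a cost-driven decision rule. The following Python sketch is illustrative only: the signal names are hypothetical, and the cost reference follows the per-call estimates quoted by interviewed ISPs rather than any actual ISP workflow.

from dataclasses import dataclass

@dataclass
class InfectionCase:
    abuse_reports: int        # notifications received from other ISPs or blocklists
    blacklisted: bool         # the customer's IP appears on a public blocklist
    quarantine_attempts: int  # prior automated remediation attempts

def next_action(case: InfectionCase) -> str:
    """Escalating response sketched in the text: cases without external
    signals are merely monitored, automated quarantine comes first, and an
    expensive outgoing call (roughly $18 in the interview estimates) is
    placed only when automated remediation has failed."""
    if case.abuse_reports == 0 and not case.blacklisted:
        return "monitor"      # little external pressure, weak incentive to act
    if case.quarantine_attempts < 2:
        return "quarantine"   # cheap, automated intervention at the network edge
    return "call_customer"    # last resort, incurs the high per-call cost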
Software is a critical component in the ICT value chain. Exploitation of vulnerabilities is one important attack vector. The market for exploits has become increasingly sophisticated as seen, for example, in the steady increase in zero-day exploits. Like ISPs, software vendors work in a complex set of potentially conflicting incentives whose overall net effect is difficult to determine. One strong factor working to enhance security efforts is the cost of vulnerability patching, which comprises the cost of patch development, testing, and deployment. As software is typically installed in many versions, configurations, and contexts, the cost of patch development can be very significant. From the perspective of software vendors it may therefore be advantageous to invest in security upfront rather than have the follow-up costs of patching. However, given the complexity of modern software packages and the benefits of short time-to-market, it will not be economical to invest to the point where all potential flaws are eliminated. Loss of reputation and brand damage also strengthen the incentive to invest in security. However, reputation effects may work only slowly and are further weakened if the vendor has a strong market position, either because of a large installed base or because of limitations in the number of available substitute products. Nonetheless, as the security enhancements in the transition from Windows XP to Windows XP Service Pack 2 to Windows Vista illustrate, reputation effects do work.

At the same time, there are several incentive mechanisms that, other things equal, weaken the effort to provide secure software. Security testing lengthens time-to-market and increases the cost of software development. In industries with network effects and first-mover advantages, such delays may result in significantly reduced revenues over the product life cycle (Anderson & Moore, 2006). Thus, the more competitive the software market segment, the lower the incentive to improve security beyond a minimum threshold. Moreover, in software design complicated trade-offs have to be made between different design aspects. Security often is in conflict with functionality, compatibility, and user discretion over the software's configuration (a benefit that many users covet). Lastly, hold-harmless provisions in software licenses and shrink-wrap license agreements largely relieve software vendors from liability for financial damages stemming from software flaws. The diffusion of software for mobile devices and the expansion of open source software also increase vulnerabilities. The fear that increased liability of software vendors might reduce innovation rates is not unfounded but the strength of this effect is unknown.

E-commerce providers form another important class of market players. One particularly interesting sector is financial service providers, not least because they are a high priority target for attackers. This is a rather diverse sector, encompassing different types of banks, credit card companies, mutual funds, insurance companies, and so forth. The rules for each of these players differ in detail. Focusing predominantly on merchant banks, Van Eeten and Bauer (2008) concluded that these financial service providers are to a considerable degree able to manage risks emanating from their customer relations. However, they need to make choices
  • 26. balancing enhanced security and the growth of their electronic business. In principle, they could use highly secure platforms to conduct e-commerce transactions. Such an approach would likely have detrimental effects on users as it decreases the convenience of conducting business. Financial organizations thus face a trade-off between higher security and migrating transactions to cost-saving electronic platforms. Many financial service providers offer compensation for losses incurred by their customers from phishing or other fraudulent actions as part of this overall security decision. This practice aligns the incentives of the financial service provider with the goal of improved security (but not the incentives of individual users). Businesses other than financial service providers may often not be in a position to manage externalities associated with their clients. Therefore, more significant deviations between private incentives and social effects may exist, resulting in a sub-optimally low level of security investment by these firms. Large businesses (firms with 250 and more employees) are a heterogeneous group. Many large business users have adopted risk assessment tools to make security decisions (Gordon & Loeb, 2004). Their diligence will vary with size and possibly other factors such as the specific products and services provided. Two other groups of players that deserve mentioning are small and medium enterprise (SMEs, typically defined as enterprises with fewer than 250 employees, including microenterprises) and residential users. Although this is a large and diverse group, these players also are in several respects similar. Like other participants, they work under multiple and potentially conflicting incentives. Unlike larger businesses that may be able to employ information security specialists, either in-house or via outsourced services, many SMEs and residential users have insufficient resources to prevent or respond to sophisticated types of attacks. Whereas awareness of security threats has increased, there is
mounting evidence that many residential users underestimate their exposure and overestimate their efficacy in dealing with risks (LaRose, Rifon, & Enbody, 2008). Although these are similarities between SMEs and residential users, there are also differences. In general, one can assume that businesses employ a more deliberate, instrumentally rational form of reasoning when making security decisions. In both cases, however, the benefits of security expenses will to a large degree flow to other users. Individual businesses and users may suffer from the perception that their own risk exposure is low, especially if others protect their machines, the well-known free-rider phenomenon. On the other hand, given increased information, a growing number of users in this category is aware of the risks of information security breaches. Thus, they realize to a certain extent that they are the recipients of "incoming" externalities. Overall, one can expect that on average these classes of users will not be full free riders. Whereas some individuals and SMEs may over-invest, there is evidence that most will not invest in security at the level required by the social costs of information security breaches (Kunreuther & Heal, 2003). This conclusion is corroborated by the observation that many individual users do not purchase security services, do not even use them when offered for free by an ISP or a software vendor,9 and regularly turn off their firewalls and virus scanners if they slow down certain uses, such as gaming.

9 For example, XS4All, a Dutch ISP, offered security services to their customers but less than 10% signed up. Eventually, the ISP decided to offer security software as part of the subscription, but even then many users would not install it. Many users also do not use automatic updates to their software.

The ICT ecosystem also has other security loopholes. Some have to do with the engineering conventions of the Internet. TCP/IP was designed as a very open and transparent protocol but not with security in mind. This openness has facilitated a broad variety of applications and services but is also at the
heart of security problems. Some of these problems could be remedied with overlay network infrastructure and some will be mitigated by moving to IPv6. Moreover, the long-time convention of registrars to grant five-day grace periods to allow the "tasting" of domain names has been abused; changes to this practice have been approved by the Internet Corporation for Assigned Names and Numbers (ICANN) but have not yet been implemented.

4. Analysis of the emerging patterns

What patterns can be observed in the empirical findings on the incentives of market players and externalities that emerge from them? Pointing to a variety of reports that show increases in malicious attack trends, one might conclude that markets are not responding adequately. Our field work and analysis revealed a more diverse, graded picture, indicating a number of market-based incentive mechanisms that enhance security but also other instances in which decentralized actions are afflicted with externalities and possibly sub-optimal outcomes. The findings suggest that all players work under some incentives that push in the correct direction. Furthermore, all market players studied experienced at least some
consequences of security trade-offs on others. In other words, feedback loops (e.g., reputation effects or threats to the revenues of a company from poor security practices) were at work that brought some of the costs imposed on others back to the agent that caused them. However, in many cases these feedbacks were too weak, too localized, or too slow to move agents' behavior swiftly towards more efficient social outcomes. In all cases, moreover, incentive mechanisms exist that undermine security. Overall, across the ecosystem of all the different market players, three typical situations emerge:

(1) No externalities: This concerns instances in which a market player, be it an individual user or an organization, correctly assesses security risks, bears all the costs of protecting against security threats (including those associated with these risks) and adopts appropriate countermeasures. Private and social costs and benefits of security decisions are aligned. There may still be significant damage caused by malware, but this damage is borne by the market player itself. This situation would be economically efficient but, due to the high degree of interdependency in the Internet, it is relatively rare. Essentially, this scenario only applies to closed networks and user groups.

(2) Externalities that are borne by agents in the value net that can manage them: This concerns instances in which a market player assesses the security risks based on the available information but, due to the existence of (positive or negative) externalities, the resulting decision deviates from the social optimum. Such deviations may be based on a lack of incentives to take costs imposed on others into account, but they can also result from a lack of information, insufficient skills
  • 30. to cope with security risks, or financial constraints faced by an individual or organization. If somebody in the value net internalizes these costs and this agent can influence the magnitude of these costs – i.e., it can influence the incentives and security trade-offs of the agents generating the externality – then the security level achieved in the ICT ecosystem will be closer to the social optimum than without such internalization. This scenario depicts a relatively frequent case, typically situations in which a commercial relation exists between players. Thus, numerous examples were found where externalities generated in one segment of the value net were internalized by other market players. ISPs and financial service providers that internalize costs originating from end users are two examples that were discussed in more detail above. (3) Externalities that are borne by agents who cannot manage them or by society at large: An individual unit may correctly assess the security risks given its perceived incentives but, due to the existence of externalities, this decision deviates from the social optimum. Alternatively, an individual unit may not fully understand the externalities it generates for other actors. Unlike in scenario two, agents in the ICT ecosystem that absorb the cost are not in a position to influence them – i.e., influence the security trade-offs of the agents generating the externality. Hence, costs are generated for the whole sector and society at large. These are the costs of illegal activity or crime associated with malware, the costs of restitution of crime victims, the costs of e-commerce companies buying security services to fight off botnet attacks, the cost of law enforcement associated with these activities, and so forth. Furthermore, they may take on the more indirect form of slower growth of e-commerce and other activities. Slower growth of ICT use may entail a significant opportunity cost for society at large if the delayed activities would have contributed to economic
  • 31. efficiency gains and accelerated growth. Among the most poignant cases in this category are the externalities caused by lax security practices of end users. Some of these externalities are internalized by other market players that can mitigate them, most notably ISPs that can quarantine infected end users. ISPs have incentives to deal with these problems but only in so far as they themselves suffer consequences from the end user behavior, e.g., by facing the threat that a significant part of their network gets blacklisted. Estimates mentioned in the interviews suggested that the number of abuse notifications received by ISPs represents only a fraction of the overall number of infected machines in their network. This observation suggests that a considerable share of the externalities originating from users may not be mitigated. Consequently, a large share of these costs of poor security practices of end users is borne by the sector as a whole and society at large, typically in the form of higher direct, indirect and implicit costs (see Van Eeten, Bauer, & Tabatabaie, 2009). These externalities are typically explained by the absence of incentives for end users to secure their machines. It would be more precise, however, to argue that end users do not perceive any incentives to secure their machines, in part due to insufficient information. While malware writers have purposefully chosen to minimize their impact on the infected host and to direct their attacks at other targets, there is also a plethora of malware which does in fact attack the infected host—most notably to scour personal information that can be used for financial gain. In that sense, end users should have a strong incentive to secure their machines. Unsecured machines cannot differentiate between malware that does or does not affect the owner of the machine. If the machine is not sufficiently secured, then one has to assume that all forms of
malware can be present. The fact that this is not perceived by the end user is an issue of incomplete information rather than a principal failure of the respective incentive mechanism.

The discussion so far has assumed that the threat level does not fully undermine the operations of the ICT ecosystem. However, there is a risk not just of security failures or even disasters but of catastrophic failure. Given the dependence of all aspects of global society on the Internet (and electronic communications in general), widespread and extended failure could certainly have catastrophic consequences. That such a pervasive failure or technological terrorism has not yet happened and has a low probability complicates the formulation of a response (Posner, 2004). Like other events with a low but non-trivial probability, it could be considered a "black swan" event (Taleb, 2007). Cost-benefit analysis of such catastrophic events would help in shaping more rational responses, but it is extremely difficult. Complications include the choice of an appropriate time horizon, the quantification of the risk in question, problems of monetizing a wide range of qualitative impacts, and the determination of social discount rates applied to monetized future events. Nonetheless, assessing the potential costs of internet security breaches would greatly benefit from such an exercise.
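To illustrate the orders of magnitude such an exercise involves, consider a stylized expected-cost calculation; the figures are purely illustrative assumptions, not estimates derived from this study. If a catastrophic failure occurs with an assumed annual probability p, causes a one-time loss L, and future losses are discounted at a social discount rate r, the present value of the expected loss over an indefinite horizon is approximately

PV = \sum_{t=1}^{\infty} \frac{p L}{(1+r)^{t}} = \frac{p L}{r}.

With assumed values of p = 1% per year, r = 3%, and L equal to one year of global GDP, this yields a present value of roughly one third of annual global GDP; halving p halves the figure, and doubling r halves it again. The exercise shows how strongly any such assessment depends on the assumed probability and discount rate, which is precisely where the complications listed above arise.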
Table 2
Principal policy instruments to enhance information security, by predominant policy vector (cybercrime vs. information security).

Legal and regulatory measures
- Cybercrime: National legislation; bi- and multi-lateral treaties; forms and severity of punishment; law enforcement
- Information security: National legislation/regulation of information security; legislation/regulation of best practices to enhance information security; liability in case of failure to meet required standards; tax credits and subsidies

Economic measures
- Cybercrime: Measures that increase the direct costs of committing fraud and crime; measures that increase the opportunity costs of committing fraud and crime; measures that reduce the benefits of crime
- Information security: Level of financial penalties for violations of legal/regulatory provisions (compensatory, punitive); payments for access to valuable information; markets for vulnerabilities; insurance markets

Technical measures
- Cybercrime: Redesign of physical and logical internet infrastructure
- Information security: Information security standards; mandated security testing; peer-based information security

Informational and behavioral measures
- Cybercrime: National and international information sharing on cybercrime
- Information security: National and international information sharing on information security; educational measures
5. Policy implications

This section explores the role of policy and regulation in overcoming cybersecurity problems. The analysis in Sections 3 and 4 suggests that decentralized decision-making often generates correct incentives for stakeholders and that deviations from a desired security level often trigger the appropriate feedbacks. However, many instances were also identified where the resulting incentives were weak or too slow, or where the net effect of the security-enhancing and security-reducing incentive mechanisms could be either positive or negative. Moreover, the ICT ecosystem is under relentless attack from organized gangs and reckless individuals who have increasingly powerful and sophisticated hacking tools and malicious code at their disposal. Many of these activities are organized in countries where the costs of engaging in cybercrime are low: law enforcement is weak or non-existent; due to dire overall economic conditions, the opportunity costs of participating in cybercrime rather than pursuing other gainful employment are low; and technological means enable criminals to operate swiftly across many national boundaries.

At the same time, numerous incremental measures to improve security beyond the status quo are known but may not be undertaken by decentralized players because they are afflicted with positive externalities. The benefits of such measures potentially help all stakeholders, but their costs often need to be borne by one particular group. Participants in the ICT ecosystem suffer from a prisoner's dilemma problem: everybody is worse off if decisions are made in a non-cooperative fashion. Where interactions are repeated, the dilemma is partially overcome, as is illustrated by the cooperation among ISPs. Enhancing cybersecurity at a broader level will have to overcome this coordination and cooperation issue: it is a collective action problem. Whether such collective measures can be identified and successfully implemented without disadvantages that outweigh the potential improvements needs further examination.
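The incentive structure described here can be made concrete with a stylized two-ISP game; the payoffs below are illustrative assumptions chosen only to reproduce that structure, not figures from the field work.

# Stylized two-ISP security game with assumed, illustrative payoffs.
# Each ISP either "invests" in abuse handling (at cost 3) or does not.
# An ISP gains 4 when the *other* ISP invests, reflecting the positive
# externality of receiving fewer attacks from the other network.
ACTIONS = ("invest", "not invest")
COST, BENEFIT = 3, 4

def payoff(own, other):
    return (BENEFIT if other == "invest" else 0) - (COST if own == "invest" else 0)

for other in ACTIONS:
    best = max(ACTIONS, key=lambda own: payoff(own, other))
    print(f"Best response when the other ISP plays '{other}': {best}")

# Prints "not invest" in both cases: mutual non-investment is the unique Nash
# equilibrium, although mutual investment would pay each ISP 1 instead of 0.

Repeated interaction relaxes the dilemma because non-investment can be punished in later rounds, for instance through blacklisting or withdrawn cooperation, which is consistent with the cooperation observed among ISPs.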
The information economics literature has begun to examine these issues during the past few years (Anderson et al., 2008; Anderson & Moore, 2006; Camp, 2006, 2007). Many of the aspects of information markets that aggravate security issues can be explored more systematically from an economic perspective, including network effects, pervasive externalities, the assignment and evasion of liability rules, and moral hazard. Often, the response is to design a broadside of policy instruments, targeting multiple goals with multiple means simultaneously. Whereas this may eventually turn out to be a rational approach, there is also a risk that individual measures counteract and neutralize each other.

There are two principal vectors for implementing such collective measures. They can target the realm of cybercrime or the information security decisions of stakeholders in the ICT ecosystem. Both areas may be addressed with measures in four categories (or with a mix of such measures): legal and regulatory measures, economic means, technical solutions, and informational/behavioral approaches (see Table 2).

Cybercrime can principally be reduced by increasing its costs and by reducing its benefits. Strengthening law enforcement via national legislation and multi-national and international treaties, such as the European Convention on Cybercrime, which has been ratified by 23 countries and signed by 23 others,10 is one important precondition to credible enforcement as it defines a legal basis for intervention. Potentially equally important for the preventative effect of law enforcement are the forms and severity of punishment as well as the effectiveness and expediency of law enforcement. International collaboration is a particularly difficult obstacle that needs to be overcome in this area. Wang and Kim (2009) found that joining the Convention had a deterrent effect on cyberattacks. On the other hand, Png, Wang, and Wang (2008) found an insignificant effect of domestic enforcement. Even so, the shutting down of McColo in November 2008 caused a significant temporary drop in spam activity.11

10 See http://conventions.coe.int/Treaty/Commun/ChercheSig.asp?NT=185&CM=&DF=&CL=ENG (last visited February 28, 2009).
11 See B. Krebs, "Major source of online scams and spams knocked offline", Washington Post, November 11, 2008. Retrieved September 3, 2009 from http://voices.washingtonpost.com/securityfix/2008/11/major_source_of_online_scams_a.html.

Given the fact that much criminal activity is organized (but not necessarily executed) in countries with relatively weak rule of law, other measures might be needed in addition to legislative and regulatory initiatives. A number of interesting proposals have been made to reduce spam via technical-economic mechanisms, such as the requirement to have a token or make a payment before a mailbox can be accessed (e.g., Loder, Van Alstyne, & Wash, 2004). Although such measures are principally effective and interesting, they will only take care of malware disseminated via email. Moreover, their effectiveness will depend on a critical mass of users adopting the method.
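A rough calculation with assumed round numbers (not measured values) illustrates why such pricing mechanisms could bite. Suppose senders must attach a token worth $0.01 per message, forfeited only if the recipient objects to the message. A legitimate user sending 50 messages a day puts at most 50 x $0.01 = $0.50 a day at risk, most of which is never forfeited. A spammer who needs, say, 100,000 messages to generate a single sale faces a cost of 100,000 x $0.01 = $1,000 per sale, far above plausible per-sale revenue, so bulk spam becomes unprofitable while ordinary use remains essentially free. The same arithmetic exposes the critical-mass problem: the cost is only imposed by mailboxes that require the token, so spammers' costs rise appreciably only once coverage is broad.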
General measures to increase the opportunity costs of cybercrime, such as the creation of attractive employment opportunities for skilled programmers, will contribute to a reduction of activity in the long term.

Many of the architects of the internet have proclaimed that it was not built for the current onslaught of legal and illegal activity. Several initiatives, such as DNS Security Extensions (DNSSEC), which is designed to make the domain name system more secure, are underway. Several other protocols are deployed in the web and others may be rolled out as an underlay network. Whereas all of these measures have already triggered responses by attackers, they do make cybercrime more expensive, and one would expect this to have a dampening effect, at least in the short term and other things equal. Furthermore, measures for information sharing at the national level in CERTs and CSIRTs, as well as at the international level, such as among the more than 200 organizations collaborating in the Forum of Incident Response and Security Teams (FIRST), are important steps in the right direction.

Whereas measures devised to fight cybercrime are relatively uncontested, this is not true for policies directed towards information security markets. Nonetheless, there is an emerging discussion in many countries as to whether such measures are appropriate, whether they should be targeted at key players in the ICT ecosystem, and which organizations should be in charge of such initiatives (Anderson et al., 2008; OECD, 2009). Several countries have adopted laws against spam. Although their effectiveness is sometimes questioned, legislation such as the US CAN-SPAM Act of 2003 and subsequent implementation measures by the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) have provided a legal basis to prosecute several spammers with deterrent effects. The recent discussion goes beyond such measures to explore specific regulatory intervention. A critical
weakness of any attempt to legislate or regulate security is that specific measures may quickly be outsmarted by new attack technologies. However, the notion of "best practices" could be developed on an ongoing, adaptive basis. This is, for example, the approach taken by the Australian Communications and Media Authority (ACMA) in the Australian Internet Security Initiative. In this collaborative effort, 59 ISPs (as of February 28, 2009) and the regulatory agency share information to reduce the threat from botnets. The approach is not without critics, many of whom point to the lack of transparency in following up on security threat reports. Nonetheless, the ability of the regulatory authority to threaten to impose formal regulation seems to have boosted participation, and the initiative is expanding steadily.

The Australian example raises the question of what role, if any, regulatory agencies should play. Historically, apart from law enforcement, cybersecurity was overwhelmingly organized in voluntary and self-regulatory arrangements, giving regulatory agencies only ancillary jurisdiction (Lewis, 2005). For example, in the US, the Federal Communications Commission was part of implementing the Communications Assistance for Law Enforcement Act (CALEA), 1994 legislation defining the obligations of telecommunications and information service providers in assisting law enforcement. In some ways, however, regulatory agencies are well-positioned to assume a stronger role in contributing to enhanced cybersecurity: they have jurisdiction over much of the key infrastructure used in providing information services, which might be an efficient intervention point; they typically have powers to compel information from providers that would enable finding better policies; they have administrative processes in place that could assist in developing policies; and they have some experience with international collaboration (Lukasik, 2000). Regulatory agencies also have practice in designing financing models for
the pursuit of public interest goals rather than imposing an unfunded mandate on stakeholders. On the other hand, they also have disadvantages: the administrative process is often slow and beholden to special interest groups, and regulatory measures might be too rigid in formulating adaptive and dynamic responses (Garcia & Horowitz, 2007). However, these challenges might be overcome if adaptive regulatory tools are used. In any case, regulatory agencies are an interesting avenue to pursue cybersecurity, but they will have to work in concert with other agencies.

An even more contested issue is the modification of existing liability rules. In principle, liability rules would facilitate a desirable evolutionary learning process. By creating enforceable rights and obligations, they would allow common and case law institutions to gradually develop a body of best practices and reshape incentives to reduce negative and strengthen positive externalities. Compared to the status quo, such rules will create advantages for some players and disadvantages for others. Most likely, such changes will activate losers that will attempt to block solutions even if the winners could have compensated them. Changes in economic institutions and incentives could achieve similar outcomes. Among the possible measures discussed in the literature are increased penalties for security flaws, the establishment of markets for
vulnerabilities, insurance markets, and mandatory release of information about security violations (Rescorla, 2004; Choi, Fershtman, & Gandal, 2005; Shostack, 2005). All these proposals have some appealing features, but most also have downsides. For example, insurance markets may not work well because the risks of individual stakeholders, exposed to the same threats, may be highly correlated. Releasing information may assist consumers in avoiding firms with low security standards, but it may also trigger more devious attacks (see the more detailed discussion in Van Eeten & Bauer, 2008).

At a technical level, information security standards, mandated security testing, or peer-based information security approaches are promising avenues. Many nations are actively engaged in information sharing programs at the national level. Some have established reporting requirements for security breaches. Early observations suggest that such measures work in favor of enhanced security. At least, they increase market transparency for consumers.

The effectiveness of the available broad spectrum of measures is not well established. Very few empirical studies document the possible impact of specific measures. As a comprehensive information base that would allow such judgments is unlikely to become available any time soon, measures will have to be adopted on an experimental, trial-and-error basis. Different options may complement each other (such as national and international legislation), whereas others may be partial or full substitutes for each other. More research and practical policy experimentation is needed to move this process along.

6. Conclusions
  • 42. Information and communication technologies form a vast open ecosystem. This openness supports the innovative dynamics of the internet (Benkler, 2006) but it also renders infrastructure and services vulnerable to attack. During the past decade, information security breaches have become increasingly motivated by fraudulent and criminal motives and are threatening to undermine the open fabric of the internet (Zittrain, 2008). Because it benefits all participants in the ICT ecosystem, information security has strong public good characteristics. However, defenses against this relentless stream of attacks are predominantly the outcome of security decisions by private market and non-market players. As security is costly, accepting some level of insecurity is economically rational. A critical question, which was pursued in this paper, is, therefore, whether these decentralized decisions result in a socially acceptable level of information security. The paper set out to pursue three interrelated aims. It first analyzed cybercrime and information security as two interdependent markets. Second, in this framework, the incentives of key players in the ICT ecosystem (e.g., ISPs, e-commerce service providers, and different types of users) were examined in detail. The research found that all players in the ICT ecosystem face incentives such as revenue effects, reputation effects, and social relations that increase the level of security. At the same time, due to physical network links, multi- layered business transactions, and social relations, some of the costs and benefits of security decisions are dispersed throughout the entire ecosystem. Individual decision-makers therefore generate externalities, i.e. they do not perceive all relevant costs and benefits of their choices. Three typical scenarios describe the extent and severity of externalities. In rare instances all social costs and benefits are internalized and no externalities are present. In a second scenario, a stakeholder
in the value net, such as a financial institution or an ISP that has a business relationship with the player that is the source of the externality (often end users), is able to fully or partially control it. In this case, the security externality is internalized at the sector level. In the third scenario, no such opportunities exist and costs are imposed on the whole ICT ecosystem and society at large.

Threats to cybersecurity therefore have two sources: a continuous stream of attacks from the underworld of cybercrime and the interconnected and distributed nature of the ICT ecosystem. As decentralized decisions are afflicted with externalities, some form of collective action, ranging from voluntary self-regulation to government-prodded co-regulation and forms of legislative and regulatory intervention, may be the only way to overcome the problem. Thus, third, the paper explored alternative policy approaches. Such actions can use two complementary levers: measures to quench cybercrime and measures to enhance information security. Both strategies will result in a lower level of attacks. Whereas this may or may not increase the level of security, measures to reduce cybercrime will unequivocally reduce the direct and indirect costs of maintaining a given level of security. Measures to enhance information security range from market-based interventions, such as the establishment of markets for vulnerabilities, to regulatory measures. Other things equal, they will increase the cost of cybercrime and therefore reduce the level of attacks. However, the relation between attackers and defenders resembles an arms race, so the effects of security enhancements may only be temporary. The complex nature of the problem suggests that a mix of policy instruments may currently be the best approach.

Regulatory agencies are in some ways well-positioned to assume responsibility in the area of information security. They
have jurisdiction over large parts of the ICT infrastructure, are experienced in developing policies and funding models for measures that might impose costs on certain players, and are increasingly collaborating at an international level. However, regulatory agencies will have to develop innovative, adaptive approaches, as existing administrative inertia and the potential rigidity of ex ante regulation may render them ineffective in addressing issues of cybercrime. Moreover, regulation alone will be insufficient, as it cannot effectively target the world of cybercrime. Concerted efforts across public and private stakeholders, including legal measures, national and international law enforcement, cooperation in CERTs and CSIRTs, as well as technical measures at the level of the infrastructure, hardware, and software, will therefore be necessary if the problem of cybersecurity is to be tackled.

Acknowledgements

Background research for this article was supported by funding from the OECD (Paris) and the Dutch Ministry of Economics (The Hague). Yuehua Wu and Tithi Chattopadhyay provided able research assistance during early stages of the project. We would also like to thank the more than 50 experts from industry and government who generously contributed time and knowledge during and after the field interviews. An earlier version of this paper was presented at WebSci09, Athens, Greece, March 18-20, 2009.

References
Anderson, R. (2001). Why information security is hard: An economic perspective. Paper presented at the annual computer security applications conference, New Orleans, LA, December 13. Retrieved September 3, 2009 from http://www.acsac.org/2001/papers/110.pdf.
Anderson, R., Böhme, R., Clayton, R., & Moore, T. (2008). Security economics and European policy. Cambridge: University of Cambridge Computer Laboratory.
Anderson, R., & Moore, T. (2006). The economics of information security. Science, 314(5799), 610–613.
Bauer, J. M., Van Eeten, M., Chattopadhyay, T., & Wu, Y. (2008). Financial implications of network security: Malware and spam. Report for the International Telecommunication Union (ITU), Geneva, Switzerland.
Becker, G. S. (1968). Crime and punishment: An economic approach. Journal of Political Economy, 76(2), 169–217.
Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. New Haven, CT: Yale University Press.
Bovel, D., & Martha, J. (2000). Value nets: Breaking the supply chain to unlock hidden profits. New York, NY: John Wiley.
Brandenburger, A. M., & Nalebuff, B. J. (1996). Co-opetition. New York, NY: Doubleday.
Camp, L. J. (2006). The state of economics of information security. I/S: A Journal of Law and Policy for the Information Society, 2. Retrieved September 3, 2009 from http://www.is-journal.org/V02I02/2ISJLP189-Camp.pdf.
Camp, L. J. (2007). Economics of identity theft: Avoidance, causes and possible cures. New York, Berlin: Springer.
Choi, J. P., Fershtman, C., & Gandal, N. (2005). Internet security, vulnerability disclosure, and software provision. Paper presented at the fourth workshop on the economics of information security, Cambridge, MA, June 2. Retrieved September 3, 2009 from http://infosecon.net/workshop/pdf/9.pdf.
CSI (2008). 2008 CSI computer crime and security survey. San Francisco, CA: Computer Security Institute.
Ehrlich, I. (1996). Crime, punishment, and the market for offenses. Journal of Economic Perspectives, 10(1), 43–67.
Franklin, J., Paxson, V., Perrig, A., & Savage, S. (2007). An inquiry into the nature and causes of the wealth of internet miscreants. Paper presented at the 14th ACM conference on computer and communications security, Alexandria, VA, October 31. Retrieved September 3, 2009 from http://www.icir.org/vern/papers/miscreant-wealth.ccs07.pdf.
Fransman, M. J. (2007). The new ICT ecosystem: Implications for Europe. Edinburgh: Kokoro.
Garcia, A., & Horowitz, B. (2007). The potential for underinvestment in internet security: Implications for regulatory policy. Journal of Regulatory Economics, 31(1), 37–55.
Gilsing, V. (2005). The dynamics of innovation and interfirm networks: Exploration, exploitation and co-evolution. Cheltenham, UK: Edward Elgar Publishing.
Gordon, L. A., & Loeb, M. P. (2004). The economics of information security investment. In L. J. Camp, & S. Lewis (Eds.), Economics of information security (pp. 105–128). Dordrecht: Kluwer Academic Publishers.
House of Lords (2007). Personal internet security. Science and Technology Committee, 5th Report of Session 2006–07, London. Retrieved September 3, 2009 from http://www.publications.parliament.uk/pa/ld/ldsctech.htm.
Illing, G., & Peitz, M. (Eds.). (2006). Industrial organization and the digital economy. Cambridge, MA: MIT Press.
Jakobsson, M., & Ramzan, Z. (Eds.). (2008). Crimeware: Understanding new attacks and defenses. Upper Saddle River, NJ: Addison-Wesley Professional.
Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G. M., Paxson, V., et al. (2008). Spamalytics: An empirical analysis of spam marketing conversion. Paper presented at the 15th ACM conference on computer and communications security, Alexandria, VA, October 28. Retrieved September 3, 2009 from http://www.icsi.berkeley.edu/pubs/networking/2008-ccs-spamalytics.pdf.
Kshetri, N. (2006). The simple economics of cybercrimes. IEEE Security and Privacy, 4(1), 33–39.
Kunreuther, H., & Heal, G. (2003). Interdependent security. Journal of Risk and Uncertainty, 26(2), 231–249.
LaRose, R., Rifon, N., & Enbody, R. (2008). Promoting personal responsibility for internet safety. Communications of the ACM, 51(3), 71–76.
Lewis, J. A. (2005). Aux armes, citoyens: Cyber security and regulation in the United States. Telecommunications Policy, 29(11), 821–830.
Li, F., & Whalley, J. (2002). Deconstruction of the telecommunications industry: From value chains to value networks. Telecommunications Policy, 26(9–10), 451–472.
Loder, T., Van Alstyne, M., & Wash, R. (2004). An economic answer to unsolicited communication. Paper presented at the 5th ACM conference on electronic commerce, New York, May 17. Retrieved September 3, 2009 from http://portal.acm.org/citation.cfm?id=988780.
Lukasik, S. J. (2000). Protecting the global information commons. Telecommunications Policy, 24(6–7), 519–531.
OECD (2009). Computer viruses and other malicious software. Paris: Organisation for Economic Co-operation and Development.
Murmann, J. P. (2003). Knowledge and competitive advantage: The co-evolution of firms, technology, and national institutions. Cambridge, UK: Cambridge University Press.
Nelson, R. R. (1994). The co-evolution of technology, industrial structure, and supporting institutions. Industrial and Corporate Change, 3(1), 47–63.
Peppard, J., & Rylander, A. (2006). From value chain to value network: Insights for mobile operators. European Management Journal, 24(2–3), 128–141.
Perrow, C. (2007). The next catastrophe: Reducing our vulnerabilities to natural, industrial, and terrorist disasters. Princeton, NJ: Princeton University Press.
Png, I. P. L., Wang, C.-H., & Wang, Q.-H. (2008). The deterrent and displacement effects of information security enforcement: International evidence. Journal of Management Information Systems, 25(2), 125–144.
Posner, R. A. (2004). Catastrophe: Risk and response. New York: Oxford University Press.
Rescorla, E. (2004). Is finding security holes a good idea? Paper presented at the third workshop on the economics of information security, Minneapolis, MN, May 13. Retrieved September 3, 2009 from http://www.rtfm.com/bugrate.pdf.
Schipka, M. (2007). The online shadow economy: A billion dollar market for malware authors. White Paper. New York, NY: MessageLabs Ltd. Retrieved September 3, 2009 from http://www.fstc.org/docs/articles/messaglabs_online_shadow_economy.pdf.
Schneier, B. (2004). Secrets and lies: Digital security in a networked society. New York, NY: Wiley.
Shapiro, C., & Varian, H. R. (1999). Information rules: A strategic guide to the network economy. Boston, MA: Harvard Business School Press.
Shostack, A. (2005). Avoiding liability: An alternative route to more secure products. Paper presented at the fourth workshop on the economics of information security, Cambridge, MA, June 3. Retrieved September 3, 2009 from http://infosecon.net/workshop/pdf/44.pdf.
Symantec (2009). The state of spam. Report #32, August 2009. Retrieved September 3, 2009 from http://eval.symantec.com/mktginfo/enterprise/other_resources/b-state_of_spam_report_08-2009.en-us.pdf.
Taleb, N. N. (2007). The black swan: The impact of the highly improbable. New York: Random House.
Van Eeten, M., & Bauer, J. M. (2008). The economics of malware: Security decisions, incentives and externalities. Directorate for Science, Technology and Industry, Committee for Information, Computer and Communications Policy, DSTI/ICCP/REG(2007)27. Paris: OECD.
Van Eeten, M., Bauer, J. M., & Tabatabaie, S. (2009). Damages from internet security incidents: A framework and toolkit for assessing the economic costs of security breaches. Report for OPTA. Delft: Delft University of Technology.
Varian, H., Farrell, J., & Shapiro, C. (2005). The economics of information technology: An introduction. Cambridge, UK: Cambridge University Press.
Wang, Q.-H., & Kim, S.-H. (2009). Cyber-attacks: Cross-country interdependencies and enforcement. Paper presented at the eighth workshop on the economics of information security, London, June 24. Retrieved September 3, 2009 from http://weis09.infosecon.net/files/153/paper153.pdf.
White House (2009). Cyberspace policy review: Assuring a trusted and resilient information and communications infrastructure. Washington, DC: The White House. Retrieved September 3, 2009 from http://www.whitehouse.gov/assets/documents/Cyberspace_Policy_Review_final.pdf.
Zittrain, J. (2008). The future of the internet and how to stop it. New Haven, CT: Yale University Press.

The Economics of Information Security

Ross Anderson and Tyler Moore
University of Cambridge, Computer Laboratory, 15 JJ Thomson Avenue, Cambridge CB3 0FD, United Kingdom
[email protected]

The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, we find incentives becoming
  • 52. as important to dependability as technical design is. The new field provides valuable insights not just into ‘security’ topics such as bugs, spam, phishing and law-enforcement strategy, but into more general areas such as the design of peer-to-peer systems, the optimal balance of effort by programmers and testers, why privacy gets eroded, and the politics of DRM. Introduction Over the last six years, people have realized that security failure is caused at least as often by bad incentives as by bad design. Systems are particularly prone to failure when the person guarding them is not the person who suffers when they fail. The growing use of security mechanisms to enable one system user to exert power over another user, rather than simply to exclude people who should not be users at all, introduces many strategic and policy issues. The tools and con- cepts of game theory and microeconomic theory are becoming just as important to the security engineer as the mathematics of cryptography. We review several recent results and live research challenges in the economics of informa- tion security. We first consider misaligned incentives, and externalities: network insecurity is somewhat like air pollution or traffic congestion, in that people who connect insecure machines to the Internet do not bear the full consequences of their actions. The difficulty in measuring information security risks presents another challenge: these
  • 53. risks cannot be managed better until they can be measured better. Auctions and markets can help reduce the information asymmetry prevalent in the software industry. We also examine the problem of insuring against attacks. The local and global correlations exhibited by different attack types largely determine what sort of insurance markets are feasible. Information security 1 mechanisms or failures can create, destroy or distort other markets: DRM provides a topical example. Economic factors also explain many challenges to personal privacy. Discriminatory pricing – which is economically efficient but socially controversial – is simultaneously made more attractive to merchants, and easier to implement, by technological advance. We conclude by discussing a fledgling research effort: examining the security impact of network structure on interactions, reliability and robustness. Misaligned incentives One of the observations that drove initial interest in security economics came from banking. In the USA, banks are generally liable for the costs of card fraud; when a customer disputes a transaction, the bank must either show that she is trying to cheat, or refund her money. In the UK, the banks had a much easier ride: they generally got away with claiming that the
  • 54. ATM system was ‘secure’, so a customer who complained must be mistaken or lying. “Lucky bankers,” you might think; yet UK banks spent more on security and suffered more fraud. How could this be? It appears to have been what economists call a moral-hazard effect: UK bank staff knew that customer complaints would not be taken seriously, so they became lazy and careless. This situation led to an avalanche of fraud [1]. In 2000, Varian made another key observation – about the anti- virus software market. People did not spend as much on protecting their computers as they might have. Why not? Well, at that time, a typical virus payload was a service-denial attack against the website of a company like Microsoft. While a rational consumer might well spend $20 to prevent a virus from trashing his hard disk, he might not do so just to prevent an attack on someone else [2]. Legal theorists have long known that liability should be assigned to the party that can best manage the risk. Yet everywhere we look, we see online risks allocated poorly, resulting in privacy failures and protracted regulatory tussles. For instance, medical IT systems are bought by hospital directors and insurance companies, whose interests in account management, cost control and research are not well aligned with the patients’ interests in privacy. Incentives can also influence attack and defense strategies. In economic theory, a hidden- action problem arises when two parties wish to transact, but one party can take unobservable
  • 55. actions that impact the transaction. The classic example comes from insurance, where the insured party may behave recklessly (increasing the likelihood of a claim) because the insurance company cannot observe his behavior. Moore noted that we can use such economic concepts to classify computer security prob- lems [3]. Routers can quietly drop selected packets or falsify responses to routing requests; nodes can redirect network traffic to eavesdrop on conversations; and players in file-sharing systems can hide whether they have chosen to share with others, so some may ‘free-ride’ rather than to help sustain the system. In such hidden-action attacks, some nodes can hide malicious or antisocial behavior from others. Once the problem is seen in this light, designers can structure 2 interactions to minimize the capacity for hidden action or to make it easy to enforce suitable contracts. This helps explain the evolution of peer-to-peer systems over the past ten years. Early sys- tems, such as Eternity, Freenet, Chord, Pastry and OceanStore, provided a ‘single pot’, with widely and randomly distributed functionality. Later and more successful systems, like the popular Gnutella and Kazaa, allow peer nodes to serve content they have downloaded for their personal use, without burdening them with random files. The
  • 56. comparison between these archi- tectures originally focused on purely technical aspects: the cost of search, retrieval, communi- cations and storage. However, it turns out that incentives matter here too. First, a system structured as an association of clubs reduces the potential for hidden action; club members are more likely to be able to assess correctly which members are contributing. Second, clubs might have quite divergent interests. Early peer- to-peer systems were oriented towards censorship resistance rather than music file sharing, and when they put all content into one pot, quite different groups ended up protecting each others’ free speech – maybe Chinese dissidents, critics of Scientology, or aficionados of sado- masochistic imagery that is legal in California but banned in Tennessee. The question then is whether such groups might not fight harder to defend their own kind, rather than people involved in struggles in which they had no interest and where they might even be disposed to side with the censor. Danezis and Anderson introduced the Red-Blue model to analyze this [4]. Each node has a preference among resource types, while a censor who attacks the network will try to impose his own preferences. His action will meet the approval of some nodes but not others. The model proceeds as a multi-round game in which nodes set defense budgets which affect the probability that they will defeat the censor or be overwhelmed by him. Under reasonable assumptions, the authors show that diversity (with each node storing its preferred
  • 57. resource mix) performs better under attack than solidarity (where each node stores the same resource mix, which is not usually its preference). Diversity increases node utility which in turn makes nodes willing to allocate higher defense budgets. This model sheds light on the more general problem of the tradeoffs between diversity and solidarity when conflict threatens, and the related social policy issue of the extent to which the growing diversity of modern societies is in tension with the solidarity on which modern welfare systems are founded [5]. Security as an externality Information industries are characterized by many different types of externalities, where indi- viduals’ actions have side-effects on others. The software industry tends toward dominant firms thanks to the benefits of interoperability. Economists call this a network externality: a network, or a community of software users, is more valuable to its members the larger it is. This not only helps explain the rise and dominance of operating systems, from System/360 through Windows to Symbian, and of music platforms such as iTunes; it also helps explain the typical pattern of security flaws. Put simply, while a platform vendor is building market dominance, he has to 3 appeal to vendors of complementary products as well as to his direct customers; not only does this divert energy that might be spent on securing the platform,
  • 58. but security could get in the way by making life harder for the complementers. So platform vendors commonly ignore security in the beginning, as they are building their market position; later, once they have captured a lucrative market, they add excessive security in order to lock their customers in tightly [7]. Another instance of externalities can be found when we analyze security investment, as protection often depends on the efforts of many principals. Budgets generally depend on the manner in which individual investment translates to outcomes, but this in turn depends not just on the investor’s own decisions but also on the decisions of others. System reliability can depend on the sum of individual efforts, the minimum effort anyone makes, or the maximum effort any makes. Program correctness can depend on the weakest link (the most careless programmer introducing a vulnerability) while software validation and vulnerability testing might depend on the sum of everyone’s efforts. There can also be cases where the security depends on the best effort – the effort of an individual champion. A simple model by Varian provides interesting results when players choose their effort levels indepen- dently [8]. For the total-effort case, system reliability depends on the agent with the highest benefit-cost ratio, and all other agents free-ride. In the weakest- link case, the agent with the lowest benefit-cost ratio dominates. As more agents are added, systems become increasingly reliable in the total-effort case but increasingly unreliable in the
  • 59. weakest-link case. What are the implications? One is that software companies should hire more software testers and fewer (but more competent) programmers. Work such as this has inspired other researchers to consider interdependent risk. A recent influential model by Kunreuther and Heal notes that the security investments can be strategic complements: an individual taking protective measures creates positive externalities for others that in turn may discourage their own investment [9]. This result has implications far beyond information security. The decision by one apartment owner to install a sprinkler system that minimizes the risk of fire damage will affect the decisions of his neighbors; airlines may de- cide not to screen luggage transferred from other carriers who are believed to be careful with security; and people thinking of vaccinating their children against a contagious disease may choose to free-ride off the herd immunity instead. In each case, several widely varying Nash equilibrium outcomes are possible, from complete adoption to total refusal, depending on the levels of coordination between principals. Katz and Shapiro famously noted how network externalities affected the adoption of tech- nology [10]. Network effects can also influence the deployment of security technology. The benefit a protection technology provides may depend on the number of users that adopt it. The cost may be greater than the benefit until a minimum number of players adopt; so each decision-maker might wait for others to go first, and the
  • 60. technology never gets deployed. Re- cently, Ozment and Schechter have analyzed different approaches for overcoming bootstrapping problems faced by those who would deploy security technologies [11]. This challenge is particularly topical. A number of core Internet protocols, such as DNS and routing, are considered insecure. More secure protocols exist; the challenge is to get them 4 adopted. Two security protocols that have already been widely deployed, SSH and IPsec, both overcame the bootstrapping problem by providing significant intra-organizational benefits. In the successful cases, adoption could be done one organization at a time, rather than needing most organizations to move at once. The deployment of fax machines also occurred through this mechanism: companies initially bought fax machines to connect their own offices. Economics of vulnerabilities A vigorous debate has ensued between software vendors and security researchers over whether actively seeking and disclosing vulnerabilities is socially desirable. Resorla has argued that for software with many latent vulnerabilities (like Windows), removing an individual bug makes little difference to the likelihood of an attacker finding another one later [12]. Since exploits are often based on vulnerabilities inferred from patches or security
advisories, he argued against disclosure and frequent patching if vulnerabilities are correlated. Ozment investigated vulnerabilities identified for FreeBSD; he found that many vulnerabilities are indeed likely to be rediscovered and are therefore often correlated [13]. Arora, Telang and Xu produced a model where disclosure is necessary to incentivize vendors into fixing bugs in subsequent product releases [14]. Arora, Krishnan, Nandkumar, Telang and Yang present quantitative analysis to complement the above model; they found that with public disclosure vendors respond more quickly than with private disclosure, the number of attacks increases, but the number of reported vulnerabilities declines over time [15].

This discussion begs a more fundamental question: why do so many vulnerabilities exist in the first place? Surely, if companies desire secure products then secure software will dominate the marketplace? As we know from experience, this is not the case: most commercial software contains design and implementation flaws that could easily have been prevented. Although vendors are capable of creating more secure software, the economics of the software industry provide them with little incentive to do so [7]. In many markets, the attitude of 'ship it Tuesday and get it right by version 3' is perfectly rational behavior. Consumers generally reward vendors for adding features, for being first to market, or for being dominant in an existing market – and especially so in platform markets with network externalities. These motivations clash with the task of writing more secure software, which requires time-consuming testing and a focus on simplicity.

Another aspect of vendors' lack of motivation is readily explained by Anderson: the software market is a 'market for lemons.' In a Nobel prize-winning work, economist George Akerlof employed the used car market as a metaphor for a market with asymmetric information [16]. His paper imagines a town in which 50 good used cars (worth $2000) are for sale, along with 50 'lemons' (worth $1000 each). The sellers know the difference but the buyers do not. What will be the market-clearing price? One might initially think $1500, but at that price no-one with a good car will offer it for sale; so the market price will quickly end up near $1000. Because buyers are unwilling to pay a premium for quality they cannot measure, only
  • 62. These motivations clash with the task of writing more secure software, which requires time- consuming testing and a focus on simplicity. Another aspect of vendors’ lack of motivation is readily explained by Anderson: the soft- ware market is a ‘market for lemons.’ In a Nobel prize-winning work, economist George Akerlof employed the used car market as a metaphor for a market with asymmetric informa- tion [16]. His paper imagines a town in which 50 good used cars (worth $2000) are for sale, along with 50 ‘lemons’ (worth $1000) each). The sellers know the difference but the buyers do not. What will be the market-clearing price? One might initially think $1500, but at that price no-one with a good car will offer it for sale; so the market price will quickly end up near $1000. Because buyers are unwilling to pay a premium for quality they cannot measure, only 5 low-quality used vehicles are available for sale. The software market suffers from the same information asymmetry. Vendors may make claims about the security of their products, but buyers have no reason to trust them. In many cases, even the vendor does not know how secure its software is. So buyers have no reason to pay more for more secure software, and vendors are disinclined to invest in protection. How