DeterLab provides the capability to conduct risk evaluations of cyber-physical systems where the controllable variables range from IP-level dynamics to the introduction of malicious entities such as DDoS attacks. I recently co-authored an article in IEEE Network magazine that discusses how the cyber and physical aspects of such systems can be integrated to provide a CPS risk assessment environment.
In our article, titled "In Quest of Benchmarking Security Risks to Cyber-Physical Systems," we present a generic yet practical framework for assessing security risks to cyber-physical systems. Our framework can be used to benchmark security risks when information is less than perfect and interdependencies of physical and computational components may result in correlated failures. We focus on the risks that arise from interdependent reliability failures (faults) and security failures (attacks).
We advocate that a sound assessment of these risks requires explicit modeling of the effects of both technology-based defenses and institutions necessary for supporting them. Our game-theoretic approach to estimate security risks allows designing defenses that consider fault-tolerant control along with institutional structures.
Alefiya Hussain, University of Southern California
For information on DeterLab, visit: http://www.deter-project.org/deterlab-cyber-security-science-facility
[Figure 1. A layered architecture for management of CPS. Layer 3: information network (Internet), supporting diagnosis, response, and reconfiguration; Layer 2: control network, supporting detection and regulation; Layer 1: sensor-actuator network. The layers sit atop the physical infrastructures (electric power, buildings, water and gas, transportation), are subject to attacks and faults as well as defenses, and together support reliability and security risk management.]

respectively. In Table 1, we present examples of security attacks on the regulatory and supervisory control layers. Attacks at the management level are similar to attacks on computer networks. We refer the reader to [1, 2] for specific discussions of security attacks on smart grid infrastructures.

Classification of Correlated Failures
The danger of correlated failures becomes especially profound in CPSs due to the tight coupling of typically continuous physical dynamics and discrete dynamics of embedded computing processes. Correlated failures originate from one or more of the following events:
• Simultaneous attacks: Targeted cyber attacks (e.g., failures due to Stuxnet); non-targeted cyber attacks (e.g., failures due to the Slammer worm, distributed DoS attacks [3], congestion in shared networks); coordinated physical attacks (e.g., failures caused by terrorists)
• Simultaneous faults: Common-mode failures (e.g., failure of multiple ICT components in an identical manner [4], programming errors); random failures (e.g., natural events such as earthquakes and tropical cyclones, and operator errors such as an incorrect firmware upgrade)
• Cascading failures: Failure of a fraction of nodes (components) in one CPS subnetwork can lead to progressive escalation of failures in other subnetworks (e.g., power network blackouts affecting communication networks, and vice versa) [5].
The above classification is neither fully disjoint nor exhaustive. Still, we envision that it will be useful for CPS risk assessment. We term correlated failures caused by simultaneous attacks security failures, and those caused by simultaneous faults reliability failures. Due to the tight cyber-physical interactions, it is extremely difficult (and often prohibitively time-consuming) to isolate the cause of any specific failure using the diagnostic information, which, in general, is imperfect and incomplete. Thus, reliability and security failures in CPSs are inherently intertwined. We believe that the quest to find a mutually exclusive and jointly exhaustive partition of failure events must be abandoned. Instead, the research emphasis should shift to the analysis of interdependent reliability and security failures, and risk assessment.

Information and CPS Risks

The Interplay of Technological Defenses and Institutions
There are two types of technological means to reduce CPS risks: ICT security tools and control-theoretic tools. The ICT security tools include authentication and access control mechanisms, network intrusion detection systems, patch management, and security certification. In practice, the effectiveness of these security tools is limited by CPS reliability and cost considerations. For example, the frequency of security patch updates is limited by the real-time constraints on the availability of CPS data; common criteria certification is limited by the resources for CPS security; and so on. The control-theoretic tools include model-based attack/fault detection and isolation, robust control strategies that maintain closed-loop stability and performance guarantees under a class of DoS/deception attacks, and reconfigurable (switching) control strategies to limit the effect of correlated failures. Recently, several organizations (e.g., NIST, NERC, DHS) have proposed security standards and recommendations that combine the ICT-specific security defenses with control-theoretic tools.
While technology-based defenses for CPS are the main channel to improve their survivability against correlated failures, the mere existence of these defenses is not sufficient. It is well established that the lack of private parties' incentives for security improvements is a severe impediment to achieving socially desirable improvements of CPS security [6]. Indeed, large-scale critical infrastructures are typically managed by profit-driven private entities. Proper implementation of technological defenses and resilient operation requires compliance of the relevant entities. Below we highlight the informational deficiencies that negatively affect the incentives for security.

Informational Deficiencies
Due to the prohibitively high costs of information acquisition, it is often too costly to determine the following:
• Which hardware malfunctions and software bugs have caused a system failure
• Whether the system failure was caused by a reliability failure or a security failure, or both
In many cases, this information varies significantly across different entities (players), such as CPS operators, SCADA and ICT vendors, network service providers, users, and local/federal regulatory agencies (or government). Information deficiencies arise from the conflicting interests of individual players whose choices affect the CPS risks. One may say that interdependent failures cause externalities that result in misaligned player incentives (i.e., the individually optimal CPS security defenses diverge from the socially optimal ones). Moreover, in environments with incomplete and also asymmetric (and private) information, the societal costs of a correlated CPS failure typically exceed the losses of the individual players whose products and services affect CPS operations, and on whose actions the CPS risks depend. Specifically, interdependencies between security and reliability failures in CPS are likely to cause negative externalities. In such environments, the individual players tend to underinvest in security relative to a socially optimal benchmark. This requires the design of institutional means to realign the individual players' incentives to make adequate investments in security. Examples of institutional means include regulations that require players to certify that they possess certain security capabilities, and legal rules which mandate that players share information about security incidents with government agencies and/or the public through established channels.
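The cost of ignoring this intertwining can be made concrete with a small Monte Carlo sketch. Everything below is a hypothetical illustration rather than a model from the article: the joint behavior of the reliability and security levels is generated by a shared latent shock, and a made-up loss function grows as either level degrades. Comparing the expected loss under the correlated sampling with the same marginals paired up independently shows how a "disjoint causes" analysis understates the risk.

```python
import random
import statistics

def sample_levels(rho, n, seed=0):
    """Draw n pairs of (reliability, security) levels in [0, 1].

    Both levels load on a common latent shock z with weight rho, a crude
    stand-in for correlated failure causes (e.g., one worm degrading both
    the reliability and the security posture of a component). This is an
    illustrative model, not the article's (unknown) joint distribution.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z = rng.random()
        x_r = rho * z + (1 - rho) * rng.random()
        x_s = rho * z + (1 - rho) * rng.random()
        pairs.append((x_r, x_s))
    return pairs

def loss(x_r, x_s):
    """Hypothetical component loss: grows as either level degrades from 1."""
    return 100.0 * (1.0 - x_r) * (1.0 - x_s)

# Expected loss with the dependence intact.
pairs = sample_levels(rho=0.9, n=50_000)
correlated = statistics.fmean(loss(x_r, x_s) for x_r, x_s in pairs)

# Naive estimate: identical marginals, but the reliability and security
# levels are re-paired independently, as a disjoint-partition view would.
r_only = [x_r for x_r, _ in sample_levels(rho=0.9, n=50_000, seed=1)]
s_only = [x_s for _, x_s in sample_levels(rho=0.9, n=50_000, seed=2)]
naive = statistics.fmean(loss(x_r, x_s) for x_r, x_s in zip(r_only, s_only))

print(f"expected loss, correlated levels:    {correlated:.1f}")
print(f"expected loss, independence assumed: {naive:.1f}")
```

The gap between the two estimates is the covariance contribution of the shared shock, which the independence assumption discards entirely.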
20 IEEE Network • January/February 2013
Table 1. Cyber-attacks to CPS control layers.

                      Regulatory control          Supervisory control
  Deception attacks   Spoofing, replay;           Set-point change;
                      measurement substitution    controller substitution
  DoS attacks         Physical jamming;           Network flooding;
                      increase in latency         operational disruption

Clearly, these individual players cannot completely eliminate the risk of CPS failures even in the presence of advanced technological defenses and institutional measures, which aim to reduce (or even eliminate) incentive misalignment between individual and socially optimal security choices. For example, consider a benchmark case when security defenses are optimally chosen by the social planner for a given technological and institutional environment. There still remains a residual risk driven by fundamental physical limits. Indeed, when security defenses are chosen by individual players, the risk is only higher. Thus, non-negligible (public) residual risks are characteristic for CPSs that are subjected to correlated failures.
So far, the occurrence of extreme correlated failures has been statistically rare. However, with the emergence of organized cyber-crime groups capable of conducting intrusions into NCS/SCADA systems, the risks of such rare failure events cannot be ignored. Unsurprisingly, cyber-warfare is projected to become the future of armed conflict, and managing CPS risks must be at the core of any proactive defense program.

Benchmarking CPS Risks
Due to the aforementioned challenges, benchmarking CPS risks is a hard problem, and several questions remain unanswered [7–9]. Our goal in this article is twofold:
• We suggest a game-theoretic framework that assesses security risks by quantifying the misalignment between individually and socially optimal security investment decisions when the CPS comprises interdependent NCSs.
• We advocate that better information about these risks is a prerequisite to improvement of CPS security via a combination of more sophisticated technology-based defenses and the advancement of their supporting institutions.
Improved assessment of the CPS risks will lead to several beneficial developments, such as improved risk management at both the individual and societal levels. Thus, a standardized framework should be established that can assess and compare different technological and institutional means for risk management. At the very least, better knowledge of CPS risks will permit the players to make more informed (and therefore better and cheaper) choices of security defenses, thus improving the societal welfare.

Framework to Benchmark CPS Risks
We now present a risk assessment framework from the perspective of CPS operators. Our setup can readily be adapted to assess risks from the perspective of other players.

CPS with a Centralized Control System
Consider a CPS with m independent components managed by a single operator (i.e., a centralized control system). For the ith component, let Ω_i denote the set of all hardware flaws, software bugs, and vulnerability points that can be compromised during any reliability and/or security failure event. The failure events form a collection of subsets of Ω_i, which we denote by F_i. Let the random variables X_i^R : Ω_i → R and X_i^S : Ω_i → R represent the reliability and security levels of the ith component, respectively, with joint (cumulative) distribution function

    F_{X_i^R, X_i^S}(x_R, x_S) = P_i{ω ∈ Ω_i | X_i^R(ω) ≤ x_R, X_i^S(ω) ≤ x_S},

where the measure P_i assigns probabilities to failure events. Notice that the reliability level X_i^R and security level X_i^S are defined on the same measure space (Ω_i, F_i), and they are not mutually independent, that is,

    F_{X_i^R, X_i^S}(x_R, x_S) ≠ F_{X_i^R}(x_R) · F_{X_i^S}(x_S).

Unfortunately, the CPS operator does not have perfect knowledge of these distributions. Reasonable estimates of F_{X_i^R}(x_R) may be obtained from historical failure data. However, estimating the joint distribution F_{X_i^R, X_i^S}(x_R, x_S) is difficult as attackers continue to find new ways to compromise security vulnerabilities.
In general, the random vector (X_i^R, X_i^S) is influenced by:
• The action set of the CPS operator A = U ∪ V, where U := {U_1, …, U_m} and V := {V_1, …, V_m} denote the sets of control and security choices, respectively
• The action set of other players B, such as vendors, attackers, service providers, users, and regulatory agencies
• The environment E, including the technological, organizational, and institutional factors
For given reliability and security levels x_i^R, x_i^S, let the function L_i(x_i^R, x_i^S) denote the losses faced by the CPS operator when the ith component fails (e.g., the cost of service disruptions, maintenance/recovery costs, and penalties for users' suffering). Then, for a CPS with m independent components, the aggregate risk can be expressed as¹

    R = Σ_{i=1}^m R_i(L_i(X_i^R, X_i^S)),    (1)

¹ The assumption of independent components can easily be relaxed to include parallel, series, and interlinked components.

where the functional R_i assigns a numerical value to each random variable L_i with distribution function F_{L_i}. Henceforth, we use the expected (mean) value of loss, μ(L_i) = E[L_i(X_i^R, X_i^S)], as the metric R_i, but caution that it is inadequate to capture the risk of extreme failure events.² From Eq. 1, we observe that the aggregate risk is also influenced by the actions A, B, and environment E. To emphasize this dependence, we will use R(A, B, E) to denote the aggregate CPS risk.

² Other commonly used choices of the risk functional R_i include the mean-variance model μ(L_i) + λ_i σ(L_i), where λ_i > 0 and σ(L_i) is the standard deviation of L_i; and the value-at-risk model VaR_{α_i}(L_i) = min{z | F_{L_i}(z) ≥ α_i}, which is the α_i-quantile of the distribution of L_i.

For a given environment E and fixed choices B of other players, the CPS operator's objective is to choose security actions V and control actions U to minimize the total expected cost J(U, V) of operating the system:

    J(U, V) = J_I(V) + J_II(U, V),    (2)

where J_I(V) := Σ_{i=1}^m ℓ_i(V_i) denotes the operator's cost of employing security choices V, and J_II(U, V) is the expected
[Figure 2. Individual optima (Nash equilibria) and social optima. Panels (a) and (b) partition the plane of security investment costs (ℓ_1, ℓ_2) into regions labeled by the individually optimal and socially optimal profiles {S, S}, {S, N}, {N, S}, and {N, N}, with region boundaries at the thresholds ℓ̲_i, ℓ̄_i, ℓ̲_i^SO, and ℓ̄_i^SO: (a) the case of increasing incentives; (b) the case of decreasing incentives.]
operational cost. From Eq. 2, when the CPS operator's security choices are V, s/he chooses control actions U = μ*(V) to minimize the total expected cost, where μ*(V) is an optimal control policy. Let the CPS operator's minimum cost for the case when the security choices are V and {∅} (i.e., no security defenses) be defined as J̄(V) := J(μ*(V), V) and J^0 := J(μ*({∅}), {∅}), respectively. To evaluate the effectiveness of V, we use the difference of the corresponding expected costs:

    Δ(V) := J^0 − J̄(V).    (3)

Thus, Δ(V) denotes the CPS operator's gain from employing security choices V. It can be viewed as the reduction of the operator's risk when s/he chooses V over no defenses, that is,

    R(A^0, B, E) − R(A(V), B, E) = Δ(V),    (4)

where A(V) and A^0 denote the action sets corresponding to security choices V and {∅}, respectively. The problem of choosing the optimal security choices V* can now be viewed as an optimization problem over the set of security defenses:

    max_V Δ(V),  subject to the constraint  J_I(V) ≤ K,

where K is the available budget for security investments. The residual risk after the implementation of the optimal security choices V* can be obtained as R(A^0, B, E) − Δ(V*). Risks from failure events (those resulting from security attacks, random faults, cascading failures, etc.) can thus be estimated and compared, and the best security defenses V corresponding to anticipated failure types can be selected by the CPS operator.
The above analysis assumes that the choices B of other players do not change in response to the CPS operator's choices A. When players are strategic, the optimal security choices must be computed as best responses to the other players' (Nash) strategies. Finally, government or regulatory agencies can also influence the environment E.

CPS with Interdependent Networked Control Systems
Let us focus on the issue of misalignment between individually and socially optimal actions in the case when a CPS comprises multiple NCSs communicating over a shared network. In contrast to the above, we now assume that each NCS is managed by a separate operator. The NCS operators choose their security levels to safeguard against network-induced risks (e.g., due to distributed DoS attacks). Each NCS is modeled by a discrete-time stochastic linear system, which is controlled over a lossy communication network:

    x_{t+1}^i = A x_t^i + ν_t^i B u_t^i + w_t^i,
    y_t^i = γ_t^i C x_t^i + v_t^i,        t ∈ N_0, i ∈ M,    (5)

where M denotes the number of players, x_t^i ∈ R^d the state, u_t^i ∈ R^m the input, w_t^i ∈ R^d the process noise, y_t^i ∈ R^p the measured output, and v_t^i ∈ R^p the measurement noise, for player P_i at the tth time step. Let the standard assumptions of linear quadratic Gaussian (LQG) theory hold. The random variables γ_t^i (resp. ν_t^i) are i.i.d. Bernoulli with failure probability γ̃_i (resp. ν̃_i), and model a lossy sensor (resp. control) channel.
We formulate the problem of security choices of the individual players as a non-cooperative two-stage game [10]. In the first stage, each P_i chooses to make a security investment (S) or not (N). The set of player security choices is denoted V := {V_1, …, V_m}, where V_i = S if P_i invests in security and N if not. Once the player security choices are made, they are irreversible and observable by all the players. In the second stage, each P_i chooses a control input sequence U_i := {u_t^i, t ∈ N_0} to maintain optimal closed-loop performance. The objective of each P_i is to minimize his/her total cost:

    J^i(V, U) = J_I^i(V) + J_II^i(V, U),    i ∈ M,    (6)

where the first-stage cost is denoted J_I^i(V) := (1 − I_i)ℓ_i, and J_II^i(V, U) denotes the second-stage cost (the average LQG cost). Here ℓ_i > 0 is the security investment incurred by P_i only if
s/he has chosen S, and the indicator function I_i = 0 when V_i = S, and I_i = 1 otherwise.
In order to reflect security interdependencies, in our model the failure probabilities γ̃_i and ν̃_i depend on P_i's own security choice V_i and on the other players' security choices {V_j, j ≠ i}. Following [10], we assume

    P[γ_t^i = 0 | V] = γ̃_i(V) := I_i γ̄ + (1 − I_i γ̄) α(h_{−i}).    (7)

In Eq. 7, the first term reflects the probability of a direct failure, and the second term reflects the probability of an indirect failure. The interdependence term α(h_{−i}) increases as the number of players, excluding P_i, who have chosen N increases, where h_{−i} := Σ_{j≠i} I_j; similarly for ν_t^i. The social planner's objective is to minimize the aggregate cost:

    J^SO(V, U) = Σ_{i=1}^m J^i(V, U).    (8)

Consider a two-player game, where the interdependent failure probabilities are given by Eq. 7. To derive the optimal player actions (security choices V_i), we distinguish the following two cases: increasing incentives and decreasing incentives. For the case of increasing incentives, if a player secures, the other player's gain from securing increases, that is, J_II^*({N, N}) − J_II^*({S, N}) ≤ J_II^*({N, S}) − J_II^*({S, S}), where J_II^*(·) denotes the optimal second-stage cost. Similarly, for the case of decreasing incentives, a player's gain from investing in security decreases when the other player invests in security, that is, J_II^*({N, N}) − J_II^*({S, N}) ≥ J_II^*({N, S}) − J_II^*({S, S}).
Figure 2a (resp. Fig. 2b) characterizes the Nash equilibria (individually optimal choices) and socially optimal choices of the game for the case of increasing (resp. decreasing) incentives, where we assume ℓ_1 < ℓ̄_1 (resp. ℓ_2 > ℓ̄_2). For i ∈ {1, 2}, the thresholds ℓ̲_i, ℓ̄_i, ℓ̲_i^SO, and ℓ̄_i^SO are given in [10].
Consider the case of increasing incentives (Fig. 2a). If ℓ_i < ℓ̲_1 (resp. ℓ_i > ℓ̄_1), the symmetric Nash equilibrium {S, S} (resp. {N, N}) is unique. Thus, ℓ̲_1 (resp. ℓ̄_1) is the cutoff cost below (resp. above) which both players invest (resp. neither player invests) in security. If ℓ̲_1 ≤ ℓ_i ≤ ℓ̄_1, both {S, S} and {N, N} are individually optimal. However, if ℓ_1 < ℓ̄_1 and ℓ_2 > ℓ̄_1 (resp. ℓ_1 > ℓ̄_1 and ℓ_2 < ℓ̄_1), the asymmetric strategy {S, N} (resp. {N, S}) is an equilibrium. Now, if ℓ_i < ℓ̲_1^SO (resp. ℓ_i > ℓ̄_1^SO), the socially optimal choices are {S, S} (resp. {N, N}). If ℓ_1 ≤ ℓ̄_1^SO ≤ ℓ_2 (resp. ℓ_2 ≤ ℓ̄_1^SO ≤ ℓ_1), the socially optimal choices are {S, N} (resp. {N, S}). Similarly, we can describe the individually and socially optimal choices for the case of decreasing incentives (Fig. 2b).
For both cases, we observe that the presence of interdependent security causes a negative externality. The individual players are subject to network-induced risks and tend to under-invest in security relative to the social optimum. From our results, for a wide parameter range, regulatory impositions to incentivize higher security investments are desirable (discussed later). The effectiveness of such impositions on the respective risks faced by the individual players (NCS operators) can be evaluated in a manner similar to Eqs. 3–4.

Challenges in CPS Risk Assessment

Technological Challenges
A significant challenge for the practical implementation of our CPS risk assessment framework is to develop data-driven, stochastic CPS models, which account for the dynamics of CPS with interdependent reliability and security failures. Each of these singular/basic models should account for CPS dynamics and focus on a specific failure scenario. The basic models can be composed into a composite model to represent various correlated failure scenarios, including simultaneous attacks, common-mode failures, and cascading failures. By using quantitative techniques from statistical estimation, model-based diagnosis, stochastic simulation, and predictive control, we can automatically generate new failure scenarios from real-time sensor-control data. These techniques enable the synthesis of operational security strategies and provide estimates of residual risks in environments with highly correlated failures and less than perfect information. Thus, theoretical guarantees and computational tools are needed for the following:
• Compositions of stochastic fault and attack models
• Inference and learning of new failure scenarios
• Fast and accurate simulation of CPS dynamics
• Detection and identification of failure events
• Operational ICT- and control-based strategies
The DETERLab testbed [11] provides the capability to conduct experiments with a diverse set of CPS failure scenarios, where the controllable variables range from IP-level dynamics to the introduction of malicious entities such as distributed DoS attacks. The cyber-physical aspects of large-scale infrastructures can be integrated on DETERLab to provide an experimental environment for assessing CPS risks. Specifically, DETERLab provides a programmable network emulation environment and a suite of tools that allow a user to describe the experimentation "apparatus," and monitor and control the experimentation "procedure." Multiple experiments can be executed at the same time by different users if computational resources are available.
The main challenge for CPS experimentation on the DETERLab testbed is to compose physical system dynamics (real/simulated/emulated) with communication system emulation. The experimentation "apparatus" should model the communication network, the physical network, and their dynamic interactions. The experimentation "procedure" should describe the sensing and actuation policies that are the best responses to strategic actions of other players.

Institutional Challenges
The design of institutional means is a chicken-and-egg problem. On one hand, institutional means such as the imposition of legal liability, mandatory incident disclosure, and insurance instruments improve the information about CPS risks. On the other hand, substantial knowledge of CPS risks is required for their design and successful deployment.
Given the limitations of currently available risk assessment tools, the CPS operators find it hard (and, as a result, costly) to manage their risks. This problem is especially acute for risk management via financial means, such as diversification, reallocation to other parties, and insurance. For example, insurance instruments for CPS risk management are meager: the premiums of cyber-security contracts are not conditioned on the security parameters. It would be no exaggeration to say that so far, the cyber-insurance market has failed to develop. For example, the volume of underwritten contracts is essentially unchanged in a decade, despite multiple predictions of its growth by independent researchers and industry analysts. In fact, even the existing superficial "market" is largely sustained by non-market (regulatory) forces.
Indeed, the leading reason for CPS operators to acquire insurance policies at the prevailing exuberant prices is their need to comply with federal requirements for government contractors. Citizens (i.e., federal and state taxpayers) are the final bearers of these costs. We expect that this situation will remain "as is" unless information on CPS risks drastically improves.
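The two-stage investment game described above is easy to prototype. The sketch below is a toy instance: the failure probability follows the functional form of Eq. 7, but all numbers (γ̄, α(·), the loss, and the costs ℓ_i) are invented for illustration, and the second-stage cost is proxied by the expected loss from failure rather than an actual LQG cost. It finds the pure Nash equilibria by checking unilateral deviations and compares them with the profile minimizing the aggregate cost of Eq. 8.

```python
from itertools import product

# All numbers below are invented for illustration; they are NOT from the article.
GAMMA_BAR = 0.3             # direct failure probability when a player does not invest
ALPHA = {0: 0.0, 1: 0.15}   # indirect failure probability vs. number of OTHER non-investors
LOSS = 10.0                 # loss a player suffers when his/her NCS fails
ELL = [3.5, 3.5]            # security investment costs ell_i

def fail_prob(i, profile):
    """gamma_tilde_i(V) = I_i*gamma_bar + (1 - I_i*gamma_bar)*alpha(h_-i), as in Eq. 7."""
    I = [0 if v == "S" else 1 for v in profile]   # indicator: 1 iff player j chose N
    direct = I[i] * GAMMA_BAR
    return direct + (1.0 - direct) * ALPHA[sum(I) - I[i]]

def cost(i, profile):
    """First-stage investment cost plus a proxy for the expected second-stage cost."""
    invest = ELL[i] if profile[i] == "S" else 0.0
    return invest + LOSS * fail_prob(i, profile)

def deviate(profile, i):
    """Flip player i's choice, keeping the other players fixed."""
    flip = "N" if profile[i] == "S" else "S"
    return tuple(flip if j == i else v for j, v in enumerate(profile))

profiles = list(product("SN", repeat=2))
# Pure Nash equilibria: no player gains from a unilateral deviation.
nash = [p for p in profiles
        if all(cost(i, p) <= cost(i, deviate(p, i)) for i in range(2))]
# Social optimum: the profile minimizing the aggregate cost (Eq. 8).
social = min(profiles, key=lambda p: sum(cost(i, p) for i in range(2)))

print("Nash equilibria:", nash)
print("Social optimum:", social)
```

With these numbers the unique equilibrium is {N, N} while the social optimum is {S, S}: part of each player's investment benefit accrues to the other player through the lower indirect failure probability, so neither invests, reproducing the underinvestment externality discussed in the text.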
Another related problem is that of suboptimal provider incentives (as seen in Fig. 2). A CPS operator's estimates of his/her own risk tend to be understated (relative to the societal ones), even when the failure probabilities are known to him/her. In such cases, the gap between individually and socially optimal incentives could be reduced via adjustments of legal and regulatory institutions. For example, it would be socially desirable to introduce limited liability (i.e., a due care standard) for individual entities whose products and services are employed in CPSs. This would improve providers' incentives to invest in their products' security and reliability. However, due to information incompleteness, there is currently no liability regime for providers of CPS components and services, for either security- or reliability-driven failures. Indeed, any liability regime is based on knowing (the estimates of) failure probabilities and the induced losses. This again requires benchmarking of CPS risks.

Concluding Remarks
Benchmarking of CPS risks is a hard problem. It is harder than the traditional risk assessment problems for infrastructure reliability or ICT security, which so far have been considered in isolation. Estimation of CPS risks by naively aggregating risks due to reliability and security failures does not capture the externalities, and can lead to grossly suboptimal responses to CPS risks. Such misspecified CPS risks lead to biased security choices and reduce the effectiveness of security defenses.
Modern, and especially upcoming, CPSs are subjected to complex risks, of which very little is known despite the realization of their significance. In this article we are calling on our colleagues to embark on the hard task of assessing interdependent CPS risks. The effectiveness of security defenses can be increased only when our knowledge of CPS risks improves.

Acknowledgments
We are grateful to the anonymous reviewers for their feedback, and thank Professors S. Shankar Sastry (UC Berkeley) and Joseph M. Sussman (MIT) for useful discussions.

References
[1] Y. Mo et al., "Cyber-Physical Security of a Smart Grid Infrastructure," Proc. IEEE, vol. 100, no. 1, Jan. 2012, pp. 195–209.
[2] S. Sridhar, A. Hahn, and M. Govindarasu, "Cyber-Physical System Security for the Electric Power Grid," Proc. IEEE, vol. 100, no. 1, Jan. 2012, pp. 210–24.
[3] A. Hussain, J. Heidemann, and C. Papadopoulos, "A Framework for Classifying Denial of Service Attacks," Proc. 2003 ACM Conf. Applications, Technologies, Architectures, and Protocols for Computer Communications, 2003, pp. 99–110.
[4] S. Amin et al., "Cyber Security of Water SCADA Systems — Part II: Attack Detection Using Enhanced Hydrodynamic Models," IEEE Trans. Control Systems Technology, 2012.
[5] S. Buldyrev et al., "Catastrophic Cascade of Failures in Interdependent Networks," Nature, vol. 464, no. 7291, Apr. 2010, pp. 1025–28.
[6] C. Hall et al., "Resilience of the Internet Interconnection Ecosystem," Proc. 10th Wksp. Economics of Information Security, June 2011.
[7] T. Alpcan and T. Basar, Network Security: A Decision and Game Theoretic Approach, Cambridge Univ. Press, 2011.
[8] P. Grossi and H. Kunreuther, Catastrophe Modeling: A New Approach to Managing Risk, Springer, 2005, vol. 25.
[9] Y. Y. Haimes, Risk Modeling, Assessment, and Management, 3rd ed., Wiley, 2009.
[10] S. Amin, G. A. Schwartz, and S. S. Sastry, "On the Interdependence of Reliability and Security in Networked Control Systems," Proc. IEEE CDC-ECC, 2011, pp. 4078–83.
[11] T. Benzel, "The Science of Cyber Security Experimentation: The DETER Project," Proc. 27th ACM Annual Computer Security Applications Conf., 2011, pp. 137–48.

Biographies
SAURABH AMIN (amins@mit.edu) is an assistant professor in the Department of Civil and Environmental Engineering, Massachusetts Institute of Technology (MIT). His research focuses on the design and implementation of high-confidence network control algorithms for critical infrastructures, including transportation, water, and energy distribution systems. He received his B.Tech. in civil engineering from the Indian Institute of Technology Roorkee in 2002, his M.S. in transportation engineering from the University of Texas at Austin in 2004, and his Ph.D. in systems engineering from the University of California at Berkeley in 2011.

GALINA A. SCHWARTZ is a research economist in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Her primary expertise is game theory and microeconomics. She has published on the subjects of network neutrality, cyber-risk management and modeling of cyber-insurance markets, and security and privacy of cyber-physical systems. In her earlier research, she applied contract theory to study the interplay between information, transaction costs, institutions, and regulations. She has been on the faculty of the Ross School of Business at the University of Michigan, Ann Arbor, and has taught in the economics departments at the University of California, Davis and Berkeley. She received her M.S. in mathematical physics from the Moscow Institute of Engineering Physics, Russia, and her Ph.D. in economics from Princeton University in 2000.

ALEFIYA HUSSAIN is a computer scientist at the University of Southern California's Information Sciences Institute (USC/ISI). Her research interests include statistical signal processing, protocol design, cyber security, and network measurement systems. She received her B.E. in computer engineering from the University of Pune, India, in 1997 and her Ph.D. in computer science from the University of Southern California in 2005. Prior to joining USC/ISI, she was a senior principal scientist at Sparta Inc.