Gaming security by obscurity

                                                                                                               Dusko Pavlovic
                                                                              Royal Holloway, University of London, and University of Twente
                                                                                                     dusko.pavlovic@rhul.ac.uk
arXiv:1109.5542v1 [cs.CR] 26 Sep 2011




ABSTRACT

Shannon [35] sought security against the attacker with unlimited computational powers: if an information source conveys some information, then Shannon's attacker will surely extract that information. Diffie and Hellman [12] refined Shannon's attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography.

Shannon also sought security against the attacker with unlimited logical and observational powers, expressed through the maxim that "the enemy knows the system". This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs [22], is that "there is no security by obscurity", meaning that the algorithms cannot be kept obscured from the attacker, and that security should only rely upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that if there is an algorithm that can break the system, then the attacker will surely find that algorithm. The attacker is not viewed as an omnipotent computer any more, but he is still construed as an omnipotent programmer. The ongoing hackers' successes seem to justify this view.

So the Diffie-Hellman step from unlimited to limited computational powers has not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed? The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods, and adapting to them?

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
NSPW'11, September 12-15, 2011, Marin County, California, USA.
Copyright 2011 ACM xxx-x-xxxx-xxxx-x/xx/xx ...$5.00.

1. INTRODUCTION

New paradigms change the world. In computer science, they often sneak behind researchers' backs: the grand visions often frazzle into minor ripples (like the fifth generation of programming languages), whereas some modest goals engender tidal waves with global repercussions (like moving the cursor by a device with wheels, or connecting remote computers). So it is not easy to conjure a new paradigm when you need it.

Perhaps the only readily available method to generate new paradigms at leisure is to dispute the obvious. Just in case, I question on this occasion not one, but two generally endorsed views:

  - the Kerckhoffs Principle, that there is no security by obscurity, and

  - the Fortification Principle, that the defender has to defend all attack vectors, whereas the attacker only needs to attack one.

To simplify things a little, I argue that these two principles are related. The Kerckhoffs Principle demands that a system should withstand the attackers' unhindered probing. In the modern security definitions, this is amplified to the requirement that the system should resist a family of attacks, irrespective of the details of their algorithms. The adaptive attackers are thus allowed to query the system, whereas the system is not allowed to query the attackers. The resulting information asymmetry makes security look like a game biased in favor of the attackers. The Fortification Principle is an expression of that asymmetry. In economics, information asymmetry has been recognized as a fundamental problem, worth the Nobel Prize in Economics for 2001 [39, 3, 37]. In security research, the problem does not seem to have been explicitly addressed, but there is, of course, no shortage of efforts to realize security by obscurity in practice, albeit without any discernible method. Although the practices of analyzing the attackers and hiding the systems are hardly waiting for anyone to invent a new paradigm, I will pursue the possibility that a new paradigm might be sneaking behind our backs again, like so many old paradigms did.

Outline of the paper

While I am on the subject of security paradigms, I decided to first spell out a general overview of the old ones. An attempt at this is in Sec. 2. It is surely incomplete, and perhaps wrongheaded, but it may help a little. It is difficult to communicate about the new without an agreement about
the old. Moreover, it will be interesting to hear not only whether my new paradigms are new, but also whether my old paradigms are old.

The new security paradigm arising from the slogan "Know your enemy" is discussed in Sec. 3. Of course, security engineers often know their enemies, so this is not much of a new paradigm in practice. But security researchers often require that systems should be secure against universal families of attackers, without knowing anything about who the enemy is at any particular moment. With respect to such static requirements, a game theoretic analysis of the dynamics of security can be viewed as an almost-new paradigm (with few previous owners). In Sec. 3.1 I point to the practical developments that lead up to this paradigm, and then in Sec. 3.2 I describe the game of attack vectors, which illustrates it. This is a very crude view of the security process as a game of incomplete information. I provide a simple pictorial analysis of the strategic interactions in this game, which turn out to be based on acquiring information about the opponent's type and behavior. A sketch of a formal model of security games of incomplete information, and of the game of attack vectors, is given in Appendix A.

A brand new security paradigm of "Applied security by obscurity" is described in Sec. 4. It is based on the idea of logical complexity of programs, which leads to one way programming, similarly to how computational complexity led to one way computations. If achieved, one way programming will be a powerful tool in security games.

A final attempt at a summary, and some comments about future research, and the pitfalls, are given in Sec. 5.

Related work

The two new paradigms offer two new tools for the security toolkit: games of incomplete information, and algorithmic information theory.

Game theoretic techniques have been used in applied security for a long time, since the need for strategic reasoning often arises in practice. A typical example from the early days is [11], where games of imperfect information were used. Perhaps the simplest more recent game based models are the attack-defense trees, which boil down to zero-sum extensive games [25]. Another application of games of imperfect information appeared, e.g., in a previous edition of this conference [28]. Conspicuously, games of incomplete information do not seem to have been used, which seems appropriate, since they analyze how players keep each other in obscurity. The coalgebraic presentation of games and response relations, presented in the Appendix, is closely related to the formalism used in [31].

The concept of logical complexity, proposed in Sec. 4, is based on the ideas of algorithmic information theory [24, 36] in general, and in particular on the idea of logical depth [10, 26, 6]. I propose to formalize logical complexity by lifting logical depth from the Gödel-Kleene indices to program specifications [19, 8, 9, 14, 34, 32]. The underlying idea that a Gödel-Kleene index of a program can be viewed as its "explanation" goes back to Kleene's idea of realizability [23] and to Solomonoff's formalization of inductive inference [36].

2. OLD SECURITY PARADIGMS

Security means many things to many people. For a software engineer, it often means that there are no buffer overflows or dangling pointers in the code. For a cryptographer, it means that any successful attack on the cypher can be reduced to an algorithm for computing discrete logarithms, or to integer factorization. For a diplomat, security means that the enemy cannot read the confidential messages. For a credit card operator, it means that the total costs of the fraudulent transactions and of the measures to prevent them are low, relative to the revenue. For a bee, security means that no intruder into the beehive will escape her sting...

Is it an accident that all these different ideas go under the same name? What do they really have in common? They are studied in different sciences, ranging from computer science to biology, by a wide variety of different methods. Would it be useful to study them together?

2.1 What is security?

If all avatars of security have one thing in common, it is surely the idea that there are enemies and potential attackers out there. All security concerns, from computation to politics and biology, come down to averting the adversarial processes in the environment that are poised to subvert the goals of the system. There are, for instance, many kinds of bugs in software, but only those that the hackers use are a security concern.

In all engineering disciplines, the system guarantees a functionality, provided that the environment satisfies some assumptions. This is the standard assume-guarantee format of engineering correctness statements. Such statements are useful when the environment is passive, so that the assumptions about it remain valid for a while. The essence of security engineering is that the environment actively seeks to invalidate the system's assumptions.

Security is thus an adversarial process. In all engineering disciplines, failures usually arise from engineering errors and noncompliance. In security, failures arise in spite of compliance with the best engineering practices of the moment. Failures are the first class citizens of security: every key has a lifetime, and in a sense, every system does too. For all major software systems, we normally expect security updates, which usually arise from attacks, and often inspire them.

2.2 Where did security come from?

The earliest examples of security technologies are found among the earliest documents of civilization. Fig. 1 shows security tokens with a tamper protection technology from almost 6000 years ago. Fig. 2 depicts the situation where this technology was probably used. Alice has a lamb and Bob has built a secure vault, perhaps with multiple security levels, spacious enough to store both Bob's and Alice's assets. For each of Alice's assets deposited in the vault, Bob issues a clay token, with an inscription identifying the asset. Alice's tokens are then encased into a bulla, a round, hollow "envelope" of clay, which is then baked to prevent tampering. When she wants to withdraw her deposits, Alice submits her bulla to Bob; he breaks it, extracts the tokens, and returns the goods. Alice can also give her bulla to Carol, who can also submit it to Bob to withdraw the goods, or pass it on to Dave. Bullæ can thus be traded, and they facilitate exchange economy. The tokens used in the bullæ evolved into the earliest forms of money, and the inscriptions on them led to the earliest numeral systems, as well as to the Sumerian cuneiform script, which was one of the earliest alphabets.
Figure 1: Tamper protection from 3700 BC

Figure 2: To withdraw her sheep from Bob's secure vault, Alice submits a tamper-proof token from Fig. 1.

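The bulla mechanism described above can be read as an early tamper-evident commitment scheme. The following toy model is my own illustration, not part of the paper: the class and its methods are hypothetical names, and a cryptographic hash stands in for the baked clay, so that the sealed envelope can be handed around but its contents are only verified by breaking it.

```python
import hashlib

class Bulla:
    """Toy model of a baked clay envelope: tamper-evident and transferable."""

    def __init__(self, tokens):
        self._tokens = list(tokens)  # hidden inside the clay
        # "baking": commit to the contents with a hash of the token list
        self.seal = hashlib.sha256("|".join(self._tokens).encode()).hexdigest()
        self.broken = False

    def transfer(self):
        """Hand the intact bulla to another party; the seal travels with it."""
        return self

    def break_open(self):
        """Break the envelope and check the tokens against the seal."""
        self.broken = True
        recomputed = hashlib.sha256("|".join(self._tokens).encode()).hexdigest()
        if recomputed != self.seal:
            raise ValueError("tampered bulla")
        return list(self._tokens)

# Bob issues a token for Alice's deposited lamb and seals it
bulla = Bulla(["lamb"])
# Alice trades the bulla to Carol, who redeems it with Bob
goods = bulla.transfer().break_open()
assert goods == ["lamb"] and bulla.broken
```

The analogy is loose: with real clay the seal is physically inseparable from the contents, whereas here an adversary who controls the whole object could re-bake it; the sketch only captures the protocol flow of issue, transfer, and tamper-checked redemption.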
Security thus predates literature, science, mathematics, and even money.

2.3 Where is security going?

Through history, security technologies evolved gradually, serving the purposes of war and peace, protecting public resources and private property. As computers pervaded all aspects of social life, security became interlaced with computation, and security engineering came to be closely related with computer science. The developments in the realm of security are nowadays inseparable from the developments in the realm of computation. The most notable such development is, of course, cyber space.

Paradigms of computation

In the beginning, engineers built computers, and wrote programs to control computations. The platform of computation was the computer, and it was used to execute algorithms and calculations, allowing people to discover, e.g., fractals, and to invent compilers, which allowed them to write and execute more algorithms and more calculations more efficiently. Then the operating system became the platform of computation, and software was developed on top of it. The era of personal computing and enterprise software broke out. And then the Internet happened, followed by cellular networks, and wireless networks, and ad hoc networks, and mixed networks. Cyber space emerged as the distance-free space of instant, costless communication. Nowadays software is developed to run in cyber space. The Web is, strictly speaking, just a software system, albeit a formidable one. A botnet is also a software system. As social space blends with cyber space, many social (business, collaborative) processes can be usefully construed as software systems that run on social networks as hardware. Many social and computational processes become inextricable. Table 1 summarizes the crude picture of the paradigm shifts which led to this remarkable situation.

But as every person got connected to a computer, and every computer to a network, and every network to a network of networks, computation became interlaced with communication, and ceased to be programmable. The functioning of the Web and of web applications is not determined by the code in the same sense as in a traditional software system: after all, web applications do include the human users as a part of their runtime. The fusion of social and computational processes in cyber-social space leads to a new type of information processing, where the purposeful program executions at the network nodes are supplemented by spontaneous data-driven evolution of network links. While the network emerges as the new computer, data and metadata become inseparable, and a new type of security problems arises.

Paradigms of security

In early computer systems, security tasks mainly concerned sharing of the computing resources. In computer networks, security goals expanded to include information protection. Both computer security and information security essentially depend on a clear distinction between the secure areas and the insecure areas, separated by a security perimeter. Security engineering caters for computer security and for information security by providing the tools to build the security perimeter. In cyber space, the secure areas are separated from the insecure areas by the "walls" of cryptography; and they are connected by the "gates" of cryptographic protocols.¹ But as networks of computers and devices spread through physical and social spaces, the distinctions between the secure and the insecure areas become blurred. And in such areas of cyber-social space, information processing does not yield to programming, and cannot be secured just by cryptography and protocols. What else is there?

¹ This is, of course, a blatant oversimplification, as are many other statements I make. In a sense, every statement is an oversimplification of reality, abstracting away the matters deemed irrelevant. The gentle reader is invited to squint whenever any of the details that I omit do seem relevant, and add them to the picture. The shape of a forest should not change when some trees are enhanced.

3. A SECOND-HAND BUT ALMOST-NEW SECURITY PARADIGM: KNOW YOUR ENEMY

3.1 Security beyond architecture

Let us take a closer look at the paradigm shift to postmodern cyber security in Table 2. It can be illustrated as the shift from Fig. 3 to Fig. 4. The fortification in Fig. 3 represents the view that security is in essence an architectural task. A fortress consists of walls and gates, separating the secure area within from the insecure area outside.
age            ancient times                 middle ages                   modern times

platform       computer                      operating system              network
applications   Quicksort, compilers          MS Word, Oracle               WWW, botnets
requirements   correctness, termination      liveness, safety              trust, privacy
tools          programming languages         specification languages       scripting languages

                    Table 1: Paradigm shifts in computation

age            middle ages                   modern times                  postmodern times

space          computer center               cyber space                   cyber-social space
assets         computing resources           information                   public and private resources
requirements   availability, authorization   integrity, confidentiality    trust, privacy
tools          locks, tokens, passwords      cryptography, protocols       mining and classification

                    Table 2: Paradigm shifts in security


boundary between these two areas is the security perime-            through social processes, like trust, privacy, reputation, in-
ter. The secure area may be further subdivided into the             fluence.
areas of higher security and the areas of lower security. In
cyber space, as we mentioned, the walls are realized using
crypto systems, whereas the gates are authentication pro-
tocols. But as every fortress owner knows, the walls and




                                                                                  Figure 4: Dynamic security


                                                                      In summary, besides the methods to keep the attackers
                                                                    out, security is also concerned with the methods to deal with
                Figure 3: Static security
                                                                    the attackers once they get in. Security researchers have
                                                                    traditionally devoted more attention to the former family of
                                                                    methods. Insider threats have attracted a lot of attention
the gates are not enough for security: you also need some           recently, but a coherent set of research methods is yet to
soldiers to defend it, and some weapons to arm the soldiers,        emerge.
and some craftsmen to build the weapons, and so on. More-             Interestingly, though, there is a sense in which security
over, you also need police and judges to maintain security          becomes an easier task when the attacker is in. Although
within the fortress. They take care for the dynamic aspects         unintuitive at the first sight, this idea becomes natural when
of security. These dynamic aspects arise from the fact that         security processes are viewed in a broad context of the infor-
sooner or later, the enemies will emerge inside the fortress:       mation flows surrounding them (and not only with respect
they will scale the walls at night (i.e. break the crypto), or      to the data designated to be secret or private). To view
sneak past the gatekeepers (break the protocols), or build          security processes in this broad context, it is convenient to
up trust and enter with honest intentions, and later defect         model them as games of incomplete information [4], where
to the enemy; or enter as moles, with the intention to strike       the players do not have enough information to predict the
later. In any case, security is not localized at the security       opponent’s behavior. For the moment, let me just say that
perimeters of Fig. 3, but evolves in-depth, like on Fig. 4,         the two families of security methods (those to keep the at-
tackers out, and those to catch them when they are in) cor-
respond to two families of strategies in certain games of in-
complete information, and turn out to have quite different
winning odds for the attacker, and for defender. In fact,                                                      Defense
they have the opposite winning odds.                                         Defense
   In the fortress mode, when the defenders’ goal is to keep
the attackers out, it is often observed that the attackers
only need to find one attack vector to enter the fortress,                                                       Attack
whereas the defenders must defend all attack vectors to pre-                  Attack
vent them. When the battle switches to the dynamic mode,
and the defense moves inside, then the defenders only need
to find one marker to recognize and catch the attackers,           Figure 5: Fortification              Figure 6: Honeypot
whereas the attackers must cover all their markers. This
strategic advantage is also the critical aspect of the immune
response, where the invading organisms are purposely sam-                 current distribution of the forces along the common
pled and analyzed for chemical markers. Some aspects of                   part of the border.
this observation have, of course, been discussed within the
framework of biologically inspired security. Game theoretic           (b) Each player sees all movement in the areas enclosed
modeling seems to be opening up a new dimension in this                   enclosed within his territory, i.e. observes any point on
problem space. We present a sketch to illustrate this new                 a straight line between any two points that he controls.
technical and conceptual direction.                                       That means that each player sees the opponent’s next
                                                                          move at all points that lie within the convex hull of his
3.2 The game of attack vectors                                            territory, which we call range.
Arena. Two players, the attacker A and the defender D,            According to (b), the position in Fig. 5 allows A to see D’s
battle for some assets of value to both of them. They are         next move. D, on the other hand only gets to know A’s move
given equal, disjoint territories, with the borders of equal      according to (a), when his own border changes. This depicts,
length, and equal amounts of force, expressed as two vector       albeit very crudely, the information asymmetry between the
fields distributed along their respective borders. The players     attacker and the defender.
can redistribute the forces and move the borders of their
territories. The territories can thus take any shapes and         Question. How should rational players play this game?
occupy any areas where the players may move them, obeying
the constraints that
                                                                  3.2.1 Fortification strategy
                                                                  Goals. Since each player’s total force is divided by the
    (i) the length of the borders of both territories must be preserved, and

    (ii) the two territories must remain disjoint, except that they may touch at the borders.

It is assumed that the desired asset Θ is initially held by the defender D. Suppose that storing this asset takes an area of size θ. Defender's goal is thus to maintain a territory pD with an area pD ≥ θ. Attacker's goal is to decrease the size of pD below θ, so that the defender must release some of the asset Θ. To achieve this, the attacker A must bring his^2 forces to defender D's borders, and push into his territory. A position in the game can thus be something like Fig. 5.

Game. At each step in the game, each player makes a move by specifying a distribution of his forces along his borders. Both players are assumed to be able to redistribute their forces with equal agility. The new force vectors meet at the border, they add up, and the border moves along the resulting vector. So if the vectors are, say, in opposite directions, the forces subtract and the border is pushed by the greater vector.

The players observe each other's positions and moves in two ways:

    (a) Each player knows his own moves, i.e. distributions, and sees how his borders change. From the change since the previous move, he can thus derive the opponent's distribution.

3.2.1 Fortification strategy

Since a player's constant force is spread along the length of his borders, the maximal area defensible by a given force has the shape of a disk. All other shapes with boundaries of the same length enclose smaller areas. So D's simplest strategy is to acquire and maintain the smallest disk-shaped territory of size θ.^3 This is the fortification strategy: D only responds to A's moves.

A's goal is, on the other hand, to create "dents" in D's territory pD, since the area of pD decreases most when its convexity is disturbed. If a dent grows deep enough to reach across pD, or if two dents in it meet, then pD disconnects into two components. Given a constant length of the border, it is easy to see that the size of the enclosed area decreases exponentially as it gets broken up. In this way, the area enclosed by a border of given length can be made arbitrarily small.

But how can A create dents? Wherever he pushes, the defender will push back. Since their forces are equal and constant, increasing the force along one vector decreases the force along another vector.

Optimization tasks. To follow the fortification strategy, D just keeps restoring pD to a disk of size θ. To counter D's defenses, A needs to find out where they are the weakest.

^2 I hope no one minds that I will be using "he" for both Attacker and Defender, in an attempt to avoid distracting connotations.

^3 This is why the core of a medieval fortification was a round tower with a thick wall and a small space inside. The fortress itself often is not round, because the environment is not flat, or because straight walls were easier to build; but it is usually at least convex. Later fortresses, however, had protruding towers — to attack the attacker. Which leads us beyond the fortification strategy...
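The isoperimetric reasoning behind the fortification strategy can be checked numerically. The following minimal Python sketch is my own illustration, not part of the paper's model; it assumes the border is split evenly among disjoint disk-shaped components, the best the defender can do once his territory is disconnected:

```python
import math

def max_disk_area(perimeter: float) -> float:
    # Isoperimetric bound: among all shapes with boundary length L,
    # the disk encloses the largest area, L^2 / (4 * pi).
    return perimeter ** 2 / (4 * math.pi)

def max_area_split(perimeter: float, components: int) -> float:
    # If the same boundary length is split evenly among n disjoint
    # disk-shaped components, the total enclosed area drops to 1/n
    # of the single-disk optimum.
    return components * max_disk_area(perimeter / components)
```

With a border of length 12, a single disk encloses 36/π ≈ 11.46, while four separate loops of the same total length enclose only 9/π ≈ 2.86: every disconnection costs the defender area, which is why A aims to break pD apart.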
He can observe this wherever D's territory pD is within A's range, i.e. contained in the convex hull of pA. So A needs to maximize the intersection of his range with D's territory. Fig. 5 depicts a position where this is achieved: D is under A's siege. It embodies the Fortification Principle: the defender must defend all attack vectors, whereas the attacker only needs to select one. For a fast push, A randomly selects an attack vector, and waits for D to push back. Strengthening D's defense along one vector weakens it along another one. Since all of D's territory is within A's range, A sees where D's defense is the weakest, and launches the next attack there. In contrast, D's range is initially limited to his own disk-shaped territory. So D only "feels" A's pushes when his own borders move. At each step, A pushes at D's weakest point, and creates a deeper dent. A does enter into D's range, but D's fortification strategy makes no use of the information that could be obtained about A. The number of steps needed to decrease pD below θ depends on how big the forces are and how small the contested areas are.

Figure 7: Sampling        Figure 8: Adaptation

3.2.2 Adaptation strategy

What can D do to avoid the unfavorable outcome of the fortification strategy? The idea is that he should learn to know his enemy: he should also try to shape his territory to maximize the intersection of his range with A's territory. D can lure A into his range simply by letting A dent his territory. This is the familiar honeypot approach, illustrated in Fig. 6. Instead of racing around the border to push back against every attack, D now gathers information about A's next moves within his range. If A maintains the old siege strategy, and pushes to decrease D's territory, he will accept D's territory, and enter into his range more and more.

Fig. 7 depicts a further refinement of D's strategy, where the size of D's territory, although it is still his main goal in the game, is assigned a smaller weight than the size of A's territory within D's range. This is the adaptation strategy. The shape of D's territory is determined by the task of gathering information about A. If A follows the final part of his siege strategy, he will accept to be observed in exchange for the territory, and the final position depicted in Fig. 8 will be reached. Here A's attacks are observed and prevented. D wins. The proviso is that D has enough territory to begin with, so that there is enough to store his assets after he changes its shape in order to control A. Another proviso is that A blindly sticks with his siege strategy.

To prevent this outcome, A will thus need to refine his strategy, and not release D from his range, or enter his range so easily. However, if he wants to decrease D's territory, A will not be able to avoid entering D's range altogether. So both players' strategy refinements will evolve methods to trade territory for information, making increasingly efficient use of the available territories. Note that the size of D's territory must not drop below θ, whereas the size of A's territory can; but then he can only store a part of whatever assets he may force D to part with.

A formalism for a mathematical analysis of this game is sketched in the Appendix.

3.3 What does all this mean for security?

The presented toy model provides a very crude picture of the evolution of defense strategies from fortification to adaptation. Intuitively, Fig. 5 can be viewed as a fortress under siege, whereas Fig. 8 can be interpreted as a macrophage localizing an invader. The intermediate pictures show the adaptive immune system luring the invader and sampling his chemical markers.

But there is nothing exclusively biological about the adaptation strategy. Figures 5-8 could also be viewed entirely in the context of Figures 3-4, and interpreted as the transition from medieval defense strategies to modern political ideas. Fig. 8 could be viewed as a depiction of the idea of "preemptive siege": while the medieval rulers tried to keep their enemies out of their fortresses, some of the modern ones try to keep them in their jails. The evolution of strategic thinking illustrated in Figures 5-8 is pervasive in all realms of security, i.e. wherever adversarial behaviors are a problem, including cyber-security.

And although the paradigm of keeping an eye on your enemies is familiar, the fact that it reverses the odds of security and turns them in favor of the defenders does not seem to have received enough attention. It opens up a new game-theoretic perspective on security, and suggests a new tool for it.

4. A BRAND NEW SECURITY PARADIGM: APPLIED SECURITY BY OBSCURITY

4.1 Gaming security basics

Games of information. In games of luck, each player has a type, and some secrets. The type determines the player's preferences and behaviors. The secrets determine the player's state. E.g., in poker, the secrets are the cards in the player's hand, whereas her type consists of her risk aversion, her gaming habits, etc. Imperfect information means that all players' types are public information, whereas their states are unknown, because their secrets are private. In games of incomplete information, both players' types and their secrets are unknown. The basic ideas and definitions of complete and incomplete information in games go all the way back to von Neumann and Morgenstern [40]. The ideas and techniques for modeling incomplete information are due to Harsanyi [18], and constitute an important part of game theory [27, 17, 4].

Security by secrecy. If cryptanalysis is viewed as a game, then the algorithms used in a crypto system can be viewed as the type of the corresponding player. The keys are, of course, its secrets. In this framework, Claude Shannon's slogan that "the enemy knows the system" asserts that cryptanalysis should be viewed as a game of imperfect information. Since the type of the crypto system is known to the enemy, it is not a game of incomplete information.
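The Fortification Principle (the defender must defend all attack vectors, whereas the attacker only needs to select one) can be caricatured in a few lines of Python. This is a hypothetical sketch of my own, not the paper's formal model: the segment count, the equal total forces, and the additive dent depth are all invented assumptions for illustration.

```python
import random

def fortification_rounds(segments: int = 8, force: float = 8.0,
                         rounds: int = 5, seed: int = 0) -> list[float]:
    # D spreads his constant force uniformly over every border segment,
    # since he must cover all attack vectors; A concentrates an equal
    # total force on one randomly chosen segment per round.
    rng = random.Random(seed)
    dents = [0.0] * segments
    for _ in range(rounds):
        target = rng.randrange(segments)    # A picks a single attack vector
        defense = force / segments          # D's share at that segment
        dents[target] += force - defense    # A's local surplus deepens a dent
    return dents
```

In every round the attacker enjoys a local surplus of force - force/segments, so the dents only deepen; this is the unfavorable outcome that the adaptation strategy of Sec. 3.2.2 is meant to avoid.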
Another statement of the same imperative is Kerckhoffs' slogan that "there is no security by obscurity". Here the obscurity refers to the type of the system, so the slogan suggests that the security of a crypto system should only depend on the secrecy of its keys, and that the system should remain secure even if its type is known. In terms of physical security, both slogans thus say that the thief should not be able to get into the house without the right key, even if he knows the mechanics of the lock. The key is the secret, the lock is the type.

Security by obscurity. And while all seems clear, and we all pledge allegiance to Kerckhoffs' Principle, the practices of security by obscurity abound. E.g., besides the locks that keep the thieves out, many of us use some child-proof locks, to protect toddlers from dangers. A child-proof lock usually does not have a key, and only provides protection through the obscurity of its mechanism.

On the cryptographic side, security by obscurity remains one of the main tools, e.g., in Digital Rights Management (DRM), where the task is to protect the digital content from its intended users. So our DVDs are encrypted to prevent copying; but the key must be on each DVD, or else the DVD could not be played. In order to break the copy protection, the attacker just needs to find out where to look for the key; i.e. he needs to know the system used to hide the key. For a sophisticated attacker, this is no problem; but the majority is not sophisticated. DRM is thus based on the second-hand but almost-new paradigm from the preceding section: the DVD designers study the DVD users and hide the keys in obscure places. From time to time, the obscurity wears out, by an advance in reverse engineering, or by a lapse of the defenders' attention.^4 Security is then restored by analyzing the enemy, and either introducing new features to stall the ripping software, or dragging the software distributors to court. Security by obscurity is an ongoing process, just like all of security.

4.2 Logical complexity

What is the difference between keys and locks? The conceptual problem with the Kerckhoffs Principle, as the requirement that security should be based on secret keys, and not on obscure algorithms, is that it seems inconsistent, at least at first sight, with the von Neumann architecture of our computers, where programs are represented as data. In a computer, both a key and an algorithm are strings of bits. Why can I hide a key, but cannot hide an algorithm? More generally, why can I hide data, but cannot hide programs?

Technically, the answer boils down to the difference between data encryption and program obfuscation. The task of encryption is to transform a data representation in such a way that it can be recovered if and only if you have a key. The task of obfuscation is to transform a program representation so that the obfuscated program runs roughly the same as the original one, but the original code (or some secrets built into it) cannot be recovered. Of course, the latter is harder, because encrypted data just need to be secret, whereas an obfuscated program needs to be secret and to run like the original program. In [5], it was shown that some programs must disclose the original code in order to perform the same function (and they disclose it in a nontrivial way, i.e. not by simply printing out their own code). The message seems consistent with the empiric evidence that reverse engineering is, on average,^5 effective enough that you don't want to rely upon its hardness. So it is much easier to find out the lock mechanism than to find the right key, even in the digital domain.

One-way programming? The task of an attacker is to construct an attack algorithm. For this, he needs to understand the system. The system code may be easy to reverse engineer, but it may be genuinely hard to understand, e.g. if it is based on a genuinely complex mathematical construction. And if it is hard to understand, then it is even harder to modify into an attack, except in very special cases. The system and the attack can be genuinely complex to construct, or to reconstruct from each other, e.g. if the constructions involved are based on genuinely complex mathematical constructions. This logical complexity of algorithms is orthogonal to their computational complexity: an algorithm that is easy to run may be hard to construct, even if the program resulting from that construction is relatively succinct (like, e.g., [2]); whereas an algorithm that requires a small effort to construct may, of course, require a great computational effort when run.

The different roles of computational and logical complexities in security can perhaps be pondered on the following example. In modern cryptography, a system C would be considered very secure if an attack algorithm A_C on it would yield a proof that P = NP. But how would you feel about a crypto system D such that an attack algorithm A_D would yield a proof that P ≠ NP? What is the difference between the reductions

    A_C ⟹ P = NP    and    A_D ⟹ P ≠ NP ?

Most computer scientists believe that P ≠ NP is true. If P = NP is thus false, then no attack on C can exist, whereas an attack on D may very well exist. On the other hand, after many decades of efforts, the best minds of mankind did not manage to construct a proof that P ≠ NP. An attacker on the system D, whose construction of A_D would provide us with a proof of P ≠ NP, would be welcomed with admiration and gratitude.

Since proving (or disproving) P = NP is worth a Clay Institute Millennium Prize of $1,000,000, the system D seems secure enough to protect a bank account with $900,000. If an attacker takes your money from it, he will leave you with a proof worth much more. So the logical complexity of the system D seems to provide enough obscurity for a significant amount of security!

^4 The DVD Content Scramble System (CSS) was originally reverse engineered to allow playing DVDs on Linux computers. This was possibly facilitated by an inadvertent disclosure from the DVD Copy Control Association (CCA). DVD CCA pursued the authors and distributors of the Linux DeCSS module through a series of court cases, until the case was dismissed in 2004 [15]. Ironically, the cryptography used in DVD CSS has been so weak, in part due to the US export controls at the time of design, that any computer fast enough to play DVDs could find the key by brute force within 18 seconds [38]. This easy cryptanalytic attack was published before DeCSS, but seemed too obscure for everyday use.

^5 However, the International Obfuscated C Code Contest [21] has generated some interesting and extremely amusing work.
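The contrast between hiding data and hiding programs can be made concrete. In the toy sketch below (my own illustration: XOR masking stands in for encryption, and a structural disguise of a trivial function stands in for obfuscation), the ciphertext only has to be unreadable without the key, while the "obfuscated" program carries the extra burden of still running like the original:

```python
def xor_mask(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": the output is recoverable iff you have the key
    # (applying the same mask twice restores the plaintext).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def original(x: int) -> int:
    return x * x + 1

def disguised(x: int) -> int:
    # Toy "obfuscation": the structure is disguised, but the program
    # must still compute the same function as the original
    # (x summed x times is x * x, for non-negative x).
    return sum(x for _ in range(x)) + 1
```

Here xor_mask(xor_mask(msg, key), key) returns msg, while disguised agrees with original on all non-negative inputs. The requirement to preserve behavior is exactly what [5] shows cannot always be met without disclosing the original code.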
But what is logical complexity? Computational complexity of a program tells how many computational steps (counting them in time, memory, state changes, etc.) the program takes to transform its input into its output. Logical complexity is not concerned with the execution of the program, but with its logical construction. Intuitively, if the computational complexity of a program counts the number of computational steps to execute it on an input of a given length, its logical complexity should count the number of computational steps that it takes to derive the program, in a desired format, from some form of specification. This operation may correspond to specification refinement and code generation; or it may correspond to program transformation into a desired format, if the specification is another program; or it may be a derivation of an attack, if the specification is the system.

Formally, this idea of logical complexity seems related to the notion of logical depth, as developed in the realm of algorithmic information theory [10, 26, 6]. In the usual formulation, logical depth is viewed as a complexity measure assigned to numbers, viewed as programs: more precisely, to the Gödel-Kleene indices code(p), which can be executed by a universal Turing machine, and thus represent partial recursive functions p. The logical depth of a program p is defined to be the computational complexity of the simplest program that outputs code(p). The task of logical complexity, as a measure of the difficulty of the attacker's algorithmic tasks, is to lift this idea from the realm of executable encodings of programs to their executable logical specifications [19, 8], viewed as their "explanations". The underlying idea is that for an attacker on a logically complex secure system, it may not suffice to have executable code of that system, and an opportunity to run it; in order to construct an attack, the attacker needs to "understand" the system, and specify an "explanation". But what does it mean to "understand" the system? How can we recognize an "explanation" of a program? A possible interpretation is that a specification explains a program if it allows predicting the outputs of the program without running it, and requires strictly less computation on average.^6 Let us try to formalize this idea.

Logical depth is formalized in terms of a universal Turing machine U, which takes the code(p) of a program p and for any input x outputs

    U(code(p), x) = p(x)    (1)

To formalize logical complexity, we also need a code generator Γ, which for every program p with a logical specification spec(p) generates

    Γ(spec(p)) = code(p)    (2)

so that U(Γ(spec(p)), x) = p(x) holds for all inputs x again. Composing the generator of the executable code with the universal Turing machine yields a universal specification evaluator G, defined by

    G(φ, x) = U(Γ(φ), x)    (3)

This universal specification evaluator is analogous to the universal Turing machine in that it satisfies the equation

    G(spec(p), x) = p(x)    (4)

analogous with (1), for all programs p and inputs x. So G executes spec(p) just like U executes code(p). Moreover, we require that G is a homomorphism with respect to some suitable operations, i.e. that it satisfies something like

    G(spec(p) ⊕ spec(q), x) = p(x) + q(x)    (5)

where + is some operation on data, and ⊕ is a logical operation on programs. In general, G may satisfy such requirements with respect to several such operations, possibly of different arities. In this way, the structure of a program specification on the left-hand side will reflect the structure of the program outputs on the right-hand side. If a complex program p has a spec(p) satisfying (5), then we can analyze and decompose spec(p) on the left-hand side of (5), learn a generic structure of its outputs on the right-hand side, and predict the behavior of p without running it, provided that we know the behaviors of the basic program components. The intended difference between the Gödel-Kleene code(p) in (1) and the executable logical specification spec(p) in (4) is that each code(p) must be evaluated on each x to tell p(x), whereas spec(p) allows us to derive p(x), e.g., from q(x) if spec(p) = Φ(spec(q)) and x = φ(y) for an easy program transformation Φ and data operation φ satisfying G(Φ(spec(q)), y) = φ(q(y)). (See a further comment in Sec. 5.)

The notion of logical complexity can thus be construed as a strengthening of the notion of logical depth by the additional requirement that the execution engine G should preserve some composition operations. I conjecture that a logical specification framework for measuring the logical complexity of programs can be realized as a strengthened version of the Gödel-Kleene-style program encodings, using the available program specification and composition tools and formalisms [9, 14, 34, 29, 33]. The logical complexity of a program p could then be defined as the computational complexity of the simplest executable logical specification φ that satisfies G(φ, x) = p(x) for all inputs x.

One-way programmable security? If a proof of P ≠ NP is indeed hard to construct, as the years of efforts suggest, then the program implementing an attack algorithm A_D as above, satisfying A_D ⟹ P ≠ NP, must be logically complex. Although the system D may be vulnerable to a computationally feasible attack A_D, constructing this attack may be computationally unfeasible. For instance, the attacker might try to derive A_D from the given program D. Deriving a program from another program must be based on "understanding", and some sort of logical homomorphism, mapping the meanings in a desired way. The attacker should thus try to extract spec(D) from D and then to construct spec(A_D) from spec(D). However, if D is logically complex, it may be hard to extract spec(D), and to "understand" D, notwithstanding the fact that it may be perfectly feasible to reverse engineer code(D) from D. In this sense, generating a logically complex program may be a one-way operation, with an unfeasible inverse. A simple algorithm A_D, and with it a simple proof of P ≠ NP, may, of course, exist. But the experience of looking for the latter suggests that the risk is low. Modern cryptography is based on accepting such risks.

^6 This requirement should be formalized to capture the idea that the total cost of predicting all values of the program is essentially lower than the total cost of evaluating the program on all of its values.
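Equation (5) can be illustrated with a toy specification language. The sketch below is entirely hypothetical (the term constructors and the single operation ⊕, spelled "oplus", are invented for illustration): an evaluator G that executes specifications and maps ⊕ on specifications to + on outputs, so that the structure of a spec mirrors the structure of the outputs.

```python
def G(spec, x):
    # A toy universal specification evaluator: it runs a spec on an
    # input x, and it is a homomorphism taking the formal operation
    # "oplus" on specifications to + on data, as in (5).
    tag = spec[0]
    if tag == "const":                  # spec of a constant program
        return spec[1]
    if tag == "id":                     # spec of the identity program
        return x
    if tag == "oplus":                  # G(spec(p) (+) spec(q), x) = p(x) + q(x)
        return G(spec[1], x) + G(spec[2], x)
    raise ValueError(f"unknown constructor: {tag}")

spec_p = ("id",)                        # denotes p(x) = x
spec_q = ("const", 3)                   # denotes q(x) = 3
spec_sum = ("oplus", spec_p, spec_q)    # denotes the program x -> x + 3
```

Decomposing spec_sum on the left predicts the output p(x) + q(x) on the right without running a monolithic program, which is the kind of "explanation" the text asks of a specification.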
Approximate logical specifications. While the actual technical work on the idea of logical complexity remains for the future, it should be noted that a realistic model of the attacker's logical practices will not be accomplished simply by realizing a universal execution engine G satisfying the homomorphism requirements with respect to some suitable logical operations. For a pragmatic attacker, it does not seem rational to analyze a program p all the way to a spec(p) satisfying (4), when most of the algorithms that he deals with are randomized. He will thus probably be satisfied with a specε(p) such that

    Prob[ G(specε(p), x) = p(x) ] ≥ ε    (6)

4.3 Logical complexity of gaming security

Logical specifications as beliefs. Approximate logical specifications bring us back to security as a game of incomplete information. In order to construct a strategy, each player in such a game must supplement the available information about the opponent with some beliefs. Mathematically, these beliefs have been modeled, ever since [18], as probability distributions over the opponent's payoff functions. More generally, in games not driven by payoffs, beliefs can be modeled as probability distributions over the opponent's possible behaviors, or algorithms. In the framework described above, such a belief can thus be viewed as an approximate logical specification of the opponent's algorithms. An approximation specε(p) can thus be viewed not as a deterministic specification probably close to p, but as a probability distribution containing some information about p.

Since both players in a game of incomplete information are building beliefs about each other, they must also build beliefs about each other's beliefs: A formulates a belief about B's belief about A, and B formulates a belief about A's belief about B. And so on to infinity. This is described in more detail in the Appendix, and in still more detail in [4]. These hierarchies of beliefs are formalized as probability distributions over probability distributions. In the framework of logical complexity, the players thus specify approximate specifications of each other's approximate specifications. Since these specifications are executable by the universal evaluator G, they lead to an algorithmic theory of incomplete information, since players' belief hierarchies are now computable, i.e. they consist of samplable probability distributions. Approximate specifications, on the other hand, introduce an interesting twist into algorithmic information theory: while code(code(p)) could not be essentially simpler than code(p) (because then we could derive for p a simpler code than code(p)), specε(specε(p)) can be simpler than specε(p).

5. FINAL COMMENTS

On games of security and obscurity. The first idea of this paper is that security is a game of incomplete information: by analyzing your enemy's behaviors and algorithms (subsumed under what game theorists call his type), and by obscuring your own, you can improve the odds of winning this game.

This claim contradicts Kerckhoffs' Principle that there is no security by obscurity. That principle implies that security should be viewed as a game of imperfect information, by asserting that security is based on players' secret data (e.g. cards), and not on their obscure behaviors and algorithms.

I described a toy model of a security game which illustrates that security is fundamentally based on gathering and analyzing information about the type of the opponent. This model thus suggests that security is not a game of imperfect information, but a game of incomplete information. If confirmed, this claim implies that security can be increased not only by analyzing the attacker's type, but also by obscuring the defender's type.

On logical complexity. The second idea of this paper is the idea of one-way programming, based on the concept of logical complexity of programs. The devil of the logical complexity proposal lies in the "detail" of stipulating the logical operations that need to be preserved by the universal specification evaluator G. These operations determine what it means to specify, to understand, to explain an algorithm. One stipulation may satisfy some people, a different one others. A family of specifications structured for one G may be more suitable for one type of logical transformations and attack derivations, another family for another one. But even if we give up the effort to tease out the logical structure of algorithms through the homomorphism requirement, and revert from logical complexity to logical depth, from homomorphic evaluators G to universal Turing machines U, and from spec(p) to code(p), even the existing techniques of algorithmic information theory alone may suffice to develop one-way programs: easy to construct, but hard to deconstruct and transform. A system programmed in that way could still allow computationally feasible, but logically unfeasible attacks.

On security of profiling. Typing and profiling are frowned upon in security. Leaving aside the question whether gathering information about the attacker, and obscuring the system, might be useful for security or not, these practices remain questionable socially. The false positives arising from such methods cause a lot of trouble, and tend to just drive the attackers deeper into hiding.

On the other hand, typing and profiling are technically and conceptually unavoidable in gaming, and remain respectable research topics of game theory. Some games cannot be played without typing and profiling the opponents. Poker and the bidding phase of bridge are all about trying to guess your opponents' secrets by analyzing their behaviors. Players do all they can to avoid being analyzed, and many prod their opponents to sample their behaviors. Some games cannot be won with mere uniform distributions, without analyzing opponents' biases.

Both game theory and the immune system teach us that we cannot avoid profiling the enemy. But both social experience and the immune system teach us that we must set the thresholds high, to avoid the false positives that profiling methods are so prone to. Misidentifying the enemy leads to auto-immune disorders, which can be as pernicious socially as they are to our health.

Acknowledgement. As always, Cathy Meadows provided very valuable comments.

6. REFERENCES

[1] Samson Abramsky, Pasquale Malacaria, and Radha Jagadeesan. Full completeness for PCF. Information and Computation, 163:409-470, 2000.
[2] Manindra Agrawal, Neeraj Kayal, and Nitin Saxena. PRIMES is in P. Annals of Mathematics, 160(2):781-793, 2004.
[3] George A. Akerlof. The market for 'lemons': Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3):488-500, 1970.
 [4] Robert J. Aumann and Aviad Heifetz. Incomplete information. In Robert J. Aumann and Sergiu Hart, editors, Handbook of Game Theory, Volume 3, pages 1665–1686. Elsevier/North Holland, 2002.
 [5] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil P. Vadhan, and Ke Yang. On the (im)possibility of obfuscating programs. In Proceedings of the 21st Annual International Cryptology Conference on Advances in Cryptology, CRYPTO '01, pages 1–18, London, UK, 2001. Springer-Verlag.
 [6] C. H. Bennett. Logical depth and physical complexity. In A Half-Century Survey on The Universal Turing Machine, pages 227–257, New York, NY, USA, 1988. Oxford University Press, Inc.
 [7] Elwyn Berlekamp, John Conway, and Richard Guy. Winning Ways for your Mathematical Plays, volumes 1–2. Academic Press, 1982.
 [8] Dines Bjørner, editor. Abstract Software Specifications, volume 86 of Lecture Notes in Computer Science. Springer, 1980.
 [9] Egon Börger and Robert F. Stärk. Abstract State Machines: A Method for High-Level System Design and Analysis. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2003.
[10] Gregory Chaitin. Algorithmic information theory. IBM J. Res. Develop., 21:350–359, 496, 1977.
[11] William Dean. Pitfalls in the use of imperfect information. RAND Corporation Report P-7430, Santa Monica, CA, 1988.
[12] Whitfield Diffie and Martin E. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, 22(6):644–654, 1976.
[13] Lester E. Dubins and Leonard J. Savage. How to Gamble If You Must. McGraw-Hill, New York, 1965.
[14] José Luiz Fiadeiro. Categories for Software Engineering. Springer, 2005.
[15] Electronic Frontier Foundation. Press releases and court documents on DVD CCA v Bunner Case. w2.eff.org/IP/Video/DVDCCA case/#bunner-press. (Retrieved on July 31, 2011).
[16] Amanda Friedenberg and Martin Meier. On the relationship between hierarchy and type morphisms. Economic Theory, 46:377–399, 2011. doi:10.1007/s00199-010-0517-2.
[17] Drew Fudenberg and Jean Tirole. Game Theory. MIT Press, August 1991.
[18] John C. Harsanyi. Games with incomplete information played by Bayesian players, I–III. Management Science, 14:159–182, 320–334, 486–502, 1968.
[19] C. A. R. Hoare. Programs are predicates. Phil. Trans. R. Soc. Lond., A 312:475–489, 1984.
[20] J. Martin E. Hyland and C.-H. Luke Ong. On full abstraction for PCF: I, II, and III. Inf. Comput., 163(2):285–408, 2000.
[21] International Obfuscated C Code Contest. www0.us.ioccc.org/main.html. (Retrieved on 31 July 2011).
[22] Auguste Kerckhoffs. La cryptographie militaire. Journal des Sciences Militaires, IX:5–38, 161–191, 1883.
[23] Stephen C. Kleene. Realizability: a retrospective survey. In A.R.D. Mathias and H. Rogers, editors, Cambridge Summer School in Mathematical Logic, volume 337 of Lecture Notes in Mathematics, pages 95–112. Springer-Verlag, 1973.
[24] A. N. Kolmogorov. On tables of random numbers (reprinted from Sankhyā: The Indian Journal of Statistics, Series A, Vol. 25, Part 4, 1963). Theor. Comput. Sci., 207(2):387–395, 1998.
[25] Barbara Kordy, Sjouke Mauw, Sasa Radomirovic, and Patrick Schweitzer. Foundations of attack-defense trees. In Pierpaolo Degano, Sandro Etalle, and Joshua D. Guttman, editors, Formal Aspects in Security and Trust, volume 6561 of Lecture Notes in Computer Science, pages 80–95. Springer, 2010.
[26] Leonid A. Levin. Randomness conservation inequalities: Information and independence in mathematical theories. Information and Control, 61:15–37, 1984.
[27] Jean F. Mertens and Shmuel Zamir. Formulation of Bayesian analysis for games with incomplete information. Internat. J. Game Theory, 14(1):1–29, 1985.
[28] Tyler Moore, Allan Friedman, and Ariel D. Procaccia. Would a 'cyber warrior' protect us: exploring trade-offs between attack and defense of information systems. In Proceedings of the 2010 Workshop on New Security Paradigms, NSPW '10, pages 85–94, New York, NY, USA, 2010. ACM.
[29] Till Mossakowski. Relating CASL with other specification languages: the institution level. Theor. Comput. Sci., 286(2):367–475, 2002.
[30] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
[31] Dusko Pavlovic. A semantical approach to equilibria and rationality. In Alexander Kurz and Andrzej Tarlecki, editors, Proceedings of CALCO 2009, volume 5728 of Lecture Notes in Computer Science, pages 317–334. Springer Verlag, 2009. arxiv.org:0905.3548.
[32] Dusko Pavlovic and Douglas R. Smith. Guarded transitions in evolving specifications. In H. Kirchner and C. Ringeissen, editors, Proceedings of AMAST 2002, volume 2422 of Lecture Notes in Computer Science, pages 411–425. Springer Verlag, 2002.
[33] Dusko Pavlovic and Douglas R. Smith. Software development by refinement. In Bernhard K. Aichernig and Tom Maibaum, editors, Formal Methods at the Crossroads, volume 2757 of Lecture Notes in Computer Science. Springer Verlag, 2003.
[34] Duško Pavlović. Semantics of first order parametric specifications. In J. Woodcock and J. Wing, editors, Formal Methods '99, volume 1708 of Lecture Notes in Computer Science, pages 155–172. Springer Verlag, 1999.
[35] Claude Shannon. Communication theory of secrecy systems. Bell System Technical Journal, 28(4):656–715, 1949.
[36] Ray J. Solomonoff. A formal theory of inductive inference. Part I, Part II. Information and Control, 7:1–22, 224–254, 1964.
[37] Michael Spence. Job market signaling. Quarterly Journal of Economics, 87(3):355–374, 1973.
[38] Frank A. Stevenson. Cryptanalysis of Contents Scrambling System. http://web.archive.org/web/20000302000206/www.dvd-copy.com/news/cryptanalysis of contents scrambling system.htm, 8 November 1999. (Retrieved on July 31, 2011).
[39] George J. Stigler. The economics of information. Journal of Political Economy, 69(3):213–225, 1961.
[40] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ, 1944.

APPENDIX
A. GAMING SECURITY FORMALISM

   Can the idea of applied security by obscurity be realized? To test it, let us first make it more precise in a mathematical model. I first present a very abstract model of strategic behavior, capturing and distinguishing the various families of games studied in game theory, and some families not studied. The model is based on coalgebraic methods, along the lines of [31]. I will try to keep the technicalities at a minimum, and the reader is not expected to know what a coalgebra is.

A.1 Arenas

   Definition 1. A player is a pair of sets A = ⟨M_A, S_A⟩, where the elements of M_A represent the moves available to A, and the elements of S_A are the states that A may observe.
   A simple response Σ : A → B for a player B to a player A is a binary relation

      Σ : M_A × S_B^2 × M_B → {0, 1}

When Σ(a, β, β′, b) = 1, we write ⟨a, β⟩ →_Σ ⟨β′, b⟩, and say that the strategy Σ at B's state β prescribes that B should respond to A's move a by the move b and update his state to β′. The set of B's simple responses to A is written SR(A, B).
   A mixed response Φ : A → B for the player B to the player A is a matrix

      Φ : M_A × S_B^2 × M_B → [0, 1]

required to be finitely supported and stochastic in M_A, i.e. for every a ∈ M_A:
   • Φ_aββ′b = 0 holds for all but finitely many β, β′ and b,
   • ∑_ββ′b Φ_aββ′b = 1.
When Φ_aββ′b = p we write ⟨a, β⟩ →_Φ^p ⟨β′, b⟩, and say that the strategy Φ at B's state β responds to A's move a with probability p by B's move b, leading him into the state β′. The set of B's mixed responses to A is written MR(A, B).
   An arena is a specification of a set of players and a set of responses between them.

Responses compose. Given simple responses Σ : A → B and Γ : B → C, we can derive a response (Σ; Γ) : A → C for the player C against A by taking the player B as a "man in the middle". The derived response is constructed as follows:

      ⟨a, β⟩ →_Σ ⟨β′, b⟩     ⟨b, γ⟩ →_Γ ⟨γ′, c⟩
      ──────────────────────────────────────────
              ⟨a, γ⟩ →_(Σ;Γ) ⟨γ′, c⟩

Following the same idea, for the mixed responses Φ : A → B and Ψ : B → C we have the composite (Φ; Ψ) : A → C with the entries

      (Φ; Ψ)_aγγ′c = ∑_ββ′b Φ_aββ′b · Ψ_bγγ′c

It is easy to see that these composition operations are associative and unitary, both for the simple and for the mixed responses.

A.2 Games
   Arenas turn out to provide a convenient framework for a unified presentation of games studied in game theory [40, 30], mathematical games [7], game semantics [1, 20], and some constructions in-between these areas [13]. Here we shall use them to succinctly distinguish between the various kinds of game with respect to the information available to the players. As mentioned before, game theorists usually distinguish two kinds of players' information:
   • data, or positions: e.g., a hand of cards, or a secret number; and
   • types, or preferences: e.g., a player's payoff matrix, or the system that he uses, are components of his type.
The games in which the players have private data or positions are the games of imperfect information. The games where the players have private types or preferences, e.g. because they don't know each other's payoff matrices, are the games of incomplete information. See [4] for more about these ideas, [30] for the technical details of the perfect–imperfect distinction, and [17] for the technical details of the complete–incomplete distinction.
   But let us see how arenas capture these distinctions, and what all of that has to do with security.

A.2.1 Games of perfect and complete information
   In games of perfect and complete information, each player has all information about the other player's data and preferences, i.e. payoffs. To present the usual stateless games in normal form, we consider the players A and B whose state spaces are the sets of payoff bimatrices, i.e.

      S_A = S_B = (R × R)^(M_A × M_B)

In other words, a state σ ∈ S_A = S_B is a pair of maps σ = ⟨σ^A, σ^B⟩, where σ^A is the M_A × M_B-matrix of A's payoffs: the entry σ^A_ab is A's payoff if A plays a and B plays b. Ditto for σ^B. Each game in the standard bimatrix form corresponds to one element of both state spaces S_A = S_B. It is nominally represented as a state, but this state does not change. The main point here is that both A and B know this element. This allows both of them to determine the best response strategies Σ_A : B → A for A and Σ_B : A → B for B, in the form

      ⟨b, σ^A⟩ →_Σ_A ⟨σ^A, a⟩  ⟺  ∀x ∈ M_A. σ^A_xb ≤ σ^A_ab
      ⟨a, σ^B⟩ →_Σ_B ⟨σ^B, b⟩  ⟺  ∀y ∈ M_B. σ^B_ay ≤ σ^B_ab

and to compute the Nash equilibria as the fixed points of the composites (Σ_B; Σ_A) : A → A and (Σ_A; Σ_B) : B → B. This is further discussed in [31]. Although the payoff matrices in games studied in game theory usually do not change, so the corresponding responses fix all states, and each response actually presents a method to respond in a
whole family of games, represented by the whole space of payoff matrices, it is interesting to consider, e.g., discounted payoffs in some iterated games, where the full force of the response formalism over the above state spaces is used.

A.2.2 Games of imperfect information
   Games of imperfect information are usually viewed in extended form, i.e. with nontrivial state changes, because players' private data can then be presented as their private states. Each player now has a set of private positions, P_A and P_B, which is not visible to the opponent. On the other hand, both players' types, presented as their payoff matrices, are still visible to both. So we have

      S_A = P_A × (R × R)^(M_A × M_B)
      S_B = P_B × (R × R)^(M_A × M_B)

E.g., in a game of cards, A's hand will be an element of P_A, and B's hand will be an element of P_B. With each response, each player updates his position, whereas their payoff matrices usually do not change.

A.2.3 Games of incomplete information
   Games of incomplete information are studied in epistemic game theory [18, 27, 4], which is formalized through knowledge and belief logics. The reason is that each player here only knows with certainty his own preferences, as expressed by his payoff matrix. The opponent's preferences and payoffs are kept in obscurity. In order to anticipate the opponent's behaviors, each player must build some beliefs about the other player's preferences. In the first instance, this is expressed as a probability distribution over the other player's possible payoff matrices. However, the other player also builds beliefs about his opponent's preferences, and his behavior is therefore not entirely determined by his own preferences, but also by his beliefs about his opponent's preferences. So each player also builds some beliefs about the other player's beliefs, which is expressed as a probability distribution over the probability distributions over the payoff matrices. And so on, ad infinitum. Harsanyi formalized the notion of a player's type as an element of such an information space, which includes each player's payoffs, his beliefs about the other player's payoffs, his beliefs about the other player's beliefs, and so on [18]. Harsanyi's form of games of incomplete information can be presented in the arena framework by taking

      S_A = R^(M_A × M_B) + ∆S_B
      S_B = R^(M_A × M_B) + ∆S_A

where + denotes the disjoint union of sets, and ∆X is the space of finitely supported probability distributions over X, which consists of the maps p : X → [0, 1] such that

      |{x ∈ X | p(x) > 0}| < ∞   and   ∑_(x∈X) p(x) = 1

Resolving the above inductive definitions of S_A and S_B, we get

      S_A = S_B = ∑_(i=0)^∞ ∆^i R^(M_A × M_B)

Here the state σ^A ∈ S_A is thus a sequence

      σ^A = ⟨σ^A_0, σ^A_1, σ^A_2, . . .⟩

where σ^A_i ∈ ∆^i R^(M_A × M_B). The even components σ^A_2i represent A's payoff matrix, A's belief about B's belief about A's payoff matrix, A's belief about B's belief about A's belief about B's belief about A's payoff matrix, and so on. The odd components σ^A_(2i+1) represent A's belief about B's payoff matrix, A's belief about B's belief about A's belief about B's payoff matrix, and so on. The meanings of the components of the state σ^B ∈ S_B are analogous.

A.3 Security games

   We model security processes as a special family of games. It will be a game of imperfect information, since the players of security games usually have some secret keys, which are presented as the elements of their private state sets P_A and P_D. The player A is now the attacker, and the player D is the defender.
   The goal of a security game is not expressed through payoffs, but through "security requirements" Θ ⊆ P_D. The intuition is that the defender D is given a family of assets to protect, and the Θ are the desired states, where these assets are protected. The defender's goal is to keep the state of the game in Θ, whereas the attacker's goal is to drive the game outside Θ. The attacker may have additional preferences, expressed by a probability distribution over his own private states P_A. We shall ignore this aspect, since it plays no role in the argument here; but it can easily be captured in the arena formalism.
   Since the players' goals are not to maximize their revenues, their behaviors are not determined by payoff matrices, but by their response strategies, which we collect in the sets RA(A, D) and RA(D, A). In the simplest case, response strategies boil down to the response maps, and we take RA(A, D) = SR(A, D), or RA(A, D) = MR(A, D). In general, though, A's and D's behavior may not be purely extensional, and the elements of RA(A, D) may be actual algorithms.
   While both players surely keep their keys secret, and some parts of the spaces P_A and P_D are private, they may not know each other's preferences, and may not be given each other's "response algorithms". If they do know them, then both the defender's defenses and the attacker's attacks are achieved without obscurity. However, modern security definitions usually require that the defender defends the system against a family of attacks without querying the attacker about his algorithms. So at least the theoretical attacks are in principle afforded the cloak of obscurity. Since the defender D thus does not know the attacker A's algorithms, we model security games as games of incomplete information, replacing the players' spaces of payoff matrices by the spaces RA(A, D) and RA(D, A) of their response strategies to one another.
   As above, A thus only knows P_A and RA(D, A) with certainty, and D only knows P_D and RA(A, D) with certainty. Moreover, A builds his beliefs about D's data and type, as a probability distribution over P_D × RA(A, D), and D builds similar beliefs about A. Since they then also have to build beliefs about each other's beliefs, we have a mutually recurrent definition of the state spaces again:

      S_A = P_A × RA(D, A) + ∆S_D
      S_D = P_D × RA(A, D) + ∆S_A
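The finitely supported distributions ∆X, and their nesting into belief hierarchies ∆X, ∆²X, . . ., can be sketched concretely. The snippet below is only an illustrative toy under simplifying assumptions (outcomes are hashable Python values, distributions are dicts, and inner beliefs are frozen into tuples so they can serve as outcomes themselves); it is not part of the formalism above:

```python
def delta(weights):
    """Finitely supported distribution over the keys of `weights`:
    normalize nonnegative weights so that p : X -> [0,1] has total mass 1."""
    total = sum(weights.values())
    assert total > 0
    return {x: w / total for x, w in weights.items()}

# Two toy 2x2 payoff matrices for B, as immutable (hashable) tuples of rows.
M1 = ((1, 0), (0, 1))
M2 = ((0, 1), (1, 0))

# First-order belief: A's distribution over B's possible payoff matrices (an
# element of Delta X).
belief1 = delta({M1: 2, M2: 1})

# Second-order belief: a distribution over first-order beliefs (Delta^2 X).
# Dicts are not hashable, so each inner belief is frozen as a sorted tuple.
freeze = lambda p: tuple(sorted(p.items()))
belief2 = delta({freeze(belief1): 3,
                 freeze(delta({M1: 1, M2: 1})): 1})
```

Iterating the same freezing step yields ∆³X, ∆⁴X, and so on, mirroring how each level of the hierarchy is a distribution over the level below it.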
Resolving the induction again, we now get

      S_A = ∑_(i=0)^∞ ∆^2i(P_A × RA(D, A)) × ∆^(2i+1)(P_D × RA(A, D))
      S_D = ∑_(i=0)^∞ ∆^2i(P_D × RA(A, D)) × ∆^(2i+1)(P_A × RA(D, A))

The defender's state β ∈ S_D is thus a sequence β = ⟨β_0, β_1, β_2, . . .⟩, where
   • β_0 = ⟨β_0^P, β_0^RA⟩ ∈ P_D × RA(A, D) consists of
      – D's secrets β_0^P ∈ P_D, and
      – D's current response strategy β_0^RA ∈ RA(A, D);
   • β_1 ∈ ∆(P_A × RA(D, A)) is D's belief about A's secrets and her response strategy;
   • β_2 ∈ ∆²(P_D × RA(A, D)) is D's belief about A's belief about D's secrets and his response strategy;
   • β_3 ∈ ∆³(P_A × RA(D, A)) is D's belief about A's belief about D's beliefs; etc.
Each response strategy Σ : A → D prescribes the way in which D should update his state in response to A's observed moves. E.g., if RA(D, A) is taken to consist of relations of the form Λ : M_D^+ × M_A → {0, 1}, where M_D^+ is the set of nonempty strings over M_D, then D can record longer and longer histories of A's responses to his moves.

Remark. The fact that, in a security game, A's state space S_A contains RA(D, A) and RA(A, D) means that each player A is prepared for playing against a particular player D, while D is prepared for playing against A. This reflects the situation in which all security measures are introduced with particular attackers in mind, whereas the attacks are built to attack particular security measures. A technical consequence is that the players' state spaces are defined by inductive clauses, which often lead to complex impredicative structures. This should not be surprising, since even informal security considerations often give rise to complex belief hierarchies, and the formal constructions of epistemic game theory [18, 27, 16] seem like a natural tool to apply.

A.4 The game of attack vectors

   We specify a formal model of the game of attack vectors as a game of perfect but incomplete information. This means that the players know each other's positions, but need to learn about each other's type, i.e. plans and methods. The assumption that the players know each other's position could be removed, without changing the outcomes and strategies, by refining the model of the players' observations. But this seems inessential, and we omit it for simplicity.
   Let O denote the unit disk in the real plane. If we assume that it is parametrized in polar coordinates, then O = {0} ∪ (0, 1] × R/2πZ, where R/2πZ denotes the circle. Let R ⊆ R² be an open domain in the real plane. Players' positions can then be defined as continuous mappings of O into R, where p° denotes the interior of the image of p : O → R. The assumption that both players know both their own and the opponent's positions means that both state spaces S_A and S_D will contain P_A × P_D as a component. The state spaces are thus

      S_A = (P_A × P_D × RA(D, A)) + ∆S_D
      S_D = (P_A × P_D × RA(A, D)) + ∆S_A

   To start off the game, we assume that the defender D is given some assets to secure, presented as an area Θ ⊆ R. The defender wins as long as his position p_D ∈ P_D is such that Θ ⊆ p_D°. Otherwise the defender loses. The attacker's goal is to acquire the assets, i.e. to maximize the area Θ ∩ p_A°.
   The players are given equal forces to distribute along the borders of their respective territories. Their moves are the choices of these distributions, i.e.

      M_A = M_D = ∆(∂O)

where ∂O is the unit circle, viewed as the boundary of O, and ∆(∂O) denotes the distributions along ∂O, i.e. the measurable functions m : ∂O → [0, 1] such that ∫_∂O m = 1.
   How will D update his state after a move? This can be specified as a simple response Σ : A → D. Since the point of this game is to illustrate the need for learning about the opponent, let us leave out the players' type information for the moment, and assume that the players only look at their positions, i.e. S_A = S_D = P_A × P_D. To specify Σ : A → D, we must thus determine the relation

      ⟨m_A, ⟨p_A, p_D⟩⟩ →_Σ ⟨⟨p_A′, p_D′⟩, m_D⟩

for any given m_A ∈ M_A, p_A ∈ P_A, and p_D ∈ P_D. We describe the updates p_A′ and p_D′ for an arbitrary m_D, and leave it to D to determine which m_D's are the best responses for him. So given the previous positions and both players' moves, the new position p_A′ will map a point x ∈ ∂O on the boundary of the circle, viewed as a unit vector, into the vector p_A′(x) in R ⊆ R² as follows.
   • If for all y ∈ ∂O, all s ∈ [0, m_A(x)] and all t ∈ [0, m_D(y)] we have (1 + s)p_A(x) ≠ (1 + t)p_D(y), then set

         p_A′(x) = (1 + m_A(x))/2 · p_A(x)

   • Otherwise, let y ∈ ∂O, s ∈ [0, m_A(x)] and t ∈ [0, m_D(y)] be the smallest numbers such that (1 + s)p_A(x) = (1 + t)p_D(y). Writing m_A′(x) = m_A(x) − s and m_D′(y) = m_D(y) − t, we set

         p_A′(x) = (1 + m_A′(x))/2 · p_A(x) + (1 + m_D′(y))/2 · p_D(y)

This means that the player A will push her boundary by m_A(x) in the direction p_A(x) if she does not encounter D at any point during that push. If somewhere during that push she does encounter D's territory, then they will push against each other, i.e. their push vectors will compose.
into R, i.e.                                                            More precisely, A will push in the direction pA (x) with the
                                                                        force m′ (x) = mA (x) − s, that remains to her after the
                                                                                  A
                      PA = PD       = RO                                initial free push by s; but moreover, her boundary will also
                                                                        be pushed in the direction pD (y) by the boundary of D’s
The rules of the game will be such that the attacker’s and                                          ′
                                                                        territory, with the force mD (y) = mD (y) − t, that remains
the defender’s positions pA , pD ∈ RO always satisfy the con-
                                                                        to D after his initial free push by t. Since D’s update is
straint that
                                                                        defined analogously, a common boundary point will arise,
                        po ∩ po
                         A    D     = ∅                                 i.e. players’ borders will remain adjacent. When the move
next, there will be no free initial pushes, i.e. s and t will be
0, and the update vectors will compose in full force.
   How do the players compute the best moves? Attacker’s
goal is, of course, to form a common boundary and to push
towards Θ, preferably from the direction where the defender
does not defend. The defender’s goal is to push back. As
explained explained in the text, the game is thus resolved on
defender’s capability to predict attacker’s moves. Since the
territories do not intersect, but A’s moves become observable
for D along the part of the boundary of A’s territory that
lies within the convex hull of D’s territory, D’s moves must
be selected to maximize the length of the curve
                      ∂pA ∩ conv(pD )
This strategic goal leads to the evolution described infor-
mally in Sec. 3.2.

Seguridad por oscuridad (paper)

Gaming security by obscurity

Dusko Pavlovic
Royal Holloway, University of London, and University of Twente
dusko.pavlovic@rhul.ac.uk

arXiv:1109.5542v1 [cs.CR] 26 Sep 2011

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. NSPW'11, September 12–15, 2011, Marin County, California, USA. Copyright 2011 ACM xxx-x-xxxx-xxxx-x/xx/xx ...$5.00.

ABSTRACT

Shannon [35] sought security against the attacker with unlimited computational powers: if an information source conveys some information, then Shannon's attacker will surely extract that information. Diffie and Hellman [12] refined Shannon's attacker model by taking into account the fact that the real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography.

Shannon also sought security against the attacker with unlimited logical and observational powers, expressed through the maxim that "the enemy knows the system". This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs [22], is that "there is no security by obscurity", meaning that the algorithms cannot be kept obscured from the attacker, and that security should only rely upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that if there is an algorithm that can break the system, then the attacker will surely find that algorithm. The attacker is not viewed as an omnipotent computer any more, but he is still construed as an omnipotent programmer. The ongoing hackers' successes seem to justify this view.

So the Diffie-Hellman step from unlimited to limited computational powers has not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed? The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods, and adapting to them?

1. INTRODUCTION

New paradigms change the world. In computer science, they often sneak behind researchers' backs: the grand visions often frazzle into minor ripples (like the fifth generation of programming languages), whereas some modest goals engender tidal waves with global repercussions (like moving the cursor by a device with wheels, or connecting remote computers). So it is not easy to conjure a new paradigm when you need it.

Perhaps the only readily available method to generate new paradigms at leisure is by disputing the obvious. Just in case, I question on this occasion not one, but two generally endorsed views:

  • the Kerckhoffs Principle, that there is no security by obscurity, and
  • the Fortification Principle, that the defender has to defend all attack vectors, whereas the attacker only needs to attack one.

To simplify things a little, I argue that these two principles are related. The Kerckhoffs Principle demands that a system should withstand the attackers' unhindered probing. In the modern security definitions, this is amplified to the requirement that the system should resist a family of attacks, irrespective of the details of their algorithms. The adaptive attackers are thus allowed to query the system, whereas the system is not allowed to query the attackers. The resulting information asymmetry makes security look like a game biased in favor of the attackers. The Fortification Principle is an expression of that asymmetry. In economics, information asymmetry has been recognized as a fundamental problem, worth the Nobel Prize in Economics for 2001 [39, 3, 37]. In security research, the problem does not seem to have been explicitly addressed, but there is, of course, no shortage of efforts to realize security by obscurity in practice — albeit without any discernible method. Although the practices of analyzing the attackers and hiding the systems are hardly waiting for anyone to invent a new paradigm, I will pursue the possibility that a new paradigm might be sneaking behind our backs again, like so many old paradigms did.

Outline of the paper

While I am on the subject of security paradigms, I decided to first spell out a general overview of the old ones. An attempt at this is in Sec. 2. It is surely incomplete, and perhaps wrongheaded, but it may help a little. It is difficult to communicate about the new without an agreement about
the old. Moreover, it will be interesting to hear not only whether my new paradigms are new, but also whether my old paradigms are old.

The new security paradigm arising from the slogan "Know your enemy" is discussed in Sec. 3. Of course, security engineers often know their enemies, so this is not much of a new paradigm in practice. But security researchers often require that systems should be secure against universal families of attackers, without knowing anything about who the enemy is at any particular moment. With respect to such static requirements, a game theoretic analysis of the dynamics of security can be viewed as an almost-new paradigm (with few previous owners). In Sec. 3.1 I point to the practical developments that lead up to this paradigm, and then in Sec. 3.2 I describe the game of attack vectors, which illustrates it. This is a very crude view of the security process as a game of incomplete information. I provide a simple pictorial analysis of the strategic interactions in this game, which turn out to be based on acquiring information about the opponent's type and behavior. A sketch of a formal model of security games of incomplete information, and of the game of attack vectors, is given in Appendix A.

A brand new security paradigm of "Applied security by obscurity" is described in Sec. 4. It is based on the idea of logical complexity of programs, which leads to one-way programming, similarly like computational complexity led to one-way computations. If achieved, one-way programming will be a powerful tool in security games.

A final attempt at a summary, and some comments about the future research, and the pitfalls, are given in Sec. 5.

Related work

The two new paradigms offer two new tools for the security toolkit: games of incomplete information, and algorithmic information theory.

Game theoretic techniques have been used in applied security for a long time, since a need for strategic reasoning often arises in practice. A typical example from the early days is [11], where games of imperfect information were used. Perhaps the simplest more recent game based models are the attack-defense trees, which boil down to zero-sum extensive games [25]. Another application of games of imperfect information appeared, e.g., in a previous edition of this conference [28]. Conspicuously, games of incomplete information do not seem to have been used, although they seem particularly appropriate, since they analyze how players keep each other in obscurity.

The coalgebraic presentation of games and response relations, presented in the Appendix, is closely related with the formalism used in [31].

The concept of logical complexity, proposed in Sec. 4, is based on the ideas of algorithmic information theory [24, 36] in general, and in particular on the idea of logical depth [10, 26, 6]. I propose to formalize logical complexity by lifting logical depth from the Gödel-Kleene indices to program specifications [19, 8, 9, 14, 34, 32]. The underlying idea that a Gödel-Kleene index of a program can be viewed as its "explanation" goes back to Kleene's idea of realizability [23] and to Solomonoff's formalization of inductive inference [36].

2. OLD SECURITY PARADIGMS

Security means many things to many people. For a software engineer, it often means that there are no buffer overflows or dangling pointers in the code. For a cryptographer, it means that any successful attack on the cypher can be reduced to an algorithm for computing discrete logarithms, or to integer factorization. For a diplomat, security means that the enemy cannot read the confidential messages. For a credit card operator, it means that the total costs of the fraudulent transactions and of the measures to prevent them are low, relative to the revenue. For a bee, security means that no intruder into the beehive will escape her sting...

Is it an accident that all these different ideas go under the same name? What do they really have in common? They are studied in different sciences, ranging from computer science to biology, by a wide variety of different methods. Would it be useful to study them together?

2.1 What is security?

If all avatars of security have one thing in common, it is surely the idea that there are enemies and potential attackers out there. All security concerns, from computation to politics and biology, come down to averting the adversarial processes in the environment that are poised to subvert the goals of the system. There are, for instance, many kinds of bugs in software, but only those that the hackers use are a security concern.

In all engineering disciplines, the system guarantees a functionality, provided that the environment satisfies some assumptions. This is the standard assume-guarantee format of the engineering correctness statements. Such statements are useful when the environment is passive, so that the assumptions about it remain valid for a while. The essence of security engineering is that the environment actively seeks to invalidate the system's assumptions.

Security is thus an adversarial process. In all engineering disciplines, failures usually arise from engineering errors and noncompliance. In security, failures arise in spite of the compliance with the best engineering practices of the moment. Failures are the first class citizens of security: every key has a lifetime, and in a sense, every system does too. For all major software systems, we normally expect security updates, which usually arise from attacks, and often inspire them.

2.2 Where did security come from?

The earliest examples of security technologies are found among the earliest documents of civilization. Fig. 1 shows security tokens with a tamper protection technology from almost 6000 years ago. Fig. 2 depicts the situation where this technology was probably used. Alice has a lamb and Bob has built a secure vault, perhaps with multiple security levels, spacious enough to store both Bob's and Alice's assets. For each of Alice's assets deposited in the vault, Bob issues a clay token, with an inscription identifying the asset. Alice's tokens are then encased into a bulla, a round, hollow "envelope" of clay, which is then baked to prevent tampering. When she wants to withdraw her deposits, Alice submits her bulla to Bob; he breaks it, extracts the tokens, and returns the goods. Alice can also give her bulla to Carol, who can also submit it to Bob, to withdraw the goods, or pass it on to Dave. Bullæ can thus be traded, and they facilitate exchange economy. The tokens used in the bullæ evolved into the earliest forms of money, and the inscriptions on them led to the earliest numeral systems, as well as to the Sumerian cuneiform script, which was one of the earliest alphabets.
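The bulla scheme described above is, in effect, a tamper-evident commitment: the baked envelope binds its issuer to the enclosed tokens, and it can only be opened destructively, so a bulla cannot be redeemed twice. A toy sketch of that logic in Python (my own illustration, not from the paper; the hash merely stands in for the baked clay, and all names are hypothetical):

```python
import hashlib

class Bulla:
    """Toy model of a sealed clay envelope: a tamper-evident
    commitment to a list of deposit tokens."""
    def __init__(self, tokens):
        self._tokens = list(tokens)              # hidden until broken
        self.seal = hashlib.sha256(              # "baked clay": binds contents
            "|".join(self._tokens).encode()).hexdigest()
        self.broken = False

    def break_open(self):
        # Opening is destructive, like breaking baked clay.
        self.broken = True
        return list(self._tokens)

def redeem(bulla, recorded_seal):
    """Bob checks the seal, then breaks the bulla and returns the goods."""
    if bulla.broken or bulla.seal != recorded_seal:
        raise ValueError("tampered or reused bulla")
    return bulla.break_open()

# Alice deposits a lamb and a jar of oil; Bob records the seal.
b = Bulla(["lamb", "oil"])
record = b.seal
# The bulla may change hands (Carol, Dave); whoever submits it
# with an intact seal can withdraw the goods, exactly once.
assert redeem(b, record) == ["lamb", "oil"]
```

The point of the sketch is the transferability noted in the text: the claim travels with the object, not with Alice's identity, which is what let bullæ circulate as an early form of money.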
Figure 1: Tamper protection from 3700 BC

Figure 2: To withdraw her sheep from Bob's secure vault, Alice submits a tamper-proof token from Fig. 1.

Security thus predates literature, science, mathematics, even money.

2.3 Where is security going?

Through history, security technologies evolved gradually, serving the purposes of war and peace, protecting public resources and private property. As computers pervaded all aspects of social life, security became interlaced with computation, and security engineering came to be closely related with computer science. The developments in the realm of security are nowadays inseparable from the developments in the realm of computation. The most notable such development is, of course, cyber space.

Paradigms of computation

In the beginning, engineers built computers, and wrote programs to control computations. The platform of computation was the computer, and it was used to execute algorithms and calculations, allowing people to discover, e.g., fractals, and to invent compilers, that allowed them to write and execute more algorithms and more calculations more efficiently. Then the operating system became the platform of computation, and software was developed on top of it. The era of personal computing and enterprise software broke out. And then the Internet happened, followed by cellular networks, and wireless networks, and ad hoc networks, and mixed networks. Cyber space emerged as the distance-free space of instant, costless communication. Nowadays software is developed to run in cyberspace. The Web is, strictly speaking, just a software system, albeit a formidable one. A botnet is also a software system. As social space blends with cyber space, many social (business, collaborative) processes can be usefully construed as software systems, that run on social networks as hardware. Many social and computational processes become inextricable. Table 1 summarizes the crude picture of the paradigm shifts which led to this remarkable situation.

But as every person got connected to a computer, and every computer to a network, and every network to a network of networks, computation became interlaced with communication, and ceased to be programmable. The functioning of the Web and of web applications is not determined by the code in the same sense as in a traditional software system: after all, web applications do include the human users as a part of their runtime. The fusion of social and computational processes in cyber-social space leads to a new type of information processing, where the purposeful program executions at the network nodes are supplemented by spontaneous data-driven evolution of network links. While the network emerges as the new computer, data and metadata become inseparable, and a new type of security problems arises.

Paradigms of security

In early computer systems, security tasks mainly concerned sharing of the computing resources. In computer networks, security goals expanded to include information protection. Both computer security and information security essentially depend on a clear distinction between the secure areas and the insecure areas, separated by a security perimeter. Security engineering caters for computer security and for information security by providing the tools to build the security perimeter. In cyber space, the secure areas are separated from the insecure areas by the "walls" of cryptography; and they are connected by the "gates" of cryptographic protocols.[1] But as networks of computers and devices spread through physical and social spaces, the distinctions between the secure and the insecure areas become blurred. And in such areas of cyber-social space, information processing does not yield to programming, and cannot be secured just by cryptography and protocols. What else is there?

3. A SECOND-HAND BUT ALMOST-NEW SECURITY PARADIGM: KNOW YOUR ENEMY

3.1 Security beyond architecture

Let us take a closer look at the paradigm shift to postmodern cyber security in Table 2. It can be illustrated as the shift from Fig. 3 to Fig. 4. The fortification in Fig. 3 represents the view that security is in essence an architectural task. A fortress consists of walls and gates, separating the secure area within from the insecure area outside.

[1] This is, of course, a blatant oversimplification, as are many other statements I make. In a sense, every statement is an oversimplification of reality, abstracting away the matters deemed irrelevant. The gentle reader is invited to squint whenever any of the details that I omit do seem relevant, and add them to the picture. The shape of a forest should not change when some trees are enhanced.
age           ancient times             middle ages               modern times
platform      computer                  operating system          network
applications  Quicksort, compilers      MS Word, Oracle           WWW, botnets
requirements  correctness, termination  liveness, safety          trust, privacy
tools         programming languages     specification languages   scripting languages

Table 1: Paradigm shifts in computation

age           middle ages                  modern times                postmodern times
space         computer center              cyber space                 cyber-social space
assets        computing resources          information                 public and private resources
requirements  availability, authorization  integrity, confidentiality  trust, privacy
tools         locks, tokens, passwords     cryptography, protocols     mining and classification

Table 2: Paradigm shifts in security

The boundary between these two areas is the security perimeter. The secure area may be further subdivided into the areas of higher security and the areas of lower security. In cyber space, as we mentioned, the walls are realized using crypto systems, whereas the gates are authentication protocols. But as every fortress owner knows, the walls and the gates are not enough for security: you also need some soldiers to defend it, and some weapons to arm the soldiers, and some craftsmen to build the weapons, and so on. Moreover, you also need police and judges to maintain security within the fortress. They take care of the dynamic aspects of security. These dynamic aspects arise from the fact that sooner or later, the enemies will emerge inside the fortress: they will scale the walls at night (i.e. break the crypto), or sneak past the gatekeepers (break the protocols), or build up trust and enter with honest intentions, and later defect to the enemy; or enter as moles, with the intention to strike later. In any case, security is not localized at the security perimeters of Fig. 3, but evolves in-depth, like on Fig. 4, through social processes, like trust, privacy, reputation, influence.

Figure 3: Static security

Figure 4: Dynamic security

In summary, besides the methods to keep the attackers out, security is also concerned with the methods to deal with the attackers once they get in. Security researchers have traditionally devoted more attention to the former family of methods. Insider threats have attracted a lot of attention recently, but a coherent set of research methods is yet to emerge.

Interestingly, though, there is a sense in which security becomes an easier task when the attacker is in. Although unintuitive at first sight, this idea becomes natural when security processes are viewed in a broad context of the information flows surrounding them (and not only with respect to the data designated to be secret or private). To view security processes in this broad context, it is convenient to model them as games of incomplete information [4], where the players do not have enough information to predict the opponent's behavior. For the moment, let me just say that the two families of security methods (those to keep the attackers out, and those to catch them when they are in) correspond to two families of strategies in certain games of incomplete information, and turn out to have quite different winning odds for the attacker, and for the defender. In fact, they have the opposite winning odds.

In the fortress mode, when the defenders' goal is to keep the attackers out, it is often observed that the attackers only need to find one attack vector to enter the fortress, whereas the defenders must defend all attack vectors to prevent them. When the battle switches to the dynamic mode, and the defense moves inside, then the defenders only need to find one marker to recognize and catch the attackers, whereas the attackers must cover all their markers. This strategic advantage is also the critical aspect of the immune response, where the invading organisms are purposely sampled and analyzed for chemical markers. Some aspects of this observation have, of course, been discussed within the framework of biologically inspired security. Game theoretic modeling seems to be opening up a new dimension in this problem space. We present a sketch to illustrate this new technical and conceptual direction.

Figure 5: Fortification

Figure 6: Honeypot

3.2 The game of attack vectors

Arena. Two players, the attacker A and the defender D, battle for some assets of value to both of them. They are given equal, disjoint territories, with the borders of equal length, and equal amounts of force, expressed as two vector fields distributed along their respective borders. The players can redistribute the forces and move the borders of their territories. The territories can thus take any shapes and occupy any areas where the players may move them, obeying the constraints that

(i) the length of the borders of both territories must be preserved, and

(ii) the two territories must remain disjoint, except that they may touch at the borders.

It is assumed that the desired asset Θ is initially held by the defender D. Suppose that storing this asset takes an area of size θ. The defender's goal is thus to maintain a territory pD with an area pD ≥ θ. The attacker's goal is to decrease the size of pD below θ, so that the defender must release some of the asset Θ. To achieve this, the attacker A must bring his[2] forces to defender D's borders, and push into his territory. A position in the game can thus be something like Fig. 5.

Game. At each step in the game, each player makes a move by specifying a distribution of his forces along his borders. Both players are assumed to be able to redistribute their forces with equal agility. The new force vectors meet at the border, they add up, and the border moves along the resulting vector. So if the vectors are, say, in the opposite directions, the forces subtract and the border is pushed by the greater vector.

The players observe each other's positions and moves in two ways:

(a) Each player knows his own moves, i.e. distributions, and sees how his borders change. From the change in the previous move, he can thus derive the opponent's current distribution of the forces along the common part of the border.

(b) Each player sees all movement in the areas enclosed within his territory, i.e. observes any point on a straight line between any two points that he controls. That means that each player sees the opponent's next move at all points that lie within the convex hull of his territory, which we call his range.

According to (b), the position in Fig. 5 allows A to see D's next move. D, on the other hand, only gets to know A's move according to (a), when his own border changes. This depicts, albeit very crudely, the information asymmetry between the attacker and the defender.

Question. How should rational players play this game?

3.2.1 Fortification strategy

Goals. Since each player's total force is divided by the length of his borders, the maximal area defensible by a given force has the shape of a disk. All other shapes with boundaries of the same length enclose smaller areas. So D's simplest strategy is to acquire and maintain the smallest disk shaped territory of size θ.[3] This is the fortification strategy: D only responds to A's moves.

A's goal is, on the other hand, to create "dents" in D's territory pD, since the area of pD decreases most when its convexity is disturbed. If a dent grows deep enough to reach across pD, or if two dents in it meet, then pD disconnects in two components. Given a constant length of the border, it is easy to see that the size of the enclosed area decreases exponentially as it gets broken up. In this way, the area enclosed by a border of given length can be made arbitrarily small.

But how can A create dents? Wherever he pushes, the defender will push back. Since their forces are equal and constant, increasing the force along one vector decreases the force along another vector.

Optimization tasks. To follow the fortification strategy, D just keeps restoring pD to a disk of size θ. To counter D's defenses, A needs to find out where they are the weakest.

[2] I hope no one minds that I will be using "he" for both Attacker and Defender, in an attempt to avoid distracting connotations.

[3] This is why the core of a medieval fortification was a round tower with a thick wall and a small space inside. The fortress itself often is not round, because the environment is not flat, or because the straight walls were easier to build; but it is usually at least convex. Later fortresses, however, had protruding towers — to attack the attacker. Which leads us beyond the fortification strategy...
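The geometric core of the fortification strategy is the isoperimetric inequality: a closed border of length P encloses at most P²/(4π), and only the disk attains that bound. Once the border is broken into separate loops, the total enclosable area drops fast: n fragments can jointly enclose only a 1/n fraction of the disk's area, so dents that keep doubling the number of fragments shrink the territory exponentially in the number of splits. A quick numerical check (my illustration, not the paper's):

```python
import math

def max_area(perimeter):
    # Isoperimetric inequality: a closed curve of length P encloses
    # at most P**2 / (4 * pi); the disk attains the bound.
    return perimeter ** 2 / (4 * math.pi)

P = 100.0
print("one disk:", round(max_area(P), 2))

# Splitting the same total border length into n separate loops
# divides the best achievable total area by n:
for k in range(4):
    n = 2 ** k                      # each round of denting doubles the fragments
    total = n * max_area(P / n)
    print(n, "fragments:", round(total, 2))

# k rounds of doubling leave at most max_area(P) / 2**k:
assert math.isclose(8 * max_area(P / 8), max_area(P) / 8)
```

This is also why, as footnote [3] notes, the core of a medieval fortification was round: for a fixed wall length, the disk is the unique area-maximizing shape.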
He can observe this wherever D's territory pD is within A's range, i.e. contained in the convex hull of pA. So A needs to maximize the intersection of his range with D's territory. Fig. 5 depicts a position where this is achieved: D is under A's siege. It embodies the Fortification Principle, that the defender must defend all attack vectors, whereas the attacker only needs to select one. For a fast push, A randomly selects an attack vector, and waits for D to push back. Strengthening D's defense along one vector weakens it along another one. Since all of D's territory is within A's range, A sees where D's defense is the weakest, and launches the next attack there. In contrast, D's range is initially limited to his own disk shaped territory. So D only "feels" A's pushes when his own borders move. At each step, A pushes at D's weakest point, and creates a deeper dent. A does enter into D's range, but D's fortification strategy makes no use of the information that could be obtained about A. The number of steps needed to decrease pD below θ depends on how big the forces are and how small the contested areas are.

3.2.2 Adaptation strategy

What can D do to avoid the unfavorable outcome of the fortification strategy? The idea is that he should learn to know his enemy: he should also try to shape his territory to maximize the intersection of his range with A's territory. D can lure A into his range simply by letting A dent his territory. This is the familiar honeypot approach, illustrated on Fig. 6. Instead of racing around the border to push back against every attack, D now gathers information about A's next moves within his range. If A maintains the old siege strategy, and pushes to decrease D's territory, he will accept D's territory, and enter into his range more and more.

Fig. 7 depicts a further refinement of D's strategy, where the size of D's territory, although it is still his main goal in the game, is assigned a smaller weight than the size of A's territory within D's range. This is the adaptation strategy. The shape of D's territory is determined by the task of gathering the information about A. If A follows the final part of his siege strategy, he will accept to be observed in exchange for the territory, and the final position depicted on Fig. 8 will be reached. Here A's attacks are observed and prevented. D wins. The proviso is that D has enough territory to begin with, so that there is enough to store his assets after he changes its shape in order to control A. Another proviso is that A blindly sticks with his siege strategy.

To prevent this outcome, A will thus need to refine his strategy, and not release D from his range, or enter his range so easily. However, if he wants to decrease D's territory, A will not be able to avoid entering D's range altogether. So both players' strategy refinements will evolve methods to trade territory for information, making increasingly efficient use of the available territories. Note that the size of D's territory must not drop below θ, whereas the size of A's territory can, but then he can only store a part of whatever assets he may force D to part with. A formalism for a mathematical analysis of this game is sketched in the Appendix.

Figure 7: Sampling

Figure 8: Adaptation

3.3 What does all this mean for security?

The presented toy model provides a very crude picture of the evolution of defense strategies from fortification to adaptation. Intuitively, Fig. 5 can be viewed as a fortress under siege, whereas Fig. 8 can be interpreted as a macrophage localizing an invader. The intermediate pictures show the adaptive immune system luring the invader and sampling his chemical markers.

But there is nothing exclusively biological about the adaptation strategy. Figures 5–8 could also be viewed entirely in the context of Figures 3–4, and interpreted as the transition from the medieval defense strategies to modern political ideas. Fig. 8 could be viewed as a depiction of the idea of "preemptive siege": while the medieval rulers tried to keep their enemies out of their fortresses, some of the modern ones try to keep them in their jails. The evolution of strategic thinking illustrated on Figures 5–8 is pervasive in all realms of security, i.e. wherever the adversarial behaviors are a problem, including cyber-security.

And although the paradigm of keeping an eye on your enemies is familiar, the fact that it reverts the odds of security and turns them in favor of the defenders does not seem to have received enough attention. It opens up a new game theoretic perspective on security, and suggests a new tool for it.

4. A BRAND NEW SECURITY PARADIGM: APPLIED SECURITY BY OBSCURITY

4.1 Gaming security basics

Games of information. In games of luck, each player has a type, and some secrets. The type determines the player's preferences and behaviors. The secrets determine the player's state. E.g., in poker, the secrets are the cards in the player's hand, whereas her type consists of her risk aversion, her gaming habits etc. Imperfect information means that all players' types are public information, whereas their states are unknown, because their secrets are private. In games of incomplete information, both the players' types and their secrets are unknown. The basic ideas and definitions of complete and incomplete information in games go all the way back to von Neumann and Morgenstern [40]. The ideas and techniques for modeling incomplete information are due to Harsanyi [18], and constitute an important part of game theory [27, 17, 4].

Security by secrecy. If cryptanalysis is viewed as a game, then the algorithms used in a crypto system can be viewed as the type of the corresponding player. The keys are, of course, its secrets. In this framework, Claude Shannon's slogan that "the enemy knows the system" asserts that cryptanalysis should be viewed as a game of imperfect information. Since the type of the crypto system is known to the enemy, it is not a game of incomplete information. Another
statement of the same imperative is the Kerckhoffs' slogan that "there is no security by obscurity". Here the obscurity refers to the type of the system, so the slogan thus suggests that the security of a crypto system should only depend on the secrecy of its keys, and remain secure if its type is known. In terms of physical security, both slogans thus say that the thief should not be able to get into the house without the right key, even if he knows the mechanics of the lock. The key is the secret, the lock is the type.

Security by obscurity. And while all seems clear, and we all pledge allegiance to the Kerckhoffs' Principle, the practices of security by obscurity abound. E.g., besides the locks that keep the thieves out, many of us use some child-proof locks, to protect toddlers from dangers. A child-proof lock usually does not have a key, and only provides protection through the obscurity of its mechanism.

On the cryptographic side, security by obscurity remains one of the main tools, e.g., in Digital Rights Management (DRM), where the task is to protect the digital content from its intended users. So our DVDs are encrypted to prevent copying; but the key must be on each DVD, or else the DVD could not be played. In order to break the copy protection, the attacker just needs to find out where to look for the key; i.e. he needs to know the system used to hide the key. For a sophisticated attacker, this is no problem; but the majority is not sophisticated. The DRM is thus based on the second-hand but almost-new paradigm from the preceding section: the DVD designers study the DVD users and hide the keys in obscure places. From time to time, the obscurity wears out, by an advance in reverse engineering, or by a lapse of the defenders' attention.⁴ Security is then restored by analyzing the enemy, and either introducing new features to stall the ripping software, or by dragging the software distributors to court. Security by obscurity is an ongoing process, just like all of security.

4.2 Logical complexity

What is the difference between keys and locks? The conceptual problem with the Kerckhoffs Principle, as the requirement that security should be based on secret keys, and not on obscure algorithms, is that it seems inconsistent, at least at first sight, with the Von Neumann architecture of our computers, where programs are represented as data. In a computer, both a key and an algorithm are strings of bits. Why can I hide a key, and cannot hide an algorithm? More generally, why can I hide data, and cannot hide programs?

Technically, the answer boils down to the difference between data encryption and program obfuscation. The task of encryption is to transform a data representation in such a way that it can be recovered if and only if you have a key. The task of obfuscation is to transform a program representation so that the obfuscated program runs roughly the same as the original one, but the original code (or some secrets built into it) cannot be recovered. Of course, the latter is harder, because encrypted data just need to be secret, whereas an obfuscated program needs to be secret and to run like the original program. In [5], it was shown that some programs must disclose the original code in order to perform the same function (and they disclose it in a nontrivial way, i.e. not by simply printing out their own code). The message seems consistent with the empirical evidence that reverse engineering is, on the average,⁵ effective enough that you don't want to rely upon its hardness. So it is much easier to find out the lock mechanism than to find the right key, even in the digital domain.

One-way programming? The task of an attacker is to construct an attack algorithm. For this, he needs to understand the system. The system code may be easy to reverse engineer, but it may be genuinely hard to understand, e.g. if it is based on a genuinely complex mathematical construction. And if it is hard to understand, then it is even harder to modify into an attack, except in very special cases. The system and the attack can be genuinely complex to construct, or to reconstruct from each other, e.g. if the constructions involved are based on genuinely complex mathematical constructions. This logical complexity of algorithms is orthogonal to their computational complexity: an algorithm that is easy to run may be hard to construct, even if the program resulting from that construction is relatively succinct (like e.g. [2]); whereas an algorithm that requires a small effort to construct may, of course, require a great computational effort when run.

The different roles of computational and of logical complexities in security can perhaps be pondered on the following example. In modern cryptography, a system C would be considered very secure if an attack algorithm A_C on it would yield a proof that P = NP. But how would you feel about a crypto system D such that an attack algorithm A_D would yield a proof that P ≠ NP? What is the difference between the reductions

A_C ⟹ P = NP    and    A_D ⟹ P ≠ NP ?

Most computer scientists believe that P ≠ NP is true. If P = NP is thus false, then no attack on C can exist, whereas an attack on D may very well exist. On the other hand, after many decades of efforts, the best minds of mankind did not manage to construct a proof that P ≠ NP. An attacker on the system D, whose construction of A_D would provide us with a proof of P ≠ NP, would be welcomed with admiration and gratitude.

Since proving (or disproving) P = NP is worth a Clay Institute Millennium Prize of $1,000,000, the system D seems secure enough to protect a bank account with $900,000. If an attacker takes your money from it, he will leave you with a proof worth much more. So the logical complexity of the system D seems to provide enough obscurity for a significant amount of security!

⁴ The DVD Content Scramble System (CSS) was originally reverse engineered to allow playing DVDs on Linux computers. This was possibly facilitated by an inadvertent disclosure from the DVD Copy Control Association (CCA). DVD CCA pursued the authors and distributors of the Linux DeCSS module through a series of court cases, until the case was dismissed in 2004 [15]. Ironically, the cryptography used in DVD CSS has been so weak, in part due to the US export controls at the time of design, that any computer fast enough to play DVDs could find the key by brute force within 18 seconds [38]. This easy cryptanalytic attack was published before DeCSS, but seemed too obscure for everyday use.

⁵ However, the International Obfuscated C Code Contest [21] has generated some interesting and extremely amusing work.
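The DRM predicament above (the decryption key must ship with the content, hidden somewhere the player can find it) can be illustrated with a minimal sketch. The XOR cipher and the fixed key offset below are hypothetical stand-ins for illustration only, not the actual CSS design:

```python
# A toy illustration of the DRM dilemma: the key is embedded in the disc
# itself, so its secrecy rests entirely on the obscurity of its hiding place.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; stands in for the real cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY_OFFSET = 7  # the "obscure place": the key is stored at a fixed offset

def master_disc(content: bytes, key: bytes) -> bytes:
    """Encrypt the content and embed the key at the obscure offset."""
    blob = xor_bytes(content, key)
    return blob[:KEY_OFFSET] + key + blob[KEY_OFFSET:]

def licensed_player(disc: bytes, key_len: int) -> bytes:
    """The player knows the hiding scheme, so it can always recover the key."""
    key = disc[KEY_OFFSET:KEY_OFFSET + key_len]
    blob = disc[:KEY_OFFSET] + disc[KEY_OFFSET + key_len:]
    return xor_bytes(blob, key)

disc = master_disc(b"feature film", b"\x13\x37")
assert licensed_player(disc, 2) == b"feature film"
```

An attacker who reverse engineers the player decrypts with exactly the same three lines as the licensed player: once the hiding scheme is known, the key itself provides no further protection.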
But what is logical complexity? Computational complexity of a program tells how many computational steps (counting them in time, memory, state changes, etc.) the program takes to transform its input into its output. Logical complexity is not concerned with the execution of the program, but with its logical construction. Intuitively, if computational complexity of a program counts the number of computational steps to execute it on an input of a given length, its logical complexity should count the number of computational steps that it takes to derive the program in a desired format, from some form of specification. This operation may correspond to specification refinement and code generation; or it may correspond to program transformation into a desired format, if the specification is another program; or it may be a derivation of an attack, if the specification is the system.

Formally, this idea of logical complexity seems related to the notion of logical depth, as developed in the realm of algorithmic information theory [10, 26, 6]. In the usual formulation, logical depth is viewed as a complexity measure assigned to numbers, viewed as programs, or more precisely as the Gödel-Kleene indices, which can be executed by a universal Turing machine, and thus represent partial recursive functions p, viewed as the numbers code(p) which are executable by a universal Turing machine. The logical depth of a program p is defined to be the computational complexity of the simplest program that outputs code(p). The task of logical complexity, as a measure of the difficulty of attacker's algorithmic tasks, is to lift this idea from the realm of executable encodings of programs, to their executable logical specifications [19, 8], viewed as their "explanations". The underlying idea is that for an attacker on a logically complex secure system, it may not suffice to have executable code of that system, and an opportunity to run it; in order to construct an attack, the attacker needs to "understand" the system, and specify an "explanation". But what does it mean to "understand" the system? How can we recognize an "explanation" of a program? A possible interpretation is that a specification explains a program if it allows predicting the outputs of the program without running it, and it requires strictly less computation on the average.⁶ Let us try to formalize this idea.

Logical depth is formalized in terms of a universal Turing machine U, which takes the code(p) of a program p and for any input x outputs

U(code(p), x) = p(x)    (1)

To formalize logical complexity, we also need a code generator Γ, which for every program p with a logical specification spec(p) generates

Γ(spec(p)) = code(p)    (2)

so that U(Γ(spec(p)), x) = p(x) holds for all inputs x again. Composing the generator of the executable code with the universal Turing machine yields a universal specification evaluator G, defined by

G(φ, x) = U(Γ(φ), x)    (3)

This universal specification evaluator is analogous to the universal Turing machine in that it satisfies the equation

G(spec(p), x) = p(x)    (4)

analogous with (1), for all programs p and inputs x. So G executes spec(p) just like U executes code(p). Moreover, we require that G is a homomorphism with respect to some suitable operations, i.e. that it satisfies something like

G(spec(p) ⊕ spec(q), x) = p(x) + q(x)    (5)

where + is some operation on data, and ⊕ is a logical operation on programs. In general, G may satisfy such requirements with respect to several such operations, possibly of different arities. In this way, the structure of a program specification on the left hand side will reflect the structure of the program outputs on the right hand side. If a complex program p has spec(p) satisfying (5), then we can analyze and decompose spec(p) on the left hand side of (5), learn a generic structure of its outputs on the right hand side, and predict the behavior of p without running it, provided that we know the behaviors of the basic program components.

The intended difference between the Gödel-Kleene code(p) in (1), and the executable logical specification spec(p) in (4) is that each code(p) must be evaluated on each x to tell p(x), whereas spec(p) allows us to derive p(x), e.g., from q(x) if spec(p) = Φ(spec(q)) and x = φ(y) for an easy program transformation Φ and data operation φ satisfying G(Φ(spec(q)), y) = φ(q(y)). (See a further comment in Sec. 5.)

The notion of logical complexity can thus be construed as a strengthening of the notion of logical depth by the additional requirement that the execution engine G should preserve some composition operations. I conjecture that a logical specification framework for measuring logical complexity of programs can be realized as a strengthened version of the Gödel-Kleene-style program encodings, using the available program specification and composition tools and formalisms [9, 14, 34, 29, 33]. Logical complexity of a program p could then be defined as the computational complexity of the simplest executable logical specification φ that satisfies G(φ, x) = p(x) for all inputs x.

One-way programmable security? If a proof of P ≠ NP is indeed hard to construct, as the years of efforts suggest, then the program implementing an attack algorithm A_D as above, satisfying A_D ⟹ P ≠ NP, must be logically complex. Although the system D may be vulnerable to a computationally feasible attack A_D, constructing this attack may be computationally unfeasible. For instance, the attacker might try to derive A_D from the given program D. Deriving a program from another program must be based on "understanding", and some sort of logical homomorphism, mapping the meanings in a desired way. The attacker should thus try to extract spec(D) from D and then to construct spec(A_D) from spec(D). However, if D is logically complex, it may be hard to extract spec(D), and "understand" D, notwithstanding the fact that it may be perfectly feasible to reverse engineer code(D) from D. In this sense, generating a logically complex program may be a one-way operation, with an unfeasible inverse. A simple algorithm A_D, and with it a simple proof of P ≠ NP, may, of course, exist. But the experience of looking for the latter suggests that the risk is low. Modern cryptography is based on accepting such risks.

Approximate logical specifications. While the actual technical work on the idea of logical complexity remains for future work, it should be noted that the task of providing a realistic model of attacker's logical practices will not be accomplished by realizing a universal execution engine G satisfying the homomorphism requirements with respect to some suitable logical operations.

⁶ This requirement should be formalized to capture the idea that the total cost of predicting all values of the program is essentially lower than the total cost of evaluating the program on all of its values.
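The homomorphism requirement (5) can be sketched concretely. In the minimal sketch below, logical specifications are represented as tagged syntax trees (a hypothetical encoding, not taken from the paper), and the evaluator G interprets the logical operation ⊕ on specifications as the operation + on data, so that the structure of a specification mirrors the structure of the program's outputs:

```python
# A minimal sketch of the universal specification evaluator G of (3)-(5);
# representing spec(p) as a tagged tuple is a hypothetical choice.
def G(spec, x):
    tag = spec[0]
    if tag == "basic":            # a leaf: an executable program component
        _, p = spec
        return p(x)
    if tag == "oplus":            # the logical operation (+) on programs...
        _, left, right = spec
        return G(left, x) + G(right, x)   # ...mapped homomorphically to + on data
    raise ValueError(f"unknown spec constructor: {tag}")

# spec(p) and spec(q) for two basic components
spec_p = ("basic", lambda x: 2 * x)
spec_q = ("basic", lambda x: x + 1)

# Equation (5): G(spec(p) (+) spec(q), x) = p(x) + q(x)
composite = ("oplus", spec_p, spec_q)
assert G(composite, 3) == (2 * 3) + (3 + 1)
```

Decomposing the tree on the left of (5) predicts the shape of the outputs on the right, which is the sense in which a specification "explains" the program without running the composite as a black box.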
For a pragmatic attacker, it does not seem rational to analyze a program p all the way to spec(p) satisfying (4), when most of the algorithms that he deals with are randomized. He will thus probably be satisfied with spec_ε(p) such that

Prob[ G(spec_ε(p), x) = p(x) ] ≥ ε    (6)

4.3 Logical complexity of gaming security

Logical specifications as beliefs. Approximate logical specifications bring us back to security as a game of incomplete information. In order to construct a strategy, each player in such a game must supplement the available information about the opponent by some beliefs. Mathematically, these beliefs have been modeled, ever since [18], as probability distributions over the opponent's payoff functions. More generally, in games not driven by payoffs, beliefs can be modeled as probability distributions over the possible opponent's behaviors, or algorithms. In the framework described above, such a belief can thus be viewed as an approximate logical specification of the opponent's algorithms. An approximation spec_ε(p) can thus be viewed not as a deterministic specification probably close to p, but as a probability distribution containing some information about p.

Since both players in a game of incomplete information are building beliefs about each other, they must also build beliefs about each other's beliefs: A formulates a belief about B's belief about A, and B formulates a belief about A's belief about B. And so on to infinity. This is described in more detail in the Appendix, and in still more detail in [4]. These hierarchies of beliefs are formalized as probability distributions over probability distributions. In the framework of logical complexity, the players thus specify approximate specifications about each other's approximate specifications. Since these specifications are executable by the universal evaluator G, they lead into an algorithmic theory of incomplete information, since players' belief hierarchies are now computable, i.e. they consist of samplable probability distributions. Approximate specifications, on the other hand, introduce an interesting twist into algorithmic information theory: while code(code(p)) could not be essentially simpler than code(p) (because then we could derive for p simpler code than code(p)), spec_ε(spec_ε(p)) can be simpler than spec_ε(p).

5. FINAL COMMENTS

On games of security and obscurity. The first idea of this paper is that security is a game of incomplete information: by analyzing your enemy's behaviors and algorithms (subsumed under what game theorists call his type), and by obscuring your own, you can improve the odds of winning this game.

This claim contradicts Kerckhoffs' Principle that there is no security by obscurity, which implies that security should be viewed as a game of imperfect information, by asserting that security is based on players' secret data (e.g. cards), and not on their obscure behaviors and algorithms.

I described a toy model of a security game which illustrates that security is fundamentally based on gathering and analyzing information about the type of the opponent. This model thus suggests that security is not a game of imperfect information, but a game of incomplete information. If confirmed, this claim implies that security can be increased not only by analyzing the attacker's type, but also by obscuring the defender's type.

On logical complexity. The second idea of this paper is the idea of one-way programming, based on the concept of logical complexity of programs. The devil of the logical complexity proposal lies in the "detail" of stipulating the logical operations that need to be preserved by the universal specification evaluator G. These operations determine what it means to specify, to understand, to explain an algorithm. One stipulation may satisfy some people, a different one others. A family of specifications structured for one G may be more suitable for one type of logical transformations and attack derivations, another family for another one. But even if we give up the effort to tease out logical structure of algorithms through the homomorphism requirement, and revert from logical complexity to logical depth, from homomorphic evaluators G to universal Turing machines U, and from spec(p) to code(p), even the existing techniques of algorithmic information theory alone may suffice to develop one-way programs, easy to construct, but hard to deconstruct and transform. A system programmed in that way could still allow computationally feasible, but logically unfeasible attacks.

On security of profiling. Typing and profiling are frowned upon in security. Leaving aside the question whether gathering information about the attacker, and obscuring the system, might be useful for security or not, these practices remain questionable socially. The false positives arising from such methods cause a lot of trouble, and tend to just drive the attackers deeper into hiding.

On the other hand, typing and profiling are technically and conceptually unavoidable in gaming, and remain respectable research topics of game theory. Some games cannot be played without typing and profiling the opponents. Poker and the bidding phase of bridge are all about trying to guess your opponents' secrets by analyzing their behaviors. Players do all they can to avoid being analyzed, and many prod their opponents to sample their behaviors. Some games cannot be won by mere uniform distributions, without analyzing opponents' biases.

Both game theory and the immune system teach us that we cannot avoid profiling the enemy. But both the social experience and the immune system teach us that we must set the thresholds high to avoid the false positives that the profiling methods are so prone to. Misidentifying the enemy leads to auto-immune disorders, which can be equally pernicious socially, as they are to our health.

Acknowledgement. As always, Cathy Meadows provided very valuable comments.

6. REFERENCES

[1] Samson Abramsky, Pasquale Malacaria, and Radha Jagadeesan. Full completeness for PCF. Information and Computation, 163:409–470, 2000.
[2] Manindra Agrawal, Neeraj Kayal, and Nitin Saxena. PRIMES is in P. Annals of Mathematics, 160(2):781–793, 2004.
[3] George A. Akerlof. The market for 'lemons': Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3):488–500, 1970.
[4] Robert J. Aumann and Aviad Heifetz. Incomplete information. In Robert J. Aumann and Sergiu Hart, editors, Handbook of Game Theory, Volume 3, pages 1665–1686. Elsevier/North Holland, 2002.
[5] Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil P. Vadhan, and Ke Yang. On the (im)possibility of obfuscating programs. In Proceedings of the 21st Annual International Cryptology Conference on Advances in Cryptology, CRYPTO '01, pages 1–18, London, UK, 2001. Springer-Verlag.
[6] C. H. Bennett. Logical depth and physical complexity. In A half-century survey on The Universal Turing Machine, pages 227–257, New York, NY, USA, 1988. Oxford University Press, Inc.
[7] Elwyn Berlekamp, John Conway, and Richard Guy. Winning Ways for your Mathematical Plays, volume 1-2. Academic Press, 1982.
[8] Dines Bjørner, editor. Abstract Software Specifications, volume 86 of Lecture Notes in Computer Science. Springer, 1980.
[9] E. Börger and Robert F. Stärk. Abstract State Machines: A Method for High-Level System Design and Analysis. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2003.
[10] Gregory Chaitin. Algorithmic information theory. IBM J. Res. Develop., 21:350–359, 496, 1977.
[11] William Dean. Pitfalls in the use of imperfect information. RAND Corporation Report P-7430, 1988. Santa Monica, CA.
[12] Whitfield Diffie and Martin E. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, 22(6):644–654, 1976.
[13] Lester E. Dubins and Leonard J. Savage. How to Gamble If You Must. McGraw-Hill, New York, 1965.
[14] José Luiz Fiadeiro. Categories for software engineering. Springer, 2005.
[15] Electronic Frontier Foundation. Press releases and court documents on DVD CCA v Bunner Case. w2.eff.org/IP/Video/DVDCCA_case/#bunner-press. (Retrieved on July 31, 2011).
[16] Amanda Friedenberg and Martin Meier. On the relationship between hierarchy and type morphisms. Economic Theory, 46:377–399, 2011. 10.1007/s00199-010-0517-2.
[17] Drew Fudenberg and Jean Tirole. Game Theory. MIT Press, August 1991.
[18] John C. Harsanyi. Games with incomplete information played by Bayesian players, I-III. Management Science, 14:159–182, 320–334, 486–502, 1968.
[19] C. A. R. Hoare. Programs are predicates. Phil. Trans. R. Soc. Lond., A 312:475–489, 1984.
[20] J. Martin E. Hyland and C.-H. Luke Ong. On full abstraction for PCF: I, II, and III. Inf. Comput., 163(2):285–408, 2000.
[21] International Obfuscated C Code Contest. www0.us.ioccc.org/main.html. (Retrieved on 31 July 2011).
[22] Auguste Kerckhoffs. La cryptographie militaire. Journal des Sciences Militaires, IX:5–38, 161–191, 1883.
[23] Stephen C. Kleene. Realizability: a retrospective survey. In A.R.D. Mathias and H. Rogers, editors, Cambridge Summer School in Mathematical Logic, volume 337 of Lecture Notes in Mathematics, pages 95–112. Springer-Verlag, 1973.
[24] A. N. Kolmogorov. On tables of random numbers (Reprinted from "Sankhya: The Indian Journal of Statistics", Series A, Vol. 25 Part 4, 1963). Theor. Comput. Sci., 207(2):387–395, 1998.
[25] Barbara Kordy, Sjouke Mauw, Sasa Radomirovic, and Patrick Schweitzer. Foundations of attack-defense trees. In Pierpaolo Degano, Sandro Etalle, and Joshua D. Guttman, editors, Formal Aspects in Security and Trust, volume 6561 of Lecture Notes in Computer Science, pages 80–95. Springer, 2010.
[26] Leonid A. Levin. Randomness conservation inequalities: Information and independence in mathematical theories. Information and Control, 61:15–37, 1984.
[27] Jean F. Mertens and Shmuel Zamir. Formulation of Bayesian analysis for games with incomplete information. Internat. J. Game Theory, 14(1):1–29, 1985.
[28] Tyler Moore, Allan Friedman, and Ariel D. Procaccia. Would a 'cyber warrior' protect us: exploring trade-offs between attack and defense of information systems. In Proceedings of the 2010 workshop on New security paradigms, NSPW '10, pages 85–94, New York, NY, USA, 2010. ACM.
[29] Till Mossakowski. Relating CASL with other specification languages: the institution level. Theor. Comput. Sci., 286(2):367–475, 2002.
[30] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
[31] Dusko Pavlovic. A semantical approach to equilibria and rationality. In Alexander Kurz and Andrzej Tarlecki, editors, Proceedings of CALCO 2009, volume 5728 of Lecture Notes in Computer Science, pages 317–334. Springer Verlag, 2009. arxiv.org:0905.3548.
[32] Dusko Pavlovic and Douglas R. Smith. Guarded transitions in evolving specifications. In H. Kirchner and C. Ringeissen, editors, Proceedings of AMAST 2002, volume 2422 of Lecture Notes in Computer Science, pages 411–425. Springer Verlag, 2002.
[33] Dusko Pavlovic and Douglas R. Smith. Software development by refinement. In Bernhard K. Aichernig and Tom Maibaum, editors, Formal Methods at the Crossroads, volume 2757 of Lecture Notes in Computer Science. Springer Verlag, 2003.
[34] Duško Pavlović. Semantics of first order parametric specifications. In J. Woodcock and J. Wing, editors, Formal Methods '99, volume 1708 of Lecture Notes in Computer Science, pages 155–172. Springer Verlag, 1999.
[35] Claude Shannon. Communication theory of secrecy systems. Bell System Technical Journal, 28(4):656–715, 1949.
[36] Ray J. Solomonoff. A formal theory of inductive inference. Part I, Part II. Information and Control, 7:1–22, 224–254, 1964.
[37] Michael Spence. Job market signaling. Quarterly Journal of Economics, 87(3):355–374, 1973.
[38] Frank A. Stevenson. Cryptanalysis of Contents Scrambling System. http://web.archive.org/web/20000302000206/www.dvd-copy.com/news/cryptanalysis_of_contents_scrambling_system.htm, 8 November 1999. (Retrieved on July 31, 2011).
[39] George J. Stigler. The economics of information. Journal of Political Economy, 69(3):213–225, 1961.
[40] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ, 1944.

APPENDIX

A. GAMING SECURITY FORMALISM

Can the idea of applied security by obscurity be realized? To test it, let us first make it more precise in a mathematical model. I first present a very abstract model of strategic behavior, capturing and distinguishing the various families of games studied in game theory, and some families not studied. The model is based on coalgebraic methods, along the lines of [31]. I will try to keep the technicalities at a minimum, and the reader is not expected to know what a coalgebra is.

A.1 Arenas

Definition 1. A player is a pair of sets A = (M_A, S_A), where the elements of M_A represent the moves available to A, and the elements of S_A are the states that A may observe. A simple response Σ : A → B for a player B to a player A is a binary relation

Σ : M_A × S_B² × M_B → {0, 1}

When Σ(a, β, β′, b) = 1, we write (a, β) →_Σ (β′, b), and say that the strategy Σ at B's state β prescribes that B should respond to A's move a by the move b and update his state to β′. The set of B's simple responses to A is written SR(A, B).

A mixed response Φ : A → B for the player B to the player A is a matrix

Φ : M_A × S_B² × M_B → [0, 1]

required to be finitely supported and stochastic in M_A, i.e. for every a ∈ M_A:

• Φ_aββ′b = 0 holds for all but finitely many β, β′ and b, and
• Σ_ββ′b Φ_aββ′b = 1.

When Φ_aββ′b = p, we write (a, β) →_p (β′, b), and say that the strategy Φ at B's state β responds to A's move a with probability p by B's move b, leading him into the state β′. The set of B's mixed responses to A is written MR(A, B).

An arena is a specification of a set of players and a set of responses between them.

Responses compose. Given simple responses Σ : A → B and Γ : B → C, we can derive a response (Σ; Γ) : A → C for the player C against A by taking the player B as a "man in the middle". The derived response is constructed as follows:

  (a, β) →_Σ (β′, b)    (b, γ) →_Γ (γ′, c)
  ----------------------------------------
         (a, γ) →_(Σ;Γ) (γ′, c)

Following the same idea, for the mixed responses Φ : A → B and Ψ : B → C we have the composite (Φ; Ψ) : A → C with the entries

(Φ; Ψ)_aγγ′c = Σ_ββ′b Φ_aββ′b · Ψ_bγγ′c

It is easy to see that these composition operations are associative and unitary, both for the simple and for the mixed responses.

A.2 Games

Arenas turn out to provide a convenient framework for a unified presentation of games studied in game theory [40, 30], mathematical games [7], game semantics [1, 20], and some constructions in-between these areas [13]. Here we shall use them to succinctly distinguish between the various kinds of games with respect to the information available to the players. As mentioned before, game theorists usually distinguish two kinds of players' information:

• data, or positions: e.g., a hand of cards, or a secret number; and
• types, or preferences: e.g., a player's payoff matrix, or a system that he uses, are components of his type.

The games in which the players have private data or positions are the games of imperfect information. The games where the players have private types or preferences, e.g. because they don't know each other's payoff matrices, are the games of incomplete information. See [4] for more about these ideas, [30] for the technical details of the perfect-imperfect distinction, and [17] for the technical details of the complete-incomplete distinction.

But let us see how arenas capture these distinctions, and what all that has to do with security.

A.2.1 Games of perfect and complete information

In games of perfect and complete information, each player has all information about the other player's data and preferences, i.e. payoffs. To present the usual stateless games in normal form, we consider the players A and B whose state spaces are the sets of payoff bimatrices, i.e.

S_A = S_B = (R × R)^(M_A × M_B)

In other words, a state σ ∈ S_A = S_B is a pair of maps σ = (σ^A, σ^B), where σ^A is the M_A × M_B-matrix of A's payoffs: the entry σ^A_ab is A's payoff if A plays a and B plays b. Ditto for σ^B. Each game in the standard bimatrix form corresponds to one element of both state spaces S_A = S_B. It is nominally represented as a state, but this state does not change. The main point here is that both A and B know this element. This allows both of them to determine the best response strategies Σ_A : B → A for A and Σ_B : A → B for B, in the form

(b, σ^A) →_(Σ_A) (σ^A, a)  ⟺  ∀x ∈ M_A. σ^A_xb ≤ σ^A_ab
(a, σ^B) →_(Σ_B) (σ^B, b)  ⟺  ∀y ∈ M_B. σ^B_ay ≤ σ^B_ab

and to compute the Nash equilibria as the fixed points of the composites (Σ_B; Σ_A) : A → A and (Σ_A; Σ_B) : B → B. This is further discussed in [31]. Although the payoff matrices in games studied in game theory usually do not change, so that the corresponding responses fix all states, and each response actually presents a method to respond in a whole family of games, represented by the whole space of payoff matrices, it is interesting to consider, e.g., discounted payoffs in some iterated games, where the full force of the response formalism over the above state spaces is used.
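The best-response strategies and the fixed-point characterization of Nash equilibria can be sketched for a concrete bimatrix game. The 2x2 payoffs below are a hypothetical example (a prisoner's-dilemma-style game), not taken from the paper:

```python
# Best responses Sigma_A, Sigma_B for a 2x2 bimatrix game, and Nash
# equilibria as fixed points of the composite (Sigma_B ; Sigma_A) : A -> A.
M_A, M_B = ["c", "d"], ["c", "d"]
payoff_A = {("c", "c"): 2, ("c", "d"): 0, ("d", "c"): 3, ("d", "d"): 1}
payoff_B = {("c", "c"): 2, ("c", "d"): 3, ("d", "c"): 0, ("d", "d"): 1}

def best_response_A(b):
    """Sigma_A : B -> A, per the condition: forall x. payoff_A[x,b] <= payoff_A[a,b]."""
    return max(M_A, key=lambda a: payoff_A[(a, b)])

def best_response_B(a):
    """Sigma_B : A -> B, symmetrically."""
    return max(M_B, key=lambda b: payoff_B[(a, b)])

# A move a is part of an equilibrium iff it is a fixed point of the composite.
equilibria = [(a, best_response_B(a)) for a in M_A
              if best_response_A(best_response_B(a)) == a]
assert equilibria == [("d", "d")]   # mutual defection is the unique fixed point
```

The composite routes A's move through B as the "man in the middle", exactly as in the response-composition rule above; an equilibrium is a profile that the composite leaves unchanged.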
  • 12. whole family of games, represented by the whole space of where σi ∈ ∆i RMA ×MB . The even components σ2i repre- A A payoff matrices, it is interesting to consider, e.g. discounted sent A’s payoff matrix, A’s belief about B’s belief about A’s payoffs in some iterated games, where the full force of the payoff matrix, A’s belief about B’s belief about A’s belief response formalism over the above state spaces is used. about B’s belief about A’s payoff matrix, and so on. The 2i+1 odd components σA represent A’s belief about B’s payoff A.2.2 Games of imperfect information matrix, A’s belief about B’s belief about A’s belief about B’s Games of imperfect information are usually viewed in ex- payoff matrix, and so on. The meanings of the components tended form, i.e. with nontrivial state changes, because of the state σ B ∈ SB are analogous. players’ private data can then be presented as their private states. Each player now has a set if private positions, PA and PB , which is not visible to the opponent. On the other A.3 Security games hand, both player’s types, presented as their payoff matrices, are still visible to both. So we have We model security processes as a special family of games. It will be a game of imperfect information, since the players SA = PA × (R × R)MA ×MB of security games usually have some secret keys, which are SB = PB × (R × R)MA ×MB presented as the elements of their private state sets PA and PD . The player A is now the attacker, and the player D is E.g., in a game of cards, A’s hand will be an element of PA , the defender. B’s hand will be an element of PB . With each response, each The goal of a security game is not expressed through pay- player updates his position, whereas their payoff matrices offs, but through ”security requirements” Θ ⊆ PD . The usually do not change. 
A.2.3 Games of incomplete information

Games of incomplete information are studied in epistemic game theory [18, 27, 4], which is formalized through knowledge and belief logics. The reason is that each player here only knows his own preferences with certainty, as expressed by his payoff matrix. The opponent's preferences and payoffs are kept in obscurity. In order to anticipate the opponent's behaviors, each player must build some beliefs about the other player's preferences. In the first instance, this is expressed as a probability distribution over the other player's possible payoff matrices. However, the other player also builds beliefs about his opponent's preferences, and his behavior is therefore not entirely determined by his own preferences, but also by his beliefs about his opponent's preferences. So each player also builds some beliefs about the other player's beliefs, which are expressed as a probability distribution over the probability distributions over the payoff matrices. And so on, to infinity. Harsanyi formalized the notion of a player's type as an element of such an information space, which includes each player's payoffs, his beliefs about the other player's payoffs, his beliefs about the other player's beliefs, and so on [18]. Harsanyi's form of games of incomplete information can be presented in the arena framework by taking

    SA = R^(MA×MB) + ∆SB
    SB = R^(MA×MB) + ∆SA

where + denotes the disjoint union of sets, and ∆X is the space of finitely supported probability distributions over X, which consists of the maps p : X → [0,1] such that

    |{x ∈ X | p(x) > 0}| < ∞   and   Σ_{x∈X} p(x) = 1

Resolving the above inductive definitions of SA and SB, we get

    SA = SB = ∏_{i=0}^∞ ∆^i R^(MA×MB)

Here a state σ^A ∈ SA is thus a sequence σ^A = ⟨σ^A_0, σ^A_1, σ^A_2, ...⟩.

The intuition is that the defender D is given a family of assets to protect, and Θ are the desired states, where these assets are protected. The defender's goal is to keep the state of the game in Θ, whereas the attacker's goal is to drive the game outside Θ. The attacker may have additional preferences, expressed by a probability distribution over his own private states PA. We shall ignore this aspect, since it plays no role in the argument here; but it can easily be captured in the arena formalism.

Since the players' goals are not to maximize their revenues, their behaviors are not determined by payoff matrices, but by their response strategies, which we collect in the sets RA(A,D) and RA(D,A). In the simplest case, response strategies boil down to the response maps, and we take RA(A,D) = SR(A,D), or RA(A,D) = MR(A,D). In general, though, A's and D's behavior may not be purely extensional, and the elements of RA(A,D) may be actual algorithms.

While both players surely keep their keys secret, and some parts of the spaces PA and PD are private, they may not know each other's preferences, and may not be given each other's "response algorithms". If they do know them, then both the defender's defenses and the attacker's attacks are achieved without obscurity. However, modern security definitions usually require that the defender defend the system against a family of attacks without querying the attacker about his algorithms. So at least the theoretical attacks are in principle afforded the cloak of obscurity. Since the defender D thus does not know the attacker A's algorithms, we model security games as games of incomplete information, replacing the players' spaces of payoff matrices by the spaces RA(A,D) and RA(D,A) of their response strategies to one another. Like above, A thus only knows PA and RA(D,A) with certainty, and D only knows PD and RA(A,D) with certainty. Moreover, A builds his beliefs about D's data and type, as a probability distribution over PD × RA(A,D), and D builds similar beliefs about A. Since they then also have to build beliefs about each other's beliefs, we have a mutually recurrent definition of the state spaces again:

    SA = PA × RA(D,A) + ∆SD
    SD = PD × RA(A,D) + ∆SA
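To make the constructions above concrete, here is a minimal Python sketch of finitely supported distributions ∆X and of a Harsanyi-style belief hierarchy truncated at depth 2. All names, the truncation depth, and the toy payoff values are our own illustrative assumptions, not part of the formalism:

```python
# Illustrative sketch (not from the paper): finitely supported
# distributions Delta(X) as Python dicts, and a belief hierarchy
# truncated at finite depth.  All names and numbers are made up.

def is_finitely_supported_dist(p, tol=1e-9):
    """p maps outcomes to probabilities; finite support is automatic
    for a dict, so we only check p(x) in [0,1] and sum p(x) = 1."""
    return all(0.0 <= v <= 1.0 for v in p.values()) and \
        abs(sum(p.values()) - 1.0) < tol

# Level 0: the player's own data -- a payoff matrix over M_A x M_B.
payoff_A = {('l', 'l'): 1, ('l', 'r'): -1, ('r', 'l'): -1, ('r', 'r'): 1}

# Level 1: a distribution over (named) candidate opponent matrices,
# i.e. an element of Delta(R^(M_A x M_B)).
belief_1 = {'opponent_matching': 0.7, 'opponent_anti': 0.3}

# Level 2: a distribution over (named) level-1 beliefs, i.e. an
# element of Delta^2(R^(M_A x M_B)).
belief_2 = {'they_expect_matching': 0.5, 'they_expect_anti': 0.5}

# A depth-2 truncation of a type sigma^A = (sigma_0, sigma_1, sigma_2, ...):
type_A = (payoff_A, belief_1, belief_2)
```

Each further level of the hierarchy would simply add another coordinate to the tuple, a distribution over (representatives of) the previous level.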
Resolving the induction again, we now get

    SA = ∏_{i=0}^∞ ∆^{2i}(PA × RA(D,A)) × ∆^{2i+1}(PD × RA(A,D))
    SD = ∏_{i=0}^∞ ∆^{2i}(PD × RA(A,D)) × ∆^{2i+1}(PA × RA(D,A))

The defender's state β ∈ SD is thus a sequence β = ⟨β0, β1, β2, ...⟩, where

  • β0 = ⟨β0^P, β0^RA⟩ ∈ PD × RA(A,D) consists of
      – D's secrets β0^P ∈ PD, and
      – D's current response strategy β0^RA ∈ RA(A,D);
  • β1 ∈ ∆(PA × RA(D,A)) is D's belief about A's secrets and her response strategy;
  • β2 ∈ ∆²(PD × RA(A,D)) is D's belief about A's belief about D's secrets and his response strategy;
  • β3 ∈ ∆³(PA × RA(D,A)) is D's belief about A's belief about D's beliefs, etc.

Each response strategy Σ : A → D prescribes the way in which D should update his state in response to A's observed moves. E.g., if RA(D,A) is taken to consist of relations of the form Λ : MD⁺ × MA⁺ → {0,1}, where MD⁺ is the set of nonempty strings over MD, then D can record longer and longer histories of A's responses to his moves.

Remark. The fact that, in a security game, A's state space SA contains RA(D,A) and RA(A,D) means that each player A is prepared for playing against a particular player D, while D is prepared for playing against A. This reflects the situation in which all security measures are introduced with particular attackers in mind, whereas the attacks are built to attack particular security measures. A technical consequence is that the players' state spaces are defined by inductive clauses, which often lead to complex impredicative structures. This should not be surprising, since even informal security considerations often give rise to complex belief hierarchies, and the formal constructions of epistemic game theory [18, 27, 16] seem like a natural tool to apply.

A.4 The game of attack vectors

We specify a formal model of the game of attack vectors as a game of perfect but incomplete information. This means that the players know each other's positions, but need to learn about each other's type, i.e. plans and methods. The assumption that the players know each other's position could be removed, without changing the outcomes and strategies, by refining the model of the players' observations. But this seems inessential, and we omit it for simplicity.

Let O denote the unit disk in the real plane. If we assume that it is parametrized in polar coordinates, then O = {0} ∪ (0,1] × R/2πZ, where R/2πZ denotes the circle. Let R ⊆ R² be an open domain in the real plane. The players' positions can then be defined as continuous mappings of O into R, i.e.

    PA = PD = R^O

The rules of the game will be such that the attacker's and the defender's positions pA, pD ∈ R^O always satisfy the constraint that

    pA^o ∩ pD^o = ∅

where p^o denotes the interior of the image of p : O → R. The assumption that both players know both their own and the opponent's positions means that both state spaces SA and SD will contain PA × PD as a component. The state spaces are thus

    SA = (PA × PD × RA(D,A)) + ∆SD
    SD = (PA × PD × RA(A,D)) + ∆SA

To start off the game, we assume that the defender D is given some assets to secure, presented as an area Θ ⊆ R. The defender wins as long as his position pD ∈ PD is such that Θ ⊆ pD^o. Otherwise the defender loses. The attacker's goal is to acquire the assets, i.e. to maximize the area Θ ∩ pA^o. The players are given equal forces to distribute along the borders of their respective territories. Their moves are the choices of these distributions, i.e.

    MA = MD = ∆(∂O)

where ∂O is the unit circle, viewed as the boundary of O, and ∆(∂O) denotes the distributions along ∂O, i.e. the measurable functions m : ∂O → [0,1] such that ∫_{∂O} m = 1.

How will D update his state after a move? This can be specified as a simple response Σ : A → D. Since the point of this game is to illustrate the need for learning about the opponent, let us leave out the players' type information for the moment, and assume that the players only look at their positions, i.e. SA = SD = PA × PD. To specify Σ : A → D, we must thus determine the relation

    ⟨mA, ⟨pA, pD⟩⟩  —Σ→  ⟨⟨p′A, p′D⟩, mD⟩

for any given mA ∈ MA, pA ∈ PA, and pD ∈ PD. We describe the updates p′A and p′D for an arbitrary mD, and leave it to D to determine which mDs are the best responses for him. So given the previous positions and both players' moves, the new position p′A will map a point on the boundary of the circle, viewed as a unit vector x ∈ O, into the vector p′A(x) in R ⊆ R² as follows.

  • If (1+s)pA(x) ≠ (1+t)pD(y) holds for all y ∈ O, all s ∈ [0, mA(x)] and all t ∈ [0, mD(y)], then set

        p′A(x) = (1 + mA(x))/2 · pA(x)

  • Otherwise, let y ∈ O, s ∈ [0, mA(x)] and t ∈ [0, mD(y)] be the smallest numbers such that (1+s)pA(x) = (1+t)pD(y). Writing m′A(x) = mA(x) − s and m′D(y) = mD(y) − t, we set

        p′A(x) = (1 + m′A(x))/2 · pA(x) + (1 + m′D(y))/2 · pD(y)

This means that the player A will push her boundary by mA(x) in the direction pA(x) if she does not encounter D at any point during that push. If somewhere during that push she does encounter D's territory, then they will push against each other, i.e. their push vectors will compose. More precisely, A will push in the direction pA(x) with the force m′A(x) = mA(x) − s that remains to her after the initial free push by s; but moreover, her boundary will also be pushed in the direction pD(y) by the boundary of D's territory, with the force m′D(y) = mD(y) − t that remains to D after his initial free push by t. Since D's update is defined analogously, a common boundary point will arise, i.e. the players' borders will remain adjacent. When the players move next, there will be no free initial pushes, i.e. s and t will be 0, and the update vectors will compose in full force.

How do the players compute the best moves? The attacker's goal is, of course, to form a common boundary and to push towards Θ, preferably from a direction where the defender does not defend. The defender's goal is to push back. As explained in the text, the game is thus resolved on the defender's capability to predict the attacker's moves. Since the territories do not intersect, A's moves become observable for D along the part of the boundary of A's territory that lies within the convex hull of D's territory, so D's moves must be selected to maximize the length of the curve

    ∂pA ∩ conv(pD)

This strategic goal leads to the evolution described informally in Sec. 3.2.
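The two-case update rule can be illustrated with a small discretized sketch. This is our own illustration, not the paper's construction: boundary directions x are sampled at finitely many points, and the smallest s, t with (1+s)pA(x) = (1+t)pD(y) are found by grid search up to a tolerance; the grid resolution and the tolerance are our assumptions.

```python
# Discretized sketch of the boundary-update rule (illustrative only).
# pA, pD: lists of 2D points p(x) for sampled boundary directions x;
# mA, mD: the force each player distributes on those points.

def update_attacker(pA, pD, mA, mD, grid=20, tol=1e-3):
    """Return the updated attacker positions p'A(x)."""
    new_pA = []
    for (x0, x1), fA in zip(pA, mA):
        hit = None
        for k in range(grid):                       # s ranges over [0, mA(x)]
            s = fA * k / (grid - 1)
            for (y0, y1), fD in zip(pD, mD):
                for j in range(grid):               # t ranges over [0, mD(y)]
                    t = fD * j / (grid - 1)
                    dx = (1 + s) * x0 - (1 + t) * y0
                    dy = (1 + s) * x1 - (1 + t) * y1
                    if (dx * dx + dy * dy) ** 0.5 < tol:
                        hit = (s, (y0, y1), fD - t)  # collision at smallest s
                        break
                if hit:
                    break
            if hit:
                break
        if hit is None:
            # Free push: p'A(x) = (1 + mA(x))/2 * pA(x)
            c = (1 + fA) / 2
            new_pA.append((c * x0, c * x1))
        else:
            # Pushes compose with the remaining forces
            # m'A(x) = mA(x) - s and m'D(y) = mD(y) - t.
            s, (y0, y1), fD_rem = hit
            a = (1 + (fA - s)) / 2
            d = (1 + fD_rem) / 2
            new_pA.append((a * x0 + d * y0, a * x1 + d * y1))
    return new_pA

# Far-apart territories: no collision, so the push is free.
free = update_attacker([(-4.0, 0.0)], [(4.0, 0.0)], [1.0], [1.0])

# Territories in each other's way: the push vectors compose.
clash = update_attacker([(1.0, 0.0)], [(1.5, 0.0)], [1.0], [1.0])
```

In the first call the factor (1 + mA(x))/2 equals 1, so the boundary point stays put; in the second, the collision is detected and the attacker's point is carried along the composed directions pA(x) and pD(y).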