The question of whether a problem in a multiagent system that can be solved with a trusted mediator can be solved by just the agents in the system, without the mediator, has attracted a great deal of attention in both computer science (particularly in the cryptography community) and game theory. In cryptography, the focus has been on secure multiparty computation, where each agent has some private information and the agents want to compute some function of this information without revealing it. This can be done trivially with a trusted mediator: the agents just send their private information to the mediator, who computes the function value and sends it to all of them. Work on multiparty computation characterizes the conditions under which this can be done without a mediator, under the assumption that at most a certain fraction of the agents are faulty and do not follow the recommended protocol. By way of contrast, game theory is interested in implementing mediators using what is called "cheap talk", under the assumption that agents are rational. We are interested in combining both strands: we consider games that have (k,t)-robust equilibria when played with a mediator, where an equilibrium is (k,t)-robust if it tolerates deviations by coalitions of rational players of size up to k and deviations by up to t players who can be viewed as faulty (although they can equally well be viewed as rational players with unanticipated utilities). We prove matching upper and lower bounds on the ability to implement such mediators using cheap talk (that is, just allowing communication among the players).
The bounds depend on (a) the relationship between k, t, and n, the total number of players in the system; (b) whether players know the exact utilities of other players; (c) whether there are broadcast channels or just point-to-point channels; (d) whether cryptography is available; and (e) whether the game has a (k+t)-punishment strategy; that is, a strategy that, if used by all but at most k+t players, guarantees that every player gets a worse outcome than they do with the equilibrium strategy. This talk covers joint work with Ittai Abraham, Danny Dolev, and Rica Gonen. No previous background in game theory is assumed.
GDRR Opening Workshop - Computer Science Meets Game Theory: Implementing Mediators Robustly - Joe Halpern, August 7, 2019
1. Distributed Computing Meets Game Theory
Robust Mechanisms for Secret Sharing and Implementation
with Cheap Talk
Joe Halpern
Cornell University
Joint work with Ittai Abraham, Danny Dolev, Rica Gonen
1 / 22
4. Two Views of the World
Work on distributed computing and on cryptography has assumed
agents are either “good” or “bad”
good agents follow the protocol
bad agents do all they can to subvert it
Game theory assumes
all agents are rational
they try to maximize their utility
Both views make sense in different contexts
We want to combine them
2 / 22
5. k-Resilient Equilibria
Nash equilibrium tolerates deviations by one player.
It’s consistent with NE that 2 players could do better by
deviating.
An equilibrium is k-resilient if no group of size k can gain by
deviating (in a coordinated way).
Example: n > 1 players must play either 0 or 1.
if everyone plays 0, everyone gets 1
if exactly two players play 1, they get 2; the rest get 0.
otherwise, everyone gets 0.
Everyone playing 0 is a NE, but not 2-resilient.
3 / 22
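The example above can be checked mechanically. Below is a minimal sketch (the `payoff` helper and the choice n = 5 are illustrative, not from the talk) verifying that everyone playing 0 is a Nash equilibrium but not 2-resilient:

```python
def payoff(profile, i):
    """Payoff to player i under the slide's rules: all-0 pays 1 each;
    exactly two 1-players pay those two players 2 (rest 0); otherwise 0."""
    ones = sum(profile)
    if ones == 0:
        return 1
    if ones == 2:
        return 2 if profile[i] == 1 else 0
    return 0

n = 5
all_zero = [0] * n

# Unilateral deviation: switching alone to 1 drops a player from 1 to 0,
# so everyone playing 0 is a Nash equilibrium.
solo = all_zero[:]; solo[0] = 1
assert payoff(all_zero, 0) == 1 and payoff(solo, 0) == 0

# Coordinated deviation by a pair: both switch to 1 and each gets 2 > 1,
# so the equilibrium is not 2-resilient.
pair = all_zero[:]; pair[0] = pair[1] = 1
assert payoff(pair, 0) == 2 and payoff(pair, 1) == 2
```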
6. Nash equilibrium = 1-resilient equilibrium.
In general, k-resilient equilibria do not exist if k > 1.
Aumann [1959] already considers n-resilient equilibria.
He called this a strong equilibrium
Other related notions have also been considered in the
literature.
We are interested in more limited coalition sizes
It’s hard to imagine n people coordinating if n is large
In any case, resilience does not give us all the robustness we
need in large systems.
4 / 22
7. “Irrational” Players
Some agents don’t seem to respond to incentives, perhaps because
their utilities are not what we thought they were
they are irrational
they have faulty computers
Apparently “irrational” behavior is not uncommon:
People share on Gnutella and Kazaa, seed on BitTorrent
5 / 22
8. Example: Consider a group of n bargaining agents.
If they all stay and bargain, then all get 2.
Anyone who goes home gets 1.
Anyone who stays gets 0 if not everyone stays.
Everyone staying is a k-resilient Nash equilibrium for all k < n, but
not immune to one “irrational” player going home.
People certainly take such possibilities into account!
6 / 22
11. Immunity
A protocol is t-immune if the payoffs of “good” agents are not
affected by the actions of up to t other agents.
Somewhat like Byzantine agreement in distributed computing.
Good agents reach agreement despite up to t faulty agents.
Immunity and resilience are complementary
k-resilience says that groups of up to size k cannot gain by
deviating
t-immunity says that non-deviators do not suffer if groups of
size up to t deviate
A (k, t)-robust protocol tolerates coalitions of size k and is
t-immune.
Nash equilibrium = (1,0)-robustness
In general, (k, t)-robust equilibria don’t exist
they can be obtained with the help of mediators
7 / 22
12. Mediators
Consider an auction where people do not want to bid publicly
public bidding reveals useful information
don’t want to do this in bidding for, e.g., oil drilling rights
If there were a mediator (trusted third party), we’d be all set . . .
Distributed computing example: Byzantine agreement
8 / 22
13. Implementing Mediators
Can we eliminate the mediator? If so, when?
Work in economics: implementing mediators with “cheap
talk” [Myerson, Forges, . . . ]
“implementation” means that if a NE can be achieved with a
mediator, the same NE can be achieved without
Work in CS: multi-party computation [Ben-Or, Goldwasser,
Goldreich, Micali, Wigderson, . . . ]
“implementation” means that “good” players follow the
recommended protocol; “bad” players can do anything they like
By considering (k, t)-robust equilibria, we can generalize the work
in both CS and economics.
9 / 22
14. Three Games
Technically, in the results that follow, we are working with three
games:
An underlying (normal-form) game
A mediator game, where players communicate with the
mediator, then make a move in the underlying game
A cheap-talk game, where players talk to each other, then
make a move in the underlying game
The payoffs in all cases are the payoffs in the underlying game.
We are interested in knowing whether Nash equilibria in the
mediator game can be implemented in a cheap-talk game.
10 / 22
19. Typical results
Theorem 1(a): If n > 3k + 3t, a (k, t)-robust strategy σ with a
mediator can be implemented using cheap talk.
No knowledge of other agents’ utilities required
The protocol has bounded running time that does not depend
on the utilities.
Theorem 1(b): Can’t do this if n ≤ 3k + 3t.
Theorem 2(a): If n > 2k + 3t, agents’ utilities are known, and
there is a punishment strategy (a way of punishing someone
caught deviating), then we can implement the mediator.
The protocol has unbounded running time, but constant
expected running time
Theorem 2(b): Can’t do this if n ≤ 2k + 3t or no punishment
strategy
Theorem 2(c): Unbounded running time is required. (In general,
the problem cannot be solved by a protocol that has bounded
running time.)
11 / 22
24. Theorem 3(a): If n > 2k + 2t and a broadcast facility is
available, we can ε-implement a mediator.
Theorem 3(b): Can’t do it if n ≤ 2k + 2t.
Theorem 4: If n > k + t, assuming cryptography,
polynomially-bounded players, a (k + t)-punishment strategy, and
a PKI (public-key infrastructure), then we can ε-implement
mediators using cheap talk.
Note how standard distributed computing assumptions make a big
difference to implementation!
12 / 22
25. Lower Bounds on Running Time
Theorem 5: If n ≤ 2k + 2t, then
(a) there is a game Γ with a (k, t)-robust strategy with a
mediator that cannot be implemented by any deterministic
cheap talk strategy.
(b) for all b, there is a game Γb with a (k, t)-robust strategy with
a mediator that cannot be implemented using cheap talk with
expected running time ≤ b.
(c) there is a game Γ with a (k, t)-robust strategy with a
mediator such that for all ε > 0, there exists b such that we
cannot implement the mediator with error ε using a cheap-talk
strategy that runs in ≤ b steps.
13 / 22
28. Related Work
Lots of related work on implementation in both CS and game
theory:
work of Forges + Barany [≈1990] proves Theorem 1(a) with
k = 1
with 4 players, can implement a Nash equilibrium with a
mediator
work on secure multiparty computation [BGW88, CCD88]
gives Theorem 1(a) for all (k, t)!
Ben-Porath (’03): Theorem 2(a) with k = 1 (no crypto,
known utilities, but does sequential equilibrium)
Heller (’05): extends B-P to all k; proves matching lower
bound
Theorem 2(c) shows that B-P’s strategy must be incorrect
(because bounded); Heller’s has problems too
14 / 22
29. More Related Work
Urbano-Vila (’04) and Dodis-Halevi-Rabin (’00) get Theorem
4(a) if k = 1, n = 2
Theorem 5(a) shows UV’s strategy is incorrect
Izmalkov, Micali, Lepinski; Lepinski, Micali, Shelat (’05) prove
stronger implementation results, but require strong primitives
(envelopes and ballot-boxes) that cannot be implemented over
broadcast channels
Lysanskaya-Triandopoulos: Theorem 1(c) for k = 1
15 / 22
31. Asynchrony
All the results above assume that the system is synchronous
All players move simultaneously
All messages arrive within one step
Essentially, can assume that there is a global clock
But the real world is often asynchronous.
There is no global clock
Players can move at arbitrary speeds relative to each other
There is no bound on message delivery time.
16 / 22
32. Working in an asynchronous system introduces new subtleties.
If i has not received a message from j, is it because j is slow
or faulty?
What are the payoffs if someone doesn’t decide in finite time?
17 / 22
33. Can prove results in the asynchronous case similar in spirit to those
in the synchronous case
Joint work with Ittai Abraham, Danny Dolev, and Ivan Geffner
Typical results:
Theorem 6: We can implement a mediator in an asynchronous
system
(a) if n > 4k + 4t
(b) if n > 3k + 4t and there is a punishment strategy.
Key point: the real world is asynchronous, while game theory has
typically implicitly assumed synchrony.
Game theory needs to think more about the impact of
asynchrony!
18 / 22
34. Techniques I: Secret Sharing
Agent 0 has a secret s (an integer) that she wants to share among
n other agents in such a way that any m of them can reconstruct it.
Adi Shamir’s secret-sharing protocol (1979):
Agent 0 chooses a polynomial p of degree m − 1 with
p(0) = s.
Agent 0 tells p(i) to agent i, i = 1, . . . , n
p(i) is agent i’s share of the secret
Any subset of m agents can pool their shares and reconstruct
p, and hence p(0).
No subset of size < m can figure out the secret.
19 / 22
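The protocol above can be sketched in a few lines of code. This is a minimal illustration over a prime field (the prime `P` and the helper names `share`/`reconstruct` are illustrative choices, not from the talk): agent 0 evaluates a random degree-(m − 1) polynomial with p(0) = s, hands out the points, and any m agents recover s by Lagrange interpolation at 0.

```python
import random

P = 2**31 - 1  # a prime larger than any secret we share

def share(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # Random polynomial p of degree m-1 with p(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def p(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, p(i)) for i in range(1, n + 1)]  # (i, p(i)) is i's share

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers p(0), the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is den's inverse mod the prime P (Fermat)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(42, m=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 42
```

Fewer than m shares reveal nothing: for any candidate secret there is a degree-(m − 1) polynomial consistent with those shares.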
35. Dealing with deviation in secret-sharing
What happens if there are players who deviate from the
secret-sharing protocol?
Can they prevent others from learning the secret?
Take the special case where p is a line (degree-1 polynomial) and
the secret is p(0).
If n = 3 and player i lies about p(i), then the three points will
not be collinear.
Players will know that there is a problem, but won’t know the
secret.
But if n = 4 and one person deviates, then three points are
still collinear
The “good” players will know who deviated, and can still
reconstruct the line and learn p(0) (the secret)
In general, if the polynomial has degree k and there are more than
3k shares, then we can reconstruct the polynomial even if up to k
players lie about their values.
20 / 22
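The n = 4, degree-1 case above can be sketched directly (the small prime field and the helper names are illustrative assumptions): with one liar, three of the four reported points still lie on the true line, so the good players recover it by checking which candidate line agrees with a majority of the points.

```python
from itertools import combinations

P = 101  # small illustrative prime field

def line_through(p1, p2):
    """Coefficients (a, b) of y = a*x + b through two points, mod P."""
    (x1, y1), (x2, y2) = p1, p2
    a = (y1 - y2) * pow(x1 - x2, P - 2, P) % P  # modular inverse of the slope's denominator
    b = (y1 - a * x1) % P
    return a, b

def recover_secret(points):
    """Return p(0) of the line agreeing with >= 3 of the reported points."""
    for p1, p2 in combinations(points, 2):
        a, b = line_through(p1, p2)
        if sum((a * x + b) % P == y % P for x, y in points) >= 3:
            return b  # b = p(0), the secret
    return None

# True line: p(x) = 7x + 5, secret p(0) = 5; player 2 lies about p(2).
honest = [(x, (7 * x + 5) % P) for x in (1, 2, 3, 4)]
reported = [(x, y) if x != 2 else (2, 99) for x, y in honest]
assert recover_secret(reported) == 5
```

With only three points and one liar, no line fits three of them, which is why n = 3 detects the deviation but cannot recover the secret.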
37. Techniques II: Circuits
Idea of protocol (due to Goldreich-Micali-Wigderson):
Consider a circuit for computing function f :
[circuit diagram: inputs x1, x2, . . . , xn feed through gates to the output f(x1, . . . , xn)]
Want each agent to have a share of value of each node in
circuit
Each agent sends a share of its input to all other agents.
At the end, each agent has a share of f (x1, . . . , xn)
Agents use secret sharing to learn f (x1, . . . , xn)
If an agent has an incentive to lie about its input, then it could
have lied the same way with the mediator
21 / 22
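For the simplest case, the share-the-circuit idea can be sketched for a purely linear function f(x1, x2, x3) = x1 + x2 + x3 (mod P): each agent splits its input into additive shares, each agent locally sums the shares it holds, and pooling the local sums reveals only the output. (This is an illustrative sketch; multiplication gates need the extra interaction of the actual GMW protocol and are not shown.)

```python
import random

P = 2**31 - 1
N = 3  # number of agents

def additive_shares(x, n):
    """n random values summing to x mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    return parts + [(x - sum(parts)) % P]

inputs = [10, 20, 30]

# shares[i][j] = agent i's share of agent j's input;
# each agent j sends one share of its input to every agent i.
shares = [[None] * N for _ in range(N)]
for j, x in enumerate(inputs):
    for i, s in enumerate(additive_shares(x, N)):
        shares[i][j] = s

# Addition gates are local: each agent sums its shares of every input,
# obtaining a share of f(x1, x2, x3).
local = [sum(shares[i]) % P for i in range(N)]

# Pooling the local results reveals the output, 60, not the inputs.
assert sum(local) % P == 60
```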
38. Conclusions
Issues of coalitions and fault-tolerance are critical in
distributed computing, game theory, and cryptography.
By combining ideas from all three areas we can gain new
insights.
Better understanding of role of cryptography in games.
Better understanding of cheap talk
What happens if we take computational issues into account?
How about the structure of the network?
22 / 22