In this paper we describe a decision process framework allowing an agent to decide what information it should reveal to its neighbours within a communication graph in order to maximise its utility. We assume that these neighbours can pass information on to others within the graph, and that the communicating agent gains and loses utility based on the information which can be inferred by specific agents following the original communicative act. To this end, we construct an initial model of information propagation and describe an optimal decision procedure for the agent.
1. A Framework for Using Trust to Assess Risk in Information Sharing
Chatschik Bisdikian, Yuqing Tang, Federico Cerutti, Nir Oren
AT-2013
Thursday 1st August, 2013
© 2013 Federico Cerutti <f.cerutti@abdn.ac.uk>
2. Summary
Framework for describing how much information should be disclosed
Preliminary discussion on multi-agent systems
Illustration of the relevant definitions with a scenario
Description of the decision support this framework can provide given this scenario
Not covered in this presentation: some statistical properties of the proposed approach
3. A Scenario
British Intelligence sent two spies, James and Alec, to France
James: clever, very loyal
Alec: clumsy, selfish
London knows that France will be invaded by Germany, but it informs its men only that France will be invaded by a European country
Purpose: James and Alec can use this information to recruit new agents in France
Risk: if they share that Germany will invade France, this will result in a loss of credibility for the UK government (it is the only one aware of these plans)
4. A Probabilistic Approach: the Big Picture
[Figure: producer p shares x with consumer c; c infers y with probability Pr(infer | x) ≈ f_I(y; x) dy (inference, behavioral trust); the inferred information y yields an impact z with probability Pr(impact | y) ≈ f_B(z; y) dz; overall, Pr(impact | x) ≈ f_R(z; x) dz.]
5. The Formal Definitions (i)
Definition
A Framework for Risk Assessment (FRA) is a 6-tuple:
⟨A, C, M, ag, m, Tg⟩
where:
A is a set of agents;
C ⊆ A × A is the set of communication links among agents;
M is the set of all the messages that can be exchanged;
ag ∈ A is the producer, viz. the agent that shares information;
m ∈ M is a message to be assessed;
A \ {ag} is the set of consumers, and in particular:
Tg ⊆ A \ {ag} are the desired consumers, with ∀agX ∈ Tg, ⟨ag, agX⟩ ∈ C;
A \ ({ag} ∪ Tg) are the undesired consumers.
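The 6-tuple can be sketched as a data structure. The following is a minimal illustration in which agents and messages are plain strings (an assumption; the framework leaves their types abstract), with the consumer sets derived exactly as in the definition.

```python
from dataclasses import dataclass

Agent = str    # assumed representation; the framework leaves agent types abstract
Message = str  # assumed representation of messages

@dataclass
class FRA:
    A: set[Agent]                    # set of agents
    C: set[tuple[Agent, Agent]]      # communication links, C ⊆ A × A
    M: set[Message]                  # all messages that can be exchanged
    ag: Agent                        # the producer
    m: Message                       # the message to be assessed
    Tg: set[Agent]                   # the desired consumers

    def consumers(self) -> set[Agent]:
        # A \ {ag}: every agent except the producer
        return self.A - {self.ag}

    def undesired(self) -> set[Agent]:
        # A \ ({ag} ∪ Tg): consumers the producer does not want to reach
        return self.A - ({self.ag} | self.Tg)

    def valid(self) -> bool:
        # every desired consumer must be directly linked to the producer
        return self.Tg <= self.consumers() and all(
            (self.ag, t) in self.C for t in self.Tg
        )
```

Instantiating this with the British Intelligence example of the next slide gives a consumer set {James, Alec} and an empty set of undesired consumers.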
6. The Example Formalised (i)
FRA_BI = ⟨A_BI, C_BI, M_BI, ag_BI, m_BI, Tg_BI⟩, where:
{BI, James, Alec} ⊆ A_BI;
{⟨BI, James⟩, ⟨James, BI⟩, ⟨BI, Alec⟩, ⟨Alec, BI⟩} ⊆ C_BI;
{m1, m2} ⊆ M_BI with:
m1: France will be invaded by Germany;
m2: France will be invaded by a European country;
ag_BI = BI;
m_BI = m1;
{James, Alec} ⊆ Tg_BI.
7. The Formal Definitions (ii)
Definition
Given a set of agents A, a message m ∈ M, and ag1, ag2 ∈ A, x^{ag2}_{ag1}(m) ∈ [0, 1] is the degree of disclosure of message m used between agent ag1 and agent ag2, where x^{ag2}_{ag1}(m) = 0 implies no sharing and x^{ag2}_{ag1}(m) = 1 implies full disclosure between the two agents.
We define the disclosure function as follows:
d : M × [0, 1] → M
d(·, ·) accepts as input a message and a degree of disclosure of that message, and returns the disclosed part of the message as a new message.
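The framework leaves d abstract. As a toy instantiation (entirely an assumption), one can model a message as a list of details ordered from most general to most specific, with the degree of disclosure selecting how many details are released:

```python
import math

# Toy instantiation of the disclosure function d : M × [0,1] → M.
# Representing a message as an ordered list of details is an assumption;
# the framework only fixes d's signature and the endpoints x = 0 and x = 1.
def d(m: list[str], x: float) -> list[str]:
    if not 0.0 <= x <= 1.0:
        raise ValueError("degree of disclosure must lie in [0, 1]")
    k = math.ceil(x * len(m))  # x = 0 → empty message, x = 1 → full message
    return m[:k]

# the scenario's message, from general to specific
m1 = ["France will be invaded",
      "the invader is a European country",
      "the invader is Germany"]
```

With this encoding, d(m1, 2/3) releases only the first two details, which plays the role of m2 in the example.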
8. The Example Formalised (ii)
Let us suppose that x^{James}_{BI} = x^{Alec}_{BI} = x. In other terms, BI uses the same disclosure degree with both James and Alec.
In addition, d(m1, x) = m2
N.B.
m1: France will be invaded by Germany;
m2: France will be invaded by a European country;
9. Disclosure Degree and Multi-Agent Networks
d(m′, x^{ag3}_{ag2}) = d(m, x^{ag3}_{ag1})
where:
x^{ag3}_{ag1} = ⟨s^{ag2}_{ag1}, x^{ag2}_{ag1}⟩ ⊙ ⟨s^{ag3}_{ag2}, x^{ag3}_{ag2}⟩;
s^{ag2}_{ag1} ∈ [0, 1] is the probability that ag1 will propagate to ag2 the disclosed part of m that it receives;
⊙ is a transitive function
⊙ : ([0, 1] × [0, 1]) × ([0, 1] × [0, 1]) → [0, 1]
such that x^{ag3}_{ag1} ≤ x^{ag2}_{ag1}.
10. Disclosure Degree and Multi-Agent Networks
merge(d(m′, x^{ag4}_{ag2}), d(m′′, x^{ag4}_{ag3})) = d(m, x^{ag4}_{ag1})
where:
x^{ag4}_{ag1} = (⟨s^{ag2}_{ag1}, x^{ag2}_{ag1}⟩ ⊙ ⟨s^{ag4}_{ag2}, x^{ag4}_{ag2}⟩) ⊕ (⟨s^{ag3}_{ag1}, x^{ag3}_{ag1}⟩ ⊙ ⟨s^{ag4}_{ag3}, x^{ag4}_{ag3}⟩);
s^{ag2}_{ag1} ∈ [0, 1] is the probability that ag1 will propagate to ag2 the disclosed part of m that it receives;
⊕ is a transitive function
⊕ : [0, 1] × [0, 1] → [0, 1]
such that x^{ag4}_{ag1} ≤ min{x^{ag2}_{ag1}, x^{ag3}_{ag1}}.
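The operators ⊙ and ⊕ are left abstract by the framework, which only fixes their signatures and the bounds on the resulting disclosure degrees. A sketch of one possible instantiation (the concrete formulas are assumptions, chosen only so that the stated bounds hold) could look as follows:

```python
# One possible (assumed) instantiation of the abstract composition operators.
# The framework only fixes the signatures and the bounds
# x13 <= x12 and x14 <= min{x12, x13}.

def odot(s_ab: float, x_ab: float, s_bc: float, x_bc: float) -> float:
    """Chain one propagation hop: <s_ab, x_ab> (*) <s_bc, x_bc>.

    Knowledge reaching the third agent cannot exceed what either link
    discloses, discounted by the propagation probability of the second hop,
    so the result never exceeds x_ab, as the bound requires.
    """
    return s_bc * min(x_ab, x_bc)

def oplus(x_path1: float, x_path2: float) -> float:
    """Merge two paths to the same agent; taking the minimum keeps the
    result within the stated bound x14 <= min{x12, x13}."""
    return min(x_path1, x_path2)

# ag1 -> ag2 -> ag3: x13 = <s12, x12> (*) <s23, x23>
x13 = odot(0.9, 0.8, 0.7, 0.9)  # 0.7 * min(0.8, 0.9) = 0.56 <= x12 = 0.8
```

The discount by s_bc reflects that propagation is probabilistic: in expectation, less of the message survives each hop.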
11. The Formal Definitions (iii)
Definition
Given a FRA ⟨A, C, M, ag, m, Tg⟩, let agX ∈ Tg:
P(x^{agX}_{ag}) is a r.v. (F_P(·; x^{agX}_{ag}), f_P(·; x^{agX}_{ag})) which represents the benefit agent ag receives when sharing the message m with a degree of disclosure x^{agX}_{ag} with agent agX;
y_{ag2}|x^{ag2}_{ag1} ∈ [0, 1] is the amount of knowledge of m that ag2 can infer given x^{ag2}_{ag1}, according to the r.v. I_{ag2}(x^{ag2}_{ag1}) (F_{I_ag2}(·; x^{ag2}_{ag1}), f_{I_ag2}(·; x^{ag2}_{ag1}));
z_{ag2}|x^{ag2}_{ag1} ∈ [0, 1] is the impact that an information producer ag incurs when an information consumer ag1 makes use of the information y_{ag|ag1} inferred from a message m disclosed with x^{ag1}_{ag}, according to the r.v. B(y_{ag|ag1}) (F_B(·; y_{ag|ag1}), f_B(·; y_{ag|ag1})).
12. The Formal Definitions (iv)
Proposition
Given a FRA ⟨A, C, M, ag, m, Tg⟩, let agY ∈ A be an agent that has received a message d(m, x), with x = x^{agY}_{ag}. Let y be the information inferred by agY according to the r.v. I(x) (with probability ≈ f_I(y; x) dy). Then, assuming that the impact z is independent of the degree of disclosure x given the inferred information y, ag expects a level of risk z described by the r.v. R(x) with density:
f_R(z; x) = ∫₀¹ f_B(z; y) f_I(y; x) dy.
Definition
Given a FRA ⟨A, C, M, ag, m, Tg⟩, let agX ∈ Tg. For every agY ∈ A, the net benefit for the producer of sharing information with agY is described by C = P − R, with an average, or expected, benefit E{C(x^{agY}_{ag})} = E{P(x^{agY}_{ag})} − E{R(x^{agY}_{ag})}.
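The proposition's risk density can be checked numerically. The densities below are illustrative assumptions (the framework does not fix f_I or f_B): inference mass concentrates near the disclosure degree, and higher inferred knowledge shifts the impact upward.

```python
import math

# Numerical sketch of f_R(z; x) = ∫₀¹ f_B(z; y) f_I(y; x) dy on a midpoint
# grid. The concrete shapes of f_I and f_B are assumptions for illustration.
N = 200
h = 1.0 / N
grid = [(i + 0.5) * h for i in range(N)]  # midpoints of [0, 1]

def normalise(vals):
    # rescale grid values so they integrate to 1 under the midpoint rule
    s = sum(vals) * h
    return [v / s for v in vals]

def f_I(x):
    # assumed inference density: mass concentrates near y = x
    return normalise([math.exp(-((y - x) ** 2) / 0.02) for y in grid])

def f_B(y):
    # assumed impact density: more inferred knowledge shifts impact upward
    return normalise([math.exp(-((z - y) ** 2) / 0.05) for z in grid])

def f_R(x):
    # discretised convolution over the inferred knowledge y
    fI = f_I(x)
    fB = [f_B(y) for y in grid]
    return [sum(fB[j][i] * fI[j] for j in range(N)) * h for i in range(N)]

def expected_risk(x):
    # E{R(x)}; the expected net benefit is then E{P(x)} minus this value
    return sum(z * f for z, f in zip(grid, f_R(x))) * h
```

By construction f_R(·; x) integrates to 1, and with these densities a higher disclosure degree yields a higher expected risk, which is the trade-off the net-benefit definition captures.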
13. A Probabilistic Approach: the Big Picture
[Figure: producer p shares x with consumer c; c infers y with probability Pr(infer | x) ≈ f_I(y; x) dy (inference, behavioral trust); the inferred information y yields an impact z with probability Pr(impact | y) ≈ f_B(z; y) dz; overall, Pr(impact | x) ≈ f_R(z; x) dz.]
15. Our Scenario Revisited: James
[Figure: inference/impact model for James — inference probabilities q = 0.1 / 0.9; impact to the provider: 100K or 10K; w(0) = 0.9 / 0.1, w(1) = 0.9 / 0.1; benefit of sharing: 25K.]
Average impact: 10K
Net benefit positive: 75/90 ≤ 0.9 ≤ 1
Conclusion: BI can “safely” share with James the information that France is going to be invaded
16. Our Scenario Revisited: Alec
[Figure: inference/impact model for Alec — inference probabilities q = 0.6 / 0.4; impact to the provider: 100K or 10K; w(0) = 0.6 / 0.4, w(1) = 0.4 / 0.6; benefit of sharing: 25K.]
Average impact: 53.2K
Net benefit negative: 0.52 < 75/90
Conclusion: BI cannot “safely” share with Alec the information that France is going to be invaded
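The figures on the two scenario slides can be reproduced with a small discrete computation. The pairing of the q and w values below is an assumed reading of the slide's figure; the 25K benefit and the 100K/10K impacts come from the slides, and the 75/90 threshold is the point where the expected risk equals the benefit.

```python
# Discrete check of the Alec slide's numbers. The pairing of probabilities
# with outcomes is an assumption recovered from the original figure.
p_infer = {0: 0.6, 1: 0.4}   # probability Alec infers y = 0 or y = 1
w_low = {0: 0.6, 1: 0.4}     # w(y): probability the impact is low, given y
benefit = 25.0               # K: benefit of sharing (the 25K on the slide)
impact_low, impact_high = 10.0, 100.0  # K: impact to the provider

p_low = sum(p_infer[y] * w_low[y] for y in (0, 1))           # 0.52
avg_impact = p_low * impact_low + (1 - p_low) * impact_high  # 53.2K
# sharing is "safe" when the expected net benefit is non-negative:
threshold = (impact_high - benefit) / (impact_high - impact_low)  # 75/90
safe = p_low >= threshold   # False: BI cannot safely share with Alec
```

For James the same condition reads 75/90 ≤ 0.9 ≤ 1, which is why BI can safely share with him.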
17. Conclusions
Framework enabling an agent to determine how much information it should disclose to others in order to maximise its utility
It allows distinguishing between “desired” (e.g. James) and “undesired” consumers (e.g. Alec)
It helps in handling the risk of information propagated across a network of agents
Potential applications in strategic contexts where pieces of information are shared across several partners which may have hidden agendas
Future work:
Integration with quantitative trust models
Studying statistical properties of the r.v. R(x)
Developing statistical operators for representing the propagation of information across a (partially known) network of agents
18. In loving memory of Chatschik Bisdikian Ph.D.
Born December 21st 1960 — Died April 24th 2013
Researcher at IBM, IEEE Fellow, inductee of the Academy of Distinguished Engineers and of the Hall of Fame of the School of Engineering of the University of Connecticut, lifelong member of the Eta Kappa Nu and Phi Kappa Phi Honor Societies.
19. Acknowledgement
Research was sponsored by the US Army Research Laboratory and the UK Ministry of Defence and was accomplished under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the US Army Research Laboratory, the US Government, the UK Ministry of Defence, or the UK Government. The US and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.