A Framework for Using Trust to
Assess Risk in Information
Sharing
Chatschik Bisdikian, Yuqing Tang, Federico Cerutti, Nir Oren
AT-2013
Thursday 1st August, 2013
© 2013 Federico Cerutti <f.cerutti@abdn.ac.uk>
Summary
Framework for describing how much information should be
disclosed
Preliminary discussion on multi-agent systems
Illustration of the relevant definitions with a scenario
Description of the decision support this framework can provide
given this scenario
Missing in this presentation: some statistical properties of the
proposed approach
2 of 18
A Scenario
British Intelligence sent two spies, James and Alec, to France
James: clever, very loyal
Alec: clumsy, selfish
London knows that France will be invaded by Germany, but
London just informs her men that France will be invaded by a
European country
Purpose: James and Alec can use this information for recruiting
new agents in France
Risk: if they share that Germany will invade France, this will
result in a loss of credibility for the UK government (they are the
only ones aware of these plans)
3 of 18
A Probabilistic Approach: the Big Picture
[Figure: “c obtains x from p”. The producer p shares a message at disclosure
degree x ∈ [0, 1]; the consumer c infers an amount of knowledge y ∈ [0, 1]
(inference, i.e. behavioural trust), which in turn causes an impact z ∈ [0, 1]
on p.]
Pr(infer y | x) ≈ f_I(y; x) dy
Pr(impact z | y) ≈ f_B(z; y) dz
Pr(impact z | x) ≈ f_R(z; x) dz
4 of 18
The Formal Definitions (i)
Definition
A Framework for Risk Assessment (FRA) is a 6-tuple
⟨A, C, M, ag, m, Tg⟩
where:
A is a set of agents;
C ⊆ A × A is the set of communication links among agents;
M is the set of all the messages that can be exchanged;
ag ∈ A is the producer, viz. the agent that shares information;
m ∈ M is a message to be assessed;
A \ {ag} is the set of consumers, and in particular:
Tg ⊆ A \ {ag} are the desired consumers, with
∀agX ∈ Tg, ⟨ag, agX⟩ ∈ C;
A \ ({ag} ∪ Tg) are the undesired consumers.
5 of 18
The Example Formalised (i)
FRA_BI = ⟨A_BI, C_BI, M_BI, ag_BI, m_BI, Tg_BI⟩, where:
{BI, James, Alec} ⊆ A_BI;
{⟨BI, James⟩, ⟨James, BI⟩, ⟨BI, Alec⟩, ⟨Alec, BI⟩} ⊆ C_BI;
{m1, m2} ⊆ M_BI with:
m1: France will be invaded by Germany;
m2: France will be invaded by a European country;
ag_BI = BI;
m_BI = m1;
{James, Alec} ⊆ Tg_BI.
6 of 18
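The FRA tuple and its British-Intelligence instantiation can be sketched as a small Python structure (field and variable names are mine, not from the slides):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FRA:
    """A Framework for Risk Assessment: the 6-tuple <A, C, M, ag, m, Tg>."""
    agents: frozenset    # A
    links: frozenset     # C ⊆ A × A, directed pairs (sender, receiver)
    messages: frozenset  # M
    producer: str        # ag ∈ A
    message: str         # m ∈ M, the message to be assessed
    desired: frozenset   # Tg ⊆ A \ {ag}

    def consumers(self):
        """A \ {ag}: every agent other than the producer."""
        return self.agents - {self.producer}

    def undesired(self):
        """A \ ({ag} ∪ Tg): consumers the producer does not want to reach."""
        return self.consumers() - self.desired

m1 = "France will be invaded by Germany"
m2 = "France will be invaded by a European country"

fra_bi = FRA(
    agents=frozenset({"BI", "James", "Alec"}),
    links=frozenset({("BI", "James"), ("James", "BI"),
                     ("BI", "Alec"), ("Alec", "BI")}),
    messages=frozenset({m1, m2}),
    producer="BI",
    message=m1,
    desired=frozenset({"James", "Alec"}),
)

# The definition requires every desired consumer to be linked to the producer.
assert all((fra_bi.producer, agx) in fra_bi.links for agx in fra_bi.desired)
```

In this scenario the set of undesired consumers happens to be empty; adding any further agent to `agents` without adding it to `desired` would make it undesired.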
The Formal Definitions (ii)
Definition
Given a set of agents A, a message m ∈ M, and ag1, ag2 ∈ A,
x_{ag1}^{ag2}(m) ∈ [0, 1] is the degree of disclosure of message m used between
agent ag1 and agent ag2, where x_{ag1}^{ag2}(m) = 0 implies no sharing
and x_{ag1}^{ag2}(m) = 1 implies full disclosure between the two agents.
We define the disclosure function as follows:
d : M × [0, 1] → M
d(·, ·) accepts as input a message and a degree of disclosure of the
same message, and returns the disclosed part of the message as a new
message.
7 of 18
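The disclosure function d can be sketched as follows; the word-truncation rule below is an illustrative assumption of mine, since the slides leave d abstract:

```python
def disclose(message: str, x: float) -> str:
    """Toy disclosure function d : M × [0, 1] → M.

    The definition only requires that d returns the disclosed part of
    `message` at degree x as a new message; truncating to a prefix of
    the words is a stand-in rule, not the authors' construction.
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("degree of disclosure must lie in [0, 1]")
    if x == 0.0:
        return ""                      # x = 0: no sharing
    words = message.split()
    kept = max(1, round(x * len(words)))
    return " ".join(words[:kept])      # x = 1: full disclosure
```

In the scenario, d(m1, x) = m2 would correspond to a disclosure rule that generalises "Germany" to "a European country" at the chosen intermediate degree x; a string-truncation rule cannot express that, which is precisely why the framework keeps d abstract.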
The Example Formalised (ii)
Let us suppose that x_{BI}^{James} = x_{BI}^{Alec} = x; in other words, BI uses the
same disclosure degree with both James and Alec.
In addition, d(m1, x) = m2
N.B.
m1: France will be invaded by Germany;
m2: France will be invaded by a European country;
8 of 18
Disclosure Degree and Multi-Agents Networks
d(m′, x_{ag2}^{ag3}) = d(m, x_{ag1}^{ag3})
where
x_{ag1}^{ag3} = ⟨s_{ag1}^{ag2}, x_{ag1}^{ag2}⟩ ⊗ ⟨s_{ag2}^{ag3}, x_{ag2}^{ag3}⟩;
s_{ag1}^{ag2} ∈ [0, 1] is the probability that ag1 will propagate to ag2 the
disclosed part of m that it receives;
⊗ is a transitive function
⊗ : ([0, 1] × [0, 1]) × ([0, 1] × [0, 1]) → [0, 1]
such that x_{ag1}^{ag3} ≤ x_{ag1}^{ag2}.
9 of 18
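One concrete choice for the chain operator is to take the product s · x at every hop; the product of factors in [0, 1] can only shrink, so the monotonicity bound x_{ag1}^{ag3} ≤ x_{ag1}^{ag2} holds. This is an illustrative assumption of mine, since the slides leave the operator abstract:

```python
def chain_degree(hops):
    """Effective disclosure degree along a chain ag1 -> ag2 -> ... -> agN.

    Each hop is a pair (s, x): s is the probability that the agent
    propagates the disclosed part it received, x is the disclosure
    degree it uses. Multiplying s * x across hops is one transitive
    operator satisfying the required bound; the framework keeps the
    operator abstract.
    """
    degree = 1.0
    for s, x in hops:
        degree *= s * x
    return degree

# Two-hop chain: ag1 shares at 0.9 (propagated w.p. 0.8),
# ag2 re-shares at 0.7 (propagated w.p. 0.5).
x13 = chain_degree([(0.8, 0.9), (0.5, 0.7)])
```

Each extra hop can only reduce the effective degree, matching the intuition that information degrades (from the producer's viewpoint, leaks less) as it travels further from the source.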
Disclosure Degree and Multi-Agents Networks
merge(d(m′, x_{ag2}^{ag4}), d(m′′, x_{ag3}^{ag4})) = d(m, x_{ag1}^{ag4})
where
x_{ag1}^{ag4} = (⟨s_{ag1}^{ag2}, x_{ag1}^{ag2}⟩ ⊗ ⟨s_{ag2}^{ag4}, x_{ag2}^{ag4}⟩) ⊕ (⟨s_{ag1}^{ag3}, x_{ag1}^{ag3}⟩ ⊗ ⟨s_{ag3}^{ag4}, x_{ag3}^{ag4}⟩);
s_{ag1}^{ag2} ∈ [0, 1] is the probability that ag1 will propagate to ag2 the
disclosed part of m that it receives;
⊕ is a transitive function
⊕ : [0, 1] × [0, 1] → [0, 1]
such that x_{ag1}^{ag4} ≤ min {x_{ag1}^{ag2}, x_{ag1}^{ag3}}.
9 of 18
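As a self-contained sketch, taking the product s · x along each chain and min as ⊕ respects the bound x_{ag1}^{ag4} ≤ min{x_{ag1}^{ag2}, x_{ag1}^{ag3}}, since each chain value is already below its own first-hop degree. Both operator choices are illustrative assumptions of mine; the slides leave them abstract:

```python
def merge_degree(path_a, path_b):
    """Degree ag4 obtains when the same message reaches it via two chains
    from ag1. Each path is a list of (s, x) hops: s the propagation
    probability, x the disclosure degree used at that hop. The per-chain
    product and min-merge are stand-in operators, not the authors'."""
    da = 1.0
    for s, x in path_a:
        da *= s * x
    db = 1.0
    for s, x in path_b:
        db *= s * x
    return min(da, db)

# Path via ag2 yields 0.252, path via ag3 yields 0.216; merged: 0.216.
x14 = merge_degree([(0.8, 0.9), (0.5, 0.7)],
                   [(0.6, 0.5), (0.9, 0.8)])
```

Other merges are conceivable (e.g. max, if combining two partial disclosures lets ag4 reconstruct more); min is simply the most conservative choice consistent with the stated bound.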
The Formal Definitions (iii)
Definition
Given a FRA ⟨A, C, M, ag, m, Tg⟩, let agX ∈ Tg:
P(x_{ag}^{agX}) is a r.v. (F_P(·; x_{ag}^{agX}), f_P(·; x_{ag}^{agX})) which represents the
benefit agent ag receives when sharing the message m with a
degree of disclosure x_{ag}^{agX} with agent agX;
y_{ag2|x_{ag1}^{ag2}} ∈ [0, 1] is the amount of knowledge of m that ag2 can
infer given x_{ag1}^{ag2}, according to the r.v. I_{ag2}(x_{ag1}^{ag2})
(F_{I_{ag2}}(·; x_{ag1}^{ag2}), f_{I_{ag2}}(·; x_{ag1}^{ag2}));
z_{ag2|x_{ag1}^{ag2}} ∈ [0, 1] is the impact that an information producer ag
incurs when an information consumer ag1 makes use of the
information y_{ag|ag1} inferred from a message m disclosed with x_{ag}^{ag1},
according to the r.v. B(y_{ag|ag1}) (F_B(·; y_{ag|ag1}), f_B(·; y_{ag|ag1})).
10 of 18
The Formal Definitions (iv)
Proposition
Given a FRA ⟨A, C, M, ag, m, Tg⟩, let agY ∈ A be an agent that has received
a message d(m, x), with x = x_{ag}^{agY}, and let y be the information inferred
by agY according to the r.v. I(x) (with probability ≈ f_I(y; x) dy).
Then, assuming that the impact z is independent of the degree of
disclosure x given the inferred information y, ag expects a level of risk
z described by the r.v. R(x) with density:
f_R(z; x) = ∫_0^1 f_B(z; y) f_I(y; x) dy.
Definition
Given a FRA ⟨A, C, M, ag, m, Tg⟩, let agX ∈ Tg. For each agY ∈ A, the net
benefit for the producer of sharing information with agY is described by
C = P − R, with an average, or expected, benefit E{C(x_{ag}^{agY})} =
E{P(x_{ag}^{agY})} − E{R(x_{ag}^{agY})}.
11 of 18
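The marginalisation f_R(z; x) = ∫_0^1 f_B(z; y) f_I(y; x) dy can be checked numerically. Both densities below are toy stand-ins of mine (a uniform inference density and a two-triangle impact mixture), not the paper's models; the point is only that marginalising a valid f_B against a valid f_I yields a valid risk density:

```python
def f_B(z, y):
    """Toy impact density on [0, 1]: more inferred knowledge y shifts
    probability mass toward high impact z (mixture of 2z and 2(1-z))."""
    return y * 2 * z + (1 - y) * 2 * (1 - z)

def f_I(y, x):
    """Toy inference density: uniform on [0, 1] for simplicity.
    A realistic model would concentrate mass around the disclosure x."""
    return 1.0

def f_R(z, x, n=2000):
    """Risk density f_R(z; x) = ∫_0^1 f_B(z; y) f_I(y; x) dy,
    approximated with the midpoint rule on n subintervals."""
    h = 1.0 / n
    return sum(f_B(z, (i + 0.5) * h) * f_I((i + 0.5) * h, x)
               for i in range(n)) * h

# Sanity check: f_R(·; x) must itself integrate to 1 over z.
total = sum(f_R((j + 0.5) / 200, 0.5) for j in range(200)) / 200
assert abs(total - 1.0) < 1e-6
```

With these particular toy densities the risk density comes out uniform, since the uniform f_I averages the two triangles into a flat mixture; any other choice of f_I would reweight f_B accordingly.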
A Probabilistic Approach: the Big Picture
[Figure: “c obtains x from p”. The producer p shares a message at disclosure
degree x ∈ [0, 1]; the consumer c infers an amount of knowledge y ∈ [0, 1]
(inference, i.e. behavioural trust), which in turn causes an impact z ∈ [0, 1]
on p.]
Pr(infer y | x) ≈ f_I(y; x) dy
Pr(impact z | y) ≈ f_B(z; y) dz
Pr(impact z | x) ≈ f_R(z; x) dz
12 of 18
Our Scenario Revisited
[Figure: a two-branch inference/impact tree. The consumer's inference falls in
branch 0 (state A) with probability q and in branch 1 (state B) with
probability 1 − q; given branch i, the impact to the provider is 10K with
probability w(i) and 100K with probability 1 − w(i). The benefit of disclosing
at degree x is P̄(x) = 25K.]
Average impact:
E{h} = q{10 w(0) + 100[1 − w(0)]} + (1 − q){10 w(1) + 100[1 − w(1)]}
= 100 − 90{q[w(0) − w(1)] + w(1)}
Expected net benefit: C̄(x) = P̄(x) − 100 + 90{q[w(0) − w(1)] + w(1)}
C̄ ≥ 0 ⇒ (100 − P̄(x))/90 ≤ q w(0) + (1 − q) w(1) ≤ 1
13 of 18
Our Scenario Revisited: James
[Figure: the same tree with q = 0.1, w(0) = 0.9, w(1) = 0.9, and P̄(x) = 25K.]
Average impact: E{h} = 100 − 90 · 0.9 = 19K
Net benefit check: 75/90 ≈ 0.83 ≤ 0.9 ≤ 1
Conclusion: BI can “safely” share with James the information
that France is going to be invaded
14 of 18
Our Scenario Revisited: Alec
[Figure: the same tree with q = 0.6, w(0) = 0.6, w(1) = 0.4, and P̄(x) = 25K.]
Average impact: E{h} = 100 − 90 · 0.52 = 53.2K
Net benefit check: 75/90 ≈ 0.83 > 0.52, so C̄ < 0
Conclusion: BI cannot “safely” share with Alec the information
that France is going to be invaded
15 of 18
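The arithmetic on the last three slides can be replayed directly, with the values of q, w(0), w(1) and P̄(x) = 25K taken from the slides:

```python
def average_impact(q, w0, w1):
    """Average impact E{h} = 100 - 90*(q*(w0 - w1) + w1), in K,
    for impact levels 10K / 100K as in the two-branch scenario model."""
    return 100 - 90 * (q * (w0 - w1) + w1)

def can_share(p_bar, q, w0, w1):
    """Safe-sharing condition C_bar >= 0, i.e.
    (100 - P_bar(x))/90 <= q*w(0) + (1 - q)*w(1)."""
    return (100 - p_bar) / 90 <= q * w0 + (1 - q) * w1

# James: q = 0.1, w(0) = w(1) = 0.9 -> low average impact, sharing is safe.
assert abs(average_impact(0.1, 0.9, 0.9) - 19.0) < 1e-9
assert can_share(25, 0.1, 0.9, 0.9)

# Alec: q = 0.6, w(0) = 0.6, w(1) = 0.4 -> average impact 53.2K,
# the condition 75/90 <= 0.52 fails, so sharing is not safe.
assert abs(average_impact(0.6, 0.6, 0.4) - 53.2) < 1e-9
assert not can_share(25, 0.6, 0.6, 0.4)
```

With James the expected impact (19K) stays below the 25K benefit, so the expected net benefit is positive; with Alec the 53.2K expected impact dwarfs the benefit, reproducing the two conclusions.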
Conclusions
A framework enabling an agent to determine how much information
it should disclose to others in order to maximise its utility
It distinguishes between “desired” (e.g. James) and
“undesired” consumers (e.g. Alec)
It helps in handling the risk of information propagated across a
network of agents
Potential applications in strategic contexts where pieces of
information are shared across several partners that may have
hidden agendas
Future work:
Integration with quantitative trust models
Studying statistical properties of the r.v. R(x)
Developing statistical operators for representing the propagation
of information across a (partially known) network of agents
16 of 18
In loving memory of Chatschik Bisdikian Ph.D.
Born December 21st 1960 — Died April 24th 2013
Researcher at IBM, IEEE Fellow, inductee of the Academy of Distinguished Engineers,
Hall of Fame of the School of Engineering of the University of Connecticut, lifelong
member of the Eta Kappa Nu and Phi Kappa Phi Honor Societies.
17 of 18
Acknowledgement
Research was sponsored by the US Army Research Laboratory
and the UK Ministry of Defence and was accomplished under
Agreement Number W911NF-06-3-0001. The views and
conclusions contained in this document are those of the
authors and should not be interpreted as representing the
official policies, either expressed or implied, of the US Army
Research Laboratory, the US Government, the UK Ministry
of Defence, or the UK Government. The US and UK
Governments are authorized to reproduce and distribute
reprints for Government purposes notwithstanding any
copyright notation hereon.
18 of 18