Soc july-2012-dmitri-botvich
Presentation Transcript

  • Trust for Communication Services and Networks: Introduction and Applications
    Dmitri Botvich
  • Presentation Outline
    • What is Trust
    • Trust Modelling
    • Trust Overlay Architecture
    • Applications of Trust
      • Distributed intrusion detection
      • Spam detection
      • Composite services trust monitoring
    • FP7 Aniketos Project Demo
  • The meanings of trust
    [Diagram: trust relationships among Alice, Bob, Acme Bank, amazon.com and Verisign, with Verisign certifying identities]
    • Trusted to do what?
    • Who is Verisign? Who is Bob? Can you trust them?
    • Can you trust the recommender?
      • To certify identity?
      • To attest to behaviour?
  • The meanings of trust
    • Trust is an overloaded term
    • Several different definitions, including:
      • Trust as a measure of something, to be used in decision-making
        • Gambetta (1988): trust as a probabilistic measure indicating confidence in a certain type of behaviour, used as a basis for deciding whether to rely on another entity
      • "To trust" (or not) may be taken to mean the decision itself
        • Binary value: an entity is either trusted or untrusted
        • Supports predicate logic of trust, chains of trust, etc.
    • We adopt Gambetta's approach: trust as a measure of something (richer potential; more socially motivated)
    • But more than just a single number; we also consider:
      • Confidence: the reliability of the trust assessment (e.g. on how many experiences or recommendations it was based)
      • Recency: when it was last updated
  • Representing and updating trust
    • Each node i maintains trust in node j: T_i,j
    • Initialisation: set T_i,j to a default trust level
    • Update based on direct experience:
      • T_i,j := f_e(T_i,j, S), where S is a score attributed to the experience
      • e.g. exponential average: T_i,j := αS + (1 − α)T_i,j
    • Update based on third-party advice (reputation):
      • T_i,j := f_r(T_i,j, T_i,k, T_k,j)
      • e.g. exponential average: T_i,j := T_i,j − β T_i,k(T_i,j − T_k,j)
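The two update rules above can be sketched as follows. This is a minimal illustration of the exponential-average forms given on the slide; the class and parameter names (`TrustManager`, `alpha`, `beta`, the 0.5 default) are illustrative assumptions, not from the original.

```python
# Sketch of the direct-experience and reputation update rules, using the
# exponential-average forms from the slide. Names and defaults are illustrative.

class TrustManager:
    def __init__(self, default_trust=0.5, alpha=0.1, beta=0.1):
        self.default_trust = default_trust
        self.alpha = alpha   # weight of a new direct experience
        self.beta = beta     # weight given to third-party advice
        self.trust = {}      # T_i,j, keyed by the other node's id j

    def get(self, j):
        # Initialisation: unknown nodes start at the default trust level
        return self.trust.get(j, self.default_trust)

    def direct_experience(self, j, score):
        # T_i,j := alpha * S + (1 - alpha) * T_i,j
        t = self.get(j)
        self.trust[j] = self.alpha * score + (1 - self.alpha) * t

    def recommendation(self, k, j, t_kj):
        # T_i,j := T_i,j - beta * T_i,k * (T_i,j - T_k,j)
        # i.e. move towards k's opinion, scaled by how much i trusts k
        t_ij, t_ik = self.get(j), self.get(k)
        self.trust[j] = t_ij - self.beta * t_ik * (t_ij - t_kj)
```

Note how the reputation rule discounts k's advice by T_i,k: advice from a distrusted recommender barely moves the estimate.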
  • Service-centric model
    • All relevant activity is modelled as service usage:
      • Some peer nodes offer services to others
      • Some peer nodes use services of others
    • A node offering a service stipulates a trust threshold for that service.
      When node i tries to use service S_x on node j:
      • if f_x(T_i,j) < t_x, service use is blocked; otherwise it is allowed
      • T_i,j is a vector measure of i's trust in j
      • f_x(T_i,j) maps T_i,j onto a scalar in the range (0, 1)
      • service S_x has the (scalar) trust threshold t_x
  • Peer-to-peer trust
    • Peer-to-peer model: each node autonomously decides how much it trusts each other node
      • Based on its own experience
      • Based on reputation (third-party experience)
    • Not incompatible with centralised trust systems:
      • Verisign provides a certification service
      • Every node (web browser) is configured to trust Verisign 100% (to authenticate web server identities)
  • Trust Management Features
    • Peer-to-peer distributed trust management
    • Service-centric model
      • All relevant activity modelled as service usage
      • Each service offered by a node has an associated trust threshold
    • Closed-loop control
      • The behaviour of a node causes its trust score to change, which in turn affects future access control decisions
      • This percolates through the network with the sharing of trust information
  • Trust Overlay Architecture
    [Diagram: a trust management layer of trust managers sits above the service usage layer; (1) usage events flow up from the service layer, (2) trust updates are exchanged between trust managers, and (3) updated protections are pushed back down]
  • Strategies for trust update: desirable properties
    • Ideally, good nodes are allowed and bad nodes are blocked, but the distinction is not always clear:
      • A bad node might appear to behave well for a time, to gain trust
      • A good node might exhibit mixed behaviour due to lazy configuration
    • Stable dynamics
      • Responsiveness to suspicious behaviour needs to be tuned
      • IDSs tend to produce false alarms, so totally cutting off a node may be inappropriate
    • Normalisation, to remove bias
    • Incentive: why would a node bother to pass on trust information to others?
  • Algorithms (strategies) for trust update
    • Averaging: a voting system where an average is taken dynamically (e.g. moving average, exponential average)
    • Time decay to the default trust level: a trust assessment loses validity over time and eventually becomes worthless
    • Level of "forgiveness": nodes can be given a second (or nth) chance
    • Use of a third-party trust threshold for accepting referrals
    • Hard to gain trust, easy to lose it: much effort is required to gain trust, and it can be lost easily
    • Use of corroboration
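Three of the strategies above can be sketched as small update functions. The half-life decay, the asymmetric gain/loss rates, and the forgiveness floor are all illustrative assumptions; the slides name the strategies but not specific formulas.

```python
# Sketch of three listed strategies: time decay towards the default trust
# level, asymmetric updates ("hard to gain, easy to lose"), and a
# forgiveness floor. All parameter values are illustrative.

DEFAULT_TRUST = 0.5

def decay(trust, elapsed, half_life=3600.0):
    # Drift back towards the default as the assessment ages
    w = 0.5 ** (elapsed / half_life)
    return w * trust + (1 - w) * DEFAULT_TRUST

def update(trust, score, gain=0.05, loss=0.4):
    # Positive experiences raise trust slowly; negative ones pull it
    # down quickly (hard to gain, easy to lose)
    rate = gain if score > trust else loss
    return trust + rate * (score - trust)

def forgive(trust, floor=0.1):
    # Never pin a node at zero: leave room for a second (or nth) chance
    return max(trust, floor)
```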
  • Distribution of trust within a "neighbourhood"
    [Diagram: nodes A to H; the neighbourhoods of nodes A and B are marked, with links showing service usage (experience) and exchanged trust information (collaboration)]
  • The problem with Intrusion Detection Systems
    • IDSs are traditionally based on the principle of perimeter defence
      [Diagram: an IDS and firewall at the perimeter between the Internet and the protected network]
    • ... but this model is becoming less useful, especially with unstructured networks
  • Trust-based solution: distributed IDS
    • How do loosely coupled IDS components work together in an ad hoc or other unstructured network?
      [Diagram, ad hoc network: (1) service misuse or other suspicious behaviour by node X is detected by node A; (2) A sends reputation updates about X to nodes B and C; (3) B re-evaluates its trust in X and blocks access]
    • For this to be effective, node B needs to trust node A
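The three-step flow on this slide can be sketched as follows. The class, the block threshold, and the reputation weighting (reusing the earlier exponential-average form) are illustrative assumptions.

```python
# Sketch of the distributed-IDS flow: A detects misuse by X, sends a
# reputation update to its neighbours, and each neighbour B re-evaluates
# its trust in X, weighted by how much B trusts A. Values are illustrative.

BLOCK_THRESHOLD = 0.3
BETA = 0.5  # weight of a received reputation update

class Node:
    def __init__(self, name):
        self.name = name
        self.trust = {}       # trust in other nodes, default 0.5
        self.neighbours = []  # nodes that receive this node's reputation updates

    def get_trust(self, other):
        return self.trust.get(other, 0.5)

    def detect_misuse(self, offender):
        # (1) This node observes suspicious behaviour and drops its own trust
        self.trust[offender] = 0.0
        # (2) ... then sends reputation updates to its neighbourhood
        for n in self.neighbours:
            n.receive_reputation(self, offender, 0.0)

    def receive_reputation(self, sender, subject, reported):
        # (3) Re-evaluate trust in the subject, discounted by trust in the sender
        t = self.get_trust(subject)
        self.trust[subject] = t - BETA * self.get_trust(sender.name) * (t - reported)

    def allows(self, other):
        return self.get_trust(other) >= BLOCK_THRESHOLD
```

If B does not trust A, the discount factor keeps B's trust in X almost unchanged, which is exactly the point of the closing remark on the slide.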
  • Spam Filtering
    • Spam filtering techniques:
      • Parsing message content
      • DNS blocklists
      • Collaborative filtering databases
      • New ideas: micro-payments, computational challenges, ...
    • Limitations:
      • False negatives: some spam gets through
      • False positives: genuine mail gets blocked (worse...)
      • Spammers become aware of filtering techniques and adapt
        • Use real users' names, return addresses, etc.
        • Add superfluous text, embed text in images, etc.
      • Lack of real incentive for MTAs to check outgoing mail for spam
    • MTA = Mail Transfer Agent ("mail server")
  • Using trust to enhance spam filtering
    • Typical approach to content filtering:
      • Apply tests to each e-mail, compute a spam score and compare it with a threshold
        • If the message scores above the threshold, flag it as spam; otherwise, accept it
      • The threshold is pre-defined
    • Features of our approach:
      • Distributed trust
      • Closed-loop control system
      • Vary the threshold depending on trust in the sender
        • Mail from known reliable sources is mostly accepted even if it looks a bit like spam
        • Lower tolerance for mail from unreliable sources
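The trust-varying threshold can be sketched in a few lines. The linear mapping and the constants are illustrative assumptions; the slide only states that the threshold should rise with trust in the sender.

```python
# Sketch of trust-varying spam filtering: the fixed content-score threshold
# is replaced by one that rises with the trust held in the sending MTA.
# The linear mapping and both constants are illustrative assumptions.

BASE_THRESHOLD = 5.0   # classic fixed spam-score cut-off
SPREAD = 4.0           # how strongly trust shifts the cut-off

def spam_threshold(sender_trust):
    # Trusted senders (trust near 1) get a lenient threshold; unknown or
    # distrusted senders (trust near 0) a strict one
    return BASE_THRESHOLD + SPREAD * (sender_trust - 0.5)

def is_spam(content_score, sender_trust):
    return content_score >= spam_threshold(sender_trust)
```

With these constants, a borderline message scoring 6.0 is accepted from a sender with trust 0.99 but flagged when it comes from a sender with trust 0.25, which is the behaviour the slide describes.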
  • Using trust to enhance spam filtering
    • Each MTA builds up a picture of the trustworthiness of other MTAs (with respect to their spam filtering):

      Node name             Trust score   Confidence   Recency
      smtp2.foo-inc.com     0.99          0.99         100
      mail.barfoo.org       0.95          0.95         200
      labserver.uxy.edu     0.8           0.8          700
      webmailprov.com       0.5           0.5          30
      lazyconfig.net        0.25          0.25         10
      phishysite1342.com    0.01          0.01         8000
  • Note on verifying sender identity
    • A prerequisite for any trust system is consistent identities
    • The problem with spam is that it is relatively easy to spoof mail headers
    • So the trust system cannot be based on the originating MTA
    • Instead, we assess the trust level based on the identity of the last hop (the last SMTP relay)
      • The last hop can be identified by the IP address used in the SMTP dialogue
      • This provides an incentive for a mail relay to check what it is sending
    • Of course, a rogue MTA may change its IP address (Sybil attack)
      • But then it reverts to the (low) default trust
  • Composite Services
    • Composite services (CS) are a key concept in service-oriented computing: composing basic services into enterprise services
    • Environment characteristics: openness, distribution, dynamicity, loose coupling, etc.
    • These characteristics result in the co-existence of various levels of security, capacity, availability, reliability, and other functional and non-functional properties and behaviour of the services operating in such environments
  • Trust and Service Composition (1)
    • Interacting services and service consumers face the challenge of carrying out transactions with only trustworthy services
    • Service composition techniques must be able to establish a trustworthy service by selecting trustworthy component services
    • The composition techniques must also be able to maintain the most trustworthy (and cost-efficient) composite service
    • Addressing the establishment and maintenance of multidimensional trust is essential for the success and adoption of the services paradigm
    • Composite service providers can create new trustworthy composite services from component services based on knowledge of the components' trustworthiness
  • Trust and Service Composition (2)
    • The capability for composing trustworthy services includes the ability to adapt the services dynamically or statically in response to:
      • changes in trustworthiness based on a runtime change in the behaviour or attributes of the services,
      • changes in the trustworthiness requirements,
      • changes in the service environment, e.g. new threats, or
      • the emergence of new services that are more trustworthy
    • Heterogeneous properties must be considered to support trustworthiness evaluation in compositions, such as those related to security, availability and reputation
    • Composite service providers support consumers in ensuring the trustworthiness of component services originating from different providers
  • Trust and Service Composition (3)
    • Composite service providers help maintain the trustworthiness of component services at runtime to minimise the need for adaptation:
      • the ability to allocate distributed service resources, control admission, and communicate with component providers
    • One of the main goals in supporting trustworthiness is to maintain profitability:
      • the techniques must therefore take into account cost, pricing and other business-related aspects
    • The trustworthiness evaluation will support mechanisms for resilience against attacks and other problems, such as collusion, dishonest raters, and deficiency of resources
  • Trust in the Aniketos Project
    • Trustworthiness module: implementation of a monitoring and prediction approach
    • Aggregation techniques for service trustworthiness properties based on composite service structure and characteristics, e.g. the importance of components
    • Trustworthiness prediction procedure for services
    • Optimisation of service composition through a custom genetic algorithm
    • Comparison with other optimisation techniques
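One plausible shape for the aggregation step mentioned above can be sketched as an importance-weighted combination of component scores. The weighted-geometric-mean form below is an illustrative assumption, not the Aniketos module's actual formula.

```python
# Sketch of aggregating component trustworthiness into a composite-service
# score, weighted by component importance as the slide suggests. The
# weighted geometric mean is an illustrative choice: it lets an important
# low-trust component drag the whole composite's score down sharply.

def composite_trustworthiness(components):
    # components: list of (trustworthiness, importance) pairs,
    # trustworthiness in (0, 1], importance > 0
    total_w = sum(w for _, w in components)
    score = 1.0
    for t, w in components:
        score *= t ** (w / total_w)   # heavily weighted components dominate
    return score
```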
  • Trust in the Aniketos Project (cont'd)
    • Effect of changes in the trustworthiness of components, depending on component constructs
    • Business view of composite service trustworthiness:
      • optimisation of composite service prices based on a price response function and trustworthiness
      • capacity-dependent charging
      • profitability issues, e.g. consumer differentiation
  • Questions?
    Email: dbotvich@tssg.org
    Website: http://www.tssg.org/people/dbotvich