Efficient Sharing of Conflicting Opinions with Minimal Communication in Large Decentralised Teams

Presentation given at the IJCAI-HINA 2011 workshop: http://bit.ly/HINA2011

The paper itself: http://eprints.ecs.soton.ac.uk/22435/
Presentation Transcript

    • Efficient Sharing of Conflicting Opinions with Minimal Communication in Large Decentralised Teams
      Oleksandr Pryymak, Alex Rogers and Nicholas R. Jennings
      University of Southampton, {op08r,acr,nrj}@ecs.soton.ac.uk
      July 20, 2011
    • Disaster response and large decentralised teams
      2010, Haiti earthquake: citizen and public news reporting, plotted on an online map (Ushahidi).
      2010, Chile earthquake: Twitter is one of the speediest, albeit not the most accurate, sources of real-time information (France24).
      Large teams of individuals
      Decentralised
      Not every individual can make an observation
      Observations are uncertain and conflicting
      Individuals share opinions without supporting information
      How are opinions shared, and how can their accuracy be improved?
    • How opinions are shared: can we trust what we share?
      Chile '10: yes / no (Mendoza et al. 2010)
      Santiago airport is closed; fire at the University of Concepción; looting in Concepción
      Looting in Santiago; tsunami warning; active volcano
      Opinions are shared in cascades (avalanches)
      Even in cooperative settings opinions might be incorrect
    • The problem of forming a correct opinion
      How do agents decide which opinion is correct?
      Based on their own priors and observations
      Based on information from others: by analysing communicated information, reaching agreements, interacting with others
      The Problem: however, agents' processing abilities are limited, and communication is strictly limited to opinion sharing
      The Solution: agents have to exploit the properties of opinion sharing dynamics and filter out incorrect opinions in the sharing process
      How can such settings be found by independent actions of the agents?
    • Outline
      Remaining sections:
      1. Model of opinion sharing
      2. Existing message-passing algorithm
      3. Our algorithm based on independent actions
      4. Evaluation
    • Model: an agent
      Subject of interest: "Will it rain tonight?"
      Opinion: No / Don't know / Yes
      Belief: a value in [0, 1], initialised with a prior
      The belief is updated with:
      own observations (sensors)
      opinions of others (neighbours in the network)
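
A minimal sketch (not part of the slides) of how such an agent could be modelled, assuming a Bayesian-style belief update in which a neighbour's opinion is treated as a sensor reading whose accuracy equals the trust level; the threshold value SIGMA and all names are illustrative, and the paper's exact update rule may differ.

```python
# Sketch only: Bayesian-style belief update with a single trust level.
SIGMA = 0.9  # confidence threshold at which an opinion is formed (hypothetical value)

class Agent:
    def __init__(self, prior, trust):
        self.belief = prior    # P_i: belief that the "white" opinion is correct
        self.trust = trust     # t_i: trust placed in a neighbour's opinion
        self.opinion = None    # None = undetermined, True = white, False = black

    def update(self, neighbour_says_white):
        """Update the belief with one received opinion, then re-derive our own opinion."""
        t = self.trust if neighbour_says_white else 1.0 - self.trust
        b = self.belief
        self.belief = (t * b) / (t * b + (1.0 - t) * (1.0 - b))
        if self.belief >= SIGMA:
            self.opinion = True       # support "white"
        elif self.belief <= 1.0 - SIGMA:
            self.opinion = False      # support "black"
```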
    • Model: sample dynamics
      Opinions are shared in cascades
      Cascades might be wrong and fragile
      Cascades depend on trust levels
      Double-counting fallacy
      [Figure: network snapshots; red nodes are agents with sensors, green nodes are agents with an undetermined opinion, white and black nodes are agents that support the corresponding opinions (b = white).]
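
Building on the agent sketch above, the cascading dynamics could be simulated with a simple propagation loop; this assumes an agent re-shares its opinion only when that opinion changes, which is how the slides describe cascades, and the actual scheduling in the paper may differ.

```python
from collections import deque

def propagate(agents, neighbours, source, observed_white):
    """Breadth-first cascade: an observation updates the source agent, and every
    agent whose opinion changes forwards its new opinion to its neighbours."""
    agents[source].update(observed_white)
    queue = deque([source]) if agents[source].opinion is not None else deque()
    while queue:
        sender = queue.popleft()
        for j in neighbours[sender]:
            before = agents[j].opinion
            agents[j].update(agents[sender].opinion)
            if agents[j].opinion != before:   # opinion changed, so keep cascading
                queue.append(j)
```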
    • Settings for improved reliability: metrics
      [Figure: top panel shows the percentage of agents holding the correct, incorrect, and undetermined opinion; bottom panel shows team Reliability and Awareness. Both are plotted against the trust level (common to all agents), spanning stable, scale-invariant, and unstable dynamics around t_critical.]
    • Cascades distribution
      [Figure: cascade frequency against the size of the opinion cascade (log-log axes) for stable (t = 0.6), scale-invariant (t = 0.63), and unstable (t = 0.66) dynamics.]
      Branching factor of opinion sharing α: improved reliability at α = 1
      R. Glinton, P. Scerri, and K. Sycara (2010). Exploiting scale invariant dynamics for efficient information propagation in large teams. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '10), pages 21-28, Toronto, Canada.
    • DACOR
      Introduces additional communication: NumberOfNeighbours² additional messages for a single opinion change
      Exhibits low adaptivity
      Requires tuning of its parameters
    • Autonomous Adaptive Tuning (AAT) of trust levels
      How can the settings for improved reliability be found based on local observations only?
      [Figure: team Reliability and Awareness against a common trust level, spanning stable, scale-invariant, and unstable dynamics around t_critical.]
      Intuition: an agent must use the minimal trust level that still enables it to form its opinion
      However, the agent's choice influences others in the team
    • Autonomous Adaptive Tuning of trust levels
      Agent i has to select the minimal trust level t_i^l from the candidates.
      The agent using t_i^l has to achieve the target awareness rate, h_best:
      $t_i = \arg\min_{t_i^l} \left| h_i(t_i^l) - h_{\mathrm{best}} \right|$
      1. How to select candidate trust levels?
      2. How to estimate their awareness rates?
      3. How to choose the trust level to use?
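
The selection rule itself is straightforward; a sketch, assuming estimated_awareness maps each candidate trust level to its estimated awareness rate (the estimation is addressed on the following slides):

```python
def choose_trust_level(candidates, estimated_awareness, h_best):
    """Return the candidate trust level whose estimated awareness rate
    is closest to the target awareness rate h_best (the arg-min rule above)."""
    return min(candidates, key=lambda t: abs(estimated_awareness[t] - h_best))
```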
    • AAT: candidate trust levels
      [Figure: the agent's belief P_i^k after k opinion updates (o_i = black or o_i = white) relative to the thresholds 1-σ and σ, illustrating the candidate trust levels t_i^{1-}, t_i^{1+}, t_i^{2-}, t_i^{2+}.]
      To form the most accurate opinion, the agent must form it when it observes the strongest support.
      Since the number of neighbours |N_i| is limited, the set of candidate trust levels is:
      $T_i = \{\, t_i^{l-}, t_i^{l+} : l = 1 \dots |N_i| \,\}$
      In settings with a dynamic topology, an agent may use an arbitrary T_i.
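
A sketch of how candidate trust levels could be derived under the Bayesian update assumed in the earlier agent sketch: the level for l is taken as the trust value at which exactly l consistent opinions move the belief from the prior across the threshold σ. The paper additionally distinguishes t_i^{l-} and t_i^{l+} (just below and just above each crossing), which this simplified, hypothetical helper does not.

```python
def odds(p):
    return p / (1.0 - p)

def candidate_trust_levels(prior, sigma, num_neighbours):
    """Solve (t/(1-t))**l * odds(prior) = odds(sigma) for each l = 1..|N_i|."""
    levels = []
    for l in range(1, num_neighbours + 1):
        x = (odds(sigma) / odds(prior)) ** (1.0 / l)
        levels.append(x / (1.0 + x))
    return sorted(levels)

# Example: prior 0.5, threshold 0.9, 4 neighbours -> roughly [0.63, 0.68, 0.75, 0.9]
print(candidate_trust_levels(0.5, 0.9, 4))
```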
    • AAT: estimation of the awareness rates
      The awareness rates of the candidate trust levels cannot be calculated directly.
      Two pieces of evidence indicate that the agent could have formed an opinion with t_i^l while actually using t_i:
      1. Ev1: if an opinion was formed, then all higher trust levels (t_i^l ≥ t_i) would have led to opinion formation as well.
      2. Ev2: otherwise, if t_i^l requires fewer updates to form an opinion than the observed strongest support.
      $\hat{h}_i(t_i^l) \approx h_i(t_i^l)$
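
A sketch of the bookkeeping such an estimate could rely on, counting how often each candidate level would have formed an opinion according to Ev1 and Ev2; the interface (what is recorded per dialogue and how the strongest support is measured) is an assumption, not the paper's implementation.

```python
class AwarenessEstimator:
    def __init__(self, candidates):
        self.would_form = {t: 0 for t in candidates}  # dialogues where level t would have formed an opinion
        self.dialogues = 0

    def record(self, used_trust, opinion_formed, strongest_support, updates_needed):
        """updates_needed[t]: opinion updates needed to form an opinion at level t;
        strongest_support: largest run of consistent updates observed in the dialogue."""
        self.dialogues += 1
        for t in self.would_form:
            if opinion_formed and t >= used_trust:            # Ev1
                self.would_form[t] += 1
            elif updates_needed[t] <= strongest_support:      # Ev2
                self.would_form[t] += 1

    def estimate(self, t):
        """Estimated awareness rate for candidate trust level t."""
        return self.would_form[t] / self.dialogues if self.dialogues else 0.0
```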
    • AAT: strategies to select a trust level
      The problem of selecting t_i^l ∈ T_i according to their estimated awareness rates ĥ(t_i^l) resembles the standard multi-armed bandit (MAB) model.
      The agent can apply MAB strategies that assume the reward distribution is unknown: Greedy, ε-greedy, ε-N-greedy, Soft-max.
      However, for an ascendingly ordered T_i, Hill-climbing: select a trust level from those closest to the currently used one.
      [Figure: estimated awareness rate against the trust level around t_critical.]
      Since an agent's choice influences others, strategies that make less dramatic changes to the dynamics are expected to perform better.
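
A sketch of the hill-climbing variant, assuming the candidates are sorted ascendingly and that "closest to the currently used" means the immediate neighbours in that ordering; it reuses the hypothetical AwarenessEstimator interface from the previous sketch.

```python
def hill_climb(candidates, current_index, estimator, h_best):
    """Move at most one step along the ordered candidate trust levels,
    towards the level whose estimated awareness is closest to h_best."""
    best_index = current_index
    for idx in (current_index - 1, current_index, current_index + 1):
        if 0 <= idx < len(candidates):
            if (abs(estimator.estimate(candidates[idx]) - h_best)
                    < abs(estimator.estimate(candidates[best_index]) - h_best)):
                best_index = idx
    return best_index
```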
    • Selection of the target awareness rate
      [Figure: team Reliability (left) and average trust level ⟨t_i⟩ (right) against the target awareness rate h_best, for random, scale-free, and small-world networks.]
      The agents have to compromise their awareness rates to improve the team's reliability.
      With a high target awareness rate h_best, a team exhibits unstable dynamics, and thus the reliability drops.
    • Reliability of a team
      [Figure: Reliability against network size (500-2000) for AAT, DACOR, individually pre-tuned trust levels, and average pre-tuned trust levels, on (a) random, (b) scale-free, and (c) small-world networks.]
      AAT significantly outperforms the prediction of the best parameters (average pre-tuned) and the existing DACOR.
      Individually pre-tuned trust levels indicate the upper bound that can be achieved.
    • Communication expense
      $\mathrm{MinimalCommunication} = \dfrac{\mathrm{NumberOfNeighbours}}{\mathrm{Agents}}$
      [Figure: messages per agent against network size (500-2000) for AAT, DACOR, and the minimal communication baseline.]
      AAT is communication-efficient, while DACOR requires 4-7 times more messages to operate.
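
A small sketch of the minimal-communication baseline as read from the slide, interpreting it as the average neighbourhood size (each agent forwards a settled opinion once to each neighbour); that reading of the garbled formula is an assumption.

```python
def minimal_communication(neighbours):
    """Average number of messages per agent when each agent messages each neighbour once.
    neighbours: dict mapping an agent id to the list of its neighbours' ids."""
    return sum(len(n) for n in neighbours.values()) / len(neighbours)

# Example: a 4-agent ring network gives 2.0 messages per agent.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(minimal_communication(ring))
```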
    • Performance in the presence of indifferent agents
      [Figure: Reliability against the percentage of indifferent agents (0-100%) for AAT, DACOR, individually pre-tuned trust levels, and average pre-tuned trust levels, on (a) random, (b) scale-free, and (c) small-world networks.]
      AAT installed on half of a team delivers higher reliability than we can predict by using the average pre-tuned trust levels.
    • Conclusions
      AAT exploits properties of social behaviour to improve the accuracy of agents' opinions.
      Contributions:
      Improves reliability
      Minimises communication (the first to operate under this restriction)
      Computationally inexpensive
      Adaptive, scalable, robust to the presence of indifferent agents
      Future work:
      Tuning an individual trust level for each neighbour
      An attack-resistant solution