Malware continues to thrive on the Internet. Besides automated mechanisms for detecting malware, we provide users with trust evidence information to enable them to make informed trust decisions. To scope the problem, we study the challenge of assisting users with judging the trustworthiness of software downloaded from the Internet. Through expert elicitation, we deduce indicators for trust evidence, then analyze these indicators with respect to scalability and robustness. We design OTO, a system for communicating these trust evidence indicators to users, and we demonstrate through a user study the effectiveness of OTO, even with respect to IE’s SmartScreen Filter (SSF). The results from the between-subjects experiment with 58 participants confirm that the OTO interface helps people make correct trust decisions compared to the SSF interface regardless of their security knowledge, education level, occupation, age, or gender.
Authors: Tiffany Hyun-Jin Kim, Payas Gupta, Jun Han, Emmanuel Owusu, Jason Hong, Adrian Perrig, and Debin Gao
OTO: Online Trust Oracle for User-Centric Trust Establishment, at CCS 2012
1. OTO:
Online Trust Oracle for
User-Centric Trust Establishment
Tiffany Hyun-Jin Kim, Jun Han, Emmanuel Owusu, Jason Hong, Adrian Perrig
Carnegie Mellon University
Payas Gupta, Debin Gao
Singapore Management University
19th ACM Conference on Computer and Communications Security (CCS)
October 17, 2012
2. WHEN DOWNLOADING SOFTWARE…
Challenge: gauging authenticity & legitimacy of software
Novice users
Don’t understand dangers
Lack ability to validate
Security-conscious users
Often frustrated by their inability to judge
4. TRUST INFO FROM THE INTERNET
Challenging for end-users
Cumbersome information gathering
Being unaware of existing evidence
Assessing the quality of evidence
Contradicting evidence
Automate trust decisions for users?
Delays in identifying new & evolving threats
Malware authors can circumvent the automated system
Users are still left alone to make trust decisions!
5. PROBLEM DEFINITION
Design a dialog box with robust trust evidence indicators
Help novice users make correct trust decisions
Avoid malware
Even if underlying OS fails to correctly label legitimacy
Desired properties
Correct
Users can still make correct trust decisions given conflicting info
Usable
Indicators are useful to novice users
Indicators should not disturb users
6. ASSUMPTION
Malware cannot interfere with dialog box operations
Display of the dialog box
Detection of software downloads
Gathering trust evidence
Adversary model
Malware distributors manipulate trust evidence
Provide falsifying info
Hide crucial info
10. He searches on Google for
“batman begins.”
After looking through several
options, he decides to watch
this video and clicks.
Click on the link
11. While waiting for the
video to load, a dialog
box appears.
Would you recommend that
your friend continue?
12. AT THE END OF EACH SCENARIO
Questions
Would you recommend that your friend proceed and
download the software? [Yes/No/Not sure]
[If Yes or No] Why?
[If Not sure] What would you do to find out the legitimacy of this
software?
What evidence would you present to your friend to
convince him/her of the legitimacy of this software?
How well do you know this software? [1:don’t know at all
– 5: know very well]
13. RESULTS OF EXPERTS’ USER STUDY
PROCESSING OPERATION # EXPERTS
SOFTWARE REVIEW
Are reviews available from reputable sources, experts, or friends? 9
Are the reviews good? 3
HOSTING SITE
Is the hosting site reputable? 8
What are the corporate parameters (e.g., # employees, age of company)? 2
USER INTENTION
Did you search for that specific software? 1
Are you downloading from a pop-up? 1
SECURING MACHINE
Do you run an updated antivirus? 2
Is your machine trusted? 1
14. OTO: ONLINE TRUST ORACLE
User interface displaying safety of downloading file
Summary &
clickable link
15. 3 COLOR MODES
Similar to Windows User Account Control framework
Blue: highly likely to be legitimate
Red: highly likely to be malicious
Yellow: system cannot determine the legitimacy
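The three color modes can be summarized as a small decision function. This is an illustrative sketch only, not OTO's actual implementation; the function and verdict names are assumptions.

```python
# Hypothetical sketch of OTO's three color modes: map the system's
# legitimacy verdict to a dialog color, mirroring the Windows UAC scheme.

def oto_color(verdict):
    """Blue for likely-legitimate, red for likely-malicious,
    yellow whenever the system cannot determine legitimacy."""
    if verdict == "legitimate":
        return "blue"
    if verdict == "malicious":
        return "red"
    return "yellow"  # undetermined / no verdict

assert oto_color("legitimate") == "blue"
assert oto_color("malicious") == "red"
assert oto_color("unknown") == "yellow"
```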
16. EVALUATION
Experiment with 2 conditions
IE9 SmartScreen Filter (SSF): base condition
Current state-of-the-art technology [1]
Widely used in browsers
Checks software against a known blacklist
If flagged → red warning banner
No reputation → yellow warning banner
[1] M. Hachman. Microsoft’s IE9 Blocks Almost All Social Malware, Study Finds. http://www.pcmag.com/article2/0,2817,2391164,00.asp
17. PROCEDURE
Same 10 scenarios as in the experts’ user study
End of each scenario: display SSF or OTO warning dialog box
Ground truth vs. system detection outcome (malicious = positive class):
                       Detected legitimate                  Detected malicious
Legitimate software    TN: Kaspersky, SPAMfighter, AhnLab   FP: MindMaple, Adobe Flash
Malicious software     FN: ActiveX codec, Windows           TP: HDD diagnostics, Rkill
                       activation, privacy violation
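The 2×2 outcome tally used in the procedure can be sketched as code. The function below is an illustration, and the cell assignments in the example follow my reading of the slide layout rather than the paper's text.

```python
# Sketch: tally TN/FP/FN/TP over the ten study scenarios,
# treating "malicious" as the positive class.

def confusion_counts(ground_truth, detected):
    """Count the four outcome cells for paired ground-truth and
    system-detection labels ('legitimate' or 'malicious')."""
    counts = {"TN": 0, "FP": 0, "FN": 0, "TP": 0}
    for truth, pred in zip(ground_truth, detected):
        if truth == "legitimate":
            counts["TN" if pred == "legitimate" else "FP"] += 1
        else:
            counts["FN" if pred == "legitimate" else "TP"] += 1
    return counts

# Illustrative split of the 5 legitimate and 5 malicious scenarios:
gt = ["legitimate"] * 5 + ["malicious"] * 5
det = (["legitimate"] * 3 + ["malicious"] * 2   # Kaspersky..AhnLab TN; MindMaple, Flash FP
       + ["legitimate"] * 3 + ["malicious"] * 2)  # 3 FN; HDD diagnostics, Rkill TP
assert confusion_counts(gt, det) == {"TN": 3, "FP": 2, "FN": 3, "TP": 2}
```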
18. END OF EACH SCENARIO
While waiting for the
video to load, a dialog
box appears.
Your friend clicks the
“Continue” button.
19. When he clicks “Continue,"
your friend's computer
prevents him from
proceeding and instead
displays this interface.
Please help your friend
make a decision.
22. EFFECTIVENESS OF OTO
Demographics
58 participants
30 male and 28 female
Age 18–59
Between-subjects study: 29 for each condition
Compensation
$15 for participating
Additional $1 for each correct answer ($25 max)
23. RESULTS
Repeated Measures ANOVA test
Did participants answer each scenario correctly?
OTO helps people make more correct decisions than SSF
does regardless of gender, age, occupation, education
level, or background security knowledge!
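The paper's statistical analysis was a repeated-measures ANOVA. As a simpler, stdlib-only illustration of comparing the two between-subjects conditions, one can compare per-participant correct-answer counts with Welch's t statistic. The scores below are hypothetical placeholders, not the study's data.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (a stand-in here for the paper's repeated-measures ANOVA)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical per-participant correct answers out of 10 scenarios:
oto_scores = [9, 10, 8, 9, 10, 9, 8, 10, 9, 9]
ssf_scores = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]
t = welch_t(oto_scores, ssf_scores)  # positive t favors the OTO group mean
```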
24. TIMING ANALYSIS
N = 13 for SSF, N = 11 for OTO
Overall, time(OTO) < time(SSF)
Participants relied on evidence to make trust decisions
25. WHAT IF OS MISCATEGORIZES?
OTO >> SSF
5-pt Likert scale questions
OTO is as useful as SSF
OTO is more comfortable to use
26. SCOPE OF THIS PAPER
Main objective of this paper
Whether providing extra pieces of evidence helps users
Outside the scope of this paper
How each piece of evidence is gathered
How each piece of evidence is authenticated
Ensuring that malware cannot interfere with OTO operations
Existence of system-level trusted path for input and output
Helping people who don’t care about security
27. CONCLUSIONS
OTO: download dialog box
Displays robust & scalable trust evidence to users
Based on interview results of security experts
Goal: do users find additional trust evidence useful?
People actually read the evidence
Empowers users to make better trust decisions
Even if underlying OS misdetects
36. SECURITY ANALYSIS
Malware detection
Zero-day: lack of enough evidence
Well-known malware: likely to have more negative evidence than positive
False alarms
Users examine and compare
Evidence is what users would have gathered from Internet
Manipulation attack
Creating fake positive evidence
OTO’s evidence is robust
E.g., by considering temporal aspect
Need to forge multiple pieces of evidence
Hiding harmful evidence
Challenging to prevent authoritative resources from serving negative evidence
Impersonation of legitimate software
Can associate each piece of software with cryptographic hash
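The last point, associating each piece of software with a cryptographic hash, can be sketched with Python's standard `hashlib`. The function names and the publisher-digest workflow are illustrative assumptions, not OTO's implementation.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """SHA-256 digest of a downloaded file, read in chunks so large
    installers don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

def matches_publisher(path, published_digest):
    """True iff the local file matches the digest the legitimate
    publisher advertises, defeating simple impersonation."""
    return sha256_of(path) == published_digest
```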
38. RELATED WORK
User mental models
Responses to SSL warning messages [Sunshine et al. 2009]
Psychological responses to warnings [Bravo-Lillo et al., 2011]
Folk models of security threats [Wash, 2010]
Information Content for Microsoft UAC warning [Motiee, 2011]
Habituation
Effectiveness of browser warnings [Egelman et al. 2008]
Polymorphic and audited dialogs [Brustoloni et al. 2007]
Assessing credibility online
Augmenting search results with credibility visualizations [Schwarz
and Morris, 2011]
Prominence-Interpretation theory [Fogg et al. 2003]
39. RELATED WORK
User mental models
Responses to SSL warning messages [Sunshine et al. 2009]
Warnings in general do not prevent users from unsafe behavior
Psychological responses to warnings [Bravo-Lillo et al., 2011]
Users have wrong mental model for computer warnings
Most users don’t understand SSL warnings without background knowledge
Warnings should not be the main way of defense
Folk models of security threats [Wash, 2010]
Security should focus on both actionable advice and potential threats
Information Content for Microsoft UAC warning [Motiee, 2011]
Let users assess risk and correctly respond to warnings
Information can still be easily spoofed
40. RELATED WORK
Microsoft SmartScreen Filter
Current state-of-the-art technology widely used in browsers
Checks the software against a known blacklist of malicious software
If flagged → red-banner warning appears, hiding the options that would
let users download
Information Content for Microsoft UAC warning [Motiee, 2011]
Let users assess risk and correctly respond to warnings
Information can still be easily spoofed
Psychological responses to warnings [Bravo-Lillo et al., 2011]
Users have wrong mental model for computer warnings
Most users don’t understand SSL warnings without background knowledge
Warnings should not be the main way of defense
41. DESIGN RATIONALE
Prevalent security threats
85% malware from web
Drive-by downloads
Fake antivirus
Keyloggers
45% of attacks succeed via user actions
Common pitfalls
Lack of security knowledge
Visual deception
Psychological pressure
Reliance on prior experience
Bounded attention
Effective design principle
Grayed-out background
Mimicked UI of OS vendor
Detailed explanation
Non-uniform UIs
Editor's Notes
Clearly see whether interface is legit or not based on the answers, especially if they want to get the answer correctly.
Factors we took into account in our design. Understand prevalent security threats: according to industry reports, 85% of malware comes from the web, especially by luring people to sites with malicious code. Recurring popular threats are fake antivirus and keyloggers, and 45% of malware attacks succeed through user actions. We also considered common pitfalls when users make security decisions online that we wanted to avoid. Misinterpreting indicators: broken image, “From” line of an email. Visual deception: typejacking, homograph attacks. Bounded attention: paying insufficient attention to existing security indicators, and to the lack of them.