Applying the Wisdom of Crowds to Usable Privacy and Security, CMU Crowdsourcing Seminar Oct 2011

A summary of my group's work in using crowdsourcing techniques and wisdom of crowds to improve privacy and security. I talked about some techniques to improve crowdsourcing for anti-phishing, some ways of using lots of location data to infer location privacy preferences, and some of our early work on using crowdsourcing to understand privacy preferences regarding smartphone apps.


  • Entropy is related to location privacy: people have fewer concerns about sharing their location in "public" (high-entropy) places.
  • Just looking at very obvious properties of the co-location histories doesn't tell you very much. Also, notice that most of the performance boost comes at low levels of recall, so if you want to build a high-precision classifier this is the best approach. There are really two stories here. First, the intensity features (time spent co-located) do not provide much of a gain over just looking at the number of locations, especially at high recall levels. Second, location-based features (i.e., entropy) significantly improve performance. This validates that these are clearly good things to look at when analyzing this kind of data.
  • Compare privacy as expectations with: flow control, informed consent, not sharing information, solitude.

Presentation Transcript

  • Applying the Wisdom of Crowds to Usable Privacy and Security. Jason I. Hong, Carnegie Mellon University
  • Usable Privacy and Security
      • Cyber security is a national priority
        – Increasing levels of malware and phishing
        – Accidental disclosures of sensitive info
        – Reliability of critical infrastructure
      • Privacy concerns growing as well
        – Breaches and theft of customer data
        – Ease of gathering, storing, searching
      • Increasing number of issues deal with the human element
  • Fake Interfaces to Trick People
      • Fake anti-virus (installs malware)
  • Misconfigurations
      • Facebook controls for managing sharing preferences
  • Too Many Passwords
  • Other Examples
      • Web browser certificates
      • Do Not Track / behavioral advertising
      • Location privacy
      • Online social network privacy
      • Intrusion detection and visualizations
      • Effective warnings
      • Effective security training
      • …
  • Usable Privacy and Security
      • "Give end-users security controls they can understand and privacy they can control for the dynamic, pervasive computing environments of the future." (CRA, "Grand Challenges in Information Security & Assurance", 2003)
  • Today's Talk
      • Apply crowdsourcing to speed up detection of phishing web sites
      • Using location data to understand people, places, and relationships
      • Using crowdsourcing to understand privacy of mobile apps
  • Smartening the Crowds: Computational Techniques for Improving Human Verification to Fight Phishing Scams. Symposium on Usable Privacy and Security 2011. Gang Liu and Wenyin Liu (Department of Computer Science, City University of Hong Kong); Guang Xiang, Bryan A. Pendleton, and Jason I. Hong (Carnegie Mellon University)
  • RSA SecurID; Lockheed-Martin; Gmail; Epsilon mailing list; Australian government; Canadian government; Oak Ridge Nat'l Labs; Operation Aurora
  • Detecting Phishing Websites
      • Method 1: Use heuristics
        – Unusual patterns in URL, HTML, topology
        – Approach favored by researchers
        – High true positives, some false positives
      • Method 2: Manually verify
        – Approach used by industry blacklists today (Microsoft, Google, PhishTank)
        – Very few false positives, low risk of liability
        – Slow, easy to overwhelm
  • Wisdom of Crowds Approach
      • Mechanics of PhishTank
        – Submissions require at least 4 votes and 70% agreement
        – Some votes weighted more
      • Total stats (Oct 2006 to Feb 2011)
        – 1.1M URL submissions from volunteers
        – 4.3M votes
        – Resulting in about 646k identified phish
      • Why so many votes for only 646k phish?
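To make the mechanics concrete, here is a minimal sketch of a PhishTank-style aggregation rule: a 4-vote minimum with a 70% weighted-agreement threshold. The weight values are hypothetical, since PhishTank does not publish its exact weighting scheme. The 4-vote minimum also explains the arithmetic on the next slide: 69,648 votes can produce at most 69,648 / 4 = 17,412 labels.

```python
def label_submission(votes, min_votes=4, agreement=0.70):
    """Aggregate weighted phish/not-phish votes, PhishTank-style.

    `votes` is a list of (is_phish, weight) pairs. Returns "phish",
    "legitimate", or None (too few votes or no consensus). The weights
    are hypothetical; PhishTank does not publish its exact scheme.
    """
    if len(votes) < min_votes:
        return None
    total = sum(w for _, w in votes)
    phish_share = sum(w for is_phish, w in votes if is_phish) / total
    if phish_share >= agreement:
        return "phish"
    if 1.0 - phish_share >= agreement:
        return "legitimate"
    return None  # no 70% consensus either way; would need more votes

# Example: four ordinary votes plus one higher-weighted voter
votes = [(True, 1.0), (True, 1.0), (True, 1.0), (True, 2.0), (False, 1.0)]
print(label_submission(votes))  # "phish": 5.0 / 6.0 ≈ 83% agreement
```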
  • PhishTank Statistics (Jan 2011)
        Submissions      16,019
        Total votes      69,648
        Valid phish      12,789
        Invalid phish       549
        Median time      2 hrs 23 min
      • 69,648 votes → at most 17,412 labels (at 4 votes per URL)
        – But only 12,789 phish and 549 legitimate sites were identified
        – 2,681 URLs were not labeled at all
      • A median delay of 2+ hours still has room for improvement (it used to be 12 hours)
  • Why Care?
      • Can improve performance of human-verified blacklists
        – Dramatically reduce time to blacklist
        – Improve breadth of coverage
        – Offer same or better level of accuracy
      • More broadly, a new way of improving the performance of a crowd on a task
  • Ways of Smartening the Crowd
      • Change the order URLs are shown
        – Ex. most recent vs. closest to completion
      • Change how submissions are shown
        – Ex. show one at a time or in groups
      • Adjust the threshold for labels
        – PhishTank is 4 votes and 70%
        – Ex. vote weights, or an algorithm also votes
      • Motivating people / allocating work
        – Filtering by brand, competitions, teams of voters, leaderboards
  • Overview of Our Work
      • Crawled unverified submissions from PhishTank over a 2-week period
      • Replayed the URLs on MTurk over 2 weeks
        – Required participants to play 2 rounds of Anti-Phishing Phil
        – Clustered phish by HTML similarity
        – Two cases: phish shown one at a time, or in a cluster (not strictly separate conditions)
        – Evaluated the effectiveness of the vote-weight algorithm after the fact
  • Anti-Phishing Phil
      • We had MTurkers play two rounds of Phil [Sheng 2007] to qualify (µ = 5.2 min)
      • Goal was to weed out lazy MTurkers and ensure a base level of knowledge
  • Clustering Phish
      • Motivations
        – Most phish are generated by toolkits and are thus similar in content and appearance
        – Labeling a single site as phish can be hard if unfamiliar; easier with multiple examples
        – Can potentially reduce labor by labeling suspicious sites in bulk
  • Most Phish Can Be Clustered
      • With all data over the two weeks, 3,180 of 3,973 web pages could be grouped (80%)
        – Used shingling and DBSCAN (see paper)
        – 392 clusters, ranging in size from 2 to 153 URLs
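The slide names the techniques but not the parameters. As a rough sketch of how such a pipeline could look, one can shingle each page's HTML into overlapping token k-grams, compute pairwise Jaccard distances, and run DBSCAN over a precomputed distance matrix. The shingle size, `eps`, and `min_samples` below are illustrative placeholders, not the values from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def shingles(html, k=4):
    """Set of overlapping k-token shingles from a page's HTML."""
    tokens = html.split()
    return {" ".join(tokens[i:i + k]) for i in range(max(1, len(tokens) - k + 1))}

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B|; 0 means identical shingle sets."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def cluster_pages(pages, k=4, eps=0.3, min_samples=2):
    """Cluster similar pages; label -1 marks pages left unclustered."""
    sets = [shingles(p, k) for p in pages]
    n = len(sets)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = jaccard_distance(sets[i], sets[j])
    return DBSCAN(eps=eps, min_samples=min_samples, metric="precomputed").fit_predict(dist)
```

Because toolkit-generated phish differ only in small details (the target URL, a session ID), their shingle sets overlap heavily, which is why such a large fraction of pages fall into clusters.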
  • MTurk Tasks
      • Two kinds of tasks, control and cluster
        – Listed as two separate HITs
        – MTurkers were paid $0.01 per label
        – Cannot run between-subjects conditions on MTurk
        – An MTurker saw a given URL at most once
      • Four votes minimum, 70% threshold
        – Stopped at 4 votes; cannot dynamically request more votes on MTurk
        – 153 URLs (3.9%) in control and 127 (3.2%) in cluster went unlabeled
  • MTurk Tasks
      • URLs were replayed in order
        – Ex. if crawled from PhishTank at 2:51am on day 1, we replayed it at 2:51am on day 1 of the experiment
        – Listed new HITs each day rather than one HIT lasting two weeks (to avoid delays and a last-minute rush)
  • Summary of Experiment
      • 3,973 suspicious URLs
        – Ground truth from Google, MSIE, and PhishTank, checked every 10 min
        – 3,877 were phish, 96 were not
      • 239 MTurkers participated
        – 174 did HITs for both control and cluster
        – 26 in control only, 39 in cluster only
      • Total of 33,781 votes placed
        – 16,308 in control
        – 11,463 in cluster (17,473 equivalent)
      • Cost (participants + Amazon): $476.67 USD
  • Results of Aquarium
      • "All votes" are the individual votes
      • "Labeled URLs" are after aggregation
  • Comparing Coverage and Time
  • Voteweight
      • Use time and accuracy to weight votes
        – Those who vote early and accurately are weighted more
        – Older votes are discounted
        – Incorporates a penalty for wrong votes
      • Done after the data was collected
        – Harder to do in real time, since we don't know the true label until later
      • See paper for parameter tuning (of the threshold and penalty function)
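The exact update rule is in the paper; the sketch below is only one plausible reading of the bullets above (accurate voters gain weight, contributions decay with age, wrong votes are penalized). The half-life, reward, and penalty constants are invented for illustration, not the tuned parameters.

```python
import math

def voter_weight(history, half_life_days=30.0, reward=0.1, penalty=0.5):
    """Hypothetical reputation for one voter from (days_ago, was_correct) pairs.

    Correct votes add `reward`, wrong votes subtract `penalty`, and each
    contribution decays exponentially with age, so recent accurate voters
    end up weighted more. Constants are illustrative, not the paper's.
    """
    weight = 1.0  # base weight for every voter
    for days_ago, was_correct in history:
        decay = math.exp(-math.log(2.0) * days_ago / half_life_days)
        weight += (reward if was_correct else -penalty) * decay
    return max(weight, 0.0)

# A voter with two recent correct votes and one old mistake
print(voter_weight([(1, True), (2, True), (60, False)]))  # ≈ 1.07
```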
  • Voteweight Results
      • Control condition, best scenario (before → after)
        – 94.8% accuracy, avg 11.8 hrs, median 3.8 hrs
        – 95.6% accuracy, avg 11.0 hrs, median 2.3 hrs
      • Cluster condition, best scenario (before → after)
        – 95.4% accuracy, avg 1.8 hrs, median 0.7 hrs
        – 97.2% accuracy, avg 0.8 hrs, median 0.5 hrs
      • Overall: small gains, though potentially more fragile and more complex
  • Limitations of Our Study
      • Two limitations of MTurk
        – No separation between control and cluster
        – ~3% of tie votes unresolved (would have needed more votes)
      • Possible learning effects?
        – Hard to tease out with our data
        – Aquarium doesn't offer feedback
        – Everyone played Phil
        – Neither condition was prioritized over the other
      • Optimistic case: no active subversion
  • Conclusion
      • Investigated two techniques for smartening the crowd for anti-phishing: clustering and voteweight
      • Clustering offers significant advantages w.r.t. time and coverage
      • Voteweight offers smaller improvements in effectiveness
  • Today's Talk
      • Apply crowdsourcing to speed up detection of phishing web sites
      • Using location data to understand people, places, and relationships
      • Using crowdsourcing to understand privacy of mobile apps
  • Bridging the Gap Between Physical Location and Online Social Networks. 12th International Conference on Ubiquitous Computing (Ubicomp 2010). Justin Cranshaw, Eran Toch, Jason Hong, Aniket Kittur, and Norman Sadeh, Carnegie Mellon University
  • Understanding Human Behavior at Large Scales
      • Capabilities of today's mobile devices
        – Location, sound, proximity, motion
        – Call logs, SMS logs, pictures
      • We can now analyze real-world social networks and human behaviors at unprecedented fidelity and scale
      • 2.8M location sightings of 489 participants in Pittsburgh
  • [Placeholder slide; its text reads "Insert graph here" and "Describe entropy"]
  • Early Results
      • Can predict Facebook friendships based on co-location patterns
        – 67 different features:
          • Intensity and duration
          • Location diversity (entropy)
          • Mobility
          • Specificity (TF-IDF)
          • Graph structure (mutual neighbors, overlap)
        – 92% accuracy in predicting friend / not friend
  • Using features like location entropy significantly improves performance over shallow features such as the number of co-locations
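Location entropy is not defined on these slides, so as background: the standard formulation treats each place as a distribution over the users observed there, so a place visited evenly by many different people has high entropy (a "public" place) while a place dominated by one or two people has low entropy (a "private" one). A minimal sketch:

```python
import math
from collections import Counter

def location_entropy(observations):
    """Shannon entropy of the distribution of users seen at one place.

    `observations` is a list of user IDs, one per sighting at the place.
    High entropy: many distinct users in even proportions ("public").
    Low entropy: dominated by a few users ("private", e.g. a home).
    """
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(location_entropy(["alice"] * 9 + ["bob"]))          # ≈ 0.47 bits: private
print(location_entropy([f"user{i}" for i in range(10)]))  # ≈ 3.32 bits: public
```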
  • [Chart comparing models; curve labels: "Intensity features", "Number of co-locations", "Without intensity", "Full model"]
  • Early Results
      • Can predict the number of friends based on mobility patterns
        – People who go out often, on weekends, and to high-entropy places tend to have more friends
        – (Didn't check age, though)
  • Entropy Related to Location Privacy
  • Collective Real-World Intelligence
      • Location data alone can tell us a lot about people, the places they go, and the relationships they have
      • Characterizing individuals
        – Personal frequency
        – Personal mobility pattern
      • Characterizing the social quality of places
        – Entropy: number of unique people
        – Churn: same people or different?
        – Transience: amount of time spent
        – Burst: regularity of people seen
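The slide names these place metrics but does not define them formally. The sketch below gives one plausible operationalization of churn and transience from check-in data; the definitions are my assumptions, not the authors'.

```python
def churn(daily_visitors):
    """Average fraction of each day's visitors not seen the previous day.

    `daily_visitors` is a list of sets of user IDs, one set per day.
    High churn: mostly new faces every day; low churn: the same regulars.
    (One plausible reading of the slide's "same people or different".)
    """
    rates = [len(cur - prev) / len(cur)
             for prev, cur in zip(daily_visitors, daily_visitors[1:]) if cur]
    return sum(rates) / len(rates) if rates else 0.0

def transience(stay_minutes):
    """Mean visit duration; shorter stays suggest a more transient place."""
    return sum(stay_minutes) / len(stay_minutes)

# A coffee-shop-like place: different visitors daily, short stays
print(churn([{"a", "b"}, {"b", "c", "d"}, {"e", "f"}]))  # ≈ 0.83
print(transience([12, 25, 8, 15]))                       # 15.0 minutes
```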
  • Collective Real-World Intelligence
      • Apps for usable privacy and security
        – Using places for authentication
        – Protecting geotagged data: 4.3% of Flickr photos, 3% of YouTube, 1% of Craigslist photos geotagged
  • Collective Real-World Intelligence
      • Other potential apps and analyses:
        – Architecture and urban design
        – Use of public resources (e.g. buses)
        – Traffic Behavioral Inventory (TBI)
        – Characterizing neighborhoods
        – What do Pittsburghers do?
  • Crowdsourcing Location Data
      • How to incentivize thousands of people in multiple cities to run our app?
        – Pay? Altruism? Enjoyment? Side effect?
      • Key difference from microtasks: highly sensitive personal data
  • Today's Talk
      • Apply crowdsourcing to speed up detection of phishing web sites
      • Using location data to understand people, places, and relationships
      • Using crowdsourcing to understand privacy of mobile apps
  • What are your apps really doing?
      • WSJ analysis of 101 apps found half share the phone's unique ID and location
      • Ex. shares your location, gender, unique phone ID, and phone # with advertisers
      • Ex. uploads your entire contact list (including phone #s) to its server
  • Android
      • What do these permissions mean?
      • Why does the app need this permission?
      • When does it use these permissions?
  • Research on Scanning Apps
      • TaintDroid intercepts certain calls and asks the user if it's OK
      • Others scan binaries
        – Ex. what web sites the app connects to
      • Others scan what goes over the network
        – Ex. "looks like an SSN"
  • Our Position
      • No automated technique will ever be able to differentiate between acceptable and unacceptable behavior
      • Many false positives, because scanners also flag things an app does by design
        – Ex. flagging Evernote for connecting to its own servers
  • Crowdsourcing and Privacy
      • Re-frame privacy as expectations
        – Capture what people expect an app to do
        – See how well the app matches those expectations
        – Use the top mismatches as a privacy summary for non-experts (and for devs)
      • Use crowdsourcing to accomplish this
        – Ideally we would use experts, but experts don't scale
        – 300k Android apps, 500k iPhone apps
  • Screen-by-Screen Probing
      • Generate a tree of UI screens …
  • Screen-by-Screen Probing
      • Scan the app to capture what happens when a person transitions from one screen to another
        – Ex. gets location → sends to yelp.com; gets contacts → sends to yelp.com
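A minimal sketch of the kind of record such probing might produce; the screen names, actions, and data flows below are illustrative stand-ins, not output from the authors' tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transition:
    """One observed edge in the UI tree: performing `action` on `screen`."""
    screen: str
    action: str
    data_sent: List[str] = field(default_factory=list)  # sensitive data types seen leaving
    destination: str = ""                               # where that data was sent

# Hypothetical observations for a Yelp-like app
observed = [
    Transition("Home", "tap 'Nearby'", ["current location"], "yelp.com"),
    Transition("Home", "tap 'Friends'", ["contact list"], "yelp.com"),
]

for t in observed:
    print(f"{t.screen} / {t.action}: sends {', '.join(t.data_sent)} to {t.destination}")
```

Each record pairs a concrete user action with the data flow it triggers, which is exactly what the expectation questions on the next slides ask the crowd about.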
  • Screen-by-Screen Probing
      • Ask users: "What data do you think is sent to Yelp if you click the 'Nearby' icon?"
        – Current location, contact list, phone call log, SMS log, unique phone ID, …
  • Screen-by-Screen Probing
      • Ask users: "How comfortable would you be if the Yelp app sent your current location to the Yelp servers when you click on the 'Nearby' icon?"
  • Screen-by-Screen Probing
      • Use the top mismatches to generate new privacy summaries
        – Ex. "93% of people didn't expect the Facebook app to send the contact list to their servers"
      • Current work:
        – Building a remote evaluation tool
        – Creating screen mockups to compare expert vs. MTurker results
          • Can MTurkers understand the data types?
          • Can MTurkers offer mostly accurate results?
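Turning the crowd's expectation answers into summaries like the "93%" example above is a simple aggregation. A sketch under an assumed response format (each worker says whether they expected a given app behavior); the schema and app name are hypothetical:

```python
from collections import defaultdict

def mismatch_summaries(responses, top_n=3):
    """Rank (app, behavior) pairs by the share of workers who did NOT expect them.

    `responses` is a list of (app, behavior, expected) tuples, one per crowd
    judgment. This response schema is an assumption, not the authors' format.
    """
    counts = defaultdict(lambda: [0, 0])  # (app, behavior) -> [unexpected, total]
    for app, behavior, expected in responses:
        counts[(app, behavior)][1] += 1
        if not expected:
            counts[(app, behavior)][0] += 1
    ranked = sorted(counts.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    return [f"{100 * u // t}% of people didn't expect {app} to {behavior}"
            for (app, behavior), (u, t) in ranked[:top_n]]

# 13 of 14 workers did not expect this hypothetical behavior
responses = ([("ExampleApp", "send the contact list to its servers", False)] * 13
             + [("ExampleApp", "send the contact list to its servers", True)])
print(mismatch_summaries(responses))
```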
  • What's New and Different for Crowdsourcing?
      • New crowdsourcing issues with security
        – Active and adaptive adversaries
        – Timeliness has new urgency
      • New ways of understanding human behaviors at large scale through location
        – Incentivizing people to share data
      • New ways of gauging end-user privacy
        – Possibly a new way of understanding privacy
        – Structuring tasks so that novices can give useful feedback
  • Acknowledgments
      • CyLab and the Army Research Office
      • Research Grants Council of the Hong Kong Special Administrative Region
      • Alfred P. Sloan Foundation
      • Google
      • DARPA