Transcript

  • 1. Phinding Phish: An Evaluation of Anti-Phishing Toolbars Yue Zhang, Serge Egelman, Lorrie Cranor, and Jason Hong
  • 2. Anti-Phishing Tools
    • 84 listed on download.com (Sept. ’06)
    • Included in many browsers
    • Poor usability
      • Many users don’t see indicators
      • Many choose to ignore them
      • But usability is being addressed
    • Are they accurate?
  • 3. Tools Tested
    • CallingID
    • Cloudmark
    • EarthLink
  • 4. Tools Tested
    • eBay
    • Firefox
  • 5. Tools Tested
    • IE7
  • 6. Tools Tested
    • Netcraft
    • Netscape
  • 7. Tools Tested
    • SpoofGuard
    • TrustWatch
  • 8. Source of Phish
    • High volume of fresh phish
      • Sites taken down after a day on average
      • Fresh phish yield blacklist update information
    • Can’t use the toolbars’ own blacklists as a source
    • We experimented with several sources
      • APWG - high volume but many duplicates and legitimate URLs included
      • PhishTank.org - lower volume but easier to extract phish
      • Assorted other phish archives - often low volume or not fresh enough
  • 9. Phishing Feeds
    • Anti-Phishing Working Group
      • [email_address]
      • ISPs, individuals, etc.
      • >2,000 messages/day
      • URLs must be filtered out of the messages (see the extraction sketch after this slide)
    • PhishTank
      • http://www.phishtank.org/
      • Submitted by public
      • ~48 messages/day
      • Manually verify URLs
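
Both feeds deliver raw messages rather than clean URL lists, so phishing URLs must first be pulled out of free-form text. Below is a minimal sketch of that step in Python; the regex and the de-duplication policy are illustrative assumptions, not the authors' actual pipeline.

    import re

    # Illustrative URL extractor for raw feed messages. The pattern and
    # the de-duplication policy are assumptions, not the paper's pipeline.
    URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

    def extract_urls(messages):
        """Pull candidate phishing URLs out of raw feed messages,
        dropping exact duplicates while preserving order."""
        seen, urls = set(), []
        for msg in messages:
            for url in URL_RE.findall(msg):
                url = url.rstrip(".,;)")   # trim trailing punctuation
                if url not in seen:
                    seen.add(url)
                    urls.append(url)
        return urls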
  • 10. Testbed for Anti-Phishing Toolbars
    • Automated testing
    • Aggregate performance statistics
    • Key design issue:
      • Different browsers
      • Different toolbars
      • Different indicator types
    • Solution: Image analysis
      • Compare screenshots with known states
  • 11. Image-Based Comparisons
    • Two examples: TrustWatch and Google
    • TrustWatch: [screenshots of the toolbar’s “Verified” and “Not verified” states]
    • Google: [screenshots of the “Phish!!” and “Warning!!” states]
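
Because every toolbar draws its indicator differently, the testbed decides a tool’s verdict by comparing a cropped region of the browser screenshot against reference images of each known state. Here is a minimal sketch of that comparison, assuming reference images were captured ahead of time and pre-cropped to the same region size; Pillow and the fixed crop box are illustrative choices, not the authors’ implementation.

    from PIL import Image, ImageChops

    def region_matches(screenshot_path, reference_path, box):
        """Compare the toolbar indicator region of a screenshot against a
        reference image of a known state (e.g. TrustWatch "Not verified").
        `box` is the (left, top, right, bottom) crop of the indicator;
        the reference must already be cropped to the same size."""
        shot = Image.open(screenshot_path).crop(box).convert("RGB")
        ref = Image.open(reference_path).convert("RGB")
        diff = ImageChops.difference(shot, ref)
        return diff.getbbox() is None   # None means pixel-identical

    def classify(screenshot_path, references, box):
        """Return the first known state whose reference matches, else None.
        `references` maps state names to reference image paths."""
        for state, ref_path in references.items():
            if region_matches(screenshot_path, ref_path, box):
                return state
        return None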
  • 12. Testbed System Architecture
  • 13. Testbed System Architecture: Retrieve Potential Phishing Sites
  • 14. Testbed System Architecture: Send URL to Workers
  • 15. Testbed System Architecture: Worker Evaluates Potential Phishing Site
  • 16. Testbed System Architecture: Task Manager Aggregates Results
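
Slides 13 through 16 walk through the same diagram: a task manager retrieves potential phishing sites, hands each URL to a worker, each worker evaluates its URL, and the task manager aggregates the results. Below is a hypothetical in-process sketch of that flow, with the per-URL evaluation stubbed out; the actual testbed ran its workers on separate notebooks.

    import queue
    import threading
    from collections import Counter

    def evaluate_url(url, toolbar):
        """Stand-in for the worker's real job: load the URL in a browser
        with the given toolbar, screenshot it, and classify the indicator
        via image comparison."""
        return "warning"   # placeholder verdict

    def worker(tasks, results):
        while True:
            item = tasks.get()
            if item is None:             # sentinel: shut this worker down
                break
            toolbar, url = item
            results.append((toolbar, url, evaluate_url(url, toolbar)))

    def run(urls, toolbars, n_workers=2):   # the experiments used 2 worker notebooks
        tasks, results = queue.Queue(), []
        threads = [threading.Thread(target=worker, args=(tasks, results))
                   for _ in range(n_workers)]
        for t in threads:
            t.start()
        for toolbar in toolbars:
            for url in urls:
                tasks.put((toolbar, url))
        for _ in threads:
            tasks.put(None)
        for t in threads:
            t.join()
        # Task manager aggregates: tally each tool's verdicts
        return Counter((toolbar, state) for toolbar, _, state in results)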
  • 17. Experiment Methodology
    • Catch Rate:
      • Given a set of phishing URLs, what percentage of them are correctly labeled as phish by the tool
      • Count “block” and “warning” states only
      • Taken-down sites removed
    • False Positives:
      • Given a set of legitimate URLs, what percentage of them are incorrectly labeled as phish by the tool
      • Count “block” and “warning” states only
      • Taken-down sites removed
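
Written out, with taken-down sites already removed from the denominators:

    \text{catch rate} = \frac{\bigl|\{\text{phishing URLs labeled block or warning}\}\bigr|}{\bigl|\{\text{phishing URLs still online}\}\bigr|}

    \text{false positive rate} = \frac{\bigl|\{\text{legitimate URLs labeled block or warning}\}\bigr|}{\bigl|\{\text{legitimate URLs}\}\bigr|}

For example, SpoofGuard’s 218 flags on the 514 legitimate URLs (slide 23) give a false positive rate of 218/514 ≈ 42%.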
  • 18. Experiment 1
    • PhishTank feed used
    • Equipment:
      • 1 Notebook as Task Manager
      • 2 Notebooks as Workers
    • 10 Tools Examined:
      • Cloudmark
      • EarthLink
      • eBay
      • IE7
      • Google/Firefox
      • McAfee
      • Netcraft
      • Netscape
      • SpoofGuard
      • TrustWatch
  • 19. Experiment 1
    • 100 phishing URLs
      • PhishTank feed
      • Manually verified
      • Re-examined at 1, 2, 12, and 24 hours
      • Examined blacklist update rate (except w/SpoofGuard)
      • Examined take-down rate
    • 514 legitimate URLs
      • 416 from 3Sharp report
      • 35 from bank log-in pages
      • 35 from top pages by Alexa
      • 30 random pages
  • 20. Experiment 2
    • APWG phishing feed
    • 9 of the same toolbars tested + CallingID
    • Same testing environment
  • 21. Results of Experiment 1
  • 22. Results of Experiment 2
  • 23. False Positives
    • Not a big problem for most of the toolbars
    Toolbar      False Positives (of 514 legitimate URLs)
    SpoofGuard   218 (42%)
    CallingID    10 (2%)
    Cloudmark    5 (1%)
    EarthLink    5 (1%)
  • 24. Overall findings
    • No toolbar caught 100% of phish
    • Good performers:
      • SpoofGuard (>90%)
        • Though 42% false positives
      • IE7 (70%-80%)
      • Netcraft (60%-80%)
      • Firefox (50%-80%)
    • Most performed poorly:
      • Netscape (10%-30%)
      • CallingID (20%-40%)
  • 25. More findings
    • Performance varied with feed
      • Better with PhishTank:
        • Cloudmark, EarthLink, Firefox, Netcraft
      • Better with APWG:
        • eBay, IE7, Netscape
      • Almost the same:
        • SpoofGuard, TrustWatch
    • Catch rates increased over time by different amounts
      • Larger increases on the APWG feed
      • Reflects the “freshness” of its URLs
  • 26. CDN Attack
    • Many tools use blacklists
    • Many examine IP addresses (location, etc.)
    • Proxies distort URLs, defeating blacklist lookups
      • Used Coral CDN
      • Append .nyud.net:8090 to hostnames
      • Coral runs on PlanetLab
    • Attack succeeds against:
      • Cloudmark
      • Google
      • TrustWatch
      • Netcraft
      • Netscape
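
A minimal sketch of the Coral rewrite in Python; the helper name is ours, and it assumes plain HTTP URLs.

    from urllib.parse import urlsplit, urlunsplit

    def coralize(url):
        """Rewrite a URL to fetch through the Coral CDN by appending
        .nyud.net:8090 to the hostname, e.g.
        http://phish.example.com/x -> http://phish.example.com.nyud.net:8090/x"""
        parts = urlsplit(url)
        netloc = parts.hostname + ".nyud.net:8090"
        return urlunsplit((parts.scheme, netloc, parts.path,
                           parts.query, parts.fragment))

Because blacklists and IP checks key on the original host, the proxied copy slips past them even though Coral serves the same page.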
  • 27. Page Load Attack
    • Some tools wait for the page to fully load before showing a verdict
      • SpoofGuard
      • eBay
    • Insert a web bug that never finishes loading
      • 5 lines of PHP
      • 1x1 GIF
      • Infinite loop spitting out data very slowly
    • Tool stays in previous state
    • Unable to indicate anything
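
The slide describes the web bug as five lines of PHP; here is an equivalent sketch in Python (the port and the trickle interval are our choices). It serves a 1×1 GIF whose body never completes, so a toolbar that waits for the full page load never leaves its previous state.

    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Header and color table of a valid 1x1 GIF (GIF89a)
    GIF_PREFIX = b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"

    class TarpitGIF(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            # No Content-Length: the browser keeps waiting for more data
            self.end_headers()
            self.wfile.write(GIF_PREFIX)
            while True:                 # spit out data very slowly, forever
                self.wfile.write(b"\x00")
                self.wfile.flush()
                time.sleep(10)

    if __name__ == "__main__":
        HTTPServer(("", 8080), TarpitGIF).serve_forever()

A 1×1 <img> on the phishing page pointing at this server keeps the page perpetually “loading”.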
  • 28. Conclusion
    • Tool Performance
      • No toolbars are perfect
      • No single toolbar outperformed all the others
      • Heuristics have false positives
        • Whitelists?
        • Hybrid approach?
    • Testing Methodology
      • Get fresher URLs
      • Test non-default settings as well
    • User interfaces
      • Usability is important
        • Traffic light?
        • Pop up message?
        • Re-direct page?
  • 29. CMU Usable Privacy and Security Laboratory http://cups.cs.cmu.edu/