
Why Johnny Still Can’t Pentest: A Comparative Analysis of Open-source Black-box Web Vulnerability Scanners


Presentation at BSides Ottawa 2018.



  1. Why Johnny Still Can’t Pentest: A Comparative Analysis of Open-source Black-box Web Application Vulnerability Scanners. Rana Khalil, Master of Computer Science, University of Ottawa, 8/11/2018
  2. Who am I? • Student at the University of Ottawa • B.S. in Mathematics and Computer Science (2016) • M.S. in Computer Science (2018) • Supervisor: Dr. Carlisle Adams • Previous work experience includes: software development, testing, ransomware research, teaching and penetration testing
  3. Roadmap: 1. Introduction 2. Background 3. Methodology 4. Results 5. Conclusion
  4. Web Applications • We use web applications for everything: banking, education, shopping, communication • Over 3.9 billion users worldwide • Over 1.8 billion websites online • How much personal data do you have online? • Name, SIN, addresses, phone numbers, emails • Financial information • Health information
  5. Web Security • State of web security today • Trustwave’s 2018 Global Security Report: • 100% of web applications displayed at least one vulnerability • A median of 11 vulnerabilities per application
  6. Data Breaches
  7. How to Secure a Web Application? • A combination of techniques is used to secure web applications: • Static code analysis • Web application firewalls • Secure coding practices • Web application vulnerability scanners
  9. Roadmap: 1. Introduction 2. Background 3. Methodology 4. Results 5. Conclusion
  10. What is a Vulnerability? A vulnerability is a flaw in a system that can leave the system open to attack. Example: Cross-Site Scripting (XSS); a minimal illustration follows.
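As a minimal sketch of the XSS example above (not from the slides; the route and parameter names are hypothetical), here is a Flask endpoint with a reflected XSS flaw:

```python
# Hypothetical, minimal reflected-XSS example (not from the presentation).
# The endpoint echoes user input into HTML unescaped, so a request like
#   /search?q=<script>alert(1)</script>
# executes attacker-controlled JavaScript in the victim's browser.
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def search():
    q = request.args.get("q", "")
    # Vulnerable: q is interpolated into the page without escaping.
    # Using markupsafe.escape(q) here would neutralize the payload.
    return f"<html><body>Results for: {q}</body></html>"

if __name__ == "__main__":
    app.run()
```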
  11. OWASP Top 10 The Open Web Application Security Project Top 10 contains the ten most critical web application security risks: 1. Injection 2. Broken Authentication 3. Sensitive Data Exposure 4. XML External Entities (XXE) 5. Broken Access Control 6. Security Misconfiguration 7. Cross-Site Scripting (XSS) 8. Insecure Deserialization 9. Using Components with Known Vulnerabilities 10. Insufficient Logging & Monitoring
  13. WAVS Web Application Vulnerability Scanners (WAVS) have three modules: a crawler, an attacker, and an analysis module that reports findings such as *XSS found*, *SQLi found*, *LFI found*, *RFI found*. A sketch of this loop follows.
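The following is a minimal sketch of how the three modules fit together, not taken from the talk. It assumes the Python `requests` and `beautifulsoup4` packages, a toy reflected-XSS probe, and a hypothetical target URL; real scanners implement each stage far more thoroughly.

```python
# Minimal sketch of the crawler -> attacker -> analysis loop (illustration only).
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAYLOAD = "<script>alert(1)</script>"  # toy reflected-XSS probe

def crawl(root, limit=50):
    """Crawler module: collect same-site URLs reachable from the root."""
    seen, queue = set(), [root]
    while queue and len(seen) < limit:
        url = queue.pop()
        if url in seen or not url.startswith(root):
            continue
        seen.add(url)
        html = requests.get(url, timeout=10).text
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            queue.append(urljoin(url, a["href"]))
    return seen

def attack(url):
    """Attacker module: inject the probe into a query parameter."""
    return requests.get(url, params={"q": PAYLOAD}, timeout=10)

def analyze(response):
    """Analysis module: flag the page if the probe is reflected unescaped."""
    return PAYLOAD in response.text

for page in crawl("http://localhost:8080/"):  # hypothetical target
    if analyze(attack(page)):
        print(f"*XSS found* at {page}")
```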
  14. WAVS Web application vulnerability scanners are largely used in two ways: 1. Point-and-Shoot (PaS) / Default • Scanner is given the root URL of the application • Default configuration remains unchanged • Minimal human intervention
  15. WAVS Web application vulnerability scanners are used in two ways: 2. Trained / Configured • Change configuration (e.g., crawl depth) • Manually visit every page of the application while the scanner is in proxy mode (Browser → Scanner Proxy → Web Application); see the example below.
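As a sketch of the proxy setup (assuming a scanner such as Burp or ZAP listening on its default 127.0.0.1:8080, and a hypothetical target URL), traffic can be routed through the intercepting proxy so the scanner records every page visited:

```python
# Route requests through the scanner's intercepting proxy so that every page
# visited is added to its site map, exactly as if a browser were configured
# to use the same proxy. Burp and ZAP both listen on 127.0.0.1:8080 by default.
import requests

proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# Hypothetical target URL. For HTTPS targets, the proxy's CA certificate must
# be trusted (or certificate verification disabled for testing).
resp = requests.get("http://localhost/WackoPicko/", proxies=proxies, timeout=10)
print(resp.status_code)
```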
  16. WASSEC The Web Application Security Scanner Evaluation Criteria provides a set of detailed evaluation criteria and a framework for formally evaluating WAVS: • Protocol support • Authentication • Session management • Crawling • Parsing • Testing • Command and control • Reporting
  17. Previous Work • Suto’s case studies: • 2007 paper evaluated scanners in PaS mode • 2010 paper evaluated scanners in PaS and Trained modes • Benchmark applications: • Web Input Vector Extractor Teaser (WIVET) created in 2009 by Tatli et al. • Web Application Vulnerability Scanner Evaluation Project (WAVSEP) created in 2010 by Chen • Doupé et al.’s 2010 work on evaluating WAVS on the WackoPicko application • Several other more recent studies evaluate scanners in PaS mode only
  18. Roadmap: 1. Introduction 2. Background 3. Methodology 4. Results 5. Conclusion
  19. Methodology • Pipeline: Tool Selection → Benchmark Selection → Environment Setup → Feature & Metric Selection → Result Analysis • Goal: Perform a comprehensive comparative analysis of the performance of six chosen scanners in two modes: PaS / Default and Trained / Configured
  20. Tool Selection • Based on Chen’s evaluation and consultation with professional ethical hackers:
    Name | Version | License | Price | Last Update*
    Arachni | 1.5.1-0.5.12 | Arachni Public Source v1.0 | N/A | 2017-03-29
    Burp Pro | 1.7.35 | Commercial | $349/year | 2018-08-29
    Skipfish | 2.10b | Apache v2.0 | N/A | 2012-12-04
    Vega | 1.0 | MIT | N/A | 2016-06-29
    Wapiti | 3.0.1 | GNU GPL v2 | N/A | 2018-05-11
    ZAP | 2.7.0 | Apache v2.0 | N/A | 2017-11-28
    *Checked in August 2018
  21. Benchmark Selection • Benchmark applications: • WIVET – crawling challenges • WAVSEP – vulnerability classes • Intentionally vulnerable realistic web application, selected based on: • Type of vulnerabilities included in the application • Architecture of the application and the web technologies used • Ability of the application to withstand aggressive automated scans • OWASP Vulnerable Web Applications Directory (VWAD) project • WackoPicko
  22. Benchmark Selection - WIVET • Contains 56 test cases that utilize both Web 1.0 and Web 2.0 technologies • Test cases include: • Standard anchor links • Links created dynamically using JavaScript (see the sketch below) • Multi-page forms • Links in comments • Links embedded in Flash objects • Links within AJAX requests
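For instance (a hypothetical sketch, not an actual WIVET test case), a link built by JavaScript at runtime is invisible to a crawler that only parses static HTML:

```python
# A static HTML parser never sees links that JavaScript creates at runtime;
# discovering them needs a JS-capable crawler or a human driving the scanner
# through its proxy. Assumes the beautifulsoup4 package.
from bs4 import BeautifulSoup

page = """
<html><body>
  <a href="/static-link.php">static link</a>
  <script>
    var a = document.createElement('a');   // dynamic link: exists only
    a.href = '/dynamic-link.php';          // after the script executes
    document.body.appendChild(a);
  </script>
</body></html>
"""

links = [a["href"] for a in BeautifulSoup(page, "html.parser").find_all("a", href=True)]
print(links)  # ['/static-link.php'] -- the dynamic link is missed
```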
  23. Benchmark Selection - WAVSEP • Consists of a total of 1220 true positive (TP) test cases and 40 false positive (FP) test cases:
    Vulnerability Category | # of TP test cases | # of FP test cases
    SQL Injection | 138 | 10
    Reflected XSS | 89 | 7
    Path Traversal / LFI | 816 | 8
    RFI | 108 | 6
    Unvalidated Redirect | 60 | 9
    DOM XSS | 4 | 0
    Passive | 5 | 0
  24. Benchmark Selection - WackoPicko • Open-source intentionally vulnerable realistic web application • Photo sharing and purchasing site • Contains 16 vulnerabilities covering several of the OWASP Top 10 • Contains crawling challenges: • HTML parsing • Multi-step process • Infinite website • Authentication • Client-side code
  25. Environment Setup 1/2 (diagram of the tools and applications VMs) • VM restored to initial state before every test run
  26. Environment Setup 2/2 • Each scanner was run in two modes: • PaS / Default – default configuration setting • Trained / Configured: 1. Maximize crawling coverage – changing configuration 2. Maximize crawling coverage – use of proxy 3. Maximize attack strength • WackoPicko test scans were further divided into two subcategories: • INITIAL – without authentication / publicly accessible • CONFIG – valid username/password combination • In total, each scanner was run eight times (the two modes across WIVET, WAVSEP, and the two WackoPicko subcategories)
  27. Feature and Metric Selection • Crawling coverage: • % of passed test cases on the WIVET application • Crawling challenges in the WackoPicko application • Vulnerability detection accuracy: • TP, FN and FP counts on the WAVSEP and WackoPicko applications (see the scoring sketch below) • Speed: • Scan time on the WAVSEP and WackoPicko applications • Reporting: • Vulnerability detected • Vulnerability location • Exploit performed • Usability: • Efficiency • Product documentation • Community support (diagram mapping the crawling coverage, detection accuracy and speed features to the WIVET, WackoPicko and WAVSEP applications)
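The detection-accuracy bookkeeping can be made concrete with a small helper (a hypothetical sketch; the ground truth would come from the WAVSEP or WackoPicko benchmark, and all identifiers are made up for illustration):

```python
# Count TPs, FNs and FPs given the scanner's findings and the benchmark's
# ground truth, and derive a detection rate.
def score(reported: set, actual: set) -> dict:
    tp = reported & actual   # real vulnerabilities the scanner found
    fn = actual - reported   # real vulnerabilities the scanner missed
    fp = reported - actual   # findings that are not real vulnerabilities
    return {
        "TP": len(tp),
        "FN": len(fn),
        "FP": len(fp),
        "detection_rate_%": 100 * len(tp) / len(actual) if actual else 0.0,
    }

# WackoPicko-style example: 16 real vulnerabilities, 6 of them detected.
actual = {f"vuln-{i}" for i in range(16)}
reported = {f"vuln-{i}" for i in range(6)} | {"spurious-finding"}
print(score(reported, actual))
# {'TP': 6, 'FN': 10, 'FP': 1, 'detection_rate_%': 37.5}
```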
  28. Roadmap: 1. Introduction 2. Background 3. Methodology 4. Results 5. Conclusion
  29. Vulnerability Detection - FNs Vulnerabilities in WackoPicko that were not detected by any scanner: 1. Weak authentication credentials (admin/admin) • Reasons: • Scanners did not attempt to guess the username/password • Scanners did attempt to guess the username/password but failed (a sketch of such a check follows)
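A check of this kind could look like the following sketch; the login URL, form field names and success heuristic are all assumptions for illustration, not WackoPicko's actual login interface:

```python
# Hypothetical default-credential check of the kind the scanners missed.
import requests

COMMON = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def try_defaults(login_url):
    for user, pwd in COMMON:
        r = requests.post(login_url,
                          data={"username": user, "password": pwd},
                          timeout=10, allow_redirects=False)
        # Heuristic: a redirect or a fresh session cookie often signals success.
        if r.status_code in (301, 302) or "session" in r.headers.get("Set-Cookie", "").lower():
            print(f"Weak credentials found: {user}/{pwd}")
            return user, pwd
    return None

try_defaults("http://localhost/WackoPicko/admin/login.php")  # hypothetical URL
```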
  30. Vulnerability Detection - FNs Vulnerabilities in WackoPicko that were not detected by any scanner: 2. Parameter Manipulation • Sample user: WackoPicko/users/sample.php?userid=1 • Real user: WackoPicko/users/sample.php?userid=2 • Reasons: • Most scanners did not attempt to manipulate the userid field • Arachni manipulated the userid field but failed to enter a valid number • Skipfish successfully manipulated the userid field but did not report it as a vulnerability (a sketch of the test follows)
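A minimal version of the test, as a hypothetical sketch (the host is an assumption and the response-diffing heuristic is deliberately naive):

```python
# Hypothetical parameter-manipulation probe: enumerate userid values and flag
# responses that differ from the baseline without erroring, which suggests
# another user's record was served.
import requests

BASE = "http://localhost/WackoPicko/users/sample.php"  # host is an assumption

baseline = requests.get(BASE, params={"userid": 1}, timeout=10).text
for uid in range(2, 10):
    page = requests.get(BASE, params={"userid": uid}, timeout=10).text
    if page != baseline and "error" not in page.lower():
        print(f"Possible parameter manipulation: userid={uid}")
```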
  31. Vulnerability Detection - FNs Vulnerabilities in WackoPicko that were not detected by any scanner: 3. Stored SQL Injection 4. Directory Traversal 5. Stored XSS • Reasons: • Crawling challenges – discussed later • Lack of detection for these types of vulnerabilities
  32. Vulnerability Detection - FNs Vulnerabilities in WackoPicko that were not detected by any scanner: 6. Forceful Browsing • Access to a link that contains a high-quality version of a picture without authentication • /WackoPicko/pictures/high_quality.php?key=highquality&picid=11 (a sketch of the check follows) 7. Logic Flaw • Coupon management functionality • Reasons: • Require understanding the business logic of the application • Application-specific vulnerabilities
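The forceful-browsing case can be checked directly, as in this hypothetical sketch using the URL from the slide (the host and success heuristic are assumptions):

```python
# Hypothetical forceful-browsing check: request the protected high-quality
# picture URL with no session at all and see whether the server serves it.
import requests

url = ("http://localhost/WackoPicko/pictures/high_quality.php"
       "?key=highquality&picid=11")  # host is an assumption

resp = requests.get(url, timeout=10)  # deliberately unauthenticated
if resp.status_code == 200 and resp.content:
    print("Forceful browsing: protected resource served without authentication")
```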
  33. Vulnerability Detection Accuracy - TPs WackoPicko Overall Scan Detection Results (% of detected vulnerabilities):
    Mode | Arachni | Burp | Skipfish | Vega | Wapiti | ZAP
    PaS | 37.5 | 37.5 | 31.25 | 18.75 | 25 | 37.5
    Trained | 37.5 | 50 | 31.25 | 25 | 25 | 43.75
    Key Observations: • All scanners missed at least 50% of the vulnerabilities • In PaS mode Burp, ZAP and Arachni achieved the same score • Running the scanners in trained mode increased overall detection: • Vega – increase in attack vectors • ZAP & Burp – manually visiting the pages in proxy mode covered the Flash and dynamic JS technologies
  34. Vulnerability Detection Accuracy - TPs WAVSEP Overall TP Detection (% of WAVSEP tests detected):
    Mode | Arachni | Burp | Skipfish | Wapiti | Vega | ZAP
    PaS | 60.2 | 27.9 | 4.0 | 25.4 | 71.3 | 60.7
    Trained | 60.2 | 42.5 | 62.6 | 24.4 | 71.3 | 79.3
    Key Observations: • WAVSEP results were better than on WackoPicko, likely due to: • The vulnerability categories in the application • Integration of WAVSEP into the SDLC of the scanners • ZAP achieved the highest score, followed by Vega and Skipfish • Vulnerability category detection varied with the scanner • Arachni discovered 100% of SQLi, RFI and unvalidated redirect cases, but had a low detection rate for LFI vulnerabilities
  35. Vulnerability Detection Accuracy - FPs WackoPicko CONFIG scan FPs:
    Name | PaS | Trained
    Arachni | 5 | 5
    Burp | 4 | 5
    Skipfish | 0 | 1
    Vega | 2 | 1
    Wapiti | 0 | 0
    ZAP | 2 | 8
    Key Observations: • # of FPs varied across scanners • No correlation between # of TPs, FNs and FPs • No correlation between # of requests a scanner sends and # of FPs • Increasing attack strength generally appears to increase # of FPs, though there is not enough data to confirm that correlation
  36. Crawling Coverage Features that scanners found difficult to crawl in WackoPicko: • Uploading a picture • No scanner was able to upload a picture in PaS mode • Burp and ZAP were able to in Trained mode • Authentication • All scanners except for Wapiti successfully created accounts:
    Scanner | # of Accounts
    Arachni | 202
    Burp | 113
    Skipfish | 364
    Vega | 117
    Wapiti | 0
    ZAP | 111
    • Multi-step processes • No scanner was able to complete the process in PaS mode • Burp and ZAP were able to in Trained mode
  37. Crawling Coverage Features that scanners found difficult to crawl in WackoPicko: • Infinite websites • All scanners recognized the infinite loop except Arachni • Client-side code • Flash applications • Dynamic JavaScript • AJAX requests
    % of WIVET tests passed:
    Mode | Arachni | Burp | Skipfish | Wapiti | Vega | ZAP
    PaS | 94 | 50 | 50 | 50 | 16 | 42
    Trained | 94 | 50 | 50 | 50 | 16 | 78
  38. Scanning Speed WackoPicko CONFIG Mode Scanning Speed (scan time in hours):
    Mode | Arachni | Burp | Skipfish | Vega | Wapiti | ZAP
    PaS | 0.32 | 0.12 | 0.1 | 0.1 | 0.05 | 0.18
    Trained | 0.32 | 0.35 | 0.1 | 0.22 | 1.62 | 1.31
    Key Observations: • Speed varies across scanners – no correlation to the number of requests or the vulnerability detection rate • Scan time generally increases with configuration • Note: WAVSEP scanning speed shows similar results
  39. Reporting Features Features tested for: 1) List of all the vulnerabilities detected 2) Locations of all the detected vulnerabilities 3) Exploits performed to detect these vulnerabilities. All six scanners generate reports that include these three features.
  40. Usability Features Features tested for: 1) Efficiency 2) Product documentation 3) Community support* (*Active in the past 3 months – last checked in August 2018)
  41. Final Ranking 1/2 (table of WackoPicko vulnerability scores) • The final ranking was calculated based on crawling coverage and vulnerability detection on the WackoPicko application.
  42. Final Ranking 2/2
    Name | Score
    Burp Pro | 26
    ZAP | 23
    Arachni | 15
    Wapiti | 10
    Skipfish | 10
    Vega | 8
  43. Roadmap: 1. Introduction 2. Background 3. Methodology 4. Results 5. Conclusion
  44. Conclusion • Scanners are far from being usable as PaS-only tools • Several classes of vulnerabilities were not detected • Scanners had difficulty crawling common web technologies such as dynamic JavaScript and Flash applications • Different scanners have different strengths/weaknesses • Open-source scanner performance is comparable to commercial scanner performance, and in several cases better
  45. Get in Touch! • https://rkhal101.github.io/ • /ranakhalil1 • @rana__khalil • /rkhal101
  46. Questions?
  47. To view the full results of the research, refer to: https://github.com/rkhal101/Thesis-Test-Results (Note: This slide was not shown during the presentation. I added it after receiving many requests asking for the link to the detailed results.)
