
Secure360 May 2018 Lessons Learned from OWASP T10 Datacall

Gain insight into some of the details of the OWASP Top 10 Call for Data and industry survey, and what we were attempting to learn. Hear what was learned from collecting and analyzing widely varying industry data and attempting to build a dataset for comparison and analysis. This session will provide tips, and flag common pitfalls, for structuring vulnerability data and the subsequent analysis. Learn what the data can tell us and what questions are still left unanswered. Uncover some of the differences in collecting metrics at different stages of the software lifecycle, along with recommendations for handling them.


  1. LESSONS LEARNED FROM AN OWASP TOP 10 DATACALL Tuesday, May 15 9:30am 2018 Secure360 Twin Cities @secure360 facebook.com/secure360 www.Secure360.org Brian Glas
  2. OWASP TOP 10 OVERVIEW • First version was released in 2003 • Updated in 2004, 2007, 2010, 2013, 2017 • Started as an awareness document • Now widely considered the global baseline • Is a standard for vendors to measure against
  3. OWASP TOP 10-2017 RC1 • April 2017 • Controversy over first release candidate • Two new categories in RC1 • A7 – Insufficient Attack Protection • A10 – Underprotected APIs • Social Media got ugly
  4. BLOG POSTS • Decided to do a little research and analysis • Reviewed the history of Top 10 development • Analyzed the public data • Wrote two blog posts…
  5. DATA COLLECTION • Original desire for full public attribution • This meant many contributors didn’t… • Ended up mostly being consultants and vendors • Hope to figure out a better way for 2020
  6. HUMAN-AUGMENTED TOOLS (HAT) VS. TOOL-AUGMENTED HUMANS (TAH) • Frequency of findings • Context (or lack thereof) • Natural Curiosity • Scalability • Consistency
  7. HAT VS TAH
  8. HAT VS TAH
  9. HAT VS TAH
  10. TOOLING TOP 10
  11. HUMAN TOP 10
  12. OWASP SUMMIT JUNE 2017 • Original leadership resigns right before • I was there for SAMM working sessions • Top 10 had working sessions as well • Asked to help with data analysis for Top 10
  13. OWASP TOP 10-2017 • New Plan • Expanded data call, one of the largest ever @ 114k • Industry Survey to select 2 of 10 categories • Fully open process in GitHub • Actively translate into multiple languages • en, es, fr, he, id, ja, ko…
  14. INDUSTRY SURVEY • Looking for two forward-looking categories • 550 responses
  15. INDUSTRY SURVEY RESULTS • 550 responses • Thank you!
  16. INDUSTRY SURVEY RESULTS
  17. DATA CALL RESULTS • A change from frequency to incidence rate • Extended data call added: more Veracode, Checkmarx, Micro Focus (Fortify), Synopsys, Bug Crowd • Data for over 114,000 applications
  18. DATA CALL RESULTS
  19. DATA CALL RESULTS
  20. DATA CALL RESULTS
  21. DATA CALL RESULTS • Percentage of submitting organizations that found at least one instance in that vulnerability category
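A minimal sketch of the incidence-rate idea described on slide 21, assuming a flat list of finding records; the field names ("submitter", "cwe") and the helper itself are illustrative, not part of the talk's dataset:

```python
# Hedged sketch: for each vulnerability category (CWE), compute the share
# of submitting organizations (or applications) with at least one finding,
# rather than the raw count of findings. Field names are assumptions.
from collections import defaultdict

def incidence_rates(findings, total_submitters):
    """findings: iterable of dicts like {"submitter": ..., "cwe": ...}."""
    submitters_per_cwe = defaultdict(set)
    for f in findings:
        submitters_per_cwe[f["cwe"]].add(f["submitter"])
    # Incidence rate = percentage with >= 1 instance in that category.
    return {cwe: 100.0 * len(s) / total_submitters
            for cwe, s in submitters_per_cwe.items()}

sample = [
    {"submitter": "org-1", "cwe": "CWE-79"},
    {"submitter": "org-1", "cwe": "CWE-79"},  # duplicates within one submitter count once
    {"submitter": "org-2", "cwe": "CWE-89"},
]
print(incidence_rates(sample, total_submitters=4))
# {'CWE-79': 25.0, 'CWE-89': 25.0}
```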
  22. WHAT CAN THE DATA TELL US • Humans still find more diverse vulnerabilities • Tools only look for what they know about • Tools can scale on a subset of tests • You need both • We aren’t looking for everything…
  23. WHAT CAN THE DATA NOT TELL US • Is a language or framework more susceptible • Are the problems systemic or one-off • Is developer training effective • Are IDE plug-ins effective • How unique are the findings? • Consistent mapping? • Still only seeing part of the picture
  24. VULN DATA IN PROD VS TESTING [chart: Number of Vulnerabilities in Production]
  25. VULN DATA IN PROD VS TESTING [chart: Security Defects in Testing]
  26. VULN DATA STRUCTURES • CWE Reference • Related App • Date • Language/Framework • Point in the process found • Severity (CVSS/CWSS/Something) • Verified
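One possible way to represent the per-finding fields listed on slide 26 as a structured record; the names, types, and example values below are assumptions for illustration, not a schema from the data call:

```python
# Hedged sketch of a single finding record using the fields on slide 26.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Finding:
    cwe: str                 # CWE reference, e.g. "CWE-89"
    app: str                 # related application identifier
    found_on: date           # date the finding was recorded
    language: str            # language/framework, e.g. "Java/Spring"
    phase: str               # point in the process found, e.g. "SAST", "pen test", "production"
    severity: str            # CVSS/CWSS vector or score, kept as a string here
    verified: bool           # has a human confirmed the finding?
    notes: Optional[str] = None

example = Finding(
    cwe="CWE-89",
    app="payments-api",
    found_on=date(2018, 5, 15),
    language="Java/Spring",
    phase="SAST",
    severity="CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    verified=True,
)
```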
  27. VULN DATA IN SECURITY STORIES
  28. WHAT ABOUT TRAINING DATA? • How are you measuring training? • Are you correlating data from training to testing automation? • Can you track down to the dev? • Do you know your Top 10?
  29. WHAT CAN YOU DO? • Think about what story to tell, then figure out what data is needed to tell that story • Structure your data collection • Keep your data as clean and accurate as possible • Write stories • Consider contributing to Top 10 2020
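A minimal sketch of keeping collected data clean before analysis, assuming raw records shaped like the Finding fields above; the required-field list and CWE check are illustrative, not rules from the talk:

```python
# Hedged sketch: reject records with missing fields or malformed CWE
# references before they reach the analysis step.
import re

REQUIRED = ("cwe", "app", "found_on", "language", "phase", "severity", "verified")
CWE_PATTERN = re.compile(r"^CWE-\d+$")

def clean(records):
    """Split raw dict records into (valid, rejected) lists."""
    valid, rejected = [], []
    for r in records:
        missing = [k for k in REQUIRED if r.get(k) in (None, "")]
        if missing or not CWE_PATTERN.match(str(r.get("cwe", ""))):
            rejected.append((r, missing))
        else:
            valid.append(r)
    return valid, rejected
```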
  30. THAT’S ALL FOLKS THANK YOU! Brian Glas @infosecdad brian.glas@gmail.com
