
Offensive (Web, etc) Testing Framework: My gift for the community - BerlinSides 2011


Introduction to the Offensive (Web, etc) Testing Framework





  1. Offensive (Web, etc) Testing Framework
     My gift for the community
     BerlinSides, December 29th 2011
     Abraham Aranguren @7a_
  2. Agenda
     • About me
     • Lessons from: OSCP, experience, chess players
     • OWTF vs traditional + demos
     • Conclusion
     • Q&A
  3. About me
     • Spanish dude
     • Degree + diploma in Computer Science
     • Uni: security research + honour mark
     • IT: since 2000 (netadmin / developer)
     • Comeback to (offensive) security in 2007
     • OSCP, CISSP, GWEB, CEH, MCSE, etc.
     • Web app sec and dev/architect
     • OWTF, GIAC, BeEF
  4. What is OSCP?
     • Certification run by Offensive Security
       (Offensive Security maintains the BackTrack distro)
     • 100% practical exam:
       - 24-hour hacking challenge
       - Few pass the first time
       - Experienced pen testers have failed it
  5. Lessons from OSCP
     Background: Nessus, etc. were forbidden; scripts were OK.
     Approach to get a 100% score:
     • Understand + script everything
     • Make scripts reliable (no babysitting)
     • Make scripts staged (results in < 10 minutes)
     • Scripts find vulns in the background
     • Scripts present information efficiently
     The test taker is now:
     • Fresh to analyse info + exploit vulns
     • Using more time to think
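The "staged scripts" idea above can be sketched in Python. This is a minimal illustration, not the author's actual exam scripts: the stage names, commands and output files are made up. The point is that every stage is launched at once, so quick results arrive in minutes while slower scans keep running in the background.

```python
import subprocess

# Stages ordered by speed: the fast scan delivers results in
# minutes while the slow, exhaustive one runs in the background.
# Commands and file names here are illustrative only.
STAGES = [
    ("quick", "nmap -F target > nmap_quick.txt"),
    ("full",  "nmap -p- target > nmap_full.txt"),
]

def run_staged(stages):
    """Start every stage at once and return the process handles,
    so slower stages can be polled later instead of blocking the
    tester, who stays fresh to analyse whatever has finished."""
    procs = []
    for name, cmd in stages:
        p = subprocess.Popen(cmd, shell=True,
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
        procs.append((name, p))
    return procs
```

The tester then only checks the output files of finished stages, which matches the slide's claim of having usable results in under 10 minutes while the long scans continue unattended.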
  6. Lessons from OSCP (cont.)
     Others spent valuable energy running (a lot of) tools by hand ...
     I had this in < 10 minutes via scripts!
  7. Lessons from OSCP (cont.)
     Newer results merged via script with exploitation notes, etc.
  8. Lessons from Experience
     Pen testers vs bad guys:
     • Pen testers have time/scope constraints; bad guys don't
     • Pen testers have to write a report; bad guys don't
     Complexity is increasing, and more complexity = more time needed to test properly.
     Customers are rarely willing to pay for enough / reasonable testing time.
     A call for efficiency:
     • We must find vulns faster
     • We must be more efficient
     • ... or the bad guys will find the vulns, not us
  9. Lessons from Experience (cont.)
     Ways to beat time constraints:
     • Test ahead of time (i.e. silent testing)
     • Automate as much as possible (i.e. scripting)
     • Efficient testing (i.e. scripting/analysis)
     • Efficient reporting (i.e. templates/scripting)
  10. Learning from Chess Players
      Image Credit: Terra
  11. Chess Complexity
  12. Efficient Chess Analysis
      Chess players have time constraints, like pen testers.
      From Alexander Kotov, "Think Like a Grandmaster", with the pen-tester translation:
      1) Draw up a list of candidate moves
         → Draw up a list of candidate paths of attack
      2) Analyse each variation once and only once
         → Analyse tool output once and only once
      3) Having gone through steps 1 and 2, make a move
         → After 1) and 2), exploit the best path of attack
      Ever analysed X in depth, only to see "super-Y" later?
  13. Chess Openings
  14. Chess Player Approach
      Chess players:
      • Memorise openings
      • Memorise endings
      • Memorise entire lines of attack/defence
      • Try hard to analyse games efficiently
      Pen tester translation:
      • Chess players precompute all they can
      • Chess players analyse info only once
  15. Garry Kasparov vs Nigel Short
      World Championship Match, 1993
      "Kasparov was evidently disoriented as he used 1 hour 29 minutes to Short's 11 minutes (!) for the entire game."
      Short (the weaker player) was 8 times faster.
      "In just 9 days after facing it for the first time ... Kasparov and his team had found the best reply (11.Ne2) and even succeeded in completely bamboozling Short with 12.Be5: 'This move was a surprise for me. I spent 45 minutes on my reply. I could not fathom out the complications ...' - Short"
  16. Can We Be More Efficient?
      Can tools, knowledge and human analysis be coordinated like an army?
  17. OWTF Process - Demos (1+2)
  18. OWTF vs Traditional: Disclaimer
      Existing tools:
      • Are great at what they do
      • Solve difficult problems
      • Their authors are typically very smart people!
      • Made OWTF possible
      Not all the limitations covered next apply to all tools.
  19. Define Once + Automate
      Traditional:
      • Too many tools to run manually
      • Figure out how to call each tool every time
      • Figure out how to overcome poor defaults (e.g. the User-Agent)
      • Poor defaults are sometimes hard-coded!
      OWTF:
      • All tools are run for you automatically
      • Define how to call each tool only once
      • Useful defaults + easy to run
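The "define once" idea can be sketched as a small registry of command templates. This is a hypothetical structure, not OWTF's real configuration format: each tool's invocation, with sane defaults such as a realistic User-Agent, is written down once and reused for every target.

```python
# Hypothetical registry: each tool's command line is defined once,
# with sensible defaults (e.g. a realistic User-Agent) baked in,
# so nobody has to re-learn the flags or fight poor defaults.
TOOLS = {
    "whatweb": "whatweb --user-agent '{ua}' {target}",
    "nikto":   "nikto -useragent '{ua}' -host {target}",
}

DEFAULTS = {"ua": "Mozilla/5.0 (X11; Linux x86_64)"}

def build_command(tool, target, **overrides):
    """Expand a tool's template for a target; callers just name
    the tool and the defaults can still be overridden per run."""
    params = dict(DEFAULTS, target=target, **overrides)
    return TOOLS[tool].format(**params)
```

A framework loop can then feed every registry entry to the automation layer, which is what turns "too many tools to run manually" into "all tools are run for you".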
  20. Demo 3: Define + Automate
  21. Comprehensive
      Traditional:
      • Remember the tests to run
      • Remember the tools/websites to perform each test
      • Remember the best order to run tools / use sites
      OWTF:
      • Tests are run automatically
      • Uses the best known tools + websites
      • Calls tools/sites in the best known order
      • Implements tests not found in other tools
  22. Demo 4: Comprehensive
  23. Staged Report + Vuln Stats
      Traditional:
      • No report until the end of the scan → waste of time
      • Vulnerabilities reported one by one → waste of time
      • Cannot analyse + exploit concurrently
      OWTF:
      • You have a partial report in < 5 seconds
      • Refresh the report = new results are highlighted
      • Reports vuln stats, which you can drill into
      • Fresh to analyse + exploit concurrently
  24. Demo 5: Staged Report
  25. Dynamic Report, Flags, Notes, etc.
      Traditional:
      • Report is static + poor interaction
      • Cannot flag / rate / ignore findings
      • Cannot take notes / filter findings with your own criteria
      OWTF:
      • Report is dynamic + interactive
      • Can flag / rate / ignore findings
      • Can take notes / filter findings with your own criteria
      • Pen testers can import / export reviews
  26. Demo 6: Import / Export Review
  27. Reliable + Partial Results if Crashed
      Traditional:
      • Requires babysitting (i.e. did it crash/stop?)
      • All results lost + no report on a crash
      • Poor exception handling = crashes happen
      OWTF:
      • Limited babysitting required (often none)
      • Tries hard not to crash + saves results if it crashed
      • Tool or plugin crashed? Save data + continue
      • Robust exception handling (I think ☺)
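The "a crash in one plugin must not lose the run" behaviour can be sketched like this. The plugin API below is hypothetical, not OWTF's internals: each plugin runs inside its own try/except, the result (even a failure) is persisted immediately, and the framework moves on to the next plugin.

```python
def run_plugins(plugins, save):
    """Run each (name, func) plugin in isolation: a crash is
    recorded as a partial result instead of aborting the scan,
    and every result is saved as soon as it is known."""
    results = {}
    for name, func in plugins:
        try:
            results[name] = ("ok", func())
        except Exception as exc:  # robust handling: log and continue
            results[name] = ("crashed", repr(exc))
        save(name, results[name])  # persist even partial results
    return results
```

With this shape, a crashed tool or plugin costs you one entry in the report, not the whole report, which is exactly the contrast the slide draws against tools that lose everything on a crash.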
  28. Demo 7: Exception Handling
  29. Cancel + Move-on Support
      Traditional:
      • Stuck / crashed command → no report
      • Stuck / crashed plugin → no report
      • Stuck / crashed tool → no report
      OWTF:
      • Stuck? Control+C saves the data + moves on
      • Crashed? Moves on ("finished") + saves the data
      • You can Control+C commands, plugins and owtf itself
      • On Control+C: choose the next command / plugin / exit
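The Control+C behaviour can be sketched by catching KeyboardInterrupt per unit of work. This is a hypothetical structure, not OWTF's actual signal handling: the interrupted command is abandoned, whatever data it produced so far is kept, and the loop continues with the next command.

```python
def run_commands(commands, save):
    """Run each (name, func) command; Ctrl+C while one runs
    skips just that command, marks it as cancelled so the
    partial data survives, and continues with the next one."""
    for name, func in commands:
        try:
            save(name, func())
        except KeyboardInterrupt:
            save(name, "cancelled - partial data kept")
            continue  # move on to the next command
```

A real framework would additionally prompt here (next command / next plugin / exit), as the slide describes, but the core mechanic is the same: interruption is a routing decision, not a fatal error.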
  30. Demo 8: Cancel + Move-on Support
  31. Aligned to Standards
      Traditional:
      • Not OWASP Testing Guide aligned
      • Not PTES aligned
      • Narrow standard coverage
      OWTF:
      • OWASP Testing Guide aligned
      • PTES alignment/coverage planned
      • Extensive standard coverage
  32. Demo 4: OWASP Testing Guide Aligned
  33. Simulation + Silent Testing Support
      Traditional:
      • No "simulation mode" → run and see (!)
      • Cannot start testing without permission (usually)
      • No passive / semi-passive / active test separation
      OWTF:
      • Supports "simulation mode" → first see, then run
      • Can test without permission: silent testing support
      • Passive / semi-passive / active test separation
      • Testing ahead of time = more efficiency
  34. Demo 9: Simulation + Silent Testing Support
  35. Language Agnostic, Easy to Extend
      Traditional:
      • Language dependent (Ruby, Python, Perl, etc.)
      • Cannot contribute in your language (usually)
      • Difficult to extend / share info
      OWTF:
      • Language agnostic: if the shell can run it = WIN
      • Contribute in your language (best if CLI-callable)
      • Easy to extend / share info
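"If the shell can run it = WIN" can be sketched as a wrapper that treats any CLI-callable program as a plugin. The interface below is a hypothetical simplification: the framework only needs the command line and captures whatever the tool prints, so the tool's implementation language never matters.

```python
import subprocess

def run_shell_plugin(cmd):
    """Run any shell-callable tool as a plugin and capture its
    output. Whether the tool is written in Ruby, Perl, C or
    anything else is irrelevant: the shell is the interface."""
    proc = subprocess.run(cmd, shell=True,
                          capture_output=True, text=True)
    return proc.returncode, proc.stdout
```

A contributed plugin is then just a command string plus a place to store the captured output, which is what makes contributing "in your language" possible.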
  36. Easy Setup and Greppable DB
      Traditional:
      • Hard to set up: libraries, gems, DB installs, etc.
      • DB in an obscure format
      • Cannot run custom searches on the DB
      OWTF:
      • Easy to set up: copy the dir + run
      • DB in plain text, links provided to everything
      • DB is easy to grep for custom searches
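What a plain-text DB being "easy to grep" buys you can be sketched like this. The directory layout is hypothetical: findings live in ordinary text files, so a custom search is nothing more than a directory walk plus a substring match, equivalent to a plain `grep -r` from the shell.

```python
import os

def grep_db(root, needle):
    """Search every plain-text file under root for needle,
    returning (path, line_number, line) hits - grep in
    miniature, with no DB client or query language needed."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            with open(path, errors="replace") as fh:
                for num, line in enumerate(fh, 1):
                    if needle in line:
                        hits.append((path, num, line.rstrip("\n")))
    return hits
```

Because the format is just text, any tool in the tester's existing kit (grep, awk, a five-line script) works against it, which is the whole point of the slide.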
  37. Demo 10: Greppable DB
  38. Chess-like Analysis Support
      Traditional:
      • Cannot pre-compute / define tests (your own or others')
      • Cannot mark "best candidate moves"
      • Cannot analyse each option only once + no notes
      OWTF:
      • Tests are pre-computed / defined (your own + others')
      • Mark "best candidate moves" via flags
      • Mark as analysed via strike-through
      • Filter your analysis with your own priorities + notes
  39. Demo 11: Chess-like Analysis Support
  40. What about Tactical Fuzzing? (i.e. Burp, ZAP, etc.)
      Traditional:
      • Some tools do not support outbound proxies (!)
      • Can only pass their own info to the tactical fuzzer
      • Messy proxying when multiple tools are used
      OWTF:
      • Can scrape results from all the tools run
      • Can pass scraped results to the tactical fuzzer
      • Proxying works when multiple tools are used under the hood
      • Proxying works even if the tool called has no proxy support
  41. Demo 12: Outbound Proxy
  42. Google Hacking without API Keys
      Traditional:
      • Some Google Hacking tools require API keys to work
      • Others require you to break CAPTCHAs (!)
      OWTF:
      • No API keys required
      • No CAPTCHA breaking required
      • Uses tuneable blanket searches instead
      • "Open all in tabs" for ease of use ☺
  43. Demo 13: Google Hacking without API Keys
  44. OWTF > Running Tools
      Traditional:
      • Focused on small problems
      • Missing a lot from the OWASP Testing Guide
      • Must find X number of tools to bridge the gap
      OWTF:
      • Calls the "best tool for the job" when possible
      • Implements many tests on its own too!
      • Links to test sites / "suggestions"
      • Custom template support planned for reporting
  45. Demo 14: OWTF Tests without External Tools
  46. Demo 15: Aux Plugin Intro - Phishing
  47. Demo 16: DoS
  48. OWTF Considerations/Limitations
      • Relies on existing great tools; it is not a replacement for them
      • Developed on Python 2.6.5
      • CLI is Linux-only (developed on BackTrack 5 R1)
      • GUI is multi-platform (web page)
      • Lots of bugs (but stable! ☺)
      • Lots of features on my to-do list! ☺
      • Not a "script kiddie tool" + not a silver bullet
      • Does not try to rate severity or replace humans:
        the focus is to provide data efficiently to the pen tester
  49. OWTF Target User Base
      Who is this for?
  50. OWTF: Not for Nessus Monkeys
      Image Credit: Steve Lord, BSides London 2011
  51. OWTF: Import/Export Reviews - Jaded Cynic Compatible
      Image Credit: Steve Lord, BSides London 2011
  52. OWTF Goal: Bring You Closer to This
      Image Credit: Steve Lord, BSides London 2011
  53. OWTF - I Need Your Help
      Licence?
      • 3-clause BSD (Metasploit)
      • GPL v3 / v2, Apache
      • Other?
      Hosting service?
      • GitHub (Metasploit, BeEF, whatweb, ...)
      • Google Code
      • SourceForge
      • Other?
  54. OWTF - I Need Your Help
      Tool authors: can owtf run your tool better?
      Pen testers / Python wizards:
      • What is missing? (tools, resources, approach, ...)
      • What could be done better?
      Web designers:
      • Make the report look better / easier to use
      JavaScript gurus:
      • More ideas to improve the interactive report
      Regexp and Selenium gurus:
      • Suggest better regexps and/or approaches
  55. Conclusion
      OWTF aims to make pen testing:
      • Aligned with the OWASP Testing Guide + PTES
      • More efficient
      • More comprehensive
      • More creative and fun (minimise uncreative work)
      This way pen testers will have time to:
      • Focus on sharing information (tools, techniques, ...)
      • Think out of the box for real (no babysitting, no stupid work)
      • Chain vulnerabilities like attackers do
      • Really show impact so that risk is understood
  56. Special Thanks To
      For getting me started:
      • Justin Searle: "Python Basics for Web App Pentesters" - OWASP AppSec EU 2011
      For showing what I was missing in my process:
      • Jason Haddix: "The Web Application Hacking Toolchain" - BruCon 2011
      For "do what you love" inspiration:
      • Haroon Meer: "You and Your Research" - BruCon 2011
  57. Special Thanks To
      • OWASP Testing Guide + PTES contributors
      • Andrés Riancho
      • Marcus Niemietz
      • Mario Heiderich
      • Michele Orru
      • Sandro Gauci
  58. Q&A
      Abraham Aranguren @7a_
      Project info: @owtfp