Offensive (Web, etc) Testing Framework: My gift for the community - BerlinSides 2011
Introduction to the Offensive (Web, etc) Testing Framework
Demos: http://www.youtube.com/playlist?list=PL1E7A97C1BCCDEEBB&feature=plcp
1. Offensive (Web, etc) Testing Framework
My gift for the community
Berlin Sides, December 29th 2011
Abraham Aranguren
@7a_
abraham.aranguren@gmail.com
http://7-a.org
2. Agenda
• About me
• Lessons from:
OSCP
Experience
Chess Players
• OWTF vs Traditional + Demos
• Conclusion
• Q&A
3. About me
• Spanish dude
• Degree + Diploma in Computer Science
• Uni: Security research + honour mark
• IT: Since 2000 (netadmin / developer)
• Came back to (offensive) security in 2007
• OSCP, CISSP, GWEB, CEH, MCSE, Etc.
• Web App Sec and Dev/Architect
• OWTF, GIAC, BeEF
4. What is OSCP?
• Certification run by Offensive Security *
*Offensive Security maintains the BackTrack distro
100% practical exam:
• 24 hour hacking challenge
• Few pass the 1st time
• Experienced pen testers have failed this
http://www.offensive-security.com/information-security-certifications/
5. Lessons from OSCP
Background: Nessus etc. were forbidden; scripts were OK.
Approach to get a 100% score:
• Understand + script everything
• Make scripts reliable (no babysitting)
• Make scripts staged (first results in < 10 mins; sketched below)
• Scripts find vulns in the background
• Scripts present information efficiently
The test taker is now:
• Fresh to analyse info + exploit vulns
• Using more time to think
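A minimal sketch of the staging idea in Python (the tool choices, flags and file names here are illustrative assumptions, not the actual OSCP scripts): a fast scan reports within minutes while slower scans keep digging in the background.
```python
# Staged recon sketch: fast results first, slow scans in the background.
# Tool names, flags and file names are illustrative assumptions.
import subprocess

def run_background(cmd, log_path):
    # Fire and forget: output lands in a log we can read whenever.
    log = open(log_path, "w")
    return subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)

target = "10.0.0.5"  # hypothetical lab target

# Stage 1: quick scan of the most common ports -- results in minutes.
subprocess.call(["nmap", "-F", "-oN", "quick.txt", target])

# Stage 2+: slow, thorough scans continue unattended while the tester
# reads quick.txt, analyses and starts exploiting.
run_background(["nmap", "-p-", "-sV", "-oN", "full.txt", target], "full.log")
run_background(["nikto", "-h", target], "nikto.log")
```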
6. Lessons from OSCP cont.
Others spent valuable energy running (a lot of) tools by hand … I had this in < 10 minutes via scripts!
7. Lessons from OSCP cont.
New results were merged via script with exploitation notes, etc.
8. Lessons from Experience
Pen testers vs Bad guys
• Pen testers have time/scope constraints; bad guys don’t
• Pen testers have to write a report; bad guys don’t
Complexity is increasing
More complexity = more time needed to test properly
Customers are rarely willing to:
“Pay for enough / reasonable testing time”
A call for efficiency:
• We must find vulns faster
• We must be more efficient
• .. or bad guys will find the vulns, not us
9. Lessons from Experience cont.
Ways to beat time constraints:
• Test ahead of time (i.e. Silent testing)
• Automate as much as possible (i.e. Scripting)
• Efficient testing (i.e. Scripting/Analysis)
• Efficient reporting (i.e. Templates/Scripting)
12. Efficient Chess Analysis
Chess players have time constraints like Pen testers.
From Alexander Kotov - "Think like a Grandmaster":
1) Draw up a list of candidate moves
→ Draw up a list of candidate paths of attack
2) Analyse each variation once and only once
→ Analyse tool output once and only once
3) Having gone through steps 1 and 2, make a move
→ After 1) and 2), exploit the best path of attack
Ever analysed X in depth, only to see “super-Y” later?
14. Chess Player approach
Chess players:
• Memorise openings
• Memorise endings
• Memorise entire lines of attack/defence
• Try hard to analyse games efficiently
Pen tester translation:
• Chess players precompute all they can
• Chess players analyse info only once
15. Garry Kasparov vs Nigel Short
World Championship Match 1993
“Kasparov was evidently disoriented as he used 1 hour 29 minutes to
Short's 11 minutes (!) for the entire game.” Short (the weaker player)
was 8 times faster.
“In just 9 days after facing it for the first time … Kasparov and his
team had found the best reply (11.Ne2) and even succeeded in completely
bamboozling Short with 12.Be5: ‘This move was a surprise for me. I spent
45 minutes on my reply. I could not fathom out the complications …’ – Short”
http://www.chessgames.com/perl/chessgame?gid=1070677
http://www.chessgames.com/perl/chessgame?gid=1070681
16. Can we be more efficient?
Can tools, knowledge and human analysis
be coordinated like an army?
Image Credit: http://pakistancriminalrecords.com
18. OWTF vs Traditional: Disclaimer
Existing tools:
• Are great at what they do
• Solve difficult problems
• Their authors are typically very smart people!
• Made OWTF possible
Not all limitations covered next apply to all tools
19. Define once + Automate
Traditional:
• Too many tools to run manually
• Figure out how to call each tool, every time
• Figure out how to overcome poor defaults (e.g. the User-Agent)
• Poor defaults are sometimes hard-coded in the tool!
OWTF:
• All tools are run for you automatically
• Define how to call each tool only once (sketched below)
• Useful defaults + easy to run
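One way to read “define once”: a single registry of command templates with sane defaults baked in, such as a realistic User-Agent. A hedged sketch; OWTF's real configuration format is different, and the tool flags shown are just examples.
```python
# "Define once" sketch: every tool call is specified exactly once,
# with useful defaults (e.g. a realistic User-Agent) baked in.
# Registry format and flags are illustrative, not OWTF's actual config.
import subprocess

UA = "Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/8.0"

TOOLS = {
    "whatweb": ["whatweb", "--user-agent", UA, "{target}"],
    "nikto":   ["nikto", "-useragent", UA, "-h", "{target}"],
}

def run_tool(name, target):
    cmd = [part.format(target=target) for part in TOOLS[name]]
    return subprocess.call(cmd)

run_tool("whatweb", "http://example.com")
```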
21. Comprehensive
Traditional:
• Remember which tests to run
• Remember which tools/websites perform each test
• Remember the best order to run tools / use sites
OWTF:
• Tests are run automatically
• Uses the best known tools + websites
• Calls tools/sites in the best known order
• Implements tests not found in other tools
23. Staged Report + Vuln Stats
Traditional:
• No report until the end of the scan = waste of time
• Vulnerabilities reported 1 by 1 = waste of time
• Cannot analyse + exploit concurrently
OWTF:
• You have a partial report in < 5 seconds (sketched below)
• Refresh report = new results are highlighted
• Reports vuln stats, which you can drill into
• Fresh to analyse + exploit concurrently
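A toy illustration of staged reporting (the file name and highlight window are arbitrary choices, not OWTF internals): the report file is rewritten after every finding, so refreshing the browser always shows current results with the newest ones highlighted.
```python
# Staged report sketch: rewrite a small HTML report after every finding
# so a browser refresh shows current results, newest ones highlighted.
# File name and the 60-second highlight window are arbitrary choices.
import time

findings = []  # list of (timestamp, text)

def add_finding(text):
    findings.append((time.time(), text))
    write_report("report.html")

def write_report(path):
    cutoff = time.time() - 60  # highlight findings newer than a minute
    rows = []
    for ts, text in findings:
        style = ' style="background: yellow"' if ts > cutoff else ''
        rows.append("<li%s>%s</li>" % (style, text))
    with open(path, "w") as out:
        out.write("<html><body><ul>%s</ul></body></html>" % "".join(rows))

add_finding("Port 80 open: Apache 2.2")  # partial report exists already
```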
25. Dynamic Report, flags, notes, etc.
Traditional:
• Report is static + poor interaction
• Cannot flag / rate / ignore findings
• Cannot take notes / filter findings with your criteria
OWTF:
• Report is dynamic + interactive
• Can flag / rate / ignore findings
• Can take notes / filter findings with your criteria
• Pen tester can import / export reviews
27. Reliable + Partial results if crashed
Traditional:
• Requires babysitting (i.e. did it crash/stop?)
• Lose all results + no report if it crashes
• Poor exception handling = crashes happen
OWTF:
• Limited babysitting required (i.e. often none)
• Tries hard not to crash + saves results if it crashes (sketched below)
• Tool or plugin crashed? Save data + continue
• Robust exception handling (I think ☺)
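The “save data + continue” idea in miniature (the plugin names are invented for illustration): each plugin runs inside its own try/except, so one crash costs one plugin, not the whole run.
```python
# "Save data + continue" sketch: one crashing plugin never takes down
# the run or the results gathered so far. Plugin names are invented.
import traceback

def save(name, data):
    with open("results_%s.txt" % name, "w") as out:
        out.write(data)

def run_all(plugins):
    for name, func in plugins:
        try:
            save(name, func())
        except Exception:
            # Record the crash for the report, then move on.
            save(name + "_crash", traceback.format_exc())

run_all([
    ("passive_search", lambda: "3 search results found"),
    ("broken_plugin",  lambda: 1 / 0),   # crashes; run continues
])
```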
29. Cancel + Move on support
Traditional:
• Stuck / crashed command = no report
• Stuck / crashed plugin = no report
• Stuck / crashed tool = no report
OWTF:
• Stuck? Control+C saves data + moves on (sketched below)
• Crashed? Moves on (“finished”) + saves data
• You can Control+C commands, plugins and owtf itself
• On Control+C: choose next command / plugin / exit
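A sketch of per-command Control+C handling (the interactive “next command / plugin / exit” menu is omitted): interrupting a stuck tool keeps its partial output and moves on to the next command instead of killing the whole run.
```python
# Control+C sketch: interrupting a stuck command keeps its partial
# output and moves on, instead of killing the whole run.
import subprocess

def run_command(cmd, log_path):
    with open(log_path, "w") as log:
        proc = subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)
        try:
            proc.wait()
        except KeyboardInterrupt:
            proc.terminate()  # stop the stuck tool; its log is kept
            print("Skipped %s, partial output in %s" % (cmd[0], log_path))

# A real runner would offer "next command / plugin / exit" here.
for cmd in [["nmap", "-p-", "scanme.nmap.org"], ["whois", "nmap.org"]]:
    run_command(cmd, cmd[0] + ".log")
```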
31. Aligned to Standards
Traditional:
• Not aligned with the OWASP Testing Guide
• Not aligned with PTES
• Narrow standard coverage
OWTF:
• Aligned with the OWASP Testing Guide
• PTES alignment/coverage planned
• Extensive standard coverage
33. Simulation + Silent testing support
Traditional:
• No “simulation mode”: run and see (!)
• Cannot start testing without permission (usually)
• No passive / semi-passive / active test separation
OWTF:
• Supports “simulation mode”: first see, then run (sketched below)
• Can test without permission: silent testing support
• Passive / semi-passive / active test separation
• Test ahead of time = more efficiency
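A minimal sketch of the passive / semi-passive / active split plus a simulation mode (the plugin names and type labels are invented): silent testing runs only plugins that never touch the target directly, and simulation prints what would run without running anything.
```python
# Sketch of the passive / semi-passive / active split plus simulation
# mode. Plugin names and types are invented for illustration.
PLUGINS = [
    ("google_dorks",  "passive"),       # third-party sources only
    ("spider_site",   "semi_passive"),  # normal-looking traffic
    ("sql_injection", "active"),        # attack traffic: needs permission
]

def run(allowed_types, simulate=False):
    for name, ptype in PLUGINS:
        if ptype not in allowed_types:
            continue
        if simulate:
            print("[simulation] would run: " + name)
        else:
            print("running: " + name)  # real tool invocation goes here

# Silent testing before permission is granted: passive only.
run(["passive"])
# Preview the full run without sending a single packet to the target.
run(["passive", "semi_passive", "active"], simulate=True)
```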
35. Language agnostic, easy to extend
Traditional:
• Language dependent (Ruby, Python, Perl, etc.)
• Cannot contribute in your language (usually)
• Difficult to extend / share info
OWTF:
• Language agnostic: if the shell can run it = WIN (sketched below)
• Contribute in your language (best if CLI-callable)
• Easy to extend / share info
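The “if the shell can run it = WIN” principle, sketched: one generic wrapper makes any CLI-callable tool, whatever its language, look the same to the framework. The example command is arbitrary.
```python
# "If the shell can run it = WIN": one generic wrapper makes any
# CLI-callable tool look the same, whatever language it is written in.
import subprocess

def run_any(command_line):
    # shell=True so contributed commands may use pipes and redirection.
    proc = subprocess.Popen(command_line, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output = proc.communicate()[0]
    return proc.returncode, output

# Perl, Ruby, compiled binaries: all identical from the framework side.
code, output = run_any("whois nmap.org | head -5")
print(output)
```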
36. Easy setup and greppable DB
Traditional:
• Hard to set up: libraries, gems, DB installs, etc.
• DB in an obscure format
• Cannot run custom searches on the DB
OWTF:
• Easy to set up: copy dir + run
• DB in plain text, links provided to everything
• DB is easy to grep for custom searches (sketched below)
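A sketch of the plain-text DB idea (the field layout is an assumption, not OWTF's actual format): one record per line means any shell grep becomes a custom search.
```python
# Plain-text DB sketch: one pipe-separated record per line, so any
# shell grep is a custom search. The field layout is an assumption.
def record(db_path, plugin, target, finding):
    with open(db_path, "a") as db:
        db.write("%s | %s | %s\n" % (plugin, target, finding))

record("db.txt", "robots_txt", "http://example.com",
       "Disallowed path: /admin/")

# Custom searches later, straight from the shell:
#   grep -i admin db.txt
#   grep robots_txt db.txt | sort
```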
38. Chess-like analysis support
Traditional:
• Cannot pre-compute / define tests (self / other)
• Cannot mark “best candidate moves”
• Cannot analyse each option only once + no notes
OWTF:
• Tests are pre-computed / defined (self + other)
• Mark “best candidate moves” via flags
• Mark as analysed via strike-through
• Filter your analysis with your priorities + notes
40. What about Tactical Fuzzing?
e.g. Burp, ZAP, etc.
Traditional:
• Some tools do not support outbound proxies (!)
• Can only pass their own info to the tactical fuzzer
• Messy proxying when multiple tools are used
OWTF:
• Can scrape results from all tools run
• Can pass scraped results to the tactical fuzzer
• Proxying works when multiple tools are used under the hood
• Proxying works even if the tool called has no proxy support (sketched below)
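A hedged sketch of blanket proxying via the standard http_proxy environment variables (assuming Burp or ZAP listening on 127.0.0.1:8080): tools that honour these variables get proxied even when they expose no proxy flag of their own; tools that ignore them would still need tool-specific handling.
```python
# Blanket proxying sketch: point the standard proxy environment
# variables at a tactical fuzzer (assumed: Burp/ZAP on 127.0.0.1:8080)
# before spawning tools. Tools that honour http_proxy get proxied
# without any proxy flag of their own.
import os
import subprocess

env = os.environ.copy()
env["http_proxy"] = "http://127.0.0.1:8080"
env["https_proxy"] = "http://127.0.0.1:8080"

# curl honours these variables, so its traffic shows up in the proxy.
subprocess.call(["curl", "-s", "http://example.com/"], env=env)
```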
42. Google Hacking without API keys
Traditional:
• Some Google Hacking tools require API keys to work
• Others require you to break CAPTCHAs (!)
OWTF:
• No API keys required
• No CAPTCHA breaking required
• Uses tunable blanket searches instead (sketched below)
• “Open all in tabs” for ease of use ☺
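A sketch of key-less, CAPTCHA-less Google hacking (the dorks listed are just examples): build ordinary search URLs and open them all in browser tabs for the human to eyeball.
```python
# Google hacking sketch: no API keys, no CAPTCHA breaking -- just
# build ordinary search URLs and open them all in browser tabs.
# The dorks listed are illustrative examples only.
import webbrowser

try:  # Python 2 / 3 compatible import
    from urllib import quote_plus
except ImportError:
    from urllib.parse import quote_plus

DORKS = [
    "site:{t} filetype:sql",
    "site:{t} inurl:admin",
]

def open_dorks(domain):
    for dork in DORKS:
        query = quote_plus(dork.format(t=domain))
        webbrowser.open_new_tab("https://www.google.com/search?q=" + query)

open_dorks("example.com")  # "open all in tabs" in one call
```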
44. OWTF > Running tools
Traditional:
• Focused on small problems
• Missing a lot from the OWASP Testing Guide
• Must find X number of tools to bridge the gap
OWTF:
• Calls the “best tool for the job” when possible
• Implements many tests on its own too!
• Links to test sites / “Suggestions”
• Custom template support planned for reporting
48. OWTF Considerations/Limitations
• Relies on existing great tools != a replacement for them
• Developed on Python 2.6.5
• CLI is Linux-only (developed on BackTrack 5 R1)
• GUI is multiplatform (web page)
• Lots of bugs (but stable! ☺)
• Lots of features on my todo list! ☺
• Not a “script kiddie tool” + not a silver bullet
• Does not try to rate severity or replace humans:
the focus is to provide data efficiently to the pen tester
52. OWTF Goal:
Bring you closer to this
Image Credit: Steve Lord, BSides London 2011
53. OWTF – I need your help
Licence?
• 3-clause BSD (Metasploit)
• GPL v3 / v2, Apache
• Other?
Hosting service?
• GitHub (Metasploit, BeEF, WhatWeb, …)
• Google Code
• SourceForge
• Other?
54. OWTF - I need your help
Tool authors: Can owtf run your tool better?
Pen testers / Python wizards:
• What is missing? (tools, resources, approach,..)
• What could be done better?
Web designers:
• Make the report look better / easier to use
JavaScript gurus:
• More ideas to improve interactive report
Regexp and Selenium gurus:
• Suggest better regexps and/or approaches
55. Conclusion
OWTF aims to make pen testing:
• Aligned with OWASP Testing Guide + PTES
• More efficient
• More comprehensive
• More creative and fun (minimise un-creative work)
This way pen testers will have time to:
• Focus on sharing information (tools, techniques, ..)
• Think outside the box for real (no babysitting, no stupid work)
• Chain vulnerabilities like attackers do
• Really show impact so that risk is understood
56. Special thanks to
For getting me started:
Justin Searle: “Python Basics for Web App Pentesters” – OWASP AppSec EU 2011
For showing what I was missing in my process:
Jason Haddix: “The Web Application Hacking Toolchain” – BruCON 2011
For “do what you love” inspiration:
Haroon Meer: “You and your research” – BruCON 2011
57. Special thanks to
• OWASP Testing Guide + PTES contributors
• Andrés Riancho
• Marcus Niemietz
• Mario Heiderich
• Michele Orru
• Sandro Gauci
58. Q&A
Abraham Aranguren
@7a_
abraham.aranguren@gmail.com
http://7-a.org
Project info
Website: http://owtf.org/
Twitter: @owtfp