Web Applications Under Siege: Defending Against Attack Outbreaks

Planning protection based on the average Web application attack can leave your organization exposed to a crippling upper limit attack. A large scale Web application attack will overwhelm and immobilize the unprepared organization. Based on the findings of the Imperva semi-annual Web Application Attack Report, this presentation discusses the cumulative characteristics of Web application attack vectors, such as SQLi, XSS, RFI and LFI; seasonal trends in Web application attacks; the intensity of attacks and how organizations can prepare for “battle days”; and proven defense solutions and procedures to combat attack bursts.

Transcript

  • 1. Applications Under Siege: Defending Against Attack Outbreaks – Amichai Shulman, CTO, Imperva
  • 2. Agenda Introduction to our Hacker Intelligence Initiative (HII) and Web Application Attack Report (WAAR) Taking a new approach Analyzing real-life attack traffic + Key findings + Take-aways Summary of recommendations
  • 3. Amichai Shulman – CTO, Imperva Speaker at Industry Events + RSA, Sybase Techwave, Info Security UK, Black Hat Lecturer on Info Security + Technion - Israel Institute of Technology Former security consultant to banks & financial services firms Leads the Application Defense Center (ADC) + Discovered over 20 commercial application vulnerabilities – Credited by Oracle, MS-SQL, IBM and others Named one of InfoWorld’s “Top 25 CTOs”
  • 4. Introduction to HII and WAAR CONFIDENTIAL
  • 5. HII - Hacker Intelligence Initiative Hacker Intelligence Initiative is focused on understanding how attackers are operating in practice + A different approach from vulnerability research Data set composition + ~50 real world applications + Anonymous Proxies More than 18 months of data Powerful analysis system + Combines analytic tools with drill down capabilities
  • 6. HII - Motivation Focus on actual threats + Focus on what hackers want, helping good guys prioritize + Technical insight into hacker activity + Business trends of hacker activity + Future directions of hacker activity Eliminate uncertainties + Active attack sources + Explicit attack vectors + Spam content Devise new defenses based on real data + Reduce guess work
  • 7. HII Reports Monthly reports based on data collection and analysis Drill down into specific incidents or attack types 2011 / 2012 reports + Remote File Inclusion + Search Engine Poisoning + The Convergence of Google and Bots + Anatomy of a SQLi Attack + Hacker Forums Statistics + Automated Hacking + Password Worst Practices + Dissecting Hacktivist Attacks + CAPTCHA Analysis
  • 8. WAAR – Web Application Attack Report Semi-annual, based on aggregated analysis of 6 / 12 months of data Motivation + Pick up trends + High-level take-outs + Create comparative measurements over time Download Reports: WAAR Edition #1, WAAR Edition #2, WAAR Edition #3
  • 9. Taking a New Approach CONFIDENTIAL
  • 10. Retrospective Assumptions + Attack requests are more or less evenly spread over time + Applications are more or less similar Method + Count and analyze individual requests + Look at average over time / application Consequence + “An application experiences an attack every other minute”
  • 11. Contemplation Observations + Attack traffic has a burst nature + Applications in our data set show some outliers Reflections + Do organizations really need to handle an alert every two minutes? + Do organizations handle a steady stream of attacks of an evenly distributed nature?
  • 12. Resolution Abandon individual requests and look at incidents + 30 requests (or more) within 5 mins + Intensity and durability Further aggregate incidents into “battle days” + A day that includes at least one incident
  • 13. Resolution (cont.) Then there is the man who drowned crossing a stream with an average depth of six inches - W.I.E. Gates + Distribution of web attacks is asymmetric and includes rare, yet extremely meaningful, outliers + Security professionals who would prepare for the “average case” will be overwhelmed by the intensity of incidents when these actually happen + We shifted away from average into other measures like median and quartiles + Use Box & Whisker charts to display data – Express dispersion and skewness
  • 14. Box and Whisker [chart: example box-and-whisker plot marking the 5%, 25% (Q1), median, 75% (Q3) and 95% points]
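    For readers who want to reproduce this view, the median, quartiles and whisker points behind such a chart can be computed directly. A minimal sketch with numpy; the requests-per-incident values are made up purely for illustration.

        import numpy as np

        # Hypothetical sample: requests per incident for one application.
        requests_per_incident = np.array([48, 62, 110, 150, 195, 210, 240, 380, 520, 8790])

        p5, q1, median, q3, p95 = np.percentile(requests_per_incident, [5, 25, 50, 75, 95])
        mean = requests_per_incident.mean()

        # The single extreme outlier drags the mean far above the median, which
        # is why the report uses medians and quartiles rather than averages.
        print(f"mean={mean:.0f}  median={median:.0f}  Q1={q1:.0f}  Q3={q3:.0f}")
        print(f"whiskers (5% / 95%): {p5:.0f} / {p95:.0f}")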
  • 15. Data Analysis CONFIDENTIAL
  • 16. Goals Frequency + How many incidents / battle days per time frame Persistency + Duration of incidents Magnitude + Volume of traffic involved in an incident / battle day Predictability + Can one predict the timing of the next incident based on analyzing the timing of past incidents?
  • 17. Overview
        Typical (median) | Worst-case (max)
        Battle days (over a 6-month period): 59 | 141
        Incidents (over a 6-month period): 137 | 1383
        Incident magnitude (requests per incident): 195 | 8790
        Incident duration (minutes): 7.70 | 79
  • 18. Overview – Frequency An incident is expected every 3rd day (a typical application sees 59 battle days in a 6-month period, i.e. roughly one in every three days) Some applications are attacked almost every day A battle day usually includes more than a single attack Expected frequency affects the resources an organization needs to allocate on a constant basis for handling attacks
  • 19. Overview – Frequency Take-away #1: Find out your expected attack frequency
  • 20. Overview - Magnitude Typical case is ~200 requests Average is 1 every 2 minutes Worst case is more than 400 times that number Affects the size of equipment an organization needs for handling attacks Affects the capabilities required for handling incidents + Aggregation and summary + Quickly take action based on summary
  • 21. Overview - Magnitude Take-away #2: Baseline for scaling should be typical numbers. Aim for the 3rd quartile.
  • 22. Granular Comparison - Frequency [chart: amount of incidents by attack type – SQLi, RFI, LFI, DT, XSS, HTTP]
  • 23. Granular Comparison - Frequency SQL injection is the most prevalent attack type + As opposed to the previous edition, which showed XSS and DT RFI attacks are much more common than indicated by just looking at the number of requests Outliers indicate that some applications are heavily targeted by a specific type of attack – SQLi – HTTP (malformed requests of various types) – DT
  • 24. Granular Comparison - Frequency Take-away #3: Attackers will try attacks that have a better potential benefit, regardless of vulnerability assessment.
  • 25. Granular Comparison – Frequency – Battle Days [chart: # of battle days in 6 months by attack type – SQLi, RFI, LFI, DT, XSS, HTTP]
  • 26. Granular Comparison - Magnitude [chart: requests per incident by attack type – SQLi, RFI, LFI, DT, XSS, HTTP]
  • 27. Granular Comparison - Intensity LFI is typically the most intensive attack RFI attacks tend to be more intensive than DT and SQLi Incidents are usually in the low hundreds of requests per incident, with extreme cases in the low thousands
  • 28. Granular Comparison - Intensity Take-away #4:Make sure your solution tackles SQL injection and RFI at large scales.
  • 29. Granular Comparison - Persistence [chart: minutes per incident by attack type – SQLi, RFI, LFI, DT, XSS, HTTP]
  • 30. Granular Comparison - Persistence Majority of attacks are short + No more than 15 mins + Usually below 10 mins DT attacks tend to last longer, while XSS attacks tend to be shorter Figures suggest that attack type does not affect the intensity (requests per second) of attacks + LFI seems to have a higher tendency toward intense incidents (higher magnitude with lower persistence) Supports our assumption with respect to the bursty nature of attack traffic
  • 31. Granular Comparison - Persistence Take-away #5: No time to analyze individual requests and attack vectors during an ongoing attack.
  • 32. Worst Case Analysis
        SQLi | RFI | LFI | DT | XSS
        Magnitude (requests): 359390 | 35276 | 3941 | 8197 | 16222
        Intensity (requests per minute): 543.2 | 742.2 | 418.4 | 378 | 455.4
        Intensity (requests per battle day): 359465 | 41495 | 8343 | 11549 | 21113
  • 33. Trending – A Single Application View [chart: # of attacks per week from 05/06/2011 to 03/06/2012, broken down by attack type – DT, LFI, RFI, XSS, SQLi]
  • 34. Trending – A Single Application View The bursty nature of attacks clearly shows in this graph Extreme attack load during January The second half (even without the January burst) shows more attacks than the first half (576 vs. 322) This trend is also true for general malformed HTTP requests + Empirical evidence of the correlation between malformed HTTP traffic and attacks
  • 35. Predictability - Goals Try to predict the timing of the next attack / battle day based on the history of attacks / battle days We’ve shown that if an application faces an incident during a specific day, it is likely to experience more incidents that same day + Probably due to being part of a list distributed to attack bots + Maybe due to a change that made it pop up on the to-do list of attack bots Being able to predict would affect the ability to effectively allocate resources
  • 36. Predictability - Method Looked for linear prediction between battle days using the Auto-Correlation Function (ACF) We employed Wessa, a freely available online service that performs auto-correlation
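    A sketch of the same kind of check in plain Python/numpy rather than the Wessa service; the daily 0/1 battle-day series below is randomly generated and purely hypothetical.

        import numpy as np

        def autocorrelation(series, max_lag):
            """Sample autocorrelation of a daily 0/1 battle-day series for lags 1..max_lag."""
            x = np.asarray(series, dtype=float)
            x = x - x.mean()
            denom = float(np.dot(x, x))
            return [float(np.dot(x[:-lag], x[lag:])) / denom for lag in range(1, max_lag + 1)]

        # Hypothetical input: one entry per day over ~6 months, 1 = battle day.
        rng = np.random.default_rng(0)
        series = rng.integers(0, 2, size=180)

        acf = autocorrelation(series, max_lag=14)
        # A clear spike at some lag k would hint at a periodic pattern (for example
        # a weekly, unreported vulnerability scan); a flat ACF means there is no
        # simple linear predictability in the timing of battle days.
        print([round(value, 2) for value in acf])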
  • 37. Predictability - Results No apparent correlation over a simple time gap
  • 38. Predictability - Results Unreported, periodic vulnerability scan
  • 39. Summary – Previous Advice Still Holds True Deploy security solutions that deter automated attacks. Detect known vulnerability attacks. Acquire intelligence on malicious sources and apply it in real time. Participate in a security community and share data on attacks.
  • 40. Summary – The Bursty Nature of Attacks Deploy for the right scale – Don’t be fooled by “average” good weather Automated response procedures – when under attack, the volume is too high for manual handling Aggregate and summarize data in real time – Too many individual attacks to look at individually Be prepared – Bursts are unpredictable. Test your team’s readiness
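    As a small illustration of the "aggregate and summarize" recommendation, the sketch below collapses a burst of individual alerts into one summary row per attack type. The alert fields ("attack_type", "source_ip") are a hypothetical schema chosen for the example, not any particular product's format.

        from collections import Counter, defaultdict

        def summarize_alerts(alerts):
            """Collapse a burst of individual alerts into a per-attack-type summary."""
            counts = Counter()
            sources = defaultdict(set)
            for alert in alerts:
                counts[alert["attack_type"]] += 1
                sources[alert["attack_type"]].add(alert["source_ip"])
            return {
                attack_type: {"requests": count, "unique_sources": len(sources[attack_type])}
                for attack_type, count in counts.items()
            }

        # During a burst the responder sees one line per attack type instead of
        # thousands of individual requests.
        alerts = [
            {"attack_type": "SQLi", "source_ip": "198.51.100.7"},
            {"attack_type": "SQLi", "source_ip": "198.51.100.7"},
            {"attack_type": "RFI", "source_ip": "203.0.113.4"},
        ]
        print(summarize_alerts(alerts))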
  • 41. Imperva: Our Story in 60 Seconds Attack Protection | Usage Audit | Virtual Patching | Rights Management | Reputation Controls | Access Control
  • 42. Webinar Materials CONFIDENTIAL
  • 43. Webinar Materials Join Imperva LinkedIn Group, Imperva Data Security Direct, for… + Answers to Attendee Questions + Post-Webinar Discussions + Webinar Recording Link [Join Group]
  • 44. www.imperva.com - CONFIDENTIAL -
