This document is a website security statistics report from 2015 that analyzes vulnerability data from tens of thousands of websites. Some of the key findings include:
- Compliance-driven organizations have the lowest average number of vulnerabilities and the highest remediation rates, while risk-reduction-driven organizations have more vulnerabilities but fix them faster.
- Feeding vulnerability results back to development teams significantly reduces vulnerabilities, speeds up fixes, and increases remediation rates.
- Performing static code analysis more frequently is correlated with faster vulnerability fix times.
- Ad hoc code reviews of high-risk applications appear to be one of the most effective activities at reducing vulnerabilities.
- There is no clear evidence that any particular "best practice" reliably improves security on its own; the supporting data behind most such claims has historically been sparse or nonexistent.
In this report, we put this area of application security understanding to the test by measuring how various web programming languages and development frameworks actually perform in the field. To which classes of attack are they most prone, how often, and for how long? And how do they fare against popular alternatives? Is it really true that the most popular modern languages and frameworks yield similar results in production websites?
By analyzing the vulnerability assessment results of more than 30,000 websites under management with WhiteHat Sentinel, we begin to answer these questions. These answers may enable the application security community to ask better and deeper questions, which will eventually lead to more secure websites. Organizations deploying these technologies can have a closer look at particularly risk-prone areas. Software vendors may focus on areas that are found to be lacking. Developers can increase their familiarity with the strengths and weaknesses of their technology stack. All of this is vitally important because security must be baked into development frameworks and must be virtually transparent. Only then will application security progress be made.
WhiteHat Security’s Website Security Statistics Report provides a one-of-a-kind perspective on the state of website security and the issues that organizations must address in order to conduct business online safely.
Website security is an ever-moving target. New website launches are common, new code is released constantly, new Web technologies are created and adopted every day; as a result, new attack techniques are frequently disclosed that can put every online business at risk. In order to stay protected, enterprises must receive timely information about how they can most efficiently defend their websites, gain visibility into the performance of their security programs, and learn how they compare with their industry peers. Obtaining these insights is crucial in order to stay ahead and truly improve enterprise website security.
To help, WhiteHat Security has been publishing its Website Security Statistics Report since 2006. This report is the only one that focuses exclusively on unknown vulnerabilities in custom Web applications, code that is unique to an organization, and found in real-world websites. The underlying data is hundreds of terabytes in size, comprises vulnerability assessment results from tens of thousands of websites across hundreds of the most well-known organizations, and collectively represents the largest and most accurate picture of website security available. Inside this report is information about the most prevalent vulnerabilities, how many get fixed, how long the fixes can take on average, and how every application security program may measurably improve. The report is organized by industry, and is accompanied by WhiteHat Security’s expert analysis and recommendations.
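As a concrete illustration of the metrics the report tracks (how many vulnerabilities get fixed, and how long fixes take), here is a minimal sketch of how such numbers can be computed from per-finding open/close dates. The records and field names are invented for the example; they are not the WhiteHat Sentinel data model.

```python
# Minimal sketch: computing a remediation rate and average days open
# from per-finding records. All data and field names here are
# hypothetical, not the WhiteHat Sentinel schema.
from datetime import date
from statistics import mean

findings = [
    {"site": "a.example", "opened": date(2015, 1, 5), "closed": date(2015, 3, 1)},
    {"site": "a.example", "opened": date(2015, 2, 1), "closed": None},  # still open
    {"site": "b.example", "opened": date(2015, 1, 20), "closed": date(2015, 1, 25)},
]

as_of = date(2015, 12, 31)  # assessment cutoff for still-open findings

fixed = [f for f in findings if f["closed"] is not None]
remediation_rate = len(fixed) / len(findings)  # share of findings ever fixed

# Age of each finding: time-to-fix if closed, time open so far otherwise.
avg_days_open = mean(((f["closed"] or as_of) - f["opened"]).days for f in findings)

print(f"Remediation rate: {remediation_rate:.0%}")
print(f"Average days open: {avg_days_open:.0f}")
```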
Through its Software-as-a-Service (SaaS) offering, WhiteHat Sentinel, WhiteHat Security is uniquely positioned to deliver the depth of knowledge that organizations require to protect their brands, attain compliance, and avert costly breaches.
The WhiteHat Website Security Statistics Report provides a one-of-a-kind perspective on the state of website security and the issues that organizations must address to safely conduct business online. WhiteHat has been publishing the report, which highlights the top ten vulnerabilities, tracks vertical market trends and identifies new attack techniques, since 2006.
The WhiteHat Security report presents a statistical picture of current website vulnerabilities, accompanied by WhiteHat expert analysis and recommendations. WhiteHat’s report is the only one in the industry to focus solely on unknown vulnerabilities in custom Web applications, code unique to an organization, found in real-world websites.
This year WhiteHat Security™ celebrates its fifteenth anniversary, and the eleventh year that we have produced the Web Applications Security Statistics Report. The stats shared in this report are based on the aggregation of all the scanning and remediation data obtained from applications that used the WhiteHat Sentinel™ service for application security testing in 2015. As an early pioneer in the application security market, WhiteHat has a large and unique collection of data to work with.
WhiteHat Security, the Web security company, today released the twelfth installment of the WhiteHat Security Website Security Statistics Report. The report reviewed serious vulnerabilities* in websites during the 2011 calendar year, examining the severity and duration of the most critical vulnerabilities from 7,000 websites across major vertical markets. Among the findings in the report, WhiteHat research suggests that the average number of serious vulnerabilities found per website per year in 2011 was 79, a substantial reduction from 230 in 2010 and down from 1,111 in 2007. Despite the significant improvement in the state of website security, organizational challenges in creating security programs that balance breadth of coverage and depth of testing leave large-scale attack surfaces or small, but very high-risk vulnerabilities open to attackers.
The report examined data from more than 7,000 websites across over 500 organizations that are continually assessed for vulnerabilities by WhiteHat Security’s family of Sentinel Services. This process provides a real-world look at website security across a range of vertical markets, including findings from the energy and non-profit verticals for the first time this year. The metrics provided serve as a foundation for improving enterprise application security online.
This report, produced by WhiteHat in May 2013, offers a relevant view of web threats and of the parameters to take into account to ensure security and availability.
With malware attacks growing more sophisticated, swift, and dangerous by the day — and billions of dollars spent to combat them — surprisingly few organizations have a grip on the problem. Only 20 percent of security professionals surveyed by Information Security Media Group (ISMG) rated their incident response program “very effective.” Nearly two-thirds struggle to detect APTs, limiting their ability to defend against today’s most pernicious threats. In addition, more than 60 percent struggle with the speed of detection, and more than 40 percent struggle with the accuracy of detection. Those shortcomings give attackers more time to steal data and embed their malware deeper into targeted systems. For the latest threat intelligence reports, visit https://www.fireeye.com/current-threats/threat-intelligence-reports.html.
A detailed scenario of the risks present in a proposed collaborative platform, and the steps involved in a detailed risk assessment for the business environment.
A detailed analysis of one of the biggest data breaches in history: what JP Morgan Chase & Co. did wrong, and proposed mitigation techniques. The data breach at J.P. Morgan Chase is yet another example of how our most sensitive personal information is in danger.
Whitepaper | Cyber resilience in the age of digital transformation (Nexon Asia Pacific)
We are living in an always-on world using different communications devices, systems and networks. As privacy and protecting one’s identity become increasingly important, the task of protecting these devices, systems and networks from cyber attack is no longer an option; it is a necessity.
How close is your organization to being breached | Safe Security (Rahul Tyagi)
Traditional methods are certainly limited in their capabilities, as is easily proven by the multitude of breaches businesses across the globe have fallen victim to. The 2020 Q3 Data Breach QuickView Report revealed that the number of records exposed in 2020 had increased to 36 billion globally, and that there were 2,953 publicly reported breaches in the first three quarters of 2020 alone. By the end of Q2, 2020 had already been named the “worst year on record” in terms of the total number of records exposed. With the growing sophistication of cyber-attacks and global damages related to cybercrime expected to reach $6 trillion by 2021, we need a solution that simplifies cybersecurity.
To know more about breach probability, visit www.safe.security
Today, the delegation of risk decisions to the IT team cannot be the only solution; it has to be a shared responsibility. The board and business executives are expected to incorporate the management of cyber risk as part of their business strategy, since they are accountable to stakeholders, regulators and customers. For CROs, CISOs, and security and risk management professionals to be on the same page, there has to be a single source of truth for communicating the impact that cyber risk has on business outcomes, in a language that everyone can understand.
All product and company names mentioned herein are for identification and educational purposes only and are the property of, and may be trademarks of, their respective owners.
Inside The 10 Biggest and Boldest Insider Threats of 2019-2020 (Proofpoint)
Insider threats come in all shapes and sizes and affect organizations across all industries and geographies. Understanding the motives behind them is key to defense.
One of the best ways to do this is to study some of the bold, headline-generating insider threats that have taken place recently, like the big Twitter debacle of July 2020. This is just one example of what has become a very common problem.
Big Iron to Big Data Analytics for Security, Compliance, and the Mainframe (Precisely)
Security Information and Event Management (SIEM) technologies and practices continue to expand across IT organizations to address security concerns and meet compliance mandates. However, in many of these organizations the mainframe remains an isolated technology platform. Security & compliance issues are addressed using old tools that are not effectively integrated into big data analytics platforms. In this webinar we discuss how to leverage mainframe (Big Iron) data sources into Big Data analytics platforms to address a variety of mainframe security challenges. Additionally, we cover:
• How to integrate IBM z/OS mainframe security data into an enterprise SIEM solution
• How to leverage IBM z/OS security data to detect threats in the mainframe environment using big data analytics
• A review of some compliance use cases that have been addressed using big-iron-to-big-data analytics (a generic sketch of the integration shape follows below)
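As a rough illustration of the integration shape these bullets describe, the sketch below normalizes generic security events into CEF and forwards them to a SIEM over UDP syslog. The host, port, and event fields are assumptions for the example; a real mainframe integration would sit behind a proper collector for z/OS data sources such as those covered in the webinar.

```python
# Generic sketch only: normalize security events into CEF and forward
# them to a SIEM via syslog/UDP. The SIEM address and event fields are
# hypothetical; this is not a z/OS-specific collector.
import socket

SIEM_HOST, SIEM_PORT = "siem.example.com", 514  # assumed syslog listener

def to_cef(event: dict) -> str:
    # CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|Extension
    return (
        "CEF:0|ExampleCorp|MainframeBridge|1.0|"
        f"{event['id']}|{event['name']}|{event['severity']}|"
        f"src={event['src']} suser={event['user']}"
    )

def forward(event: dict) -> None:
    msg = f"<134>{to_cef(event)}"  # syslog priority 134 = facility 16, severity 6
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode("utf-8"), (SIEM_HOST, SIEM_PORT))

forward({"id": "AUTH_FAIL", "name": "Failed logon", "severity": 5,
         "src": "10.0.0.7", "user": "IBMUSER"})
```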
EMA surveyed IT and IT security respondents to learn how organizations are responding to the threat of bot attacks.
These slides, based on the webinar from leading IT research firm Enterprise Management Associates, provide highlights from this research.
Convince your board - cyber attack prevention is better than cure (Dave James)
The business case for cyber attack prevention for organisations concerned about the rise in cyber crime and the risk to their data. Includes cyber security tips and resources.
SANS 2013 Report: Digital Forensics and Incident Response Survey (FireEye, Inc.)
Cloud computing and bring-your-own-device (BYOD) workplace policies are expanding the endpoints in IT infrastructures — and adding more complexity when it comes to investigating cyber attacks. The SANS 2013 Report on Digital Forensics and Incident Response Survey reveals some of the major difficulties that security professionals face in this new environment and how to better prepare for future investigations. Collecting responses from more than 450 security professionals across a range of industries and company sizes, the survey found that nearly 90 percent of respondents had conducted at least one forensics investigation within the last two years. But just 54 percent called their digital forensics capabilities “reasonably effective.” For the latest threat intelligence reports, visit https://www.fireeye.com/current-threats/threat-intelligence-reports.html
2020 Cost of Insider Threats Global Report with Dr. Larry Ponemon, Chairman ... (Proofpoint)
Ponemon 2020 Cost Report for Insider Threats: Key Takeaways and Trends. How much could insider threats cost your company annually? $11.45M, according to a new report from the Ponemon Institute, up from $8.76M in 2018. Ponemon’s 2020 Cost of Insider Threats Report surveyed hundreds of IT security professionals across North America, EMEA, and APAC, covering multi-year trends that prove the significance of this rapidly growing threat type. Join Larry Ponemon, Chairman and Founder of the Ponemon Institute, and Josh Epstein, CMO at ObserveIT, a Proofpoint company, in a webinar to break down the key findings of the 2020 report. We will cover:
● What kinds of insider threats cost organizations the most
● How investigations are driving up the cost-per-incident for companies
● Which organizations, industries, and regions are being targeted the most
● How companies can potentially save millions by using a dedicated insider threat management approach
Metrics & Reporting - A Failure in Communication (Chris Ross)
Wisegate recently conducted a research initiative to assess the current state of security risks and controls in business today. One of the key takeaways? A concerning lack of metrics and reporting on the subject. While CISOs claim to be improving corporate security all the time, there is little ability to measure that success. In this Drill-Down report, Wisegate uncovers where most organizations stand when it comes to metrics and reporting, and how it is affecting their businesses on the whole.
ADAM ADLER MIAMI (Adam Adler)
Adam Adler is a serial entrepreneur with over 18 years of experience, all at top-level management and ownership, primarily investing his own capital and building brands from the ground up.
Assessing and Managing IT Security Risks (Chris Ross)
Data privacy and protection has become the gold standard in IT. Scale Venture Partners and Wisegate share what they learned from over 100 IT professionals questioned about the risks and technology trends driving their security programs. Read about the move towards data centric security and the need for improvement in automated security controls and metrics reporting.
COVID-19 free penetration tests (Pentest-Tools.com)
We offered companies free penetration tests so they could improve their security and better cope with the emerging cyberattacks.
The report covers top security issues we found and experts' recommendations to avoid attacks that disrupt businesses.
In an era of global connectivity, online information and systems are playing an increasingly central role in business. According to data from Cisco, worldwide internet-connected devices will reach 50 billion by 2020; with 15 billion devices already connected in 2015, it is apparent that an increasing number of companies, systems and information are moving online.
Adversaries and defenders are both developing technologies and tactics that are growing in sophistication. For their part, bad actors are building strong back-end infrastructures with which to launch and support their campaigns. Online criminals are refining their techniques for extracting money from victims and for evading detection even as they continue to steal data and intellectual property.
As cybercriminals mount ever more attacks, your cyber-risk strategy is under the microscope. With the Cisco 2016 Annual Security Report, which analyzes advances by the security industry and by criminals, see how your peers assess security preparedness in their organizations and gain insights into where to strengthen your defenses. To become an information security professional, take the network and security analyst course: http://www.trainning.com.br/curso_mcse_ccna_ceh_itil_vmware/?v=Slide
As cybercriminals increasingly profit from brazen attacks, your cyber-risk strategy is under the microscope. With the Cisco 2016 Annual Security Report, which analyzes advances by the security industry and by criminals, see how your peers assess security preparedness in their organizations and gain insights into where to strengthen your defenses.
Smart Process Applications (SPAs), Intelligent Business Processes, Adaptive BPM: these are all terms applied to a new generation of applications that use computer intelligence to extract context-relevant information from the content associated with a business process, and use it to select, modify or re-direct the next steps in the workflow. One of its primary applications is in case management. Here the term “case” is used in its widest sense to refer to any process or project that has a defined beginning and end, where the process steps and outcome may change during the course of the process, and where associated content needs to be grouped and managed as a case-file or project-file. Applications can range from payment management, through contract bids, claims handling and loan origination, to traditional healthcare, crime or legal cases.
Historically, case management systems and indeed most BPM systems have been somewhat rigid in their workflows, lacking the ability to re-route as the case progresses – much like early satnavs, in fact. However, a completely free-to-change process definition could introduce shortfalls in compliance and may well be sub-optimum in terms of productivity. By adapting the process definition as the case progresses, and doing so based on the content and context of documents incoming to the case, the process can be handled flexibly while compliance is still hard-wired.
In this report, we take an in-depth look at the applicability of smart process applications, the experience of early users, the drivers for improved case management, and the feature sets required of a modern case management system.
There is a serious misalignment of interests between application security vulnerability assessment vendors and their customers. Vendors are incentivized to report everything they possibly can, even issues that rarely matter. On the other hand, customers just want the vulnerability reports that are likely to get them hacked. Every finding beyond that is a waste of time, money, and energy, which is precisely what’s happening every day.
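One hedged way to picture the customer-side filtering this paragraph argues for: score each finding by exploit likelihood and impact, and surface only what clears a threshold. The fields, scores, and threshold below are illustrative assumptions, not any vendor's actual triage model.

```python
# Minimal sketch of likelihood-driven triage: keep only the findings
# likely to get an organization hacked. Scores are invented examples.
findings = [
    {"name": "SQL Injection",         "exploit_likelihood": 0.9, "impact": 9},
    {"name": "Verbose Server Banner", "exploit_likelihood": 0.1, "impact": 1},
    {"name": "Stored XSS",            "exploit_likelihood": 0.7, "impact": 7},
]

def risk(f: dict) -> float:
    return f["exploit_likelihood"] * f["impact"]  # simple likelihood x impact

actionable = sorted(
    (f for f in findings if risk(f) >= 3.0),  # drop the noise below threshold
    key=risk, reverse=True,
)
for f in actionable:
    print(f"{f['name']}: risk {risk(f):.1f}")
```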
How to Determine Your Attack Surface in the Healthcare Sector (Jeremiah Grossman)
Do you know what an asset inventory is, why it's important, and how it can protect you from cybersecurity vulnerabilities?
In this webinar, you can expect to learn:
- How to prepare yourself and your staff against cybersecurity threats
- What an asset inventory is and why it's the next big thing in information security
- How to identify all your company's Internet-connected assets and which need to be defended
- Why keeping an up-to-date asset inventory is important
- How to obtain your own attack surface map
Exploring the Psychological Mechanisms used in Ransomware Splash Screens (Jeremiah Grossman)
The present study examined a selection of 76 ransomware splash screens collected from a variety of sources. These splash screens were analysed according to surface information, including aspects of visual appearance, the use of language, cultural icons, payment and payment types. The results from the current study showed that, whilst there was a wide variation in the construction of ransomware splash screens, there was a good degree of commonality, particularly in terms of the structure and use of key aspects of social engineering used to elicit payment from the victims. There was the emergence of a sub-set of ransomware that, in the context of this report, was termed ‘Cuckoo’ ransomware. This type of attack often purported to be from an official source requesting payment for alleged transgressions.
What the Kidnapping & Ransom Economy Teaches Us About Ransomware (Jeremiah Grossman)
Ransomware is center stage, as campaigns are practically guaranteed financial gain. Cyber-criminals profit hundreds of millions of dollars by selling our data back to us. If you look closely, the ransomware economic dynamics closely follow the real-world kidnapping and ransom industry. We’ll explore the eerie similarities, where ransomware is headed, and strategies we can bring to the fight.
In the past two decades of tech booms, busts, and bubbles, two things have not changed: hackers are still finding ways to breach security measures in place, and the endpoint remains the primary target. And now, with cloud and mobile computing, endpoint devices have become the new enterprise security perimeter, so there is even more pressure to lock them down.
Companies are deploying piles of software on the endpoint to secure it: antivirus, anti-malware, desktop firewalls, intrusion detection, vulnerability management, web filtering, anti-spam, and the list goes on. Yet with all of these solutions in place, high-profile companies are still being breached. The recent attacks on large retail and hospitality organizations are prime examples, where hackers successfully used credit-card-stealing malware targeting payment servers to collect customer credit card information.
Ransomware is Here: Fundamentals Everyone Needs to Know (Jeremiah Grossman)
If you’re an IT professional, you probably know at least the basics of ransomware. Instead of using malware or an exploit to exfiltrate PII from an enterprise, bad actors instead find valuable data and encrypt it. Unless you happen to have an NSA-caliber data center at your disposal to break the encryption, you must pay your attacker in cold, hard bitcoins—or else wave goodbye to your PII. Those assumptions aren’t wrong, but they also don’t tell the whole picture.
During this event we’ll discuss topics such as:
Why Ransomware is Exploding
The growth of ransomware, as opposed to garden-variety malware, is enormous. Hackers have found that they can directly monetize the data they encrypt, which eliminates the time-consuming process of selling stolen data on the Darknet. In addition, the use of ransomware requires little in the way of technical skill—because attackers don’t need to get root on a victim’s machine.
Who the Real Targets Are
Two years ago, the most newsworthy victims of ransomware were various police departments. This year, everyone is buzzing about hospitals. Is this a deliberate pattern? Probably not. Enterprises are so ill-prepared for ransomware that attackers have a green field to wreak havoc. Until the industry shapes up, bad actors will deploy ransomware indiscriminately.
Where Ransomware Stumbles
Although ransomware is nearly impossible to dislodge when employed correctly, you may be surprised to find that not all bad actors have the skill to do it. Even if ransomware targets your network, you may learn that your attackers have used extremely weak encryption—or that they’ve encrypted files that are entirely non-critical.
As far as ransomware is concerned, forewarned is forearmed. Once you know how attackers deliver ransomware, who they’re likely to attack, and the weaknesses in the ransomware deployment model, you’ll be able to understand how to protect your enterprise.
Where Flow Charts Don’t Go -- Website Security Statistics Report (2015) (Jeremiah Grossman)
WhiteHat Security’s Website Security Statistics Report provides a one-of-a-kind perspective on the state of website security and the issues that organizations must address in order to conduct business online safely.
Website security is an ever-moving target. New website launches are common, new code is released constantly, new web technologies are created and adopted every day; as a result, new attack techniques are frequently disclosed that can put every online business at risk. In order to stay protected, enterprises must receive timely information about how they can most efficiently defend their websites, gain visibility into the performance of their security programs, and learn how they compare with their industry peers. Obtaining these insights is crucial in order to stay ahead and truly improve enterprise website security.
To help, WhiteHat Security has been publishing its Website Security Statistics Report since 2006. This report is the only one that focuses exclusively on unknown vulnerabilities in custom web applications, code that is unique to an organization, and found in real-world websites. The underlying data is hundreds of terabytes in size, comprises vulnerability assessment results from tens of thousands of websites across hundreds of the most well- known organizations, and collectively represents the largest and most accurate picture of website security available. Inside this report is information about the most prevalent vulnerabilities, how many get fixed, how long the fixes can take on average, and how every application security program may measurably improve. The report is organized by industry, and is accompanied by WhiteHat Security’s expert analysis and recommendations.
No More Snake Oil: Why InfoSec Needs Security Guarantees (Jeremiah Grossman)
Ever notice how everything in InfoSec is sold “as is”? No guarantees, no warranties, no return policies. For some reason in InfoSec, providing customers with a form of financial coverage for their investment is seen as gimmicky, but the tides and times are changing. This talk discusses use cases on why guarantees are a must-have and how guarantees benefit customers as well as InfoSec as a whole.
http://blackhat.com/us-13/briefings.html#Grossman
Online advertising networks can be a web hacker’s best friend. For mere pennies per thousand impressions (that means browsers) there are service providers who allow you to broadly distribute arbitrary javascript -- even malicious javascript! You are SUPPOSED to use this “feature” to show ads, to track users, and get clicks, but that doesn’t mean you have to abide. Absolutely nothing prevents spending $10, $100, or more to create a massive javascript-driven browser botnet instantly. The real-world power is spooky cool. We know, because we tested it… in-the-wild.
With a few lines of HTML5 and javascript code we’ll demonstrate just how you can easily commandeer browsers to perform DDoS attacks, participate in email spam campaigns, crack hashes and even help brute-force passwords. Put simply, instruct browsers to make HTTP requests they didn’t intend, even something as well-known as Cross-Site Request Forgery. With CSRF, no zero-days or malware is required. Oh, and there is no patch. The Web is supposed to work this way. Also nice, when the user leaves the page, our code vanishes. No traces. No tracks.
Before leveraging advertising networks, the reason this attack scenario didn’t worry many people is because it has always been difficult to scale up, which is to say, simultaneously control enough browsers (aka botnets) to reach critical mass. Previously, web hackers tried poisoning search engine results, phishing users via email, link spamming Facebook, Twitter and instant messages, Cross-Site Scripting attacks, publishing rigged open proxies, and malicious browser plugins. While all useful methods in certain scenarios, they lack simplicity, invisibility, and most importantly -- scale. That’s what we want! At a moment’s notice, we will show how it is possible to run javascript on an impressively large number of browsers all at once and no one will be the wiser. Today this is possible, and practical.
http://blog.whitehatsec.com/top-ten-web-hacking-techniques-of-2012/
Recorded Webinar: https://www.whitehatsec.com/webinar/whitehat_webinar_march2713.html
Every year the security community produces a stunning amount of new Web hacking techniques that are published in various white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, here we are solely focused on new and creative methods of Web-based attack. Now in its seventh year, the Top Ten Web Hacking Techniques list encourages information sharing, provides a centralized knowledge base, and recognizes researchers who contribute excellent work. Past Top Tens and the number of new attack techniques discovered in each year:
Web Breaches in 2011 - “This is Becoming Hourly News and Totally Ridiculous” (Jeremiah Grossman)
In 2011, attitude towards hacks shifted from "It happens," to "It is happening.” A poorly coded website and web application is all that’s needed to wreak havoc – expensive firewall, pervasive anti-virus and multi-factor authentication be damned. But what is possible? What types of attacks and attackers should we be mindful of? This presentation will show the real risks in a post-2011 Internet.
video demos: http://whitehatsec.com/home/assets/videos/Top10WebHacks_Webinar031711.zip
Many notable and new Web hacking techniques were revealed in 2010. During this presentation, Jeremiah Grossman will describe the technical details of the top hacks from 2010, as well as some of the prevalent security issues emerging in 2011. Attendees will be treated to a step-by-step guided tour of the newest threats targeting today's corporate websites and enterprise users.
The top attacks in 2010 include:
• 'Padding Oracle' Crypto Attack
• Evercookie
• Hacking Auto-Complete
• Attacking HTTPS with Cache Injection
• Bypassing CSRF protections with ClickJacking and HTTP Parameter Pollution
• Universal XSS in IE8
• HTTP POST DoS
• JavaSnoop
• CSS History Hack In Firefox Without JavaScript for Intranet Portscanning
• Java Applet DNS Rebinding
Mr. Grossman will then briefly identify real-world examples of each of these vulnerabilities in action, outlining how the issue occurs, and what preventative measures can be taken. With that knowledge, he will strategize what defensive solutions will have the most impact.
Website attacks continue to prevail despite the best efforts of enterprises to fight them. Websites are an ongoing business concern and security must be assured all the time, not just at a point in time. And yet, most websites were exposed to at least one serious vulnerability every day of 2010, leaving valuable corporate and customer data at risk. Why?
In this report, Jeremiah will explore a new way to measure website security, the Window of Exposure, which tracks an organization’s current and historical website security posture. Window of Exposure is a useful combination of vulnerability prevalence, how long vulnerabilities take to get fixed, and the percentage of them that are remediated. By carefully tracking these metrics, an organization can determine where resources would be best invested.
Using data from WhiteHat’s 11th Website Security Statistics Report, based on assessments of over 3,000 websites, Grossman will reveal the most secure (and insecure) vertical markets and the Window of Exposure of each. Find out how your industry ranks, and the top ten vulnerabilities plaguing your peers. Learn how to determine which metrics are critical to increasing remediation rates, thereby limiting the Window of Exposure. The good news is that companies that take this approach are increasing remediation rates by 5 percent per year.
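To make the Window of Exposure metric concrete, here is a minimal sketch, under the assumption that it counts the days in a year on which a site had at least one serious vulnerability open; the dates are fabricated for illustration.

```python
# Sketch of the Window of Exposure idea: count the days in a year on
# which a site had at least one serious vulnerability open.
from datetime import date, timedelta

YEAR_START, YEAR_END = date(2010, 1, 1), date(2010, 12, 31)

# (opened, closed) intervals for one site; None = still open at year end.
vulns = [
    (date(2010, 1, 10), date(2010, 2, 1)),
    (date(2010, 3, 15), None),
]

def window_of_exposure(vulns) -> int:
    exposed_days = 0
    day = YEAR_START
    while day <= YEAR_END:
        if any(o <= day and (c is None or day < c) for o, c in vulns):
            exposed_days += 1
        day += timedelta(days=1)
    return exposed_days

print(f"{window_of_exposure(vulns)} days exposed in 2010")
```

A day counts as exposed if any open interval covers it, which is why vulnerability prevalence, time-to-fix, and remediation rate all feed into this single number.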
Website Security Statistics Report 2015
Contents
About This Report
Executive Summary
Vulnerability Likelihood
Window of Exposure
Survey Analysis
Average Number of Open Vulnerabilities
Average Days Open
Remediation Rates
Data Set Methodology
Conclusion & Recommendations
Definitions
About This Report
WhiteHat Security’s Website Security Statistics Report provides a one-of-a-kind perspective on the state of website security and the issues that organizations must address in order to conduct business online safely.
Website security is an ever-moving target. New website launches are common, new code is released constantly, new web technologies are created and adopted every day; as a result, new attack techniques are frequently disclosed that can put every online business at risk. In order to stay protected, enterprises must receive timely information about how they can most efficiently defend their websites, gain visibility into the performance of their security programs, and learn how they compare with their industry peers. Obtaining these insights is crucial in order to stay ahead and truly improve enterprise website security.
To help, WhiteHat Security has been publishing its Website Security Statistics Report since 2006. This report is the only one that focuses exclusively on unknown vulnerabilities in custom web applications, code that is unique to an organization, and found in real-world websites. The underlying data is hundreds of terabytes in size, comprises vulnerability assessment results from tens of thousands of websites across hundreds of the most well-known organizations, and collectively represents the largest and most accurate picture of website security available. Inside this report is information about the most prevalent vulnerabilities, how many get fixed, how long the fixes can take on average, and how every application security program may measurably improve. The report is organized by industry, and is accompanied by WhiteHat Security’s expert analysis and recommendations.
Executive Summary
More secure software,
NOT more security software.
Unfortunately and unsurprisingly, website breaches have become
an everyday occurrence. In fact, hacked websites have become
so common that typically only the biggest data breaches capture
enough attention to make headlines. The rest suffer quietly
away from the public eye. Experts have known this eventuality
was coming and honestly, the prediction was easy. All one had
to do was to look at the pervasiveness of web use in modern
society, the amount of data and dollars being exchanged online,
and read any industry report about the volume of vulnerabilities
exposed on the average website. With this information in hand,
the final ingredient that ultimately leads to a breach is a motivated
adversary willing to take advantage of the vulnerability, and
as headlines tell us, there are plenty of motivated adversaries.
Verizon’s 2015 Data Breach Investigations Report¹ says that for the financial services industry, web applications are the second-leading cause of incidents, just behind crimeware. Further, for the healthcare and information technology industries, web applications rank fourth and second respectively as a cause of breaches.
To this point, what no one could really predict or quantify were
the possible consequences of having no website security
measures in place at all. Now, after countless breaches
on record, we have a fairly good idea. Website breaches
lead directly to fraud, identity theft, regulatory fines, brand
damage, lawsuits, downtime, malware propagation, and loss of
customers. While a victimized organization may ultimately survive
a cyber-crime incident, and fortunately most do, the business
disruption and losses are often severe. Recent studies by the Ponemon Institute state that 45% of breaches exceed $500,000 in losses². In the largest of incidents, many Fortune-listed
companies have given shareholder guidance that the losses
would range from tens of millions to hundreds of millions of
dollars. Obviously, it is far preferable to do something proactive
to avert and minimize harm before becoming the next headline.
The answer to web security, and much of information security,
is we need more secure software, NOT more security software.
While this is easy to say, and has been said by us many times in the past, the process of actually doing so is anything but solved or widely agreed upon, despite the plethora of so-called best practices and maturity models. For example, we would all like to say that organizations that provide software security training for their developers experience fewer serious vulnerabilities annually than those that do not. Or that organizations that perform application security testing prior to each major production release not only have fewer vulnerabilities year-over-year, but exhibit a faster time-to-fix. Broadly, these statements cannot be made authoritatively, as the supporting data is sparse or nonexistent. At WhiteHat, and in this report, we’re changing that.
1 Verizon 2015 Data Breach Investigations Report (DBIR)
http://www.verizonenterprise.com/DBIR/
2 Ponemon: The Post Breach Boom
http://www.ponemon.org/blog/the-post-breach-boom
For this report we utilized a version of BSIMM³ (Building Security In Maturity Model) called vBSIMM⁴ (the ‘v’ stands for ‘vendor’). Think of vBSIMM as a lite version of BSIMM: a software security activity checklist you ask third-party software suppliers to fill out so you get a better idea of the effort they put into security. We
modified the vBSIMM checklist slightly for our purposes, added
some dates and activity frequency questions, and issued it
as a survey to WhiteHat Security customers. We then looked
at the aggregated responses of the survey (118 in total) and
compared those results to WhiteHat Sentinel vulnerability metrics
and mapped those to vBSIMM software security activities and
to outcomes. Simple, right? No, not really. As you’ll see further
down, the results were fascinating.
Before getting to the hard numerical statistics, we feel it’s
important to share what the data is signaling to us at a high level.
§§ We see no evidence of ‘best practices’ in application security. At least, we see no practice likely to benefit every organization that implements it across every scenario or application security metric. What we found is that certain software security activities (for example, static analysis, architectural analysis, or operational monitoring) help certain application security metrics but have little-to-no impact on others. For example, an activity might reduce the average number of vulnerabilities in a given application but not improve the speed with which vulnerabilities are fixed, or how often. The best advice we can give is for an organization to create a metrics program that tracks the areas they want to improve upon, and then identify the activities that will most likely move the needle. If an activity does work – great! Keep doing it! If there is no measurable benefit, stop, save the time and energy, and try something else. Frankly, this process is much easier and more effective than blindly following maturity models.
3 The Building Security In Maturity Model (BSIMM)
https://www.bsimm.com/
4 BSIMM for vendors (vBSIMM)
https://www.bsimm.com/related/
§§ Another thing we noticed was that over the course of 2014 we saw a lot of high-profile infrastructure vulnerabilities, such as Heartbleed⁵, Shellshock⁶, and more. These issues were remotely exploitable, highly dangerous, and pervasive. Some theorized that if we included these types of vulnerabilities in our research alongside our usual custom web application vulnerabilities, it would throw off our analysis. For example, you cannot blame Heartbleed on the software development group, as it is the responsibility of IT infrastructure to protect against such an attack, and developers were concerned their numbers would be unfairly dragged down. Fair enough. After doing the analysis, we found that including infrastructure vulnerability data actually improved the overall metrics. It seems the IT guys are overall faster and more consistent with patching. Imagine that!
§§ And finally, we observed another industry shift relative to previous reports. When we asked customers the primary driver for
resolving website vulnerabilities, 35% said risk reduction,
which beat out compliance by more than 20 points. During our
May 2013 report, compliance was the number one driver. We
can only speculate on what’s changed organizationally, but the
leading theory is that most organizations that are required to
be compliant with industry regulations have become so… yet
the hacks keep happening. To keep hacks from happening,
it appears risk reduction has taken center stage – and not a
moment too soon.
With these larger themes out of the way, let’s look at a few more
interesting results:
§§ Organizations that are compliance-driven to remediate vulnerabilities have the lowest average number of vulnerabilities (12 per website) and the highest remediation rate (86%). Conversely and curiously, organizations driven by risk reduction to remediate have an average of 23 vulnerabilities per website and a remediation rate of 18%. The skeptical theory is that compliance-driven programs are simply incentivized to look only for the vulnerabilities they are legally required to look for, which is obviously less than the totality. To summarize: if you look for fewer vulnerabilities, you will find fewer. At the same time, compliance is a big corporate stick when it comes to remediating known issues and is likely what drives remediation rates upward. Risk reduction, right or wrong, often finds itself in an accepted-business-risk and risk-tolerance discussion, which ultimately drives remediation rates downward. However, risk reduction exhibits the best average time-to-fix at 115 days. The assumption is that if you are using a risk scale you are going after a smaller total pile of vulnerabilities and will therefore close them faster. With compliance, on the other hand, at an average time-to-fix of 158 days, organizations appear to believe they can afford to wait and fix vulnerabilities just before the auditor comes back around next year.
5 Heartbleed vulnerability
http://heartbleed.com/
6 Shellshock (software bug)
http://en.wikipedia.org/wiki/Shellshock_%28software_bug%29
§§ Statistically, the best way to lower the average number of
vulnerabilities, speed up time-to-fix, and increase remediation
rates is to feed vulnerability results back to development
through established bug tracking or mitigation channels.
Doing so makes application security front and center in
a development group’s daily work activity and creates an
effective process to solve problems. Organizations that have made this vulnerability-feed-to-development connection exhibit roughly 45% fewer vulnerabilities, fix issues nearly a month faster on average, and increase remediation rates by 13 points.
§§ Organizations performing automated static code analysis saw
a progressively improved average vulnerability time-to-fix as
the activity frequency increased. Organizations that do not employ static code analysis had an average time-to-fix of 157 days; for those performing it at each major software release it was 138 days, and for those performing it daily, 96 days. These results are most likely because static analysis takes place as code is being written, while it is fresh in the developer’s mind.
§§ Utilizing a top N list of most important vulnerabilities looks to
be a solid way to improve time-to-fix and remediation rates,
but interestingly doesn’t do very much to affect the average
number of vulnerabilities. Organizations using top N lists see a
two-month improvement in their time-to-fix vulnerabilities (from
300 to 243 days) and a seven-point increase in remediation
rates (from 39% to 46%).
§§ An activity that seems to have a dramatic positive effect on the
average number of vulnerabilities is ad hoc code reviews of high-risk applications. We found that organizations that never
do ad hoc code reviews see an average of 35 vulnerabilities
per website, while those who perform the activity with each
major release see only 10, which amounts to a 71% decrease!
There also seems to be a notable improvement in time-to-fix
and remediation rates, making this activity closest to a best
practice.
§§ The frequency with which security review results are shared with QA seems to have no strong correlation to any data point, which is interesting: common sense suggests it would behave similarly to the frequency of static analysis, since both form a small feedback loop. We would venture a guess that this is due to poor lines of communication between QA, development, and security teams, who are effectively speaking different languages.
In coordinating the research for this report, we have found
that there is good news. For the vast majority of website
vulnerabilities that are identified and exploited, we essentially
know everything there is to know about them. We know how to
prevent them, find them, and fix them. So you might ask: ‘why
are we still having problems with them?’ The answer is two-fold:
legacy and new code.
Legacy code. There are mountains of legacy code in existence,
even mission-critical code, which is riddled with vulnerabilities
waiting to be exploited. This software must be cleaned up and
that effort is going to take a while. There is no way around that,
but at least we know how. The rest is just going to take a lot of
hard work and dedication.
New code. We now have more new code going into production
than ever. Today’s new code must be more secure than
yesterday’s code. It will never be perfect, but with the right processes and measurement it can be made much more secure, and that can significantly reduce the likelihood of a breach. When all is said and done,
once an organization really decides to improve upon application
security, the answers are there – and many of those answers are
in these pages.
Vulnerability Likelihood
Application vulnerability likelihood has significantly changed in
the last few years. In 2012, an application was most likely to
have Information Leakage (with 58% likelihood), or Cross-site
Scripting (with 55% likelihood) vulnerabilities. However, in 2014, applications were most likely to have Insufficient Transport Layer
Protection (with 70% likelihood) or Information Leakage (with
56% likelihood).
The sharp rise in the likelihood of Insufficient Transport Layer Protection can be explained by the discovery of zero-day vulnerabilities such as Heartbleed and by the new tests added as a result.
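Vulnerability likelihood here means the share of assessed websites exhibiting at least one open vulnerability of a given class. A minimal sketch of that calculation, assuming findings are available as simple (site, class) pairs rather than the actual Sentinel data model:

```python
from collections import defaultdict

def vulnerability_likelihood(findings, total_sites):
    """Percentage of sites with at least one finding of each class.

    `findings` is an iterable of (site_id, vuln_class) pairs.
    """
    sites_per_class = defaultdict(set)
    for site_id, vuln_class in findings:
        sites_per_class[vuln_class].add(site_id)
    return {
        vuln_class: 100.0 * len(sites) / total_sites
        for vuln_class, sites in sites_per_class.items()
    }

findings = [
    (1, "Information Leakage"),
    (1, "Cross-site Scripting"),
    (2, "Information Leakage"),
    (3, "Insufficient Transport Layer Protection"),
]
print(vulnerability_likelihood(findings, total_sites=4))
# {'Information Leakage': 50.0, 'Cross-site Scripting': 25.0,
#  'Insufficient Transport Layer Protection': 25.0}
```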
Vulnerability Likelihood
Insufficient Transport Layer Protection: 70%
Information Leakage: 56%
Cross-site Scripting: 47%
Brute Force: 29%
Content Spoofing: 26%
Cross-site Request Forgery: 24%
URL Redirector Abuse: 16%
Predictable Resource Location: 15%
Session Fixation: 11%
Insufficient Authorization: 11%
Directory Indexing: 8%
Abuse of Functionality: 6%
SQL Injection: 6%
Insufficient Password Recovery: 6%
Fingerprinting: 5%
Likelihood of Content Spoofing, Cross-site Scripting and
Fingerprinting has sharply declined in recent years. Content
Spoofing was 33% likely in 2012, but only 26% in 2014.
Likelihood of Fingerprinting vulnerabilities has dropped from 23%
in 2012 to 5% in 2014. Cross-site Scripting has significantly
declined as well (from 53% in 2012 to 47% in 2014).
Insufficient Transport Layer Protection, Information Leakage
and Cross-Site Scripting are the most likely vulnerabilities in
applications.
§§ Likelihood of Insufficient Transport Layer Protection: 70%
§§ Likelihood of Information Leakage: 56%
§§ Likelihood of Cross-site Scripting: 47%
Likelihood of Insufficient Transport Layer
Protection has sharply gone up in recent years
(from 0% in 2010 to 70% likelihood in 2014).
Insufficient Transport Layer Protection and Information Leakage
are the two most likely vulnerabilities in Retail Trade, Health Care
/ Social Assistance, Information, and Finance/Insurance sites.
Various industries (Retail Trade, Health Care / Social Assistance,
Information, and Finance / Insurance) show similar patterns of
likelihood for commonly found vulnerability classes.
The pattern of vulnerability likelihood is broadly consistent across industries, as shown in the table below.
Vulnerability Likelihood by Industry
Insufficient Transport Layer Protection: Retail Trade 76%, Health Care / Social Assistance 73%, Information 75%, Finance / Insurance 65%
Information Leakage: Retail Trade 60%, Health Care / Social Assistance 67%, Information 64%, Finance / Insurance 53%
Cross-site Scripting: Retail Trade 46%, Health Care / Social Assistance 56%, Information 62%, Finance / Insurance 50%
Brute Force: Retail Trade 29%, Health Care / Social Assistance 27%, Information 42%, Finance / Insurance 28%
Content Spoofing: Retail Trade 24%, Health Care / Social Assistance 32%, Information 37%, Finance / Insurance 28%
Cross-site Request Forgery: Retail Trade 23%, Health Care / Social Assistance 34%, Information 24%, Finance / Insurance 25%
Window of Exposure
Window of exposure is defined as the number of days an
application has one or more serious vulnerabilities open during a
given time period. We categorize window of exposure as:
Always Vulnerable: A site falls in this category if it is vulnerable
on every single day of the year.
Frequently Vulnerable: A site is called frequently vulnerable if it
is vulnerable for 271-364 days a year.
Regularly Vulnerable: A regularly vulnerable site is vulnerable for
151-270 days a year.
Occasionally Vulnerable: An occasionally vulnerable application
is vulnerable for 31-150 days a year.
Rarely Vulnerable: A rarely vulnerable application is vulnerable for 30 days or less a year.
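These buckets are simple to apply once the number of vulnerable days per site is known. A minimal sketch of the categorization, using the thresholds defined above:

```python
def exposure_category(days_vulnerable, days_in_year=365):
    """Map days-with-an-open-serious-vulnerability to a report category."""
    if days_vulnerable >= days_in_year:
        return "Always Vulnerable"
    if days_vulnerable >= 271:
        return "Frequently Vulnerable"
    if days_vulnerable >= 151:
        return "Regularly Vulnerable"
    if days_vulnerable >= 31:
        return "Occasionally Vulnerable"
    return "Rarely Vulnerable"

for days in (365, 300, 200, 90, 10):
    print(days, "->", exposure_category(days))
```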
Our analysis shows that 55% of the Retail Trade sites, 50% of
Health Care / Social Assistance sites, and 35% of Finance /
Insurance sites are always vulnerable. Similarly, only 16% of the
Retail Trade sites, 18% of Health Care / Social Assistance sites,
and 25% of Finance / Insurance sites are rarely vulnerable.
Conversely, Educational Services is the best performing industry
with the highest percentage of rarely vulnerable sites (40%). Arts,
Entertainment, and Recreation is the next best industry with 39%
of sites in rarely vulnerable category.
Window of exposure is an organizational key performance
indicator that measures the number of days a website has at
least one serious vulnerability over a given period of time.
Window of Exposure by Industry
[Chart: percentage of sites in each window-of-exposure category (always vulnerable; frequently vulnerable, 271-364 days a year; regularly vulnerable, 151-270 days; occasionally vulnerable, 31-150 days; rarely vulnerable, 30 days or less) for Accommodations / Food Services, Arts / Entertainment / Recreation, Education Services, Finance / Insurance, Health Care / Social Assistance, Information, Manufacturing, Other Services (except Public Administration), Professional / Scientific / Technical Services, Public Administration, Transportation / Warehousing, Retail Trade, and Utilities.]
Survey Analysis
Overview
The analysis is based on 118 responses to a survey sent to security professionals to measure the maturity of application security programs at various organizations.
The responses obtained in the survey are correlated with the
data available in Sentinel to get deeper insights.
§§ Sentinel data was pulled for the 2014 timeframe.
§§ Data was pulled from sites that were assessed with WhiteHat’s
premium service covering all WASC vulnerability classes.
§§ Data included all vulnerability classes except Insufficient
Transport Layer Protection, Directory Indexing, URL Redirector
Abuse, Improper File System Permissions, and Fingerprinting.
Survey Responses
Total Responses: 118
§§ Information, and Finance / Insurance have the highest number
of responses.
§§ Other industries do not have enough responses to draw
meaningful industry level conclusions from the survey.
Summary of Survey Analysis
24% of the survey respondents have experienced a data or
system breach.
§§ In Finance / Insurance, 17% have experienced a data or
system breach
§§ In Information, 20% have experienced a data or system
breach.
56% of all respondents did not hold any part of the organization accountable in the case of a data or system breach. Listed below is how various parts of the organization are held responsible for a data or system breach:
§§ Board of Directors 8%
§§ Executive Management 27%
§§ Software Development 26%
§§ Security Department 29%
Risk Reduction is the most commonly cited reason (with 35% of
the respondents) for resolving website vulnerabilities. Only 14%
of the respondents cited Compliance as the primary reason for
resolving website vulnerabilities.
Static Analysis:
§§ 87% of the respondents perform static analysis. 32% perform
it with each major release and 13% perform it daily.
Penetration Testing:
§§ 92% of the respondents perform penetration testing. 21%
perform it annually, 26% perform it quarterly and 8% never
perform penetration testing.
Basic Adversarial Testing
Organizations that do not perform basic adversarial testing tend to have a higher number of open vulnerabilities than those that do perform it.
§§ Open vulnerabilities when adversarial testing is performed
on each major release: 12
§§ Open vulnerabilities when adversarial testing is performed
every quarter: 9
§§ Open vulnerabilities when adversarial testing is never
performed: 34
Organizations that do not perform basic adversarial testing have a lower remediation rate than those that do perform it.
§§ Remediation rate when adversarial testing is performed on
each major release: 19%
§§ Remediation rate when adversarial testing is performed every
quarter: 50%
§§ Remediation rate when adversarial testing is never
performed: 11%
79% of the respondents performed ad hoc code reviews on high-risk applications.
Organizations that do not perform ad hoc code reviews on high-risk applications have a higher number of open vulnerabilities than the overall average.
§§ Open vulnerabilities when ad hoc code review is never performed: 35
§§ Open vulnerabilities when ad hoc code review is performed in a planned manner: 6
§§ Open vulnerabilities when ad hoc code review is performed with each major release: 10
§§ Remediation rate when ad hoc code review is never performed: 18%
§§ Remediation rate when ad hoc code review is performed in a planned manner: 25%
§§ Remediation rate when ad hoc code review is performed with each major release: 29%
This is how integrating application security best practices into
the SDLC processes affected vulnerability count and
remediation rate:
§§ After the QA team began performing adversarial testing, average
number of open vulnerabilities declined by 64% (from 13 to 5)
and average remediation rate increased from 30% to 33%
§§ After organizations began using penetration testers, average
number of open vulnerabilities declined by 65% (from 31 to 11)
and average remediation rate increased from 22% to 31%
§§ After organizations began performing ad hoc code reviews,
average number of open vulnerabilities declined by 59% (from
32 to 13) and average remediation rate increased from 36%
to 38%
§§ After organizations began sharing security result reviews with
the QA Department, average number of open vulnerabilities
declined 21% (from 20 to 16) and average remediation rate
grew from 35% to 42%
§§ After the incident response plan was updated, average open
vulnerability count declined 60% (from 12 to 5) while average
remediation rate declined from 29% to 28%
§§ After organizations began performing architecture analysis,
average open vulnerability count declined 47% from 12 to 6
while average remediation rate declined from 32% to 31%
§§ After organizations began performing security focused design
reviews, average open vulnerabilities count declined 17% from
8 to 7 while average remediation rate went up from 33%
to 37%
§§ After organizations began empowering a group to take the lead
in performing architecture analysis, average number of open
vulnerabilities declined by 43% (from 9 to 5) while average
remediation rate declined from 40% to 36%
§§ After organizations began using a risk questionnaire to rank
applications, average number of vulnerabilities declined 35%
from 9 to 6, while average remediation rate declined from 39%
to 38%
§§ After organizations began feeding penetration testing results
back to development, average open vulnerabilities declined
by 45% (from 12 to 7) while average remediation rate went up
from 27% to 41%
Have any of your organization’s website(s) experienced a data or system breach as a result of an application layer vulnerability?
24% of the survey respondents have experienced a data or
system breach
§§ In Finance / Insurance 17% have experienced a data or system
breach
§§ In Information, 20% have experienced a data or system breach
If an organization experiences a website data or system breach, which part of the organization is held accountable, and what is its performance?
56% of all respondents did not hold any part of the organization accountable in the case of a data or system breach.
§§ Board of Directors 8%
§§ Executive Management 27%
§§ Software Development 26%
§§ Security Department 29%
If an organization experiences a website data or system breach, which part of the organization is held accountable, and what is its performance?
§§ Count of open vulnerabilities is lowest (at 8) and remediation
rate is highest at 40% when Board of Directors is held
responsible for breach.
§§ Remediation rate is lowest (at 19%) when software
development is held accountable for a system breach.
§§ Average number of open vulnerabilities is highest (at 19) when
security department is held accountable for a system breach.
Performance by accountable party:

Accountable party       Avg. open vulns   Avg. time open   Avg. time-to-fix   Remediation rate
Board of Directors      8                 461 days         159 days           40%
Executive Management    8                 410 days         152 days           31%
Software Development    12                400 days         145 days           19%
Security Department     19                342 days         145 days           29%
§§ Organizations with accountability tend to find and fix more
vulnerabilities than those that don’t have clear accountability.
§§ 24% remediation in organizations without accountability vs.
33% for those with accountability.
§§ 16 average open vulnerabilities in organizations with
accountability versus 13 in those without accountability.
Please rank your organization’s drivers for
resolving website vulnerabilities. 1 being the
lowest priority, 5 the highest.
§§ 14% of the respondents cite Compliance as the primary reason
for resolving website vulnerabilities
§§ 6% of the respondents cite Corporate Policy as the primary
reason for resolving website vulnerabilities
§§ 35% of the respondents cite Risk Reduction as the primary
reason for resolving website vulnerabilities
§§ 20% of the respondents cite Customer or Partner Demand as
the primary reason for resolving website vulnerabilities
§§ 24% of the respondents cite other reasons for resolving
website vulnerabilities
Please rank your organization’s drivers for
resolving vulnerabilities.
§§ Average number of open vulnerabilities is highest (at 23) when Risk Reduction is the primary reason for fixing vulnerabilities.
§§ Average remediation rate is highest at 86% when compliance
is the primary driver for fixing vulnerabilities.
Metrics by primary driver for resolving vulnerabilities:

Primary driver              Avg. vulnerabilities   Avg. time open   Avg. time-to-fix   Remediation rate
Compliance                  12                     352 days         158 days           86%
Corporate Policy            17                     294 days         140 days           0%
Risk Reduction              23                     326 days         115 days           18%
Customer or Partner Demand  18                     559 days         191 days           40%
Other                       8                      394 days         169 days           25%
How frequently do you perform automated
static analysis during the code review process?
Percent of respondents for various frequencies of automatic
static analysis:
§§ Daily: 13%
§§ With each major release: 32%
§§ Never: 13%
Number of open vulnerabilities for various frequencies of
automatic static analysis:
§§ Daily: 5
§§ With each major release: 28
§§ Never: 12
Average time open for various frequencies of automatic static
analysis:
§§ Daily: 400 days
§§ Each major release: 325 days
§§ Never: 423 days
Remediation rate for various frequencies of automatic static
analysis:
§§ Daily: 17%
§§ Each major release: 38%
§§ Never: 29%
Time to fix for various frequencies of automatic static analysis:
§§ Daily: 96 days
§§ Each major release: 138 days
§§ Never: 157 days
Frequency of Automated Static Analysis by Industry
[Chart: share of respondents performing automated static analysis daily, weekly, monthly, quarterly, with each release or major update, on a planned basis, never, or other, shown for Health Care / Social Assistance, Retail Trade, Information, Finance / Insurance, and all respondents.]
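Moving from per-release to daily static analysis is often just a scheduling problem. Below is a minimal sketch that runs the open-source Bandit analyzer over a source tree and prints a summary; the tool choice, paths, and nightly-cron wiring are illustrative assumptions, not a prescription:

```python
import json
import subprocess

def run_static_analysis(source_dir="src", report_path="bandit-report.json"):
    """Run Bandit recursively over a source tree and summarize findings.

    Schedule this from cron or a nightly CI job to move from
    per-release to daily analysis.
    """
    subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json", "-o", report_path],
        check=False,  # Bandit exits non-zero when it finds issues
    )
    with open(report_path) as fh:
        report = json.load(fh)
    for result in report.get("results", []):
        print(result["issue_severity"], result["filename"], result["issue_text"])

if __name__ == "__main__":
    run_static_analysis()
```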
How frequently does the QA team go
beyond functional testing to perform basic
adversarial tests (probing of simple edge cases
and boundary conditions; example: What
happens when you enter the wrong password
over and over?)
% of respondents for various frequencies of adversarial testing:
§§ Each major release: 32%
§§ Quarterly: 11%
§§ Never: 21%
Number of open vulnerabilities for various frequencies of
adversarial testing:
§§ Each major release: 12
§§ Quarterly: 9
§§ Never: 34
Average time open for various frequencies of adversarial testing:
§§ Each major release: 383 days
§§ Quarterly: 391 days
§§ Never: 295 days
Remediation rate for various frequencies of adversarial testing:
§§ Each major release: 19%
§§ Quarterly: 50%
§§ Never: 11%
Time-to-fix for various frequencies of adversarial testing:
§§ Each major release: 144 days
§§ Quarterly: 139 days
§§ Never: 153 days
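The wrong-password example in the question above is easy to automate. A sketch of one such adversarial test using the requests library; the staging URL, form field names, five-attempt threshold, and 429 lockout response are hypothetical assumptions to be adapted to the application under test:

```python
import requests

LOGIN_URL = "https://staging.example.com/login"  # hypothetical endpoint

def test_account_lockout(username="qa-test-user", attempts=10):
    """Repeatedly submit a wrong password and check the account locks out."""
    session = requests.Session()
    for attempt in range(1, attempts + 1):
        response = session.post(
            LOGIN_URL,
            data={"username": username, "password": f"wrong-{attempt}"},
            timeout=10,
        )
        # After the policy threshold (assumed here: 5 tries) the application
        # should stop returning a generic failure and start refusing attempts.
        if attempt > 5 and response.status_code != 429:
            raise AssertionError(
                f"attempt {attempt}: expected lockout (429), "
                f"got {response.status_code}"
            )

if __name__ == "__main__":
    test_account_lockout()
```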
How frequently do you use external penetration
testers to find problems?
% of respondents for various frequencies of penetration testing:
§§ 21% Annually
§§ 26% Quarterly
§§ 8% Never
Number of open vulnerabilities for various frequencies of
penetration testing:
§§ Annually: 10
§§ Quarterly: 32
§§ Never: 22
Average time open for various frequencies of penetration testing:
§§ Annually: 292 days
§§ Quarterly: 302 days
§§ Never: 431 days
Remediation rate for various frequencies of penetration testing:
§§ Annually: 50%
§§ Quarterly: 36%
Time-to-fix for various frequencies of penetration testing:
§§ Annually: 168 days
§§ Quarterly: 116 days
§§ Never: 149 days
How often does your organization feed defects identified through operations monitoring back to development to change developer behavior?
% of respondents for various frequencies of operation monitoring
feedback:
§§ 17% Daily
§§ 17% With each major release
§§ 9% Never
Number of open vulnerabilities for various frequencies of
operation monitoring feedback:
§§ Daily: 38
§§ With each major release: 19
§§ Never: 6
Average time open for various frequencies of operation
monitoring feedback:
§§ Daily: 332 days
§§ With each major release: 369 days
§§ Never: 273 days
Remediation rate for various frequencies of operation monitoring
feedback:
§§ Daily: 13%
§§ With each major release: 44%
§§ Never: 0%
Time-to-fix for various frequencies of operation monitoring
feedback:
§§ Daily: 99 days
§§ With each major release: 218 days
§§ Never: 121 days
How frequently does your organization perform
ad hoc code reviews of high risk applications in
an opportunistic fashion?
% of respondents for various frequencies of ad hoc code
reviews:
§§ 21% Never
§§ 15% Planned
§§ 15% With each major release
Number of open vulnerabilities for various frequencies of ad hoc
code reviews:
§§ Never: 35
§§ Planned: 6
§§ With each major release: 10
Frequency of Ad Hoc Code Review by Industry
[Chart: share of respondents performing ad hoc code reviews monthly, quarterly, weekly, with each release or major update, on a planned basis, never, or other, shown for Retail Trade, Information, Finance / Insurance, and all respondents.]
Average time open for various frequencies of ad hoc code
reviews:
§§ Never: 335 days
§§ Planned: 282 days
§§ With each major release: 293 days
Remediation rate for various frequencies of ad hoc code reviews:
§§ Never: 18%
§§ Planned: 25%
§§ With each major release: 29%
Time-to-fix for various frequencies of ad hoc code reviews:
§§ Never: 163 days
§§ Planned: 117 days
§§ With each major release: 133 days
Average Number of Vulnerabilities at Different Frequencies of Ad Hoc Code Review

Frequency                           Finance/Ins.   Information   Retail Trade   Health Care   All
All frequencies                     8              20            16             2             14
No response                         7              13            15             1             11
With each release or major update   10             12            -              5             10
Weekly                              -              3             -              -             3
Quarterly                           7              11            -              -             9
Planned                             8              5             -              -             6
Other                               7              22            17             -             11
Never                               6              59            -              -             35
Monthly                             16             50            -              -             33
Average Time Open (days) at Different Frequencies of Ad Hoc Code Review

Frequency                           Finance/Ins.   Information   Retail Trade   Health Care   All
All frequencies                     309            321           464            569           332
No response                         230            271           520            639           290
With each release or major update   292            248           -              429           293
Weekly                              -              523           -              -             523
Quarterly                           345            450           -              -             408
Planned                             268            296           -              -             282
Other                               523            304           408            -             467
Never                               342            330           -              -             335
Monthly                             271            372           -              -             321
Average Remediation Rate at Different Frequencies of Ad Hoc Code Review

Frequency                           Finance/Ins.   Information   Retail Trade   Health Care   All
All frequencies                     24%            30%           25%            0%            26%
No response                         15%            35%           0%             0%            24%
With each release or major update   33%            33%           -              0%            29%
Weekly                              -              25%           -              -             25%
Quarterly                           0%             67%           -              -             40%
Planned                             25%            25%           -              -             25%
Other                               40%            0%            50%            -             38%
Never                               40%            0%            -              -             18%
Monthly                             0%             50%           -              -             25%
Average Time-to-Fix (days) at Different Frequencies of Ad Hoc Code Review

Frequency                           Finance/Ins.   Information   Retail Trade   Health Care   All
All frequencies                     138            134           122            117           134
No response                         107            124           161            80            118
With each release or major update   170            77            -              192           133
Weekly                              -              144           -              -             144
Quarterly                           131            181           -              -             161
Planned                             116            119           -              -             117
Other                               224            76            83             -             171
Never                               145            179           -              -             163
Monthly                             103            166           -              -             135
How frequently does your organization
share results from security reviews with the
QA department?
% of respondents for various frequencies of security review
sharing:
§§ Monthly: 13%
§§ With each major release: 28%
§§ Never: 19%
Number of open vulnerabilities for various frequencies of security
review sharing:
§§ Monthly: 10
§§ With each major release: 26
§§ Never: 18
Average time open for various frequencies of security review
sharing:
§§ Monthly: 309 days
§§ With each major release: 436 days
§§ Never: 307 days
Remediation rate for various frequencies of security review
sharing:
§§ Monthly: 43%
§§ With each major release: 21%
§§ Never: 0%
Time-to-fix for various frequencies of security review sharing:
§§ Monthly: 116 days
§§ With each major release: 192 days
§§ Never: 122 days
When did your organization incorporate
automated static analysis into the code
review process?
After incorporating static analysis into the code review process:
§§ Average number of vulnerabilities slightly increased (from 15
to 18)
§§ Average time-to-fix declined (from 174 days to 150 days)
§§ Average time open increased (175 days to 197 days)
§§ Remediation rate declined (from 33% to 29%)
When did the QA team begin performing basic
adversarial testing?
After the QA team began performing basic adversarial testing:
§§ Average number of vulnerabilities declined (from 13 to 5)
§§ Average time-to-fix declined (from 97 days to 94 days)
§§ Average time open increased (295 days to 432 days)
§§ Remediation rate increased (from 30% to 33%)
When did your organization begin using
penetration testers?
After organizations began using penetration testers:
§§ Average number of vulnerabilities declined (from 31 to 11)
§§ Average time-to-fix decreased (from 203 days to 195 days)
§§ Average time open increased (from 198 days to 257 days)
§§ Remediation rate increased (from 22% to 31%)
When did your organization begin performing
ad hoc code reviews?
After organizations began performing ad hoc code reviews:
§§ Average number of vulnerabilities declined (from 32 to 13)
§§ Average time to fix declined (from 191 days to 174 days)
§§ Average time open increased (from 202 days to 282 days)
§§ Remediation rate increased (from 36% to 38%)
When did your organization begin
sharing results from security reviews with
the QA department?
After organizations began sharing security review results with the
QA department:
§§ Average number of vulnerabilities declined (from 20 to 16)
§§ Average time-to-fix declined (from 179 days to 175 days)
§§ Average time open increased (from 214 days to 246 days)
§§ Remediation rate increased (from 35% to 42%)
When was your incident response plan updated
to include application security?
After the incident response plan was updated to include application security:
§§ Average number of vulnerabilities declined (from 12 to 5)
§§ Average time-to-fix increased (from 216 days to 221 days)
§§ Average time open increased (from 188 days to 220 days)
§§ Remediation rate decreased (from 29% to 28%)
When did you begin performing architecture
analysis focused on security features
(authentication, access control, use of
cryptography, etc.)?
After organizations began performing architecture analysis:
§§ Average number of vulnerabilities declined (from 12 to 6)
§§ Average time-to-fix decreased (from 285 days to 280 days)
§§ Average time open increased (from 182 days to 245 days)
§§ Remediation rate decreased (from 32% to 31%)
When did your organization begin using
operational monitoring to improve or change
developer behavior?
After organizations began using operational monitoring:
§§ Average number of vulnerabilities declined (from 4 to 3)
§§ Average time-to-fix increased (from 135 days to 151 days)
§§ Average time open increased (from 195 days to 304 days)
§§ Remediation rate decreased (from 37% to 34%)
When did your organization begin performing
security focused design reviews of web
applications?
After organizations began performing security focused design
reviews:
§§ Average number of vulnerabilities declined (from 8 to 7)
§§ Average time-to-fix declined (from 230 days to 202 days)
§§ Average time open increased (from 226 days to 284 days)
§§ Remediation rate increased (from 33% to 37%)
When did your organization form or empower
a group to take a lead in performing
architecture analysis?
After organizations formed or empowered a group to take the lead in architecture analysis:
§§ Average number of vulnerabilities declined (from 9 to 5)
§§ Average time-to-fix declined (from 184 days to 165 days)
§§ Average time open increased (from 237 days to 348 days)
§§ Remediation rate declined (from 40% to 36%)
When did your organization begin using a risk
questionnaire to rank applications?
After organizations began using a risk questionnaire:
§§ Average number of vulnerabilities declined (from 9 to 6)
§§ Average time-to-fix decreased (from 160 days to 155 days)
§§ Average time open increased (from 163 days to 244 days)
§§ Remediation rate declined (from 39% to 38%)
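A risk questionnaire can be as lightweight as a weighted checklist whose total score drives assessment priority. A minimal sketch with invented questions and weights, purely for illustration:

```python
# Hypothetical questionnaire: (question, weight) pairs.
QUESTIONS = [
    ("Handles payment card data", 5),
    ("Internet-facing", 4),
    ("Authenticates end users", 3),
    ("Built on a legacy codebase", 2),
]

def risk_score(answers):
    """Sum the weights of every question answered 'yes'."""
    return sum(weight for question, weight in QUESTIONS if answers.get(question))

apps = {
    "storefront": {"Handles payment card data": True, "Internet-facing": True},
    "intranet-wiki": {"Built on a legacy codebase": True},
}
# Rank applications so the riskiest are assessed first.
for name, answers in sorted(apps.items(), key=lambda kv: -risk_score(kv[1])):
    print(name, risk_score(answers))
```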
When did your organization begin
maintaining a company specific top N list
of the most important kinds of bugs that
need to be eliminated?
After organizations began maintaining a company specific
top N list of the most important kinds of bugs that need to be
eliminated:
§§ Average number of vulnerabilities declined (from 8 to 7)
§§ Average time-to-fix declined (from 300 days to 243 days)
§§ Average time open increased (from 183 days to 239 days)
§§ Remediation rate increased (from 39% to 46%)
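A company-specific top N list can start as nothing more than a frequency count over recent findings. A sketch, assuming findings are available as a flat list of vulnerability class names:

```python
from collections import Counter

def top_n_bug_list(finding_classes, n=5):
    """Return the n most frequently found vulnerability classes."""
    return Counter(finding_classes).most_common(n)

findings = [
    "Cross-site Scripting", "Information Leakage", "Cross-site Scripting",
    "SQL Injection", "Information Leakage", "Cross-site Scripting",
]
for vuln_class, count in top_n_bug_list(findings, n=3):
    print(f"{vuln_class}: {count}")
```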
When did your organization begin feeding
penetration-testing results back to
development through established defect
management or mitigation channels/systems?
After organizations began feeding penetration-testing results
back to development:
§§ Average number of vulnerabilities declined (from 12 to 7)
§§ Average time-to-fix declined (from 207 days to 197 days)
§§ Average time open increased (from 209 days to 270 days)
§§ Remediation rate increased (from 27% to 41%)
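Feeding results back through established defect management channels need not be elaborate. A sketch that files one ticket per finding against a hypothetical REST tracker API; the URL, token handling, and payload shape are illustrative assumptions to be adapted to Jira, Bugzilla, or whatever system is in place:

```python
import requests

TRACKER_URL = "https://tracker.example.com/api/issues"  # hypothetical API
API_TOKEN = "..."  # supplied via your secrets mechanism

def file_finding(finding):
    """Create one defect per penetration-testing finding."""
    payload = {
        "title": f"[Security] {finding['class']} in {finding['url']}",
        "description": finding["description"],
        "labels": ["security", finding["severity"]],
    }
    response = requests.post(
        TRACKER_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

file_finding({
    "class": "SQL Injection",
    "url": "/search",
    "severity": "high",
    "description": "User-supplied 'q' parameter reaches the query unescaped.",
})
```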
Have any of your organization’s website(s) experienced a data or system breach as a result of an application layer vulnerability?
§§ Those who have experienced a data or system breach have a higher average number of open vulnerabilities than those who haven’t experienced a breach (18 vs. 17)
§§ Those who have experienced a breach have a higher remediation rate than those who haven’t experienced a breach (34% vs. 27%)
§§ Those who have experienced a breach have a lower average time open than those who haven’t experienced a breach (361 days vs. 394 days)
§§ Those who have experienced a breach have a lower average time to fix than those who haven’t experienced a breach (130 days vs. 155 days)
Average Number of
Open Vulnerabilities
While the window of exposure is high for websites, the average number of open vulnerabilities is relatively small, ranging from 2 (for Public Administration sites) to 11 (for Transportation and Warehousing sites). Finance / Insurance, Health Care / Social Assistance, Retail Trade, and Information have fairly low average numbers of open vulnerabilities, at 3, 4, 4, and 6 respectively.
Average Number of Open Vulnerabilities
Public Administration: 2
Finance / Insurance: 3
Utilities: 3
Professional / Scientific / Technical Services: 4
Retail Trade: 4
Health Care / Social Assistance: 4
Other Services (except Public Administration): 5
Manufacturing: 5
Information: 6
Accommodations / Food Services: 6
Educational Services: 7
Arts / Entertainment / Recreation: 7
Transportation / Warehousing: 11
Average Days Open
On average, vulnerabilities stay open for a long time in all
industries. The smallest average time open is observed in
Transportation and Warehousing industry (at 299 days or ~1
year) and the longest average time open is observed in Public
Administration industry (at 1033 days, or ~3 years). Listed below
are the average time open data for some of the key industries:
Health Care / Social Assistance: 572 days (~1.6 years)
Information: 654 days (~1.8 years)
Finance / Insurance: 739 days (~2 years)
Retail Trade: 947 days (~2.6 years)

Average Number of Days Vulnerability Open
Transportation / Warehousing: 299
Arts / Entertainment / Recreation: 361
Accommodations / Food Services: 502
Manufacturing: 556
Health Care / Social Assistance: 572
Information: 654
Education Services: 665
Utilities: 734
Finance / Insurance: 739
Other Services (except Public Administration): 937
Retail Trade: 947
Professional / Scientific / Technical Services: 1,027
Public Administration: 1,033
Retail Trade ranked third from the bottom, behind only Professional, Scientific, and Technical Services (1,027 average days open) and Public Administration (1,033 days open).
Remediation Rates
Average remediation rate for industries varies significantly from
16% (for Professional, Scientific, and Technical Services sites)
to 35% (for Arts, Entertainment, and Recreation sites). Sites in
Health Care / Social Assistance, Retail Trade and Information
industries have comparatively low average remediation rates at
20%, 21% and 24% respectively. Finance / Insurance sites have
an average remediation rate of 27%.
Average Remediation Rate
Professional / Scientific / Technical Services: 16%
Public Administration: 16%
Other Services (except Public Administration): 18%
Health Care / Social Assistance: 20%
Transportation / Warehousing: 20%
Retail Trade: 21%
Manufacturing: 22%
Information: 24%
Education Services: 26%
Finance / Insurance: 27%
Accommodations / Food Services: 27%
Utilities: 30%
Arts / Entertainment / Recreation: 35%
Data Set
Methodology
This analysis is primarily based on data obtained from Sentinel,
which is WhiteHat’s flagship Application Security Testing
software. As part of this analysis, we also surveyed customers to
identify and measure various Software Development Life Cycle
(SDLC) activities that they perform on a regular basis. Wherever
applicable, survey responses were combined with Sentinel data
to gain deeper insights into SDLC practices of organizations and
their impact on application security.
The time frame of this analysis is 2014.
Data was aggregated and classified in meaningful categories
for analysis. We looked at remediation rate, time to fix, average
open vulnerability count, vulnerability likelihood and window
of exposure. We also assessed the impact of infrastructure
vulnerabilities on security posture of applications by comparing
metrics (all vulnerabilities vs. infrastructure vulnerabilities such
as Insufficient Transport Layer Protection, Directory Indexing,
URL Redirector Abuse, Improper File System permissions, and
Fingerprinting).
To assess the impact of SDLC best practices on security
posture, we compared application security metrics six months
before and six months after the organizations started engaging in
those activities.
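The before-and-after comparison described above reduces to windowing each organization's findings around its activity adoption date. A minimal sketch of that logic under the stated six-month windows; the record fields are assumptions:

```python
from datetime import date, timedelta

def before_after_remediation(findings, adoption_date, window_days=182):
    """Compare remediation rate six months before vs. after an activity starts.

    `findings` is an iterable of dicts with 'opened' (date) and 'fixed' (bool).
    """
    def rate(rows):
        rows = list(rows)
        return 100.0 * sum(r["fixed"] for r in rows) / len(rows) if rows else None

    window = timedelta(days=window_days)
    before = [f for f in findings
              if adoption_date - window <= f["opened"] < adoption_date]
    after = [f for f in findings
             if adoption_date <= f["opened"] < adoption_date + window]
    return rate(before), rate(after)

findings = [
    {"opened": date(2014, 3, 1), "fixed": False},
    {"opened": date(2014, 9, 1), "fixed": True},
]
print(before_after_remediation(findings, adoption_date=date(2014, 7, 1)))
# (0.0, 100.0)
```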
Conclusion / Recommendations
In this year’s report, we strive to make one thing perfectly clear:
we at WhiteHat Security, and the industry at large, have become
incredibly adept at finding vulnerabilities. And while everyone
should continue to look and increase their skills at finding
vulnerabilities, it has become crucial for everyone to focus on
helping make the vulnerability remediation process faster and
easier. Remediation, more than anything else, is the hardest
problem in application security. It should go without saying that
vulnerabilities found but not fixed do not make things more secure. Making the web progressively more secure is the mission
that we as a community are collectively working towards every day.
And together, we can do exactly that!
This is also a good opportunity to look back on everything we have
learned in our quest to figure out what works and what does not
in application security, both technically and procedurally. What is it
that really makes some websites, and their underlying code, secure
– or at least more secure than others? That’s the question we have
been seeking to answer since we started this research. Answering
that question first required us to know approximately how many or
what kinds of vulnerabilities exist in the average website and how
long they remain exposed as a way to measure performance.
We accomplished this and in the process we learned a great
deal: we learned that vulnerabilities are plentiful, they stay open
for weeks or months, and typically only half get fixed. And while
a great many websites are severely lacking in security, many
websites are actually quite secure. So, what’s the difference
between them? Is it the programming language that matters when
it comes to security? Is it the industry the organizations are in? Is it
the size of the organization? Is it the process they use to develop
their software? Is it something else?
At the present time we can say that none of these aforementioned items matter much, and where they do, it is only slightly and under very specific conditions. On the whole, what matters more than anything else ends up being a non-technical answer: visibility and accountability. The websites and organizations that are more secure than others have a solid understanding of the performance
of their software development lifecycle and have developed a
security metrics program that best reflects how to maintain security
across that lifecycle. Additionally, these same organizations
have a culture of accountability – both in terms of when and if a
breach occurs – and they can measure performance. Without an
executive-level mandate, it’s going to be very challenging, if not
impossible, to adequately protect an organization’s systems. The
incentives simply won’t be in alignment.
And here is the point where we get to very specific guidance
as a take away from this report. Like we’ve recommended
many times in previous reports, the first order of business is
to determine what websites an organization owns and then to
prioritize them using as much metadata about those websites as possible.
Grouping them by department or business unit is even better.
Secondly, through dynamic or static vulnerability assessment,
begin creating an application security metrics program;
something that tracks the volume and type of vulnerabilities
that exist, how long reported issues take to get fixed, and the
percentage that are actually getting fixed. As the saying goes,
anything measured tends to improve. With visibility through
data, the answers to the problem become much clearer.
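The three numbers recommended here are straightforward to compute once findings carry open and close dates. A minimal sketch, assuming a simple record shape rather than any particular product's export format:

```python
from datetime import date

def security_metrics(findings):
    """Volume, average time-to-fix, and remediation rate from raw findings.

    Each finding: {'class': str, 'opened': date, 'closed': date or None}.
    """
    open_count = sum(1 for f in findings if f["closed"] is None)
    fixed = [f for f in findings if f["closed"] is not None]
    remediation_rate = 100.0 * len(fixed) / len(findings) if findings else 0.0
    avg_time_to_fix = (
        sum((f["closed"] - f["opened"]).days for f in fixed) / len(fixed)
        if fixed else None
    )
    return {
        "open_vulnerabilities": open_count,
        "remediation_rate_pct": remediation_rate,
        "avg_time_to_fix_days": avg_time_to_fix,
    }

findings = [
    {"class": "XSS", "opened": date(2014, 1, 10), "closed": date(2014, 3, 1)},
    {"class": "SQLi", "opened": date(2014, 2, 1), "closed": None},
]
print(security_metrics(findings))
# {'open_vulnerabilities': 1, 'remediation_rate_pct': 50.0,
#  'avg_time_to_fix_days': 50.0}
```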
Once these steps have been achieved, the organization can
then set goals for which metrics need to improve, by how much
and when. With these goals in hand, it becomes much easier
and more efficient to begin implementing or improving the
SDLC process with very specific activities designed to positively
affect whatever metrics are lagging. For example, is the reason SQL Injection vulnerabilities are not getting fixed quickly or comprehensively enough that the developers are not well educated on that type of vulnerability? If so, the organization might decide to host a workshop that focuses just on that class of attack. Or perhaps the reason so many Cross-site Scripting vulnerabilities enter the system with each release is the lack of a helpful centralized security framework. In which case, create one, advertise its existence internally, and mandate its usage.
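A 'helpful centralized security framework' for Cross-site Scripting can begin with something as small as a single mandated output-encoding helper. A minimal sketch using Python's standard library; real web frameworks typically provide the equivalent via template auto-escaping:

```python
import html

def render_comment(author, body):
    """Build an HTML fragment with all user-supplied values escaped.

    Centralizing escaping in helpers like this one, and mandating their
    use, removes the per-developer decision that lets XSS slip in.
    """
    return (
        f"<div class='comment'>"
        f"<b>{html.escape(author)}</b>: {html.escape(body)}"
        f"</div>"
    )

print(render_comment("mallory", "<script>alert(1)</script>"))
# <div class='comment'><b>mallory</b>: &lt;script&gt;alert(1)&lt;/script&gt;</div>
```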
Tactical approaches like the above that are straightforward and customizable are ideal, because very little in application security is one-size-fits-all. Every organization is different: the software being built is different; the tolerance for risk is different; the goal in the marketplace is different. These variables cannot be accounted for in a one-size-fits-all model. So, what security
teams can do is support the SDLC process by bringing visibility
and expertise to the table and let the business guide what’s
acceptable from an outcome perspective. Steadily adding,
improving, and measuring the effect of very specific security
controls is the best way to ensure better and more secure code.