Ten Steps to Improve Enterprise Security Strategies


The full webcast that accompanies this slide deck is available here:


An organization’s investment in security should not stop at simply meeting compliance standards. A risk-based approach to information security will not only help you achieve continuous compliance, but also protect your organization’s information security assets.

This webcast and the slide deck below discuss ten steps to improve risk and security strategies and provide a simple framework for executing a risk-based security management program.

In addition, Techtonica‘s Daniel Blander (@djbphaedrus) and I share stories about how organizations are successfully relating compliance and security initiatives to risk management and aligning their efforts with business objectives.

We also discuss how enterprises are finding the need to be more proactive in security. Essentially, they want to move beyond simply focusing on alerting toward providing useful information that actually enables strategic decisions.

Another dynamic at play is that compliance is beginning to drive conversations around risk management, largely as a result of auditors' focus on top-down, risk-based compliance.

In addition, executive management needs to allocate budgets more effectively based on objective measures. Since many of these executives are financial professionals, they are accustomed to balancing risk against reward.

Finally, many of the higher-profile information security events and breaches are more visible than ever to non-technical executives. How many executives in your company read the Wall Street Journal or another digital news source, then send around links to stories that relate to information security?

This surge of interest provides a prime opportunity for us to engage with them around the importance of what we do every day.

  • Huge growth opportunity in our market, with the problems we help our customers solve, and a long, sustained history of success. Headquartered in Portland, OR. Open source legacy since the 1980s (1988); first code released 1994. Founded in 1997. Tripwire Enterprise released 2004. Tripwire Log Center acquired/released 2009. Acquired by Thoma Bravo in 2011. 302 employees worldwide (as of 2/1/12); 6,092 customers worldwide (as of 1/1/12); 46% of the Fortune 500; 27 of 30 US Federal Government agencies; 96 countries. World-class customer support: 96% customer satisfaction; 86% of customers would recommend us to a colleague/friend. 2011 winner for Best Enterprise Security Solution (SC Magazine). 7 US patents granted; 15 US patents in process. 8 consecutive years of revenue growth, bookings growth, and profitability.
  • Let's begin by talking about some of the changes facing us in the world of information security today. As I speak with customers around the world, a number of trends are emerging. For example, many information security professionals, executives included, now have to make appeals to other parts of the business to get funding. This often includes speaking with non-technical audiences. For example, I was working with a group of hospitals recently, and they have to appeal to hospital boards for project funding, IT investments, and staffing needs. In these cases, it can be very difficult for the non-technical executives to really understand why the security executives are asking for more money. This presents its own challenges, which I'll discuss in more detail later. I've also spoken with a number of enterprises who are trying to be more proactive in security. Essentially, they want to move beyond simply focusing on alerting toward providing useful information that actually enables strategic decisions: in other words, a decision center. Another dynamic I've observed is that compliance is really beginning to drive conversations around risk management. I believe this is a result of auditors' focus on top-down, risk-based compliance, which brings risk more into the picture as discussions occur around information security. Another aspect of this is the need by executive management to allocate budgets more effectively based on objective measures. Since many of these executives are financial professionals, they are accustomed to balancing risk against reward. Finally, many of the higher-profile information security events and breaches are more visible than ever to non-technical executives. This is due to something I call the iPad effect.
How many of you have executives in your company who read the Wall Street Journal or some other newspaper on the iPad, then send around lots of links to stories that relate to information security? The good news is, this provides a prime opportunity for us to engage with them around the importance of what we do every day. When you put all of this together, I hope you'll understand some of the reasons we undertook this study of the state of risk management, and specifically risk-based security management, and hopefully you'll pick up a few pointers that will help you get your organization to embrace risk as a key part of security.
  • PCI DSS Req. 12.1.2: Establish, publish, maintain, and disseminate a security policy that includes an annual process that identifies threats and vulnerabilities, and results in a formal risk assessment. PCI DSS Req. 6.2: Establish a process to identify and assign a risk ranking to newly discovered security vulnerabilities. Notes: Risk rankings should be based on industry best practices. For example, criteria for ranking "High" risk vulnerabilities may include a CVSS base score of 4.0 or above, and/or a vendor-supplied patch classified by the vendor as "critical," and/or a vulnerability affecting a critical system component. The ranking of vulnerabilities as defined in 6.2.a is considered a best practice until June 30, 2012, after which it becomes a requirement. IT-Grundschutz: http://rm-inv.enisa.europa.eu/methods_tools/m_it_grundschutz.html
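The PCI risk-ranking criteria above can be sketched as a small function. This is an illustrative sketch only: the "High" criteria follow the PCI note quoted above, but the Medium/Low split is an invented assumption, since PCI DSS leaves those thresholds to the organization.

```python
def risk_ranking(cvss_base, vendor_critical_patch=False, critical_component=False):
    """Assign a risk ranking to a newly discovered vulnerability (per PCI DSS 6.2)."""
    # Per the PCI note: any one of these criteria may mark a vulnerability "High":
    # CVSS base score >= 4.0, a vendor-supplied "critical" patch, or a
    # vulnerability affecting a critical system component.
    if cvss_base >= 4.0 or vendor_critical_patch or critical_component:
        return "High"
    # The Medium/Low boundary below is a hypothetical example; organizations
    # define their own criteria for sub-4.0 scores.
    return "Medium" if cvss_base >= 2.0 else "Low"
```

In practice the inputs would come from your vulnerability scanner's feed rather than being passed by hand.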
  • Risk-based security management is a method of choosing where to focus scarce resources and money through a systematic approach. It involves identifying and analyzing threats, identifying the probability of the threats occurring, and the impact if the threats are realized. It is important to note that the key element in this equation is "probability." Many confuse this with "possibility." Many things are possible, including a meteor striking the earth, an earthquake in New York City, and a squirrel eating through a power line. Each of these is a threat with some possibility of happening. But the important question that risk-based analysis raises is: which of these is more probable, or to say it another way, more likely to happen? This does not mean that we dismiss the things we find "possible," but rather that we should first focus on the things that are highly probable. Too often I have found security professionals distracted by the latest buzzword, hype, or research discovery. And as many of the breach reports have borne out, too many of the breaches happening are the result of things that are far more simplistic and probable. A few definitions I want to point out: threats are anything that can have a negative impact on things that we value; vulnerabilities are situations that can be exploited by a threat; impact is the result when a threat applies itself to a vulnerability. Risk-based security management has a foundation in a wider Enterprise Risk Management system that is focused on enabling decision makers: providing them with the information they need to run the company and plan new initiatives in a way that helps maximize their probability of success, or at least minimize any negative impacts.
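The probability-times-impact framing above can be sketched in a few lines. This is a minimal illustration, not part of the original deck; the threat names, probabilities, and impact figures are all invented for the example.

```python
# Rank threats by expected loss (annual probability x impact), illustrating
# the "probable vs. merely possible" point above. All figures are hypothetical.
threats = [
    {"name": "meteor strike",           "annual_prob": 1e-9, "impact": 50_000_000},
    {"name": "phishing compromise",     "annual_prob": 0.30, "impact": 250_000},
    {"name": "lost unencrypted laptop", "annual_prob": 0.10, "impact": 100_000},
]

for t in threats:
    t["expected_loss"] = t["annual_prob"] * t["impact"]

# Focus first on what is probable, not merely possible: the meteor has the
# largest impact but lands at the bottom of the list.
for t in sorted(threats, key=lambda t: t["expected_loss"], reverse=True):
    print(f'{t["name"]:26s} expected loss ${t["expected_loss"]:>12,.2f}')
```

Even this toy ranking makes the argument concrete: a high-impact but vanishingly improbable threat should not outrank a mundane, frequent one.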
  • We believe there are some key elements that make up the framework of risk-based security management. I've outlined them in the ten steps in the paper, but there are some inherent benefits of this approach, ones that may not seem apparent if you move mechanically through the steps. First, and most obviously, decisions can be based on the clear identification, analysis, and prioritization of risks. Risk analysis should be based on observable facts, and whenever possible on measurable data. It is not always possible to gather exact measurable data, nor is it always possible to make risk analysis a quantitative process. But using facts based on observation and relevant data is important in that it roots the decisions in reality. Attempting to make decisions without a point of reference based in reality is a game of pin the tail on the donkey. We end up following long-held beliefs that are based on old or anecdotal observations, and in some cases hearsay. When you root your decisions in observable facts and data, you will find that many of these a priori beliefs will be challenged. [Jay Jacobs's study on his honeypot data.] There is another side effect of risk analysis: decisions become more explicit and open to examination. By having facts and data to discuss, the discussions can be opened to discourse and offer a step toward objectifying the decision-making process, versus it being a belief system, which is often more difficult to open to discourse. One of my longest-running research projects has been around single- and double-loop learning and how opening decision making to examination and testing creates situations where decisions are more thoroughly tested and refined. In addition, the participants in the process believe in the decision and are also far more likely to participate in improving and correcting it when challenges occur.
(Chris Argyris & Donald Schön) In the end, basing your risk management program on a risk analysis process allows you and your management to make decisions based on factual data, and can focus your resources, efforts, and worries in areas that produce the greatest benefits. "The essence of risk management lies in maximizing the areas where we have some control over the outcome, while minimizing the areas where we have absolutely no control over the outcomes and the linkage between effect and cause is hidden from us." (Peter L. Bernstein, Against the Gods)
  • Step 1: The first step is identifying what matters to the organization and the decision makers. We often think of these as assets, but take care, as the way we often use this term in our industry constrains us. Assets are items of value, but keep in mind that this includes items that are intangible. An executive such as a CEO or CFO is more likely focused on impact to the revenue stream and the elements that directly affect it than on any particular hard asset, computer, or single set of data. Elements such as customer satisfaction and retention, and protection of intellectual property (inventions, proprietary knowledge), are more likely on his radar. Understanding his priorities, business objectives, and motivations is critical to framing your analysis. I usually start with the stakeholders for the area being examined. In one case, when I found out that the security department had a limited view of the business, we spent two weeks interviewing every VP and Director in the company about what their department did, how it played into the priorities of the company, and what things kept them up at night. We did this under the thin guise of a "Business Impact Assessment" (BIA; for those of you not familiar with a BIA, in short, it is a method used in Business Continuity Planning to identify key business processes and their importance in relation to the overall business goals). We identified the overall priorities and goals of the business and the key assets (revenue through customer sales), and were able to create a simple but beautiful diagram of the business on a whiteboard that we kept in my office. We had max, min, and median revenue for the company. It also included measures such as key processes for each group, things that they worried about, the group's impact on revenue, and the amount of time before a group's operational outage impacted revenue.
This allowed us to think about the impact that various systems and data had on their operations, and how loss or damage to them could affect those operations. [The same will apply to any analysis, no matter how large or small; the key is understanding the values, motivations, and objectives of the stakeholders.] Now we could base our risk analysis in the context of the business's own goals and objectives. If you do not know what these are, you are not going to have the context to provide meaningful insight into the factors affecting their decisions. Step 2: Collecting data on what matters is probably the most involved yet exciting part of the process. It is also one of the most important steps, and it is often bypassed or shortcut. Collecting data is a process of gathering observable data. It can include data about revenue, or the impact to the revenue stream if something fails, as I discussed in the first step. It can include data from studies based on empirical data, such as data breach reports, or direct observations collected from honeypots. The objective here is to collect empirical data: not opinions, but data that verifies or invalidates previously held opinions. I discussed an example of collecting data for identifying assets and impact in the previous step. For the step of identifying frequency, likelihood, and probability, I have used a myriad of tools. Recently I used the available data breach reports from several organizations to estimate probabilities of certain types of attacks. I have also used data from honeypots located in a company's network to determine the frequency with which certain types of attacks are seen on the network. In one case I used the measures from a DLP system to show the frequency of the tool's ability to identify incidents, and the actual frequency of incidents. Every tool in your environment can be used to collect data that can feed a risk analysis. However, not all the data you want will be easy to gather, and some will resist quantifying.
This often stymies those who expect perfection and exactness, and likewise is the ammunition that some use to attempt to dismiss risk analysis. Let's be clear: risk analysis is not a game of precise prediction. It is a practice of identifying both the probability of things happening and the impact when they occur. The data we collect allows us to make these two analyses more accurately by removing the improbable and being more accurate about the impact. Do not confuse that with precision. Precision is best achieved in hindsight. Accuracy is knowing where you should be able to aim. The data and measures you are going to collect here will be a guide to where you can expect to aim your efforts. Care should be taken not to try to measure the seemingly unmeasurable, but to identify what can be measured to reduce your uncertainty. For example, attempting to measure the number of threat actors can seem problematic, but identifying simple things that you can measure about such an environment can prove highly effective. Imagine categorizing threat actors by their capabilities (sponsored skilled, unsponsored skilled, semi-skilled, script kiddies, unskilled), or even by where they reside (outside your environment, inside your environment). These are all measures that can be used in analysis and are very valid.
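The capability and location buckets just described can be expressed as simple ordinal scales and tallied. This sketch is not from the original deck; the category names follow the examples in the text, and the observations are hypothetical.

```python
from collections import Counter

# Ordinal capability scale, following the examples above (most to least capable).
CAPABILITIES = ["sponsored skilled", "unsponsored skilled", "semi-skilled",
                "script kiddie", "unskilled"]
LOCATIONS = ["outside environment", "inside environment"]

# Hypothetical observations, e.g. from honeypot or log analysis.
observations = [
    ("script kiddie", "outside environment"),
    ("script kiddie", "outside environment"),
    ("semi-skilled",  "outside environment"),
    ("unskilled",     "inside environment"),
]

by_capability = Counter(cap for cap, _ in observations)
by_location = Counter(loc for _, loc in observations)

# Even without an exact count of threat actors, these tallies are valid
# measures that reduce uncertainty in the analysis.
print(by_capability.most_common())
print(by_location.most_common())
```

The point is the one made in the text: you do not need to count attackers precisely to produce measures that are usable in a risk analysis.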
  • Step #3: This is where the various elements that affect risk are brought together and exercised. You need to perform a risk analysis and make it meaningful for your audience. There are multiple "risk analysis" methodologies available. Take care, however, which one you choose. There is no perfect methodology, but there are some key criteria that we feel reflect the usefulness of a methodology: it should require that you define the objectives of the assessment (which should meet the needs of the decision makers); it uses observable and tangible data; it focuses on accuracy, not precision (ranges, not point estimates); it focuses on identifying probability and impact; and it uses measures that are normalized and equally applicable at any scale (which means they can be reused in another analysis). As an example, an ordinal scale of "High, Medium, Low" or "5, 4, 3, 2, 1" has little meaning if it is not tied to a specific tangible meaning that reflects an agreed-upon set of values that the stakeholders share and find useful in decision making. Again, the methodology can be either quantitative or qualitative. Both have their strengths: a qualitative approach can be very useful when quantitative measures are either difficult to collect or unavailable at the time of analysis; likewise, quantitative measures can be highly beneficial when data can be collected and greater precision is needed. Let me give an example of a risk assessment that is meaningful, based on observable data (where available), identifies probabilities, uses reusable measures, and, more importantly, is a bridge between qualitative and quantitative. With a client I recently examined a vulnerability we had identified in their environment. Our need was to understand the urgency of this vulnerability versus our day-to-day vulnerability remediation efforts. We had little data on it except measures provided by our vulnerability scanning vendor.
We focused on this vulnerability because it had become a point of contention, with management and engineers at odds over the risk associated with it. So I suggested we do a quick risk analysis and give them a response in an hour. In doing this analysis, I asked five questions of the security team:
1. What [assets] are related to the affected systems? (For example: email, payment card data, PII, intellectual property.) The answer was email. I asked what the email was used for, so that I understood the real value, the processes behind it. With a quick question to a nearby VP, we understood that it could be used for M&A activity, executive communication, and several compliance-related activities. Good information on the asset, and some insight into impact. This was qualitative information; we could not define it by an exact revenue number or lost business opportunities in the 30 minutes we wanted to spend on this analysis, but it gave us a narrative about the assets that an executive could relate to quickly.
2. What population of people would have access to directly exploit this vulnerability? We thought about who could exploit this vulnerability, who would want to exploit it, and what our controls could do to limit the number of actors. We knew that this information would be useful to competitors, acquisition targets, and the like. However, based on a quick examination of the firewalls and logs from various systems, we could identify that the population that could reach this system was limited to internal employees and administrators via the network. To be fair, we also considered a rarefied subset of external entities (hackers) who had already penetrated our network. This is obviously a smaller subset of threat actors than for a system exposed to the Internet. We used this information when we next considered motivation and capability.
3. What is the level of difficulty in exploiting this vulnerability? We looked at the scale given by CVSS for the exploitability of the vulnerability. It gave us a measure indicating that it was fairly difficult to exploit. We looked for tools that contained working examples of exploits. We looked in publicly available tools and inside Metasploit. We drew a blank on all counts. A quick reading of the MITRE listing for the vulnerability showed that, as of that time, it had not really emerged beyond the theoretical. The level of difficulty was considered quite high.
4. What is the frequency with which this type of exploit has occurred elsewhere, and what have we seen in our organization? We considered using a mix of research here, but ultimately realized that, given the lack of a publicly working exploit, it was unlikely we would find anything. We relied on looking for publicly available examples of this exploit being used in the wild. We found none. The frequency was considered quite low, if it occurred at all.
5. What controls are in place that would mitigate the ability of someone to exploit this vulnerability? We had tools such as a firewall blocking access to the system, which reduced the population that could access it, and we had a few configuration items that might help, but not much else would mitigate the vulnerability.
I took all the data that was collected and turned the risk into a few sentences that read something like this: "A vulnerability has been identified that can be used to expose internal email communications. The value of this email is related to the value of keeping confidential any sensitive communication between company personnel (which could relate to competitive advantage, knowledge of confidential financials, M&A activities, HR communications, and corporate planning). There are no publicly known examples of this particular compromise occurring, the controls in place limit the threat to primarily internal personnel, and a high level of competency is required which likely exceeds that of nearly everyone at the company." If you are looking for numbers, there aren't any. It's a qualitative analysis.
It uses a normalized set of scales. Threat population and their capabilities: general public, Internet-based attackers, internal personnel via network, internal administrators, and so on. Assets: identification of known business processes which can be associated with an executive's valuation of them. Level of difficulty: theoretical; high sophistication and coordination; skilled attacker; exists in sophisticated tools; common in several tools; any computer user can do it. These are all measurable. They all lack a hard number, but they carry significant and useful meaning in decision making. We made data available in a way that centered on the stakeholders' objectives and their mindset. We stated the problem at hand (the objective of the analysis), the assets that were considered, key threats to those assets, and the identified likelihood of those threats being realized. What we also did was make as explicit as possible the assumptions we made in the analysis. We made it clear that the exploit, while theoretical, could still be part of an underground exploit tool. We also made it explicit that we assumed email was used for sensitive communications, and that our perception was that the internal network had not already been compromised or overrun with outsiders (hackers and third parties alike). We made it open to their questions, and we were open to inquiry and adjustments based on their perception of these values and assessments. It allowed us and the executives to explore alternative views and other possible scenarios or explanations of our findings. The result is that we could be open about our analysis and reach a mutually agreed-upon statement that we all felt comfortable with, because we all (including management) were intimate with it. If the risk assessment is based on relevant data, then the discourse should be collaborative, highly interactive, and very rewarding.
The objective of this discussion is not to win with your analysis, but to develop an even more refined analysis, one that management has participated in. One where the assumptions and analysis are challenged and subject to testing, alternative ideas are considered and tested, and communication is open. An important part of this is knowing that the assessment and analysis should not make decisions, but rather present information that can affect decisions. The decision on next steps is up to the stakeholders. Priorities vary, as does tolerance for risk. So will the assumptions and attributions made when looking at your analysis.
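The normalized qualitative scales in the email example can be captured in a small reusable structure, so every assessment speaks the same language. This sketch is not from the original deck; the scale values and field names follow the examples in the text, and the record shown mirrors the email vulnerability narrative.

```python
from dataclasses import dataclass, field

# Normalized ordinal scales, following the examples above.
# Threat population, from largest to smallest:
THREAT_POPULATION = ["general public", "internet-based attackers",
                     "internal personnel via network", "internal administrators"]
# Exploit difficulty, from hardest to easiest:
DIFFICULTY = ["theoretical", "high sophistication and coordination",
              "skilled attacker", "exists in sophisticated tools",
              "common in several tools", "any computer user can do it"]

@dataclass
class QualitativeRisk:
    asset: str                      # business process, in stakeholder terms
    population: str                 # who can reach the vulnerability
    difficulty: str                 # how hard the exploit is
    assumptions: list = field(default_factory=list)  # explicit, open to challenge

    def summary(self) -> str:
        return (f"Asset: {self.asset}. Reachable by: {self.population}. "
                f"Exploit difficulty: {self.difficulty}. "
                f"Assumptions: {'; '.join(self.assumptions)}.")

email_risk = QualitativeRisk(
    asset="internal email (M&A, executive, compliance communications)",
    population="internal personnel via network",
    difficulty="theoretical",
    assumptions=["email carries sensitive communications",
                 "internal network not already compromised"],
)
print(email_risk.summary())
```

Because the scales are shared constants rather than ad hoc labels, two analyses done months apart remain directly comparable, which is exactly the reusability criterion the text calls for.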
  • Step #5: We often jump straight to solutions, and what we end up missing is identifying the objectives of the solution. We have been conditioned by vendors to believe that tools will solve our security issues and that they have the answer. The reality is that every organization has different environments, requirements, and priorities. Each organization's risk assessment will be different, and so will the risks that need to be mitigated. Jumping to a solution without identifying objectives misses the all-important combination of people, processes, and *then* technology that is necessary to create an effective solution. A favorite example of mine is several companies' rush to implement DLP (Data Loss Prevention) technology. I have had clients rush to deploy DLP and implement immediate "blocking" functionality that stops inappropriately exfiltrated data. Many of their justifications for this technology were that they believed there was a high likelihood that data was being exfiltrated. What was amusing was that many did not have observable data to justify it. So I stepped in with the idea that we needed to validate the risk before running to an end solution. The perceived risk: data exfiltration. Control objective: identify and minimize the exfiltration of sensitive data (sensitive data was well defined, but I will not belabor the details). As you can see, we might first say, "Ah, perfect for DLP!" You've listened to too many vendors. Most DLP products that I have worked with only examine well-known protocols: HTTP, FTP, email, and instant messaging clients. In my situation I asked the tough question: what scenarios are you most concerned about where data will be exfiltrated? Most clients start with "hackers sending data out of the company," jump to "malicious insiders," and then get to "employees who do the wrong thing."
I then ask them how hackers would perform this activity. A few good security people know this answer, and it is typically not through the protocols examined by DLP. Then I asked about the next group (a scenario likely involving a mix of DLP-scanned and non-DLP-scanned protocols). My question now became: "Based on your biggest risks, how effective will this tool be at even *detecting* data exfiltration from that type of scenario?" The problem here came down to two mistakes: a risk analysis that did not make the threats (scenarios) explicit, and the client losing track of the objectives and immediately attaching to a solution. Ironically enough, in one client's case we found that the network DLP detected a grand total of 7 incidents per month, 6 of which were external parties sending us their sensitive information, and on average one per month was an internal employee sending their own personal sensitive information for personal purposes. Risk averted was... well, you can figure that out. Identifying the objectives allows us to establish the aim or purpose of the controls we want to put in place: what needs to be achieved. This is important because it is the tie between the control and the risk. A control objective will identify the risk being addressed, and will identify ways to minimize an element of that risk, whether by reducing the threat landscape, the frequency, or the vulnerabilities. Whenever you examine your objectives, you also need to include the asset owners and those who will be affected by the controls, and be open to exploration. This will broaden the range of mitigation strategies, and build collaboration and buy-in to the solution.
The rigor of open discourse, inquiry, making supporting data explicit, testing inferences, encouraging alternatives and competing views, and making assumptions and attributions explicit builds collaboration and refinement that unilateral action stifles. Do not ignore the objectives or the risk they are designed to mitigate. Do not assume there is one perfect control. Be open to unique and unusual ideas. The design process should be open, inclusive of everyone affected, and open to challenge. Be prepared to have your ideas and the things you assume to be "facts" challenged frequently, and allow it to happen. Let your ideas be tested. Fire makes good steel better. Testing makes good ideas better. Most importantly, if inserted back into the risk analysis, does the control reduce the risk by the expected amount?
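That closing question, whether a control actually reduces the risk by the expected amount, can be sketched as a before/after check against the risk analysis. This is an invented illustration in the spirit of the DLP example; the scenario names, frequencies, and coverage fractions are all hypothetical.

```python
# Hypothetical exfiltration scenarios: estimated annual event frequency, and
# the fraction of each scenario a network DLP tool could even detect.
scenarios = {
    "hacker over covert channels":    {"annual_freq": 12, "dlp_coverage": 0.05},
    "malicious insider, mixed paths": {"annual_freq": 6,  "dlp_coverage": 0.40},
    "well-meaning employee, email":   {"annual_freq": 24, "dlp_coverage": 0.90},
}

def residual_frequency(scenarios):
    """Events the control is expected to miss, summed across scenarios."""
    return sum(s["annual_freq"] * (1 - s["dlp_coverage"])
               for s in scenarios.values())

before = sum(s["annual_freq"] for s in scenarios.values())
after = residual_frequency(scenarios)
print(f"expected events: {before}/yr before, {after:.1f}/yr residual")
# If the residual barely moves for your biggest risks (here, the covert
# channels a DLP cannot see), the control does not meet the objective,
# no matter what the vendor says.
```

Feeding the control back into the analysis this way makes the mismatch explicit: the scenario the client feared most is precisely the one the tool covers least.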
  • Step #9 -- Monitor and Measure. Now it is important to monitor the control you have put in place. The goal is to validate that the control is satisfying the intended objectives. Measure the effectiveness of the control in relation to the original risks it is designed to mitigate. The measures must focus on clearly identifying changes in risks. I have frequently merged the work in step 2 with the monitoring and measuring I do here in step 9. While setting up monitoring might seem a good idea for identifying the holes in the immature areas, it has also served well in the mature areas where controls are already in place. The DLP example I provided earlier is a great example: after we implemented rudimentary network DLP, we were able to collect accurate data on the frequency of what the control (tool) could detect and would act on. In the case I listed, the frequency was so low that the value of the tool was minimal. It did not tell us whether data was being exfiltrated, so we still had questions about the level of risk, but in terms of the control we had very clear measures of its effectiveness. I have never assumed that the data collected was absolute, but rather that it would likely reflect relative changes in things like frequency. At one client we collected data from DLP, web application firewalls, firewalls, anti-virus, and our log system to identify current frequency levels and any changes. The trends and fluctuations were enlightening, showing what our systems were experiencing. We watched as various improvements in software code led to slow trends away from certain application attacks. Other measures can also help you identify changes in your vulnerability. Changes in technical vulnerabilities are easy to measure and trend because there is a plethora of tools available to measure them.
I have used these measures to demonstrate reduced risk as the likelihood of exploit fell, due to the lower probability that threat actors had tools to exploit systems that were only vulnerable to recently discovered vulnerabilities. (Suffice it to say, most compromises are achieved through old exploits, so reducing them reduces the largest number of possible break-ins.) These measures can indicate changing levels in the technical vulnerability of your environment to a threat. So can measuring in-place controls such as configuration standards and change. Changes in these measures can begin to indicate possible vulnerability to threats to availability. Examining and measuring as many as possible of the elements used in the risk analysis can create an understanding of the effectiveness of the control at meeting the objectives and mitigating the intended risks, as well as validating the accuracy of the risk analysis. These measures can also help to create an understanding of shifts in the environment and how they affect risk. Step #10 -- Operate a Feedback Loop (Adjust & Repeat). Risk-based security management is cyclical and ongoing. The monitoring and measuring of the controls will provide indicators of how effectively the control is being operated, and whether it is reducing the vulnerabilities, reducing frequency, or affecting the threat landscape. This knowledge allows for the re-examination of the risks, the control objectives, the controls, their implementation, and their operation. It allows refinement, adjustment, and re-examination. The data that is collected should be used to create a feedback loop to examine the risk analysis, using the same model, to determine whether the new data affects the resulting risk. The examination should consider multiple possibilities: Are there changes in the environment that can also affect the metrics? Are there changes in the threats over time?
Does the nature of the threat actors change, or do the threats adapt to controls put in place to thwart them? Is the control being operated as intended, and are the measures acting as indicators of the control's design and operation? In the DLP example I gave earlier, we used the findings to adjust our plans going forward on the use of DLP. We also used it as a baseline for future data exfiltration examination, realizing that a better approach was to identify examples of exfiltration before throwing a tool at a purely perceived threat. The alternative of scanning every protocol and every transmission was so unwieldy (with the tools of the time) that management became a bit more circumspect in their approach. The goal is to use the measures and observations to continuously adjust perceptions and approach. If the data collected indicates the effectiveness of the control in mitigating risk, then it is valuable. The data can also affect the risk analysis, the control objectives, the control design, and the operation of the controls.
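The monitor-and-adjust loop above can be sketched as a simple trend check over periodic control measurements. The counts below are hypothetical, for illustration only; in practice the series would come from your DLP, WAF, or log system.

```python
# Monthly counts of incidents a control detected (hypothetical data).
monthly_detections = [42, 39, 35, 30, 24, 19]

def trend(series):
    """Average month-over-month change; negative means declining frequency."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

slope = trend(monthly_detections)
# Feed the observation back into the risk analysis rather than acting on it
# blindly: a decline may mean the risk is genuinely falling, or that the
# control is losing visibility as threats adapt.
if slope < 0:
    print(f"detections declining by {abs(slope):.1f}/month; "
          f"re-examine the risk analysis")
```

As the text stresses, the measures are never absolute; they are relative indicators whose meaning is only settled by rerunning the same risk model with the new data.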
  • Now, let's talk about the study itself. This is a broad-based study with responses from over 2,000 individuals spanning 4 different countries. I mentioned that I work for Tripwire, but I want to stress that we commissioned an independent research organization, in this case the Ponemon Institute, to perform this objective study on our behalf. In other words, we didn't want to lead the witness; we wanted an accurate depiction of the state of risk-based security management in today's world. This is the first of what we hope will be an annual benchmark of the global state of risk-based security management. Not only do we want to learn about the condition of risk management, we want to derive some prescriptive guidance from these findings. Then, as we resurvey on the same topics in the future, we can determine whether things are getting better, worse, or staying the same.