Security Analytics Alert
May 23, 2011

Monitoring Tools and Logs Make All the Difference

It's no longer a matter of "if" you get hacked, but when. In this special retrospective of recent news coverage, Dark Reading takes a look at ways to measure your security posture and the challenges that lie ahead with the emerging threat landscape.

Contents
  • Log Management Spurs Data Collection Debate
  • Verizon Data Breach Report: Bad Guys Target Low-Hanging Fruit
  • Tech Insight: Updating Your Security Toolbox
  • Searching for Security's Yardstick
  • RSA Breach a Lesson in Detection and Mitigation
  • An Advanced Persistent Threat Reality Check
May 3, 2011

Log Management Spurs Data Collection Debate
By Ericka Chickowski

As log management and security information and event management (SIEM) experts pore over the latest results from the annual SANS survey on log management, debate lingers over whether organizations really have mastered the art of useful data collection, or whether they need to adjust their log collection behaviors to better enable more analysis down the road.

At first blush, the consensus from the SANS report seems to be that most organizations have mastered log data collection, so now it is time to worry about such things as log data search, categorization and correlation.

"We've got the collection down, and we've got the securing of the logs and the chain of custody and those things that make the compliance auditors happy, but actually turning this information into something that is meaningful and actionable is the challenge," says Michael Maloof, CTO at TriGeo Network Security.

However, when data comes in such an avalanche that the tools at hand still cannot give organizations a consistent way to sift through it, how much collection is too much? Some might argue that the better a job organizations do with collection without improving their ability to categorize data and search through it, the more likely they are to have lots of meaningless information drown out the important data. This point brings up a long-raging debate about how much information organizations really should be collecting. Many experts believe that organizations need to temper and focus their collection efforts for a long while before they can catch up with analysis of all data sets.

"First of all, ask yourself, can your event collection be more focused?" says Scott Crawford, a research director with Enterprise Management Associates. "Do you necessarily have to pick up data from everywhere, or are there key points where you really do need insight or where insight would be more valuable, rather than collecting all of it?"

According to Andrew Hay, senior security analyst with The 451 Group, deciding which data to collect is a balancing act.

"There are two schools of thought. One is that some organizations say, 'I'm going to log absolutely everything and anything,' and then that becomes a management nightmare. Logging everything for any sort of real-time analytics or security operations is going to be very difficult," Hay says. "You really need to understand what those logs are before you log them. So the other camp says, 'Only log what you need.' But the challenge is, how many organizations really understand what they need?"

It is that question that makes Dr. Anton Chuvakin of Security Warrior Consulting lean toward amassing as much log data as possible at first, and then worrying more about how that data is reviewed.

"If you're in doubt, just collect it," he says. "The filter you apply is what you actually review and what you take action on. I would prefer to err on the side of too much data all of the time. Essentially you want to collect more data, but review less of it. That's the magic trick."

And, Chuvakin says, the only way to review more effectively is to practice.

"I would say if you can get daily, maybe weekly, log reviews in a consistent manner, then you can know better what to do with the data. You know when to scream and when to relax," he says. "If you have a repeatable, consistent process for log review, then you will detect your intrusions and you'd save more time and eventually understand where you could automate in correlations and with real-time tracking. Log review processes help to figure out what's normal, figure out what's not, and take action. To me that is more important than how to tune correlation rules; you learn that later."

Regardless of how many data feeds your organization depends on, the sheer volume of logs can be put to good use in and of itself, Crawford suggests.

"There are ways to take a different look at log data that might be indicative of an issue. Rather than looking at every single event and correlating individual events for possibility of high-risk activities, [look for] changes in log volume," he says. "These are things I would consider 'second-order' indicators. Sometimes an attack might itself create a volume of log data, so you see spikes and changes in the average amount of data. Conversely, if log data really dried up from a given source, it would suggest someone is either covering their tracks, has interfered with a service, or created some other disruption we should be aware of."

© 2011 InformationWeek, Reproduction Prohibited
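Crawford's "second-order" indicator lends itself to a very small amount of code. The sketch below is a minimal illustration, not any vendor's implementation; the per-interval counts and the three-sigma threshold are assumptions chosen for the example. It flags time intervals whose event volume deviates sharply from a log source's average, which catches both the spikes an attack can generate and the sudden silences of a tampered source.

```python
from statistics import mean, stdev

def volume_anomalies(counts, threshold=3.0):
    """Flag intervals whose log volume deviates sharply from the average.

    counts: per-interval event counts (e.g., hourly) from one log source.
    Returns the indices of intervals more than `threshold` standard
    deviations from the mean. Spikes may indicate an attack in progress;
    drops toward zero may indicate a silenced or tampered source.
    """
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat volume: nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]
```

Run per source rather than over the aggregate feed: a quiet appliance that stops logging entirely disappears inside a busy firewall's totals, but stands out immediately against its own history.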
April 19, 2011

Verizon Data Breach Report: Bad Guys Target Low-Hanging Fruit
By Tim Wilson

Cybercriminals are making a leap from the big score to easy money, according to Verizon Business' annual report on data breaches, which was published recently.

According to Verizon's much-awaited 2011 Data Breach Investigations Report, the number of compromised records involved in data breaches investigated by Verizon and the U.S. Secret Service dropped from 144 million in 2009 to only 4 million in 2010, representing the lowest volume of data loss since the report's launch in 2008. But this year's report covers approximately 760 data breaches, the largest caseload to date, according to the researchers. So while the number of breaches continues to go up, the number of records affected is going down.

"The seeming contradiction between the low data loss and the high number of breaches likely stems from a significant decline in large-scale breaches, caused by a change in tactics by cybercriminals," the report says. "They are engaging in small, opportunistic attacks, rather than large-scale, difficult attacks, and are using relatively unsophisticated methods to successfully penetrate organizations."

"I think what we're seeing is that there's a big change in the type of data that criminals are going after," says Dave Ostertag, global investigations manager at Verizon Business. "There's a glut of personal data out there now, and there really isn't a great market for it. The value of intellectual property, on the other hand, is much higher. Criminals are finding that they can make as much money from stealing a smaller number of highly sensitive records as they can from stealing a big database of customer information."

The report also found that outsiders are responsible for 92% of breaches, a significant increase from the 2010 findings. Although the percentage of insider attacks decreased significantly over the previous year (16% vs. 49%), this is largely due to the huge increase in smaller external attacks, Verizon says. The total number of insider attacks actually remained relatively constant.

Hacking (50%) and malware (49%) were the most prominent types of attack, with many of those attacks involving weak or stolen credentials and passwords, the report says. For the first time, physical attacks, such as compromising ATMs, appeared as one of the three most common ways to steal information, constituting 29% of all cases investigated.

Large-scale breaches dropped dramatically last year, while attacks involving smaller numbers of records increased, Verizon says. "Small to medium-sized businesses represent prime attack targets for many hackers, who favor highly automated, repeatable attacks against these more vulnerable targets," the report states.

"The guys responsible for a big breach are more likely to get caught than somebody who does a lot of little breaches," Ostertag says. "The criminals are learning that they don't need to do a large intrusion to make a steady business. They just follow supply and demand."

Greater reliance on automated attacks means that there are more attempted intrusions than ever, but the level of sophistication has dropped, says Steve Dauber, vice president of marketing at RedSeal Systems, a maker of tools for measuring enterprise security risk and posture. "If you look at the [Verizon report], you see that most attacks were not targeted at a specific company, but were designed to find the enterprises that were most vulnerable," Dauber says. In fact, "97% of the breaches could have been avoided by using simple controls," he adds.

"What this says to me is that we're seeing more and more automated attacks, but most enterprises are responding with human defenses that can't keep up," Dauber says. "With so many automated attacks, companies are going to have to start looking harder at more automated defenses."

Malware was a factor in about half of the 2010 caseload and was responsible for almost 80% of lost data, according to the report. The most common kinds of malware found were those that send data to an external entity, open backdoors, or log keystrokes.

Ineffective, weak or stolen credentials continue to wreak havoc on enterprise security, according to Verizon. "Failure to change default credentials remains an issue, particularly in the financial services, retail and hospitality industries," the report states.
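The default-credential problem the report calls out is also one of the easiest findings to check for on your own equipment. The sketch below is a hypothetical audit loop, not any product's API: the credential list is a tiny illustrative sample, and the caller supplies a try_login probe (for example, an HTTP Basic or SSH attempt against systems you are authorized to test).

```python
# A tiny, illustrative sample of factory defaults; real audits use
# per-vendor lists.
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("admin", ""),
]

def audit_defaults(hosts, try_login):
    """Return (host, user) pairs that still accept a factory default.

    try_login(host, user, password) -> bool is supplied by the caller;
    it performs the actual probe against equipment you own.
    """
    findings = []
    for host in hosts:
        for user, pw in DEFAULT_CREDS:
            if try_login(host, user, pw):
                findings.append((host, user))
                break  # one hit per host is enough to report
    return findings
```

Keeping the probe as a callback keeps the loop protocol-agnostic: the same audit works for ATMs reachable over HTTP, point-of-sale terminals over SSH, or anything else with a login.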
Ostertag offered advice for enterprises that want to avoid the types of breaches that Verizon sees.

"Protect your data; only store data as long as you need it," he advises. "Enable your logs and look at them. In 46% of our cases, the breach is discovered not by the victim, but by a third party. If you're not sure what to look for, ask your security companies about it."

Enterprises should also work hard to enforce the policies they already have in place, Ostertag advises. Companies should be aware of what their employees are using at home, and how personal systems are interacting with corporate systems.

"There's a very high correlation between employees who frequently violate security policy and actual breaches and compromises," Ostertag says. "Make sure your employees are following the protocol, and that they are only getting access to the resources they need to do their jobs."

April 15, 2011

Tech Insight: Updating Your Security Toolbox
By Adam Ely

Every now and then, security departments should take a look at their "toolboxes" and ask whether they have all of the right tools to deal with the current range of threats. What open-source tools are available to help combat new exploits, analyze defenses or automate our jobs so we can work less and slack off more?

As threats change, new technologies are released and tools are updated, we occasionally must replace our old favorites with the new hotness. After digging through our applications folder and speaking to consultants and security teams, we've compiled a list of some trusty tools that you should think about keeping on hand. And here's a bonus: These are all open-source products. No big corporate budgets required.

In no particular order, let's look at some tools that we use regularly and can't live without. We'll start with a few oldies that we still love:
Burp and Paros proxies. Burp and Paros are client-side proxies used to intercept, modify, replay, and craft HTTP requests. They are very similar, so most people use whichever one they like best. I like Paros; when performing a Web application assessment, I use it to intercept and modify HTTP requests for a variety of reasons, from understanding what the application is doing to cookie manipulation. I even use Paros occasionally when I need to debug and test Web applications I'm developing.

Firebug and Tamper Data. Both Firebug and Tamper Data are Firefox plug-ins designed to help Web developers debug their code in the browser. Many security experts use these to understand Web applications, quickly examine code, and follow JavaScript logic in Ajax calls. Both are valuable tools for Web application assessments.

Metasploit. The one, the only, and a favorite of penetration teams. Metasploit is about as simple as it gets when trying to exploit a system and obtain pure ownage. In the good ol' days, we had to obtain, compile, and pray an exploit worked. Now Metasploit takes much of the work out of exploitation.

W3af. This Web application attack and audit framework has been called the Metasploit of Web application security. Its goal is simple: to make it easy to find and exploit Web application defects. The project is still much younger than many other tools, but it shows promise and is sponsored by the owners of Metasploit, Rapid7.

Skipfish. Skipfish is a Web application scanner developed by Google, offered as an open-source tool, that overcomes some problems common to other scanners. It works much like other scanners, crawling a Web application and testing for common vulnerabilities. Skipfish claims high performance, ease of use and well-designed security checks.

Selenium. Selenium is a suite of tools used to automate Web application testing. While Selenium wasn't developed for security teams, some security organizations use it to automate testing of common Web application security problems in place of commercial testing suites.

EtherApe. EtherApe is a graphical network monitoring tool useful for inspecting network traffic and seeing what is coming and going on a host.

BackTrack. Technically, BackTrack is a collection of tools rather than a single product, but we couldn't leave it out of this list. It's a great place to start when building a toolkit and features some of the most common tools ready to work out of the box.

Nessus. While no longer officially an open-source product, Nessus is still the de facto free vulnerability scanning tool. Many network penetration tests start by using Nessus to sweep across infrastructure and identify services, hosts, and vulnerabilities.

There are more (Ophcrack, Kismet/KisMAC and John the Ripper come to mind), but this small set of open-source tools is a great start for security departments that are just starting out or looking to update their arsenals. If you haven't taken a look at these tools yet, check them out; they might be just the ones you need for the next new threat.

March 30, 2011

Searching for Security's Yardstick
By Tim Wilson

There's an old saying in IT: You can't manage what you can't measure. If that's true, however, security managers must be in a world of hurt.

Across this usually contentious security industry, there is violent agreement about two points: Security departments need better ways to prove that their organizations are safe, and there are no clear-cut numbers that definitively prove that point.

"So you're in the management meeting, and the sales guy gives specific numbers about orders and gross revenue," says Steve Dauber, vice president of marketing at RedSeal, which makes software designed to monitor security posture. "The networking guy gives numbers about uptime and throughput and response time. Then it comes around to the security guy, and he says, 'Well, we didn't get hacked today.'"

The basic problem, experts say, is that it's tough to measure a negative. If security's primary goal is to prevent outsiders from getting in, and insider data from getting out, what numbers are there to measure its success? The only clear metric is itself a negative: How many times has a compromise been discovered?

"The measure of success in security is that nothing bad happened," says Mike Rothman, an analyst for Securosis, a security consulting firm. "Your best day is going to be that zero bad things occurred. There's never going to be a measurement that shows that good things are happening."

If security is about prevention of leaks and attacks, then what metrics should security departments show their bosses to prove that they are doing their jobs well?

"I think you have to start with things you can control," says Scott Crawford, an analyst at Enterprise Management Associates, a consulting firm that focuses on systems and network management. "If you can't change the controls, then metrics won't do you any good."

Setting a security policy, and the means to monitor it, is a good place to start, Crawford says. "If you set a policy, and there is a growing number of systems or users operating outside the policy, then that's something you can act on, either through education or through greater controls," he observes.

But security professionals should be wary of "dashboards" and artificial measures that don't have meaning for the specific business their enterprises are in, says Gary Hinson, CEO of security consulting firm iSecT in New Zealand.

"Some companies begin with a long list of 'security things that can be measured' and then try to shoehorn them into some sort of metrics system or dashboard. That, to me, is the wrong way to go about things," Hinson says. "You don't design an aircraft cockpit's information systems with a list of things that can be readily measured on the aircraft. You start by asking what the pilot needs to know (altitude, azimuth/heading, etc.) and then prioritizing those things, organizing them into related groupings and finally filling the dashboard.
"Then you get lots and lots of feedback from pilots about what is missing, superfluous, misleading, wrongly positioned, too big/too small, too annoying/too discreet, etc.," Hinson continues. "In other words, the metrics design process is very interactive, involving the system designers, instrumentation specialists, engineers and pilots all working together to define, design and refine the metrics system."
But no matter how customized your measurement system, every company needs some basic metrics to start off with, notes Steven Piliero, who heads up the Benchmarks Division of the Center for Internet Security (CIS). The CIS Consensus Information Security Metrics benchmark is perhaps the closest thing the industry has to a set of standards for security metrics today.

"There are three kinds of metrics: those that are broad enough and understood well enough that they can be used across industries; those that are industry-sector specific; and those that are organization-specific," Piliero says. "We're helping to define that first category: the metrics that many industries can use."

The CIS Consensus defines some basic metrics that organizations can measure frequently, as companies do with certain financial numbers, or as hospitals measure post-surgical infection rates, Piliero says. "They're a starting point for building out your metrics: some unambiguous standards for measuring specific security functions."

"It is possible to get some level of agreement on high-level metrics," Rothman agrees. "CIS Consensus is a great resource to kick-start the metrics effort."

The CIS Consensus offers standardized methods for tracking measurable activities, such as the frequency of incidents and the time and cost to mitigate them; the scanning of vulnerabilities and the time and cost to repair them; and the frequency of, and time required for, patch management.

"You can measure things like the number of times you investigated potential indicators of anomalous activity," says EMA's Crawford. "You can track the number of cases of investigation and the number of cases that have had to be escalated to mitigation. You can measure the percent of unplanned IT work related to that escalation and the resulting security spend."

However, experts warn against measuring aspects of security that may not be meaningful to the business, or worse, may cause the security department to focus its efforts on the wrong priorities.

"Tracking vulnerabilities, days to patch, [antivirus] performance: these might be useful at an operational level, but measuring these in order to show security effectiveness is a load of crap," Rothman says.

And while tracking incident response or mitigation time might be useful in benchmarking the performance of the security department for upper management, these metrics still don't provide enough meat to serve as a gauge for the organization's security posture, experts advise.

"To measure security posture means arriving at compound and composite metrics, something like the cost-of-sales numbers that many companies track," says CIS' Piliero. "I'm not aware of any standard metrics that can measure that today."

There is an emerging class of tools for security posture management (SPOM), such as those made by RedSeal, currently on the market. Such products harvest firewall configuration data and other information to show the potential for access to critical business data, a measure of both vulnerabilities and risk.

"Companies use us to do a risk analysis on a specific vulnerability: what's the potential impact if it's exploited?" Dauber explains. "They can use this data to help prioritize security actions, to figure out which issues they should handle first."

Rothman says the SPOM concept has merit for measuring security posture, but the market hasn't taken off. "The big problem is the cost," he says. "Executives have to see that it's worth that much to be able to judge security posture, and that's only going to happen in industries where that sort of data is critical to the business."

Still, there is clearly a need for tools that can not only provide simple metrics for reporting to upper management, but also provide real insight into the company's state of security, Rothman observes. Core Security's new Core Insight tool, essentially a penetration-testing appliance, is one such emerging product, and nCircle's Suite 360 Intelligence Hub offers a way to benchmark one company's security against other, similar companies, he notes.

"One promising way to get some security metrics is to benchmark one organization's state and processes relative to others," Rothman says. "The problem with that is, how do you attribute the data back without giving away too much about its source? Sharing between security companies is still the main constraint on this."

Organizations such as the CIS and the new Open Security Intelligence forum are attempting to provide a basis for the definition and sharing of security data and metrics, but there is still a lot of work to be done, experts say. Part of the problem is that there are so many different functions and players in the security metrics game.
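The measurable activities Crawford and the CIS Consensus describe (incident frequency, time to mitigate, escalation counts) reduce to simple arithmetic once incidents are recorded consistently. Here is a minimal sketch; the record format is an assumption for illustration, not a CIS-defined schema.

```python
from datetime import datetime

# Illustrative incident records; field names are assumed, not standard.
incidents = [
    {"detected": "2011-03-01T09:00", "mitigated": "2011-03-01T13:00", "escalated": True},
    {"detected": "2011-03-04T10:30", "mitigated": "2011-03-04T11:00", "escalated": False},
    {"detected": "2011-03-09T08:00", "mitigated": "2011-03-09T20:00", "escalated": True},
]

def _hours(start, end):
    """Elapsed hours between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

def mean_time_to_mitigate(records):
    """Average hours from detection to mitigation across incidents."""
    return sum(_hours(r["detected"], r["mitigated"]) for r in records) / len(records)

def escalation_rate(records):
    """Fraction of incidents that had to be escalated to mitigation."""
    return sum(r["escalated"] for r in records) / len(records)
```

For the sample above, mean time to mitigate is 5.5 hours and two of three incidents escalated. The numbers only become a trustworthy benchmark when detection and mitigation timestamps are logged the same way for every incident, which is exactly the consistency the CIS effort is after.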
"There is still a big gap between operations people, compliance people, and security people," says Joe Gottlieb, CEO of SenSage and founder of the Open Security Intelligence initiative. SenSage earlier this week published a study in which a majority of security people said they thought their security processes are less effective because data is not effectively shared among the various functional areas, such as compliance, incident response, and real-time monitoring.

"Most security metrics [initiatives] start because some enlightened executive up the chain asks for the numbers," says CIS' Piliero. "Once that happens, you see companies trying to get their own house in order, working together to pull together operational metrics before they start reporting up the chain."

May 23, 2011

RSA Breach a Lesson in Detection and Mitigation
By Ericka Chickowski

While pundits say RSA, as a standards bearer in security, should be held to a higher measure of security than the average enterprise, some say the company's recent breach is less a black mark on the company than a lesson to organizations at large about the scope of today's threats. And as details emerge about how RSA dealt with its breach, it is clear that most organizations need to do a better job with not just real-time monitoring, but also real-time blocking, of threats.

According to those in the security information and event management (SIEM) space, the RSA breach should be a wake-up call for any enterprise that needs to protect its "special sauce" to maintain customer confidence and smooth operations.

"What we can take away from it is whether you're making a widget for a car, an airplane, [or] software for the banking industry, you should really consider who might be targeting you and why they would target you, and you have to put protections in place," says Brendan Hannigan, president and chief operating officer of SIEM firm Q1 Labs. "Targeted threats are serious and are coming from a variety of different sources, whether they be state actors or industrial espionage or criminals."

And these determined crooks are not just seeking out the big dogs like RSA.

"We've increasingly been seeing within our own practice specifically targeted attacks, and I'm not talking great, big Fortune 500 companies," says Bobby Kuzma, owner of managed security service provider Central Florida Technology Solutions. "I'm talking targeted attacks against 10-doctor medical practices."

To detect sneaky multivector threats like the one that struck RSA, organizations need to count on a higher level of intelligence than is currently in use today.

"You have a variety of different security controls in place, but in addition you need to have this blanket of security intelligence overlaying them that's looking for very sophisticated, low, slow, insidious, unusual behavior in your environment," Hannigan says. "That's the important layer we think customers haven't focused on. They focus on the point products, [but] they haven't focused on the security intelligence layer that takes all of these controls and puts them together."

While the breach is a blow to RSA, many within the industry have said the security firm still did better than the average organization, which probably wouldn't even have known it had been struck.

"Instead of pointing fingers, I'd probably take a look at my house and wonder, 'Do I have similar problems?'" says Philip Cox, principal consultant at IT security consulting firm SystemExperts. "If the 'A' team is getting broken into, that should cause some worries, because other companies might also be suffering the same attack and not even know it."

While SIEM tools may certainly go some way toward detecting attacks such as the one that struck RSA through a phishing email and a zero-day Flash exploit, they are hardly a panacea.
According to the account RSA made public recently via a company blog and an analyst briefing, the company did not depend solely on its own in-house tools to find the attack. It credits tools from NetWitness with helping detect the attack, though when pressed it was not willing to divulge technical details about the way the product worked.
Interestingly, the NetWitness revelation came on the very day EMC and RSA insiders were closing a deal to acquire that firm, and just one business day before it would publicly announce the acquisition. While the disclosure of some limited details about the breach was seen by some as a way to advertise the benefits of a product line it was poised to acquire, RSA executives say the deal wasn't precipitated by the breach.

"This [deal] was in the works before that," says Tom Heiser, president of RSA. "Having said all that, I don't think it could have happened at a better time than it did right now."

It's clear that even before it was stung, RSA saw the need for more advanced means of detecting threats in real time. The real problem highlighted by this recent blow-up, though, is not so much real-time detection of threats as it is blocking threats before they do damage. RSA claims it did, in fact, detect the attack on its systems in real time. But the fact remains that it was unable to stop attackers from stealing some part of its SecurID intellectual property, details about which the firm still has not disclosed. Until the company divulges how much was or was not stolen, it is hard to show how effective real-time detection is in mitigating risk. Regardless, the lesson is that something was exfiltrated.

"You've got to be able to use monitoring tools intelligently, not just from a forensic viewpoint, but from a proactive viewpoint, to stop the transactions," says Avivah Litan, vice president and distinguished analyst at Gartner, who believes it doesn't do a company much good to detect an attack but be unable to prevent it from doing damage. She believes current monitoring and SIEM tools need to evolve to offer better blocking capabilities.

"Log management and SIEM are not going to get you there. All those compliance SIEM systems are not inline with the transactions; they score in real time, but their architectures aren't made to sit inline and interdict," she says. "It wouldn't be that difficult for the SIEM vendors to build that in, and they probably will when they start getting demand for it."

In the meantime, she suggests organizations build APIs, or have vendors build them, that feed their SIEM into fraud detection and prevention tools with blocking capabilities, such as authentication or transaction verification systems.
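Litan's point, that detection has to be wired to something that can act, can be illustrated with a toy dispatch routine. Everything here is an assumption for illustration (the alert fields, the risk-score scale, and the block/log callbacks); no SIEM exposes exactly this interface.

```python
# Risk score above which we interdict rather than merely record.
# The 0-100 scale is an assumed convention for this sketch.
BLOCK_THRESHOLD = 80

def handle_alert(alert, block, log):
    """Dispatch one SIEM alert: interdict if risky, otherwise record it.

    alert: dict with hypothetical "risk_score" and "src_ip" fields.
    block: callback that pushes a block rule to an inline control.
    log:   callback that records the alert for later review.
    """
    if alert.get("risk_score", 0) >= BLOCK_THRESHOLD and alert.get("src_ip"):
        block(alert["src_ip"])
        return "blocked"
    log(alert)
    return "logged"
```

The design point is the split itself: the scoring engine stays out-of-band, while only high-confidence verdicts cross an API into an inline control, which is roughly the SIEM-to-blocking bridge Litan describes.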
  15. 15. Security A n a l y t i c s A l e r t Jan. 27, 2011 An Advanced Persistent Threat Reality Check By Kelly Jackson Higgins Most victims of targeted attacks that originate from so-called advanced persistent threat (APT) attackers have been under siege for so long by the time they discover it that forensics investiga- tors can’t even trace the original machine that was infected. The majority of the 120 victim organizations that enlisted the help of Mandiant in the past 18 months were first hit by the targeted attack two years before, according to Kevin Mandia, founder and CEO of Mandiant, which published its annual report on APT, “MTrends: When Prevention Fails.” And there’s the danger of making it more difficult to track or contain the perpetrators if the vic- tim organization shares its malware sample with its antivirus company too soon. “If you’re good at this, you don’t share with vendors, only with your industry brethren,” Mandia says, like the defense industry typically does. “Malware has a shelf life. If you share it [with too many parties], people take action and it changes the tools of the bad guy.” That’s because once a signature is released for a piece of malware, the bad guys quickly reinvent it with a new variant via the backdoors they typically place in the victim organization that keeps their foothold strong. “All you’ve done by sharing is change the fingerprint and made the prob- lem worse,” Mandia says. Their response is just that fast, he says. At one Fortune 50 company that Mandiant was working with in the wake of a targeted attack, around 100 people gathered to remediate the network. But the company’s antivirus vendor updated one piece of the malware, which then “destroyed” the remediation drill altogether, Mandia says. “The attackers were responding to the AV update,” he says. Eddie Schwartz, chief security officer at NetWitness, concurs that you shouldn’t hand off mal- ware samples until your breach investigation is completed. 
“Submitting to AV vendors early takes the control of the incident out of your hands because of how the APT operates, commonly activating secondary systems when primary ones are discovered,” Schwartz says. And a virus
definition update would wipe out the malware you were analyzing, thus requiring you to start your investigation all over again.

It’s not about preventing a targeted attack from the APT adversary, which typically hails from various organized groups out of China who are hell-bent on snatching as much information as they can. It’s more like a game of chess, where businesses and government agencies have to assume the APT perpetrators are inside and focus on predicting, detecting and responding to their moves, according to Mandiant’s report.

The biggest shift during the past year is the volume of these attacks and the wider scope of industries being targeted. Mandia says he believes these attacks mostly stay under the radar; his firm sees only about 2% of them. In the past 18 months, 42% of these victims were commercial firms, with law firms surprisingly representing 10% of that sector. “Law firms are getting absolutely hammered,” he says, but no one knows for sure why. It’s possible these firms have become collateral damage from other hacks that resulted in access to the firms’ email addresses or other information, according to Mandia. “We haven’t seen a pattern” to explain it, he says.

And the initial attack vector for most of the cases Mandiant investigated was either email-borne or an improperly remediated compromise where the attackers were still inside even though the victim thought it had eradicated them.

“We see a number of APT attacks in our work with customers,” NetWitness’ Schwartz says. “The volume of victims has gone up across the board, as well as the number of platform-independent vectors for exploitation, which is far more worrisome. The public hears about very few of the actual compromises of organizations.” And even more disconcerting is that victim organizations aren’t likely to be able to discern everything the APT actors stole or accessed.
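Mandia’s warning that early sharing just “changes the fingerprint” can be made concrete: flipping a single byte in a malware sample yields a completely different hash, so any hash-based signature written for the old variant no longer matches the new one. A minimal sketch, with made-up sample bytes:

```python
# A one-byte tweak to a sample defeats hash-based detection of the old
# variant. The byte strings below are illustrative, not real malware.
import hashlib

original = b"MZ\x90\x00" + b"payload bytes of the captured sample"
variant = b"MZ\x90\x01" + b"payload bytes of the captured sample"  # one byte changed

old_signature = hashlib.sha256(original).hexdigest()
new_fingerprint = hashlib.sha256(variant).hexdigest()

# The released signature no longer matches the reworked variant.
assert old_signature != new_fingerprint
```

This is why the backdoors attackers leave behind let them outpace AV updates: regenerating a variant is cheap, while distributing a new signature is slow.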
“We don’t see them doing keyword searches, so we can’t tell that they are searching for this or that,” Mandia says. These attackers typically are cagier about what they are actually after: It appears they are rewarded by the volume of information they grab, he says.

“A real APT never really damages anything. They tweak a log file here and there... They are stealing stuff, but you still have your copy. You never see them taint it,” he says. Mandiant has witnessed APT attackers stealing PKI credentials and even hacking smartcard readers to grab credentials to various systems.

“We have seen cybercriminals successfully bypass two-factor authentication in banking systems for years. Likewise, a common activity for Zeus is to steal any local PKI certs it finds,” NetWitness’ Schwartz says. “A great example [is] the Kneber Zeus botnet we reported on in 2010. There also have been multiple pieces of malware in the recent past that have been legitimately signed, which points to the theft and use of software certs: Stuxnet is a great example here.”
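Mandia’s observation that attackers “tweak a log file here and there” is one argument for tamper-evident log storage. A minimal sketch, assuming nothing about any particular product: hash-chain each record so that an edit anywhere breaks every hash from that point on, provided the chain is kept on a separate, write-once store.

```python
# Tamper-evident logging via a hash chain: each record's hash covers the
# previous record's hash, so altering one line invalidates the rest.
import hashlib

def chain(lines):
    """Return a list of chained SHA-256 hashes, one per log line."""
    prev = b""
    out = []
    for line in lines:
        prev = hashlib.sha256(prev + line.encode()).digest()
        out.append(prev.hex())
    return out

def first_tampered(lines, recorded):
    """Recompute the chain and report the index of the first mismatching line."""
    for i, h in enumerate(chain(lines)):
        if h != recorded[i]:
            return i
    return None

log = ["login ok user=jdoe", "file read /etc/passwd", "logout user=jdoe"]
recorded = chain(log)          # hashes kept on a separate, write-once store
log[1] = "file read /tmp/x"    # attacker quietly edits one record
print(first_tampered(log, recorded))  # 1
```

This does not prevent tampering, but it makes the “log file tweaked here and there” scenario detectable during an investigation, which is exactly the detect-and-respond posture the report advocates.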
Want More Like This?

Making the right technology choices is a challenge for IT teams everywhere. Whether it’s sorting through vendor claims, justifying new projects or implementing new systems, there’s no substitute for experience. And that’s what InformationWeek Analytics provides: analysis and advice from IT professionals. Our subscription-based site houses more than 800 reports and briefs, and more than 100 new reports are slated for release in 2011. InformationWeek Analytics members have access to:

Research: 2011 Strategic Security Survey: Security professionals often feel that executives don’t prioritize information security and risk management, in terms of attention, budgets or both. But the 1,084 security pros responding to our InformationWeek Analytics 2011 Strategic Security Survey suggest that may be changing.

Research: Security Technologies: We’re pouring literally billions of dollars into products that are gaining us very little. So we pile on more layers, leading to increased complexity, expense and exposure.

Best Practices: The New Perimeter: Attackers want to sell your personal, financial and proprietary corporate information, and the traditional perimeter security model is next to useless for stopping them. What’s a CISO to do?

IT Pro Ranking: Web Security Gateways: IT pros give high marks to makers of Web security gateways for their ability to block malware. But when it comes to management, there’s room for improvement.

Strategy: IPv6 Security: IPv6 advocates have long touted the elimination of NAT and the return to a true peer-to-peer Internet. But IT pros who’ve come to see NAT as an essential network security element are worried, and they have some questions.

PLUS: Signature reports, such as the InformationWeek Salary Survey, InformationWeek 500 and the annual State of Security report; full issues; and much more.