FIRST 2006 Full-day Tutorial on Logs for Incident Response

Outline:
Incident Response Process
Logs Overview
Logs Usage at Various Stages of the Response Process
How Logs from Diverse Sources Help
Log Review, Monitoring and Investigative processes
Standards and Regulation Affecting Logs and Incident Response
Incident Response vs Forensics
Case Studies
Log Analysis Mistakes

Slide notes:
  • Source: http://www.sans.edu/resources/securitylab/loglogic_chuvakin.php
  • Audit records are sometimes viewed as a specific kind of log, related to process controls; for clarity we will treat them as the same. Some people even manage to define the “event” (as an “observable occurrence”), but the idea is pretty self-explanatory: an event is something that happened. We have fewer crazy marketing terms than the “intrusion prevention” crowd, but I figured I’d mention these explicitly so there is no confusion. Confusion reigns supreme: events, alerts, logs, records, etc. “Audit of monitoring logs”? “Monitoring of audit logs”? “Logging of monitoring audits”? Definitions used here: Auditing – reviewing audit logs. Message – some system indication that an event has transpired. Log or audit record – a recorded message related to the event. Log file – a collection of such records. Logging – recording log records. Alert – a message usually sent to notify an operator. Device – a source of security-relevant logs. Etc.
  • I did mention security data, events, etc. on the previous slides. But what am I really talking about? In other words, what do we LOG and MONITOR? What is called “security data” in this presentation consists of various audit records (left), generated by various devices and software (right). Note that business applications also generate security data, for example by recording access decisions or producing messages indicative of exploitation attempts.
  • Now, the above only covers system logs, not network- or application-specific ones. Does all of it have security relevance? You bet!
  • Those are some of the common messages/log records/alerts
  • Yes, specifics will be covered later!
  • This underlies the logging and monitoring. So, now that we know it’s a good idea, how do we go about centralizing the data? Here is the plan. The whole process starts with initial data generation, collection, preliminary analysis and possible volume reduction, followed by secure (against attacks, including DoS) and reliable transportation to a central point. It continues with further processing before storage (for real-time cross-environment analysis) and long-term trend analysis. Visualization of the data also helps the analysis and may be separated out as another step (both real-time and historical visualization may sometimes reveal new properties of the collected data). Note that I skipped obvious things, such as filtering.
  • We have a plan, but is it really that simple to follow? Challenges abound. Some of these challenges are inherent, but others can be and are overcome by Security Information Management solutions. Too much data is the main data-volume problem: hundreds of firewalls (not uncommon for a large environment) and thousands of desktop security applications have the potential to generate millions of records every day. Not enough data might hinder the response process if essential data was not collected or is not being recorded by the application or security device. Visibility of data is another: as in yesterday’s Snort example, the SOC missed the internal activity, and cross-NAT and cross-proxy data analysis adds to the problem. The diverse records problem is due to the lack of a universal audit standard; most applications log in whatever format their creators developed, leading to a massive analysis challenge. False alarms are common for network intrusion detection systems (NIDS); these might be false positives (benign triggers) or false alarms (malicious triggers with no potential of harming the target). Duplicate data results from multiple devices recording the same event in their own different ways. The hard-to-get-data problem is less common and might hinder analysis when legacy software or hardware is in use; for example, getting detailed mainframe audit records may be a challenge. Chain-of-custody concerns refer to the higher security and handling standards needed if the data might end up in a court of law. Also: volume is getting even higher; audit data standards don’t exist; binary and text logs; undocumented formats; free-form logs; the same events described differently; different levels of detail in the collected data. Now, how many folks only use one security device type, and only from a single vendor? Not many. Most companies have multiple types of devices from multiple vendors, and a weird mix it is! So we move away from homogeneous data of one device type to multi-vendor, diverse-device data centralization, which brings lots of new challenges. Advantages of cross-device analysis will also be shown later in the presentation. A heterogeneous environment brings new problems and amplifies some of the old ones familiar from single-device centralization: more peculiar file formats need to be understood and processed to get the big picture, and volume just gets out of control as firewalls spew events, IDSs pitch in, etc. Horrendous, eh?
  • How to plan a response strategy to activate when monitoring?
  • You actually can log and not monitor. No slide set is valid w/o a compliance slide. “Too much stuff out there – why even bother?” Because of: situational awareness (what is going on?); new threat discovery; the unique perspective from combined logs; getting more value out of the network and security infrastructures (get more than you paid for!); extracting what is really actionable, automatically; measuring security (metrics, trends, etc.); compliance and regulations.
  • Including spyware and other internal abuses.
  • One of the primary aims of a traditional forensic investigation is to reconstruct past events in an attempt to answer the ‘what happened’ question. To achieve this aim, forensic investigators treat the scene as the witness, examining the environment as a source of trace evidence (Chisum and Turvey, 2000). In principle, a witness is potentially a useful source of evidence: the witness may be able to recount the sequence of events that took place, thereby assisting the reconstruction of the scenario for the investigation. In the digital world, where activity is conducted by processes, the ‘scene’ is the entire computing system, including the processor, the memory, secondary storage devices, applications and so forth. Historically, investigators have typically studied storage devices like hard disks, as they are usually the only source of preserved evidence. Although interactions among computing processes do not leave bodily traces akin to hair or blood, their interaction with and use of resources may leave traces that can be used to help reconstruct past events. In trying to answer ‘what happened’, computer forensic investigations tend to concentrate on the state of the filesystem, including slack space and virtual memory space, for traces of deleted data or indications of the nature of programs previously run on the system (Yasinsac 2001). From a forensic point of view, perhaps the most significant advantage of the computing scene over the real-world scene is the computing system’s provision of an event log: an ongoing record of events taking place in the operating system. In addition, since event logs are collected as part of the routine course of system operation, they are generally considered ‘direct evidence’ and may be admissible in court (Casey 2000, p. 46). At first, the provision of a readily available history of computing activity appears to have the capacity to resolve the problem of reconstructing past events. However, the event-logging mechanism of computing systems has proven largely unsuitable for forensic purposes and is rarely used in litigation. Evidentially, the weight of the information contained in the event log does not readily conform to the requirements of a forensic investigation. In fact, although the weighting criteria of the investigator’s evidence-extraction process have been somewhat discussed (Sommer 1998), the weighting of the system’s own evidence-extraction facility (event logs) has been left relatively unexplored in scientific research. SOMMER’S CRITERIA: With the traditional forensic investigation process in mind, we present Sommer’s criteria for the weighting of non-testimonial evidence. Sommer identifies three main attributes – authenticity, accuracy and completeness (Sommer, 1998 cites Miller, 1992): (1) Accurate: free from any reasonable doubt about the quality of procedures used to collect the material, analyse the material if that is appropriate and necessary, and finally to introduce it into court – and produced by someone who can explain what has been done. (2) Complete: tells within its own terms a complete story of a particular set of circumstances or events. (3) Authentic: specifically linked to the circumstances and persons alleged.
Sommer expands these attributes for more technical types of evidence and presents five main tests designed to assess the reliability of evidence derived from digital environments (Sommer 1998): 1. Computer’s Correct Working Test – the computer must be shown to be behaving “correctly” or “normally”. In cases where the computer is acting simply as an information store, such a requirement may be easy to satisfy; however, if the computer is providing a service such as a database query function, and the investigation relates to precisely that function, then it must be tested and shown to be “correct” or “normal”. 2. Provenance of Computer Source Test – the evidence deemed relevant to the investigation must be proven to have been taken from the specific computer and from nowhere else. 3. Content/Party Authentication Test – the evidence collected must be relevant, i.e. linked to the incident or the parties accused in the investigation. 4. Evidence Acquisition Test – the evidence must have been gathered accurately, must be free from contamination, and must be complete (this refers back to the three main attributes of non-testimonial evidence). 5. Continuity of Evidence/Chain of Custody Test – a full account of what happened to the retrieved evidence after it was extracted must be provided. Frequently, all of the individuals involved in the collection and transportation of evidence may be requested to testify in court; thus, to avoid confusion and to retain complete control of the evidence at all times, the chain of custody should be kept to a minimum (Casey 2000 cites Saferstein, 1998, p. 58).
  • “Log Forensics provides indexing and "Google-like" search algorithms for near-instant data retrieval, searching terabytes of data in seconds in order to find critical information for investigations and legal proceedings.” (A minimal sketch of such indexed log search appears after these notes.) http://en.wikipedia.org/wiki/Computer_forensics: Computer forensics is the application of the scientific method to digital media in order to establish factual information for judicial review. This process often involves investigating computer systems to determine whether they are or have been used for illegal or unauthorized activities. Mostly, computer forensics experts investigate data storage devices, either fixed (hard disks) or removable (compact disks and solid-state devices). Computer forensics experts: identify sources of documentary or other digital evidence; preserve the evidence; analyze the evidence; present the findings. http://computer-forensics.safemode.org/: Computer forensics, sometimes known as Digital Forensics, is often described as “the preservation, recovery and analysis of information stored on computers or electronic media”. It often embraces issues surrounding Digital Evidence with a significant legal perspective, and is sometimes viewed as a Four Step Process. http://en.wikipedia.org/wiki/Digital_evidence: Digital evidence or electronic evidence is any probative information stored or transmitted in digital form that a party to a court case may use at trial.
  • Challenges with log forensics…
  • “ Computer Records and the Federal Rules of Evidence”, Orin S. Kerr, USA Bulletin, (March 2001) http://www.usdoj.gov/criminal/cybercrime/usamarch2001_4.htm Challenges to the authenticity of computer records often take one of three forms. First, parties may challenge the authenticity of both computer-generated and computer-stored records by questioning whether the records were altered, manipulated, or damaged after they were created. Second, parties may question the authenticity of computer-generated records by challenging the reliability of the computer program that generated the records. Third, parties may challenge the authenticity of computer-stored records by questioning the identity of their author. E.g. Computer records can be altered easily, and opposing parties often allege that computer records lack authenticity because they have been tampered with or changed after they were created. For example, in United States v. Whitaker , 127 F.3d 595, 602 (7th Cir. 1997), the government retrieved computer files from the computer of a narcotics dealer named Frost. The files from Frost's computer included detailed records of narcotics sales by three aliases: "Me" (Frost himself, presumably), "Gator" (the nickname of Frost's co-defendant Whitaker), and "Cruz" (the nickname of another dealer). After the government permitted Frost to help retrieve the evidence from his computer and declined to establish a formal chain of custody for the computer at trial, Whitaker argued that the files implicating him through his alias were not properly authenticated. Whitaker argued that "with a few rapid keystrokes, Frost could have easily added Whitaker's alias, 'Gator' to the printouts in order to finger Whitaker and to appear more helpful to the government." Id. at 602. The courts have responded with considerable skepticism to such unsupported claims that computer records have been altered. Absent specific evidence that tampering occurred, the mere possibility of tampering does not affect the authenticity of a computer record. See Whitaker , 127 F.3d at 602 (declining to disturb trial judge's ruling that computer records were admissible because allegation of tampering was "almost wild-eyed speculation . . . [without] evidence to support such a scenario"); United States v. Bonallo , 858 F.2d 1427, 1436 (9th Cir. 1988) ("The fact that it is possible to alter data contained in a computer is plainly insufficient to establish untrustworthiness."); United States v. Glasser , 773 F.2d 1553, 1559 (11th Cir. 1985) ("The existence of an air-tight security system [to prevent tampering] is not, however, a prerequisite to the admissibility of computer printouts. If such a prerequisite did exist, it would become virtually impossible to admit computer-generated records; the party opposing admission would have to show only that a better security system was feasible."). Id. at 559. This is consistent with the rule used to establish the authenticity of other evidence such as narcotics. See United States v. Allen , 106 F.3d 695, 700 (6th Cir. 1997) ("Merely raising the possibility of tampering is insufficient to render evidence inadmissible."). Absent specific evidence of tampering, allegations that computer records have been altered go to their weight, not their admissibility. See Bonallo , 858 F.2d at 1436.
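
To make the “indexing and Google-like search” idea in the note above concrete, here is a minimal Python sketch of an inverted index over log lines. It is an illustration only; the sample lines, function names and the simple AND-query semantics are assumptions, not how any particular log forensics product works.

    from collections import defaultdict

    def build_index(log_lines):
        """Map each lowercase token to the set of line numbers that contain it."""
        index = defaultdict(set)
        for lineno, line in enumerate(log_lines):
            for token in line.lower().split():
                index[token].add(lineno)
        return index

    def search(index, log_lines, *terms):
        """Return the lines that contain every search term (a simple AND query)."""
        if not terms:
            return []
        hits = set.intersection(*(index.get(t.lower(), set()) for t in terms))
        return [log_lines[i] for i in sorted(hits)]

    logs = [
        "Oct 20 15:35:08 labsnort snort: SNMP AgentX/tcp request",
        "Oct 20 15:36:11 host sshd[88]: Failed password for root from 10.0.0.5",
    ]
    idx = build_index(logs)
    print(search(idx, logs, "failed", "root"))   # -> the sshd line only

Building the index once and reusing it for many queries is what makes this kind of search feel near-instant compared with re-reading the raw files for every question.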

    1. Logs in Incident Response
       Anton Chuvakin, Ph.D., GCIA, GCIH, GCFA
       Director of Security Research, LogLogic
       [email_address]
       Mitigating Risk. Automating Compliance.
    2. Who is Anton?
       - Dr. Anton Chuvakin [formerly] from LogLogic “is probably the number one authority on system logging in the world”
       - SANS Institute (2008)
    3. Outline - I
       - Incident Response Process
       - Logs Overview
       - Logs Usage at Various Stages of the Response Process
       - How Logs from Diverse Sources Help
    4. Outline - II
       - Log Review, Monitoring and Investigative Processes
       - Standards and Regulation Affecting Logs and Incident Response
       - Incident Response vs Forensics
       - Case Studies
       - Log Analysis Mistakes
    5. Incident Response Methodologies: SANS
       - SANS Six-Step Process
         - Preparation
         - Identification
         - Containment
         - Eradication
         - Recovery
         - Follow-Up
    6. Incident Response Methodologies: NIST
       - NIST Incident Response 800-61
         - Preparation
         - Detection and Analysis
         - Containment, Eradication and Recovery
         - Post-Incident Activity
    7. Other IH/IR Frameworks and Methodologies
       - Process from “Incident Response and Forensics”
       - Company-specific Policies and Procedures
       - etc.
    8. Why Have a Process?
       - It helps…
         - Predictability
         - Efficiency
         - Auditability
         - Constant Improvement
       - It shrinks…
         - Indecision
         - Uncertainty
         - Panic!
    9. From Incident Response to Logs
    10. Terms and Definitions
       - Message – some system indication that an event has transpired
       - Log or audit record – recorded message related to the event
       - Log file – collection of the above records
       - Alert – a message usually sent to notify an operator
       - Device – a source of security-relevant logs
       - Logging
       - Auditing
       - Monitoring
       - Event reporting
       - Log analysis
       - Alerting
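
As a small aside, the relationships between these terms can be written down as a toy data model. The Python sketch below is purely illustrative (the class and field names are my assumptions, not part of the tutorial): an event happens on a device, a message about it is recorded as a log record, log records are collected into a log file, and an alert is a message sent to notify an operator.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Event:
        """Something that happened (an observable occurrence) on some device."""
        description: str
        occurred_at: datetime

    @dataclass
    class LogRecord:
        """A recorded message related to an event."""
        device: str     # the source of the security-relevant log
        message: str    # the system's indication that the event transpired
        event: Event

    @dataclass
    class LogFile:
        """A collection of log records."""
        path: str
        records: List[LogRecord] = field(default_factory=list)

    @dataclass
    class Alert:
        """A message sent to notify an operator about a noteworthy event."""
        record: LogRecord
        severity: str = "high"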
    11. Log Data Overview
       What data?
       - Audit logs
       - Transaction logs
       - Intrusion logs
       - Connection logs
       - System performance records
       - User activity logs
       - Various alerts
       From where?
       - Firewalls/intrusion prevention
       - Routers/switches
       - Intrusion detection
       - Hosts
       - Business applications
       - Anti-virus
       - VPNs
    12. What Commonly “Gets Logged”?
       - System or software startup, shutdown, restart, and abnormal termination (crash)
       - Various thresholds being exceeded or reaching dangerous levels such as disk space full, memory exhausted, or processor load too high
       - Hardware health messages that the system can troubleshoot or at least detect and log
       - User access to the system such as remote (telnet, ssh, etc.) and local login, network access (FTP) initiated to and from the system, failed and successful
       - User access privilege changes such as the su command—both failed and successful
       - User credentials and access right changes, such as account updates, creation, and deletion—both failed and successful
       - System configuration changes and software updates—both failed and successful
       - Access to system logs for modification, deletion, and maybe even reading
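
One common way for a Unix application to make sure items like the above actually get logged is to write them through syslog. A minimal sketch using Python’s standard-library SysLogHandler follows; the logger name, socket path, facility and messages are all illustrative assumptions and would need to match the local syslog setup.

    import logging
    from logging.handlers import SysLogHandler

    # Route application security events to the local syslog daemon.
    # /dev/log and LOG_AUTH are assumptions; adjust for the local platform.
    logger = logging.getLogger("myapp.security")
    logger.setLevel(logging.INFO)
    handler = SysLogHandler(address="/dev/log", facility=SysLogHandler.LOG_AUTH)
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    # Hypothetical examples of the event types listed on the slide above.
    logger.info("startup: service started, version=1.4.2")
    logger.warning("threshold: disk usage at 95% on /var")
    logger.info("auth: successful ssh login user=alice src=10.0.0.7")
    logger.error("auth: failed su to root by user=bob")
    logger.info("config: firewall ruleset updated by user=admin")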
    13. “Standard” Messages
       10/09/2003 17:42:57,146.127.94.13,48352,146.127.97.14,909,,,accept,tcp,,,,909,146.127.93.29,,0,,4,3,,'9Oct2003 17:42:57,accept,labcpngfp3,inbound,eth2c0,0,VPN-1 & FireWall-1,product=VPN-1 & FireWall-1[db_tag={0DE0E532-EEA0-11D7-BDFC-927F5D1DECEC};mgmt=labcpngfp3;date=1064415722;policy_name=Standard],labdragon,48352,146.127.97.14,909,tcp,146.127.93.145,',eth2c0,inbound
       Oct 9 16:29:49 [146.127.94.4] Oct 09 2003 16:44:50: %PIX-6-302013: Built outbound TCP connection 2245701 for outside:146.127.98.67/1487 (146.127.98.67/1487) to inside:146.127.94.13/42562 (146.127.93.145/42562)
       2003-10-20|15:25:52|dragonapp-nids|TCP-SCAN|146.127.94.10|146.127.94.13|0|0|X|------S-|0|total=484,min=1,max=1024,up=246,down=237,flags=------S-,Oct20-15:25:34,Oct20-15:25:52|
       Oct 20 15:35:08 labsnort snort: [1:1421:2] SNMP AgentX/tcp request [Classification: Attempted Information Leak] [Priority: 2]: {TCP} 146.127.94.10:43355 -> 146.127.94.13:705
       SENSORDATAID="138715" SENSORNAME="146.127.94.23:network_sensor_1" ALERTID="QPQVIOAJKBNC6OONK6FTNLLESZ" LOCALTIMEZONEOFFSET="14400" ALERTNAME="pcAnywhere_Probe" ALERTDATETIME="2003-10-20 19:35:21.0" SRCADDRESSNAME="146.127.94.10" SOURCEPORT="42444" INTRUDERPORT="42444" DESTADDRESSNAME="146.127.94.13" VICTIMPORT="5631" ALERTCOUNT="1" ALERTPRIORITY="3" PRODUCTID="3" PROTOCOLID="6" REASON="RSTsent"
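
Messages like these only become useful once they are parsed into fields. As an illustration, here is a hedged Python sketch that pulls the interesting fields out of the Snort alert shown above with a regular expression; the pattern is tailored to this one sample line and is not a general Snort parser.

    import re

    SNORT_RE = re.compile(
        r"\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+(?P<msg>.+?)\s+"
        r"\[Classification:\s*(?P<classification>[^\]]+)\]\s+\[Priority:\s*(?P<priority>\d+)\]:\s+"
        r"\{(?P<proto>\w+)\}\s+(?P<src>[\d.]+):(?P<sport>\d+)\s+->\s+(?P<dst>[\d.]+):(?P<dport>\d+)"
    )

    line = ("Oct 20 15:35:08 labsnort snort: [1:1421:2] SNMP AgentX/tcp request "
            "[Classification: Attempted Information Leak] [Priority: 2]: "
            "{TCP} 146.127.94.10:43355 -> 146.127.94.13:705")

    match = SNORT_RE.search(line)
    if match:
        # e.g. {'gid': '1', 'sid': '1421', 'rev': '2', 'msg': 'SNMP AgentX/tcp request', ...}
        print(match.groupdict())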
    14. Example 1: FTP Hack Case
       - Server stops
       - Found ‘rm-ed’ by the attacker
       - What logs do we have?
       - Forensics on an image to undelete logs
       - Client FTP logs reveal…
       - Firewall confirms!
    15. Using Logs at Preparation Stage
       - Verify Controls
       - Ongoing Monitoring
       - Change Management Support
       - “If you know the cards, you’d live on an island”
       - In general, verifying that you have control over your environment
    16. Using Logs at Identification Stage
       - Detect Intrusions, Infections and Attacks
       - Observe Attack Attempts, Recon and Suspicious Activity
       - Perform Trend Analysis and Baselining for Anomaly Detection
       - Mine the Logs for Hidden Patterns Indicating Incidents in the Making…
       - “What is Out There?”
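
The “trend analysis and baselining” bullet can be illustrated with a very small sketch: compare each hour’s event count against a historical mean and standard deviation, and flag outliers. The threshold, the hourly counts and the function name are assumptions chosen for illustration; real baselining would be considerably more careful.

    from statistics import mean, stdev

    def flag_anomalies(hourly_counts, threshold_sigmas=3.0):
        """Flag hours whose event count sits far above the historical baseline.

        hourly_counts: list of (hour_label, count) pairs, a stand-in for
        per-hour totals pulled from the log store.
        """
        counts = [c for _, c in hourly_counts]
        if len(counts) < 2:
            return []
        baseline, spread = mean(counts), stdev(counts)
        return [(hour, c) for hour, c in hourly_counts
                if spread > 0 and (c - baseline) / spread > threshold_sigmas]

    history = [("09:00", 120), ("10:00", 130), ("11:00", 115),
               ("12:00", 125), ("13:00", 122), ("14:00", 980)]   # simulated spike
    print(flag_anomalies(history, threshold_sigmas=2.0))          # -> [('14:00', 980)]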
    17. Using Logs at Containment Stage
       - Quickly Assess Impact of the Infection, Compromise, Intrusion, etc.
       - Correlate Logs to Know What You Can [Still] Trust
       - Verify that Containment Measures Are Working
       - “What Else is Hit?”
    18. Using Logs at Eradication Stage
       - Preserving the Log Evidence from Previous Stages
       - Confirming that Backups are Safe (Using Logs, How Else?)
       - “Is it Gone?”
    19. Using Logs at Recovery Stage
       - Increased Post-Incident Monitoring
       - Watch for Recurrence
       - Watch for Related Incidents Elsewhere
       - “Better Safe than Sorry”
    20. Using Logs at Follow-Up Stage
       - Train Analysts, Responders and Administrators
       - Create Management Reports (Don’t You Love Those!)
       - Verify and Audit Newly Implemented Controls
    21. So, What Logs are Useful for Incident Response?
       - Security Logs vs “Non-Security” Logs
       - Witness the confusion in the NIST guide on log management…
       - Let’s quickly go through various logs and see how they help (and helped in specific cases!)
    22. Firewall Logs in Incident Response
       - Proof of Connectivity
       - Proof of NO Connectivity
       - Scans
       - Malware: Worms, Spyware
       - Compromised Systems
       - Misconfigured Systems
       - Unauthorized Access and Access Attempts
       - Spam (yes, even spam!)
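
As one concrete example of what firewall logs can show, port scans stand out as a single source touching an unusual number of distinct destination ports. The sketch below assumes the firewall records have already been parsed into (source, destination, port) tuples; the threshold and field layout are illustrative assumptions.

    from collections import defaultdict

    def find_port_scanners(conn_records, port_threshold=100):
        """Flag source IPs that touched an unusually large number of distinct ports."""
        ports_by_src = defaultdict(set)
        for src, _dst, dport in conn_records:
            ports_by_src[src].add(dport)
        return {src: len(ports) for src, ports in ports_by_src.items()
                if len(ports) >= port_threshold}

    records = [("10.1.1.9", "192.0.2.5", p) for p in range(1, 200)]   # simulated scan
    records += [("10.1.1.20", "192.0.2.5", 443)] * 50                 # normal client
    print(find_port_scanners(records))                                # -> {'10.1.1.9': 199}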
    23. NIDS Logs in Incident Response
       - Attack, Intrusion and Compromise Detection
       - Malware Detection: Worms, Viruses, Spyware, etc.
       - Network Abuses and Policy Violations
       - Unauthorized Access and Access Attempts
       - Recon Activity
       - [NIPS] Blocked Attacks
    24. Server Logs in Incident Response
       - Confirmed Access by an Intruder
       - Service Crashes and Restarts
       - Reboots
       - Password, Trust and Other Account Changes
       - System Configuration Changes
       - A World of Other Things
    25. Database Logs in Incident Response
       - Database and Schema Modifications
       - Data and Object Modifications
       - User and Privileged User Access
       - Failed User Access
       - Failures, Crashes and Restarts
    26. Proxy Logs in Incident Response
       - Internet Access
       - Malware: Spyware, Trojans, etc.
    27. Client Logs in Incident Response
       - FTP Client: Remote Connections and File Transfers
       - Other client software: usually no logs, but it usually leaves other traces
         - E.g. web browser cache
    28. Antivirus Logs in Incident Response
       - Virus Detection and Clean-up (or Lack Thereof!)
       - Failed and Successful Antivirus Signature Updates
       - Other Protection Failures
       - Antivirus Software Crashes and Terminations
    29. “Back to the Process”
       Main idea:
       - Log everything
       - Audit little
       - Monitor a bit
       During the incident you’d be grateful you did!
    30. Log Management Process
       - Collect the log data
       - Convert it to a common format
       - Reduce it in size, if possible
       - Transport it securely to a central location
       - Process it in real time
       - Alert on it when needed
       - Store it securely
       - Report on trends
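
To make the first few steps of this process concrete, here is a minimal Python sketch: take a raw line, convert it to a common record format, and compress a batch for transport to the central location. The regular expression, field names and output path are assumptions chosen for the example, not a prescribed format.

    import gzip
    import json
    import re
    from datetime import datetime, timezone

    # Illustrative pattern for one kind of record (SSH login failures).
    SSH_FAIL = re.compile(r"Failed password for (?P<user>\S+) from (?P<src>[\d.]+)")

    def normalize(raw_line, source_device):
        """Convert one raw log line into a common dictionary format."""
        record = {
            "collected_at": datetime.now(timezone.utc).isoformat(),
            "device": source_device,
            "raw": raw_line,   # keep the original text for evidential purposes
            "event": "unknown",
        }
        match = SSH_FAIL.search(raw_line)
        if match:
            record.update(event="auth_failure", user=match["user"], src_ip=match["src"])
        return record

    def package_for_transport(records, path):
        """Reduce size (gzip) before shipping the batch to the central log server."""
        with gzip.open(path, "wt") as fh:
            for rec in records:
                fh.write(json.dumps(rec) + "\n")

    lines = ["Oct 9 16:29:49 host sshd[411]: Failed password for root from 10.0.0.5 port 52211 ssh2"]
    package_for_transport([normalize(l, "host1") for l in lines], "/tmp/log_batch.json.gz")

Keeping the raw line alongside the parsed fields is a deliberate choice: the normalized record is convenient for analysis, while the original text is what matters if the log ever becomes evidence.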
    31. Log Management Challenges
       - Not enough data
       - Too much data
       - Diverse records
       - Time out of sync
       - False records
       - Duplicate data
       - Hard-to-get data
       - Chain of custody issues
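
Two of these challenges, clocks out of sync and duplicate data, have simple first-order mitigations that can be sketched in a few lines of Python: rebase device-local timestamps to UTC using a known offset, and keep only the first copy of identical records. The offsets and record layout here are assumptions; real clock drift would also need NTP or per-device skew correction.

    from datetime import datetime, timedelta, timezone

    def to_utc(local_timestamp, utc_offset_hours):
        """Rebase a naive, device-local timestamp to UTC given the device's offset."""
        tz = timezone(timedelta(hours=utc_offset_hours))
        return local_timestamp.replace(tzinfo=tz).astimezone(timezone.utc)

    def drop_duplicates(records):
        """Keep the first copy of records that several devices reported identically.

        records: iterable of hashable tuples, e.g. (utc_time, src_ip, dst_ip, event).
        """
        seen, unique = set(), []
        for rec in records:
            if rec not in seen:
                seen.add(rec)
                unique.append(rec)
        return unique

    t = to_utc(datetime(2006, 6, 25, 17, 42, 57), utc_offset_hours=-4)
    print(t.isoformat())   # 2006-06-25T21:42:57+00:00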
    32. Monitoring or Ignoring Logs Before the Incident?
       - How to plan a response strategy to activate when monitoring logs?
       - Where to start?
       - How to tune it?
    33. Monitoring Strategy
    34. Value of Logging and Monitoring
       Monitoring:
       - Incident detection
       - Loss prevention
       - Compliance
       Logging:
       - Audit
       - Forensics
       - Incident response
       - Compliance
       Analysis and Mining:
       - Deeper insight
       - Internal attacks
       - Fault prediction
    35. “Real-Time” Tasks
       - Malware outbreaks
       - Convincing and reliable intrusion evidence
       - Serious internal network abuse
       - Loss of service on critical assets
    36. Daily Tasks
       - Unauthorized configuration changes
       - Disruption in other services
       - Intrusion evidence
       - Suspicious login failures
       - Minor malware activity
       - Activity summary
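
A daily task such as reviewing suspicious login failures lends itself to a small summary script. The sketch below counts SSH password failures by source address and targeted account from one day of auth log lines; the regular expression and sample lines are illustrative assumptions.

    import re
    from collections import Counter

    FAIL = re.compile(r"Failed password for (?:invalid user )?(?P<user>\S+) from (?P<src>[\d.]+)")

    def login_failure_summary(lines, top_n=5):
        """Summarize login failures for a daily review: top source IPs and accounts."""
        by_src, by_user = Counter(), Counter()
        for line in lines:
            match = FAIL.search(line)
            if match:
                by_src[match["src"]] += 1
                by_user[match["user"]] += 1
        return {"top_sources": by_src.most_common(top_n),
                "top_accounts": by_user.most_common(top_n)}

    sample = [
        "Jun 25 03:12:01 host sshd[99]: Failed password for root from 10.9.8.7 port 4022 ssh2",
        "Jun 25 03:12:04 host sshd[99]: Failed password for invalid user oracle from 10.9.8.7 port 4023 ssh2",
    ]
    print(login_failure_summary(sample))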
    37. Weekly Tasks
       - Review inside and perimeter log trends and activities
       - Account creation/removal
       - Other host and network device changes
       - Less critical attack and probe summary
    38. Monthly Tasks
       - Review long-term network and perimeter trends
       - Minor policy violation summary
       - Incident team performance measurements
       - Security technology performance measurements
    39. Logs and Laws, Rules, Standards, Frameworks
    40. Logs in Support of Compliance
       - Application and asset risk measurement
       - Data collection and storage to satisfy auditing-of-controls requirements
       - Support for security metrics
       - Industry best practices for incident management and reporting
       - Proof of security due diligence
       - Example regulations include: HIPAA, SOX, GLBA, FISMA…
    41. Regulations Recommend Log Management
       ISO 17799:
       - Maintain audit logs for system access and use, changes, faults, corrections, capacity demands
       - Review the results of monitoring activities regularly
       - Ensure the accuracy of the logs
       NIST 800-53:
       - Capture audit records
       - Regularly review audit records for unusual activity and violations
       - Automatically process audit records
       - Protect audit information from unauthorized deletion
       - Retain audit logs
       PCI (Requirement 10):
       - Logging and user activity tracking are critical
       - Automate and secure audit trails for event reconstruction
       - Review logs daily
       - Retain audit trail history for at least one year
       CobiT 4:
       - Provide adequate audit trail for root-cause analysis
       - Use logging and monitoring to detect unusual or abnormal activities
       - Regularly review access, privileges, changes
       - Monitor performance
       - Verify backup completion
    42. COBIT 4.0 is the Leading IT Controls Framework
       - (Re-)released in Dec 2005
       - Four (4) Goals for IT
         - Align IT with the business
         - Maximize IT benefits
         - Use IT assets responsibly
         - Manage IT risks
       - 34 IT Processes
       - Most used framework for SOX compliance
    43. Log Data Evidences COBIT 4.0 Controls
       Identity and Access:
       - DS5.3 Identity management
       - DS5.3 User account management
       - PO7.8 Job change and termination
       User Activity:
       - PO4.11 Segregation of duties
       - AI2.3 Application control and auditability
       Change:
       - AI6.1 Change standards and procedures
       - DS9.3 Configuration integrity review
       Security:
       - DS5.2 IT security plan
       - DS5.5 Security testing, surveillance, monitoring
       - DS5.10 Network security
       - DS11.6 Security requirements for data management
       IT Infrastructure:
       - DS1.5 Monitoring of service level agreements
       - DS2.4 Supplier performance monitoring
       - DS3.5 Monitoring of performance and capacity
       - DS13.3 IT infrastructure monitoring
       - DS10.2 Problem tracking and resolution
       Business Continuity:
       - DS4.1 IT continuity framework
       - DS4.5 Testing of the IT continuity plan
       - DS11.5 Backup and restoration
    44. Compliance Drives New Controls
       [Slide matrix mapping regulations and frameworks (SOX, GLBA, HIPAA, Patriot, PCI, 1386/1950, Basel 2, 21CFR/Annex, EU/DPD, Japan Privacy, COBIT, FFIEC, NIST, ISO 17799) to industry verticals (financial services: commercial, diversified, insurance mutual/stock, savings, securities; pharma/biotech; healthcare; energy; government; telco; retail; F&B; F1000; general)]
    45. Logs and Forensics
       - What Makes Your Investigation a “Forensic” Investigation?
       - Incident Response vs Forensics
       - … and is the ‘vs’ really appropriate?
    46. Forensics Brief
       - “Computer forensics is application of the scientific method to digital media in order to establish factual information for judicial review. This process often involves investigating computer systems to determine whether they are or have been used for illegal or unauthorized activities.” (Wikipedia)
    47. So, What is “Log Forensics”?
       - Log analysis is trying to make sense of system and network logs
       - “Computer forensics is application of the scientific method to digital media in order to establish factual information for judicial review.”
       - So…
       - Log Forensics = trying to make sense of system and network logs + in order to establish factual information for judicial review
    48. How Logs Help… Sometimes
       - If logs are there, we can try to figure out who, where, what, when, how, etc.
       - but:
         - Who as a person or a system?
         - Is where spoofed?
         - When? In what time zone?
         - How? More like ‘how’d you think’…
         - What happened, or what got recorded?
    49. Log Forensics Challenges
       - What? You think this is evidence? Bua-ha-ha-ha
       - “Computer Records and the Federal Rules of Evidence”:
       - “First, parties may challenge the authenticity of both computer-generated and computer-stored records by questioning whether the records were altered, manipulated, or damaged after they were created.
       - Second, parties may question the authenticity of computer-generated records by challenging the reliability of the computer program that generated the records.
       - Third, parties may challenge the authenticity of computer-stored records by questioning the identity of their author.”
    50. Example 2: Scan of the Month Challenge #34
       - Honeypot hacked
       - All logs available
       - In fact, too many
       - Analysis process
    51. Example 3: Sysadmin Gone Bad
       - Service Restarts Out of Maintenance Windows
       - Correlated with Some Personnel Departures
       - Information Leaks Start
       - Log Analysis Reveals Unauthorized Software Installation
    52. Example 4: Spyware Galore!
       - System Seen Scanning – Firewall Logs
       - Analysis of Logs Shows Antivirus Failures
       - VPN Logs Help Track the Truth
       - Full Forensic Investigation Confirms the Results of Log Analysis
    53. Example 5: Compromise Detection
       - Miscellaneous Examples of Detecting Compromised Machines Using Logs
    54. Anton’s Five Log Mistakes
       - How many have you committed?
       - Not looking at logs
       - Not retaining them long enough
       - Not normalizing logs
       - Not prioritizing actions on logs
       - Only looking at known bad
    55. Conclusion
       - Turn ON Logging!!!
       - Make Sure Logs Are There When You Need Them (and need them you will)
       - Include Log Analysis in the IH Process
       - Avoid the Above (and Other) Mistakes
       - Prepare and Learn the Analysis Tools
       - When Going Into Incident-Induced Panic, Think “It’s All Logged Somewhere – We Just Need to Dig It Out”
    56. More information?
       - Anton Chuvakin, Ph.D., GCIA, GCIH, GCFA
       - anton@chuvakin.org
       - Chief Logging Evangelist, LogLogic, Inc.
       - Author of “Security Warrior” (O’Reilly, 2004), “PCI Compliance” (Elsevier, 2009) and other security books
       - Blog: www.securitywarrior.org
       - See www.info-secure.org for my papers, books, reviews and other security resources related to logs
    57. Thank You. Q & A
