The easiest log analysis method (Linux/Unix):

# grep ailure /var/log/messages

Look for interesting failure messages in the messages log. It makes sense to also look for “ailed.” We drop the first letter so that we do not have to worry about case sensitivity (“Failure” vs “failure”). You can also switch grep to case-insensitive mode by typing “grep -i” (for “ignore case”) instead.

# grep anton /var/log/messages

Look for a particular user’s actions; this will definitely miss more than a few user actions, so manual review of logs is still needed. For example, some messages will not be marked with that user name, such as when a user becomes “root” via the “sudo” command.

More examples:

grep “sshd” *.log          (looks for all log lines with the “sshd” string in them)
grep -i user messages      (looks for “user”, “USER”, “User”, etc. in the “messages” file)
grep -v sendmail syslog    (looks for all log lines without “sendmail” in them)

===

This slide reminds Unix people and teaches Windows people about the “grep” command, which can be used to manually filter logs.

grep “sshd” *.log | process_ssh.sh       (filters all log lines with the “sshd” string in them and sends them to another program)
grep -i user messages | grep -v ailure   (filters for “user”, “USER”, “User”, etc. messages which are not failures)
grep -v sendmail syslog                  (looks for all log lines without “sendmail” in them)

Using “grep” this way is an example of the positive filtering mentioned on the previous slide: trying to focus on the bad things that one needs to see, investigate, and then act on: attacks, failures, etc. The “-v” option showcases negative filtering.
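A combined sketch of the two filtering styles (the log path and the excluded “cron” noise are examples only; adjust for your system):

# positive filter for failure-like lines, negative filter to drop routine cron noise
grep -i ailure /var/log/messages | grep -v -i cron

# count such lines per reporting host (field 4 in the classic syslog line format)
grep -i ailure /var/log/messages | grep -v -i cron | awk '{print $4}' | sort | uniq -c | sort -rn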
So how easy is it to data mine with Splunk? In the example above, I told Splunk I was interested in all log entries that contained the word “failed”. This refreshed the screen and showed me 25 entries that matched this keyword. Looking through the list, I noticed that one of the entries was for a failed logon attempt. At that point I clicked the “similar” hyperlink for that log entry, which produced the screen shown above. Note: it shows that we have ten failed logon attempts in the log file (four are not visible as they are off the bottom of the screen). So in less than 60 seconds I was able to identify all of the failed logon attempts for my network.
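If you would rather type the search than click through, a rough equivalent in Splunk’s search language could look like the lines below; the “user” and “host” field names assume Splunk has extracted those fields from your events, which depends on the data source:

failed logon
failed logon | stats count by user, host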
OSSEC rule shown
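For readers without the slide graphic, here is a minimal sketch of what an OSSEC correlation rule looks like; the rule ID, level, frequency, and the referenced stock sshd failure rule are illustrative choices, not necessarily the exact rule from the slide:

<!-- alert when the same source IP triggers many sshd authentication failures -->
<rule id="100100" level="10" frequency="8" timeframe="120">
  <if_matched_sid>5716</if_matched_sid>  <!-- 5716: stock "sshd: authentication failed" rule -->
  <same_source_ip />
  <description>Multiple SSHD authentication failures from the same source.</description>
</rule>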
Marcus Ranum’s “nbs” tool can be obtained at http://www.ranum.com/security/computer_security/code/index.html

The description says: “Never Before Seen Anomaly detection driver. This utility creates a fast database of things that have been seen, and includes tools to print and update the database. Includes PDF documentation and walkthroughs.”

Use the tool to pick up anomalous messages from your log pool. One can also build the same using grep, awk and other shell tools: ‘grep -v -f’ can be used to look for log entries excluding the ones stored in a file.
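A rough shell-only approximation of the same idea, using the ‘grep -v -f’ approach just mentioned (the baseline file name is an example):

# seen_patterns.txt: a hand-maintained file with one known/benign message substring per line
# report lines that match none of them, most frequent first
grep -v -F -f seen_patterns.txt /var/log/messages | sort | uniq -c | sort -rn | head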
This slide shows one of the open source visualization tools, afterglow (it can be found at http://afterglow.sourceforge.net/ or at http://www.secviz.org/). The tool has been successfully used to visualize many types of log data.
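Typical usage, per the tool’s documented workflow, is to feed it comma-separated pairs (or triples) of entities extracted from logs and render the resulting graph with Graphviz; the file names below are examples and exact options may vary by version:

# events.csv holds lines like "source_ip,destination_ip" extracted from your logs
cat events.csv | perl afterglow.pl -c color.properties | neato -Tpng -o events.png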
Here we learn how to start using the tools we just discussed for taking control of your logs.

Start by collecting logs; use syslog-ng or whatever syslog variant is available on your systems. To combine these with Windows logs, use Snare or LASSO, which convert Windows logs to syslog.

Store logs in files (compressed or not) or in a database such as the open source MySQL.

To start peeking at logs, use search tools such as the free “grep” or “splunk” that we mentioned above.

When ready to move to correlation and alerting, get OSSEC or other tools. At this point, you gain a degree of awareness of what is going on in your environment.
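A minimal sketch of the collection step, as a syslog-ng server configuration that files incoming messages by sending host (the listening port, path, and source/destination names are examples, not from the original material; client-side forwarding and the Snare/LASSO setup on Windows are separate steps):

# accept syslog over UDP from the network
source s_network { udp(ip(0.0.0.0) port(514)); };

# write each host's messages to its own file under a central directory
destination d_central { file("/var/log/central/$HOST.log"); };

# tie the source to the destination
log { source(s_network); destination(d_central); };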