Frontline Solutions for Security Practitioners
A presentation of
The Internet Storm Center,
The SANS Institute and
The GIAC Certification Program
Frontline Solutions for Security Practitioners SANS/GIAC 2008®
Welcome to Frontline Solutions for Security Practitioners presented by the SANS Institute and GIAC Certifications. Frontline Solutions for Security Practitioners is an informative presentation for everyone involved with IT security. Security is only as good as the person implementing it, so make sure you and your team have the knowledge and expertise needed to ensure the security of your organization’s vital data and systems.
Just me. Feel free to contact me if you have questions. I will endeavour to help.
From http://isc.sans.org/about.html

ISC History and Overview

The ISC was created in 2001 following the successful detection, analysis, and widespread warning of the Li0n worm. Today, the ISC provides a free analysis and warning service to thousands of Internet users and organizations, and is actively working with Internet Service Providers to fight back against the most malicious attackers. On March 22, 2001, intrusion detection sensors around the globe logged an increase in the number of probes to port 53 – the port that supports the Domain Name Service. Over a period of a few hours, more and more probes to port 53 were arriving - first from dozens and then from hundreds of attacking machines. Within an hour of the first report, several analysts, all of whom were fully qualified as SANS GIAC certified intrusion detection experts, agreed that a global security incident was underway. They immediately sent a notice to a global community of technically savvy security practitioners asking them to check their systems to see whether they had experienced an attack. Within three hours a system administrator in the Netherlands responded that some of his machines had been infected, and he sent the first copy of the worm code to the analysts. The analysts determined what damage the worm did and how it did it, and then they developed a computer program to determine which computers had been infected. They tested the program in multiple sites and they also let the FBI know of the attack. Just fourteen hours after the spike in port 53 traffic was first noticed, the analysts were able to send an alert to 200,000 people warning them of the attack in progress, telling them where to get the program to check their machines, and advising what to do to avoid the worm. The Li0n worm event demonstrated what the community acting together can do to respond to broad-based malicious attacks. Most importantly, it demonstrated the value of sharing intrusion detection logs in real time.
Only in the regional and global aggregates was the attack obvious. The technology, people, and networks that found the Li0n worm were all part of the SANS Institute's Consensus Incident Database (CID) project, which had been monitoring global Internet traffic since November 2000. CID's contribution the night of March 22 was sufficient to earn it a new title: the SANS Internet Storm Center. Today the Internet Storm Center gathers millions of intrusion detection log entries every day, from sensors covering over 500,000 IP addresses in over 50 countries. It is rapidly expanding in a quest to do a better job of finding new storms faster, identifying the sites that are used for attacks, and providing authoritative data on the types of attacks that are being mounted against computers in various industries and regions around the globe. The Internet Storm Center is a free service to the Internet community. The work is supported by the SANS Institute from tuition paid by students attending SANS security education programs. Volunteer incident handlers donate their valuable time to analyze detects and anomalies, and post a daily diary of their analysis and thoughts on the Storm Center web site.

Behind the Internet Storm Center

The ISC relies on an all-volunteer effort to detect problems, analyze the threat, and disseminate both technical and procedural information to the general public. Thousands of sensors that work with most firewalls, intrusion detection systems, home broadband devices, and nearly all operating systems are constantly collecting information about unwanted traffic arriving from the Internet. These devices feed the DShield database, where human volunteers as well as machines pore through the data looking for abnormal trends and behavior. The resulting analysis is posted to the ISC's main web page where it can be automatically retrieved by simple scripts or can be viewed in near real time by any Internet user.
In many ways, the ISC parallels the data collection, analysis, and warning system used by weather forecasters. For example, the National Weather Service uses small sensors in as many places as possible to report pressure, wind speed, precipitation, and other data electronically to regional weather stations. These local stations provide technical support to maintain the sensors, and they summarize and map the sensor data and display it for local meteorologists. They also forward the summarized data to national or transnational weather analysis centers. If analysts are available to monitor the data, they can provide early warnings of storms in their areas. The national and transnational weather analysis centers summarize and map all the regional data to provide an overall picture of the weather. They monitor the data constantly, looking for early evidence of major storms, and can provide early warnings whenever possible. Likewise, the Internet Storm Center uses small software tools to send intrusion detection and firewall logs (after removing identifying information) to the DShield distributed intrusion detection system. The ISC's volunteer incident handlers monitor the constantly changing database to provide early warnings to the community of major new security threats. The ISC also provides feedback to participating analysis centers comparing their attack profiles to those of other centers, and provides notices to ISPs of IP addresses that are being used in widespread attacks. The ISC maintains a very popular daily diary of incident handlers' notes, and can generate custom global summary reports for any Internet user. The value of the Internet Storm Center is maximized when the sensors are collecting data on attacks touching all corners of the Internet. Because of the vastness of cyberspace it is impossible to instrument the entire Internet.
Instead, samples are taken in as many diverse places as possible to create an accurate representation of current Internet activity. Many ISC users send their log data directly to the ISC databases without going through an organizational or local analysis and coordination center. Several large organizations have expressed interest in mirroring the ISC's distributed intrusion detection system, placing sensors at the edges and within their networks to provide early detection of anomalous behavior.

Early Warning

In addition to hundreds of users who monitor the ISC's website and provide some of the best early warnings, the ISC is supported by a core team of expert volunteer incident handlers, making it a virtual organization composed of the top tier of intrusion detection analysts from around the globe. The all-volunteer team monitors the data flowing into the database using automated analysis and graphical visualization tools and searches for activity that corresponds with broad-based attacks. They report their findings to the Internet community through the ISC main web site, directly to ISPs, and via general postings and emails to newsgroups or public information sharing forums. The team determines whether a possible attack is real and whether it is worth follow-up action. If so, the team can request an immediate email to the 100,000 subscribers to the SANS Security Alert Consensus, an alerting service used primarily by very advanced security-conscious system and network administrators and analysts. The email would ask for data and code from anyone who has hard evidence of the attack. Once the attack is fully understood, the team determines the level of priority to place on the threat, whether to make a general announcement or simply post it, and whether to get core Internet backbone providers involved so they may consider cutting off traffic to and from sites that may be involved in the attacks.
The ISC maintains a private web site and private reports for each reporting site. Reports include lists of the most recent attacks along with the indications of how many other sites the attackers have targeted, the severity of each attack, and background data about why attackers target specific ports. The web page helps the reporting site manage its intrusion data and keeps track of attacks. Users can show the results of submissions in a variety of formats including columnar data or pie charts. Data can also be exported in formats usable in other data visualization programs.
Why choose SANS courses and GIAC certifications? SANS Institute is the leading training organization for system administration, audit, network, and security. GIAC (Global Information Assurance Certification) provides certification that validates the skills of security professionals.
Cyber threats are growing at an alarming rate. Although the Internet was once a ‘safe place’, this is no longer the case (and hasn’t been for quite some time).
The Internet is just a large community of individuals. Like any other community, most people are law-abiding citizens. Like any other city, a small portion of the population is willing to break the law. Like any city, there are good neighbourhoods and bad neighbourhoods. The difference is that good neighbourhoods and bad neighbourhoods are separated by at most 150 milliseconds. To protect yourself in the city you live in, you put locks on your doors and windows, install alarms, and don’t let people in unless you know them or think you understand their motives. Yet for some reason, when we put a computer and application on the Internet, we are oblivious to the risks: we don’t lock the doors and windows, but still expect the criminals to stay out. The population of the Internet is approximately 1.5 billion people. If even 0.1% of them have evil intentions, that is 1.5 million evildoers.
Strong IT Security skills benefit everyone (except the bad guys). Being made an example of by a hacker is one of the worst things that can happen. Being owned is learning the hard way.
Every day your organization’s vital information systems come under attack. Make sure you and your team have the knowledge necessary to prevent, detect, and resolve the threats and incidents that could result in loss of money, integrity, confidentiality, and availability.
Risks, threats, and vulnerabilities are highly interrelated. Their relationship can be expressed by this simple formula: Risk (due to a threat) = Threat x Vulnerability (to that threat) This formula shows that risk is directly related to the level of threat and vulnerability you, your systems, or your network face. Here is how the formula works: If you have a very high threat, but a very low vulnerability to that threat, your resulting risk will be only moderate. For example, if you live in a high crime neighborhood (thus, high threat) but you keep your doors and windows locked (so you have low vulnerability to that threat), your overall risk is moderate. If you have a high vulnerability to a threat (by keeping your doors and windows unlocked), but the threat itself is minor (by living in a safe neighborhood), once again you have only a moderate risk factor. If however, you have a high level of threat potential (a high crime area) and your vulnerability to that threat is very high (no locks), you have a very high risk factor.
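The formula above can be sketched in a few lines of Python. Note that the 1–10 scoring scales below are an assumption for illustration only; the slide does not prescribe any particular scale.

```python
# Toy illustration of Risk = Threat x Vulnerability.
# The 1-10 scales are assumed for this example, not part of the slide.

def risk(threat: int, vulnerability: int) -> int:
    """Risk score (1-100), given threat and vulnerability scores (1-10)."""
    return threat * vulnerability

# High-crime neighborhood, locked doors: moderate risk.
print(risk(threat=9, vulnerability=2))   # 18
# Safe neighborhood, unlocked doors: also moderate risk.
print(risk(threat=2, vulnerability=9))   # 18
# High-crime neighborhood, unlocked doors: very high risk.
print(risk(threat=9, vulnerability=9))   # 81
```

The two "moderate" cases produce the same score, which mirrors the slide's point that a high threat with low vulnerability and a low threat with high vulnerability both land in the middle of the risk range.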
What exactly about the system or information do we wish to protect? Traditionally, information security professionals focus on ensuring confidentiality, integrity, and availability. Simply “CIA,” in “infosec” jargon. These are the bedrock principles about which we will be concerned. When first exploring any new business application or system, it is a good habit to begin thinking about confidentiality, integrity, and availability – and countermeasures for protecting these, or the lack thereof. Attacks may come against any or all of these. Let us use an example: You have been assigned to oversee the security of your employer’s new e-commerce site, its first attempt at conducting business directly on the Internet. How do you approach this? What should you consider? What could go wrong? Think C-I-A confidentiality, integrity, and availability. Customers will expect that the privacy of their credit card numbers, their addresses, and phone numbers, and other information shared during the transaction be ensured. These are examples of confidentiality. They will expect quoted prices and product availability to be accurate; the quantities they ordered and the prices to which they agreed not to be changed; and anything downloaded to be authentic and complete. These are examples of integrity. Customers will expect to be able to place orders when convenient for them, and the employer will want the revenue stream to continue without disruption. These are examples of availability. Keep in mind that the dimensions we have been discussing can be interrelated. An attacker may exploit an unintended function on a Web server and use the cgi-bin program “phf” to list the password file. Now, this would breach the confidentiality of this sensitive information (the password file). Then, in the privacy of his own computer system, the attacker can use brute force or dictionary-driven password attacks to decrypt the passwords. 
With a stolen password, the attacker can execute an integrity attack when he gains entrance to the system. And he can even use an availability attack as part of the overall effort to neutralize alarms and defensive systems so they cannot report his existence. When this is completed, the attacker can fully access the target system – and all three dimensions (confidentiality, integrity, and availability) would be in jeopardy. Always think C-I-A.
We chose a very simple, well-known attack for a reason. A large number (in fact, an embarrassingly large number) of corporate, government, and educational systems that are compromised and exploited are defeated by these well-known, well-publicized attacks. Most of the time an attack doesn’t have to be the latest and greatest in order to be successful. Countless attacks, covering years of experience, are detailed on the Internet and in books and courses. Often these are still viable, especially when the security teams are not practicing defense-in-depth. Which pillar of the CIA triad is most important to your organization? At SANS, we rely on our online resources for registration and online training. Without our online resources we are unable to provide services to our students. Because we cannot operate without students, our priority is availability. After availability, the next most important dimension of CIA is integrity. SANS is the most trusted source for computer security training, so our information must be correct. Because the bulk of our information is protected by copyright, even though we have some trade secrets, confidentiality is the least important CIA pillar to SANS. Different organizations will have different priorities in the CIA triad. Confidentiality is usually very important to health-care-oriented organizations, and integrity is important to financial institutions. Understanding the priorities for your organization is a tremendous help in prioritizing security plans for your organization, from design to incident response.
We have been talking about what we need to protect – the confidentiality, integrity, and availability of our systems. Next, we’ll discuss what we need to protect them from – the threats to them and their vulnerabilities to those threats. We’ll see how risk is a function of threat and vulnerability. Now, not all the bad things that happen to computer systems are attacks per se. They include fires, water damage, mechanical breakdowns, accidental errors by system administrators, and plain old user error. But all of these are called threats. We use threat models to describe a given threat and the harm it could do if the system has a vulnerability. There are a large number of approaches to threat modeling, but one that you should consider is the one used by Microsoft: http://www.microsoft.com/downloads/details.aspx?FamilyID=62830F95-0E61-4F87-88A6-E7C663444AC1&displaylang=en (or type “threat model” into Google).
In security terms, a vulnerability is a weakness in your systems or processes that allows a threat to occur. However, simply having a vulnerability by itself is not necessarily a bad thing. It is only when the vulnerability is coupled with a threat that the danger starts to set in. Let us look at an example. Suppose you like to leave the doors and windows to your house unlocked at night. If you live in the middle of the woods, far away from anyone else, this may not be a bad thing. There really are not many people who wander around, and if you are high enough on the hill, you will be able to see them coming long before they present a danger. So, in this case, the vulnerability of having no locks is present, but there really isn’t any threat to take advantage of that vulnerability. Now suppose you move to a big city full of crime. In fact, this city has the highest burglary rate of any city in the country. If you continue your practice of leaving the doors and windows unlocked, you will have exactly the same vulnerability as you had before. However, in the city the threat is much higher. Thus your overall danger and risk is much higher. Vulnerabilities are the gateways by which threats are manifested. Therefore, we can think of threats as the agents of risk, the mathematical probability of loss. Without vulnerabilities, threats do not pose a risk to the organization. Of course, vulnerabilities do not exist solely as software flaws. Vulnerabilities can be flawed configurations, poor physical security, poor hiring practices, etc. When we couple vulnerabilities with threats, we introduce risks to an organization. Vulnerabilities can be reduced or even prevented, provided, of course, that you know about them. The problem is that many vulnerabilities lie hidden, undiscovered until somebody finds out about them. Unfortunately, that somebody is usually a bad guy. The bad guys always seem to find out about vulnerabilities long before the good guys.
Let’s look at threats to our systems and take a “big picture” look at how to defend against them. Protections need to be layered – a principle called defense-in-depth. We’re going to talk about some principles that will serve you well in protecting your systems and use actual real-world attacks that were “successful” to illustrate these points. We’ll examine why the attacks were successful and, more importantly, what measures someone could have taken to lessen the impact or to stop them altogether – practical defense-in-depth.
Network security is a comprehensive, integrated approach in which multiple solutions are tiered together to accomplish a goal. There is no single security solution that will make an organization secure, because any single measure could be bypassed (and miss an attack altogether) or compromised. When protecting any entity, take the President for example, there are many people, measures, and systems put into place to keep him secure. The same robust approach needs to be applied to your network or any critical asset at your organization. When it comes to network security there is no silver bullet. Multiple measures that complement each other must be put in place across a variety of control options. For example, you would deploy a preventive measure such as a firewall, a detective measure such as an IDS, and a deterrent measure such as a guard at your front gate, just to name a few. Even if one of the measures failed, the other measures would be able to detect the attack before there was a problem – or catch an attack in action – to minimize the amount of damage caused.
The concept behind defense-in-depth is simple. The picture we have painted so far is that a good security architecture, one that can withstand an attack, has many aspects and dimensions. We need to be certain that if one countermeasure fails, there are more behind it. If they fail, we need to be ready to detect that something has occurred, clean up the mess expeditiously and completely, and then tune our defenses to keep it from happening to us again. We will now examine four approaches to defense-in-depth.
Uniform protection treats all systems as equally important. No special consideration, or protection, is given to the “crown jewels” of an organization. As a result, this approach can be more vulnerable to malicious insiders, because the systems are not separated or categorized within the network. The majority of attacks succeed because they take advantage of well-publicized vulnerabilities for which exploits have been created. The best answer is to patch the systems, but this takes time. Of all the approaches to defense-in-depth, this one can be the weakest, unless you have a good uniform protection design. This is by far the most common approach.
Protected enclaves involve segmenting your network. This can be done by implementing many VPNs across a single network, VLAN segmentation of switches, or firewalls to separate out the network.
This slide shows another way to think of the defense-in-depth concept. At the center of the diagram is your information. However, the center can be anything you value, or the answer to the question, “What are you trying to protect?” Around that center you build successive layers of protection. In the diagram, the protection layers are shown as blue rings. In this example, your information is protected by your application. The application is protected by the security of the host it resides on, and so on. In order to successfully get your information, an attacker would have to penetrate through your network, your host, your application, and finally your information protection layers. Information centric defense starts with an awareness of the value of each section of information within an organization. Identify the most valuable information and implement controls to prevent non-authorized employees from accessing it. A good starting point is to identify your organization’s intellectual property, restrict it to a single section of the network, assign a single group of system administrators to do it, mark the data, and thoroughly check for this level of data leaving your network.
Vector-oriented defense-in-depth involves identifying various vectors by which threats can manifest and providing security mechanisms to shut down the vector – for example, disabling USB thumb drives and floppy drives.
Let’s briefly look at access control to emphasize the importance of defense-in-depth. In order to protect critical assets you have to be able to identify, verify, approve, and track who has access to a given piece of intellectual property (IP). Identification is the process of claiming to be a certain person. Typing in a user ID is a form of identification. The problem is that anyone could claim to be a given entity, so how do you know that they are who they say they are? This is accomplished through authentication. Authentication is proving that you are who you say you are, and is done in four ways: Something you know – by remembering a piece of information and presenting it, you can prove that you are who you say you are. The best example of something you know is a password. Something you have – by possessing something, you prove that you are a given entity. Token-based schemes, in which you carry a token that generates a new password, are an example of something you have. If you have the token and can type in the numbers on the token screen you can authenticate; otherwise, you cannot. Something you are – an alternative way to authenticate is by presenting a unique attribute tied to your physical make-up. This is often called biometrics. Hand scans, thumbprints, and retina scans are all examples of biometrics. Someplace you are – GPS, or global positioning systems, can also be used to authenticate that you are in a given geographic area. With sensitive information you might want to allow someone to open a document only if they are within the walls of a five-sided building in Washington, DC. Once you have properly authenticated, you then have to determine what you are allowed or authorized to do within the system. Authorization should be based on the principle of least privilege, where an entity is given only the minimal access required to do their job.
Once access is granted using the principle of least privilege, you want to make sure individuals are held accountable for their actions and you can trace back what occurred on a system through detailed auditing. As you can see, all of the measures work together in synergy to properly protect critical assets.
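The authorization step described above can be sketched as a toy deny-by-default, least-privilege check. The role and permission names here are invented for the example; they are not from the slides.

```python
# Toy least-privilege check: a user may perform an action only if some
# role they hold explicitly grants it. Unknown roles and unlisted
# actions are denied by default. All names below are illustrative.

PERMISSIONS = {
    "hr_director": {"read_personnel_file"},
    "developer": {"read_source", "commit_source"},
}

def authorize(roles, action):
    """Return True only if at least one role explicitly grants the action."""
    return any(action in PERMISSIONS.get(role, set()) for role in roles)

print(authorize(["hr_director"], "read_personnel_file"))  # True
print(authorize(["developer"], "read_personnel_file"))    # False: not granted
print(authorize([], "read_source"))                       # False: deny by default
```

The key design choice is that anything not explicitly granted is denied, which is the least-privilege principle expressed in code.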
Now that we have looked at the role that identification, authentication, authorization, and accountability play, we will look at some principles associated with access control that you should utilize to make sure your security is as robust as it can possibly be. In assigning access you should give someone the least amount of access they need to do their job. However, this access should not be given all of the time; the access should only be granted when it is needed to perform a job function. For example, if I am the director of HR, the principle of least privilege would say that I need access to every employee’s personnel file. On the other hand, the need-to-know principle would say you should only give me access when I have to review a file during a performance assessment, and not all of the time. With least privilege we are allowing people to do their job; however, we are only giving them the minimal access needed and no more. In some situations this works. But what happens in the case where even the minimal access granted is still too great a risk and cannot be taken? In those cases, separation of duties needs to be implemented, where a given task is split between two individuals so that no single individual can make the decision alone. Separation of duties works; but the longer people work together, the more the power of separation of duties erodes, because people build trust. To minimize the chance of this occurring, rotation of duties needs to be performed. This is where people are rotated out of certain jobs at set intervals so the chance of two people colluding is minimized.
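The separation-of-duties principle can be sketched as a simple two-person rule: a sensitive task proceeds only when the requester and the approver are different people. The function name and the transfer scenario are illustrative assumptions, not from the slide.

```python
# Toy separation-of-duties check: a sensitive action requires a second,
# distinct person to approve it before it proceeds. Names are illustrative.

def approve_transfer(requester: str, approver: str) -> bool:
    """Allow the action only when requester and approver are different people."""
    return requester != approver

print(approve_transfer("alice", "bob"))    # True: two people involved
print(approve_transfer("alice", "alice"))  # False: one person cannot act alone
```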
The problem is that no matter how well you design and deploy your defense-in-depth, money is not infinite, technology is not perfect, and you can’t think of everything. Eric Cole, a SANS instructor, preaches “protection is ideal, detection is a must”. This is a good thing to take to heart. Be sure that when you are designing your security architecture you design in the ability to detect and/or analyze attacks that you didn’t plan for. This usually means controls such as secondary logging and network instrumentation designed in from the start.
The bad guys are checking out your network. If your controls are working, then it shouldn’t be a problem... should it? The goal of penetration testing is to test your security controls from an attacker’s point of view.
This is a generalized attack methodology used by an attacker. It begins with determining as much as possible about your company by researching publicly available sources to see what can be learned; this is called reconnaissance. During the reconnaissance phase the attacker does not need to touch your network. The second phase is usually scanning. This is where the attacker starts poking at your network to see what he can see, to find out what servers and applications you are showing to the world. Once he has found a potential target, the attacker will attempt to exploit any potential vulnerabilities to gain a toehold in your network. If he can gain purchase on your network, he will usually try to ensure he can maintain access and get in whenever he wants through the use of backdoors, trojans, zombie processes, or some other method. Then the skilled attacker will attempt to cover his tracks so you cannot detect his presence in your systems. He will endeavour to do this through modification of log files, installation of rootkits, removal of logins, and other methods.
Penetration testing closely mirrors the attacker’s methodology. The goal of the penetration test is to find the weak points in your defenses, document them, and hopefully fix them before an attacker can take advantage of them, so the tail end of the process involves analyzing and reporting on any issues you detect.
The preparation stage is probably the most critical. This is when you need to define the parameters of the penetration test. What machines and services are in scope, and which are out of scope? Who will do what? Are there any machines which must be avoided at all costs? How will we measure success? How long should the penetration testing project take, and when will the work be done? The most important consideration is documented permission. Once you have determined all the parameters of the pen test, summarize it in one or two pages and have it signed by someone with authority to approve it, and by all means, if the scope needs to expand, have it re-signed. Don’t skip getting permission. More than a few security people have found themselves in serious trouble for unapproved security testing.
To do a basic discovery scan in nmap:

nmap --top-ports 20 <address>
nmap --top-ports 20 192.168.1.0/24

-F is a fast scan; it scans the 100 most common ports instead of the default 1000.
nmap --top-ports 20 -A <host>
nmap --top-ports 20 -A 184.108.40.206

-A is the equivalent of -O (OS detection) and -sV (version and application detection), plus script scanning and traceroute.

Starting Nmap 4.76 ( http://nmap.org ) at 2008-10-30 13:21 Canada Central Standard Time
Interesting ports on 192.168.1.200:
PORT     STATE  SERVICE      VERSION
21/tcp   closed ftp
22/tcp   closed ssh
23/tcp   closed telnet
25/tcp   closed smtp
53/tcp   closed domain
80/tcp   open   http         Apache httpd 2.2.6 ((Fedora))
|_ HTML title: Rick Wanner's Web Page</title> <META NAME=&quot;description&quot; CONTE...
110/tcp  closed pop3
111/tcp  open   rpcbind
| rpcinfo:
|   100000 2 111/udp rpcbind
|   100024 1 834/udp status
|   100000 2 111/tcp rpcbind
|_  100024 1 837/tcp status
135/tcp  closed msrpc
139/tcp  closed netbios-ssn
143/tcp  closed imap
443/tcp  open   ssl/http     Apache httpd 2.2.6 ((Fedora))
|_ HTML title: Rick Wanner's Web Page</title> <META NAME=&quot;description&quot; CONTE...
445/tcp  closed microsoft-ds
993/tcp  closed imaps
995/tcp  closed pop3s
1723/tcp closed pptp
3306/tcp open   mysql        MySQL (unauthorized)
3389/tcp closed ms-term-serv
5900/tcp closed vnc
8080/tcp closed http-proxy
MAC Address: 00:48:54:8B:EB:B0 (Unknown)
Device type: general purpose
Running: Linux 2.6.X
OS details: Linux 2.6.9 - 2.6.25
Network Distance: 1 hop
OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 23.23 seconds
The fact is that the bad guys aren’t stupid. If anything they are getting increasingly smarter. We’ve deployed all these layers of security around our network, but we have to draw the line somewhere. You have to leave some ports opened so you can actually do business. Stretching the house analogy well beyond where we should… You’ve locked all the doors and windows, set the alarm, but the dog still needs to go in and out of the doggy door.
From scmagazineus.com - http://www.scmagazineus.com/Yahoos-HotJobs-site-vulnerable-to-cross-site-scripting-attack/PrintArticle/120008/
Attacks like SQL Injection truly demonstrate the need for a defense in depth strategy. Think about how web servers are set up at your organization. The system itself likely sits within the segment of a network that is Internet accessible. If you have done your due diligence, it is up to date with the most recent security patches and only the HTTP (80) and HTTPS (443) ports are open through the firewall. There are many layers of defense in this typical scenario, but none of them protect your organization against SQL Injection. A typical SQL Injection attack is demonstrated in this video. It runs over ports allowed through the firewall (80, 443) into a DMZ and doesn’t attempt to exploit any weaknesses that can be fixed with an operating system or web server patch. In many cases, SSL communications actually make network IDS and sniffers blind to the attack, since it rides an encrypted channel straight to the web server. The demonstrated attacks will be used to bypass authentication and gain access to unauthorized data. How can we protect ourselves against these attacks? As we see, typical defense in depth isn’t enough, and the attacker has the advantage; this entire exploit was performed with a standard web browser. Further security must be implemented within the software development lifecycle. Application developers must perform proper validation on all incoming input to ensure malicious commands are not being executed by remote users. Additional controls, such as a web application firewall, log monitoring, and event correlation software, may be implemented in addition to improved development practices. Open Web Application Security Project http://www.owasp.org/
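The authentication bypass itself is easy to reproduce on your own machine. A minimal sketch using Python's built-in sqlite3 (the table and credentials are made up for illustration) shows why string-built queries fail and parameterized queries don't:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Builds SQL by string substitution -- attacker input becomes SQL syntax.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

# Classic bypass: the injected quote closes the string literal, and
# ' OR '1'='1 makes the WHERE clause evaluate true for every row.
print(login_vulnerable("alice", "' OR '1'='1"))  # True -- logged in with no password
print(login_safe("alice", "' OR '1'='1"))        # False -- rejected
```

This is exactly the kind of input validation failure the development lifecycle controls above are meant to catch: the fix is a one-line change, but only if developers know to make it everywhere.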
# ./msfconsole                           - start Metasploit
msf > use windows/dcerpc/ms03_026_dcom   - the exploit to use; this is an older Windows RPC vulnerability
msf > setg PAYLOAD windows/exec          - if the exploit succeeds, try to execute something remotely
msf > setg CMD nc -L -p 80 cmd.exe       - the command to be executed; in this case, start a netcat listener on port 80
msf > setg RHOST 192.168.0.2             - the host to be attacked
msf > exploit                            - execute the attack
The lessons in defense in depth, configuration management, and malicious code can all be applied to this next demonstration. An attacker performs a quick port scan of your network range and discovers a pair of Windows systems. The first system is chosen for attack, and the attacker launches the Metasploit exploitation framework. A common Windows exploit is selected and Metasploit is configured to open up a listening command shell on the vulnerable system. Once the exploit is launched, the attacker connects to the back door and issues a command. If the attacker found the listening port to be blocked by a firewall, another exploit could be used to initiate an outbound command shell effectively bypassing the controls. This attack would not be possible if proper patch management procedures were in place and followed. Many organizations have patch management solutions, but sometimes systems slide through the cracks or legacy software does not support the latest service pack leaving the entire system vulnerable. Firewalls won’t always protect systems against exploitation as some ports must remain open for functionality purposes. The ease of exploitation can be shocking if you haven’t seen this type of demonstration before. It takes little effort to perform (or even automate) this attack. This exploit was used in the Blaster worm in 2003 that infected machines all over the world. All it takes is one accessible vulnerable system or one rogue infected laptop to bring a devastating worm or exploit into your organization.
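The firewall-bypass point is worth illustrating. A bind connection waits for the attacker to connect in, so inbound filtering stops it; a reverse connection dials out from the victim to the attacker, which most firewalls allow. A toy sketch of the two connection directions (plain sockets only, no exploit code; hosts and ports are illustrative, and both "attacker" and "victim" run on localhost here):

```python
import socket, threading

def bind_style(port):
    """Victim listens; attacker connects in. Inbound filtering blocks this."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()          # waits for the attacker to connect
    conn.sendall(b"shell-ready")
    conn.close(); srv.close()

def reverse_style(attacker_host, attacker_port):
    """Victim dials out to the attacker's listener. Outbound is rarely filtered."""
    s = socket.socket()
    s.connect((attacker_host, attacker_port))
    s.sendall(b"shell-ready")
    s.close()

# Simulate the reverse case: the "attacker" listens, the "victim" calls out.
attacker = socket.socket()
attacker.bind(("127.0.0.1", 0))
attacker.listen(1)
port = attacker.getsockname()[1]
t = threading.Thread(target=reverse_style, args=("127.0.0.1", port))
t.start()
conn, _ = attacker.accept()
print(conn.recv(16))                # b'shell-ready'
conn.close(); attacker.close(); t.join()
```

This is why egress filtering matters as much as ingress filtering: if your firewall lets any internal host make arbitrary outbound connections, a reverse shell walks right through it.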
The only commercial exploitation framework that I know of is Core Impact. As with most of these tools, the big difference over the open-source alternatives is the reporting capabilities, although Core Impact is also a fair bit easier to use than Metasploit.
Think about your audience. In most cases they will be executives who don’t give a hoot that you compromised a Solaris 8.0 box using a box cutter and two pieces of twine. What they care about is what it means to the corporation. The best type of report for this audience uses a risk-based approach and describes what the root causes of the failures are and how they should be addressed. It is usually best to write your recommendations citing standards or best practices as their basis. I usually like to write two reports in one, each with two sections:

Executive Summary (1 page maximum)
Executive Report (3-5 pages maximum)

Technical Summary (3-5 pages maximum)
Detailed Technical Report (???? pages)
Now that you are aware of threats, let’s take a look at how to handle an incident once it occurs.
What you just heard was an example of incident handling. Incident handling is the action or plan for dealing with intrusions, cyber-theft, denial of service, and other computer security related events. Your Incident Handling Plan should include hooks to your general Disaster Recovery and Business Continuity Plans that deal with fire, floods, and other disastrous events. The scope of incident handling is greater than just intrusions; it covers insider crime, and intentional and unintentional events that cause a loss of availability. Furthermore, intellectual property is becoming more and more important as we move into a primarily information age. Types of intellectual property include brands, proprietary information, trade secrets, patents, copyrights, and trademarks. The other key point of the definition is the notion of action. Sitting there watching is not incident handling. Identifying an incident is important, but you must act on that information to secure your systems in a timely manner. The best way to act on an incident and minimize your chance of a mistake is by having proper procedures in place. Well-documented procedures make sure that you know what to do when an incident occurs and minimize the chances that you will forget something.
It does not matter how big your company is or what type of business you are in; sooner or later you are going to have an incident. Companies of all sizes and types have incidents. In some extreme cases, those that were not prepared and did not handle it correctly are no longer around to talk about it. When it comes to dealing with an incident, it is not a matter of IF an incident is going to occur but WHEN it is going to occur. Unfortunately, some companies choose to deal with an incident by ignoring it. However, as you can imagine, this is very risky to do. I bring this up because some companies say, “I have never had an incident in two years, so why do I have to worry about it?” In this case, the truth of the matter is, they have probably had several incidents. Yet, since they failed to detect them, these organizations took a stance of ignoring each incident. As we stated, this practice is very dangerous, and it is only a matter of time until it catches up with you. One of the reasons for a module on incident handling is this central idea: planning is everything. If you are prepared and know what to do, dealing with an incident can be fairly straightforward. On the other hand, if it catches you off-guard, there can be many sleepless nights.
This slide and the next one are for the purpose of defining what we mean when we use a word like “incident” or “event.” Incident, as we are using it, refers to actions that might result in harm or the significant threat of harm to your computer systems or data. Looking for incidents involves finding deviations from the normal state of the network and systems. There are several important points for an incident handler that flow from this definition. First, because we are dealing with harm or potential harm, our task is to limit the damage. We want to be careful to choose courses of action that do not cause further harm. Secondly, your organization may well have a right to redress. There are criminal and civil law remedies associated with computer incidents. In either case, the incident handler should proceed in a manner that does not preclude use of the evidence gathered in a court setting. A handler does not know in advance whether a given case will go to court. Although only a small fraction of most cases end up in court, you need to treat all of them from the outset as though they may go to court. Don’t worry; that’s not an enormous burden. It just means doing your job thoroughly and documenting your actions carefully.
Events are observable, measurable occurrences in our computer systems. An event is something that happened that someone either directly experienced or that you can show actually occurred. An event is something that you saw flash on the screen, or that you heard. It can also be something that you know occurred because it was collected in a log or audit file. In the back of the SANS Incident Handling Step-by-Step book (included as a supplemental download with the online version of SEC 505: Incident Handling Step-by-Step and Computer Crime Investigation at www.sans.org/incidentforms/), there are forms which can help you write down the information that should be documented; they can help you to be alert for the things for which you should be looking. The forms’ copyright allows you to make all the copies you want, and if you have suggestions for improvement, please send them to firstname.lastname@example.org. If there is any chance of the incident ending in a court case, having corroborating information is better than a single source claiming an event happened. For instance, if two people saw a message flash on a screen, that will likely have more validity in court than if one person saw it. Further, attackers sometimes use tools to alter or delete their traces in log files. If you can produce two independent sources for the information, your evidence has more validity. This is one reason we really push intrusion analysts to become familiar with a large number of log formats.
Preparation: The goal of the preparation phase is to get our team ready to handle incidents. Preparation includes everything from getting the right people on the team to having a plan of action and communication when an incident occurs. The team needs to ensure proper policy is in place, required computing resources are available, and that all forms and documentation are ready for use. Don’t underestimate the importance of a warning banner during the preparation phase. Warning banners are very important to an incident handler; they make a major difference in the amount of trouble you have to go through to collect and use evidence.

Identification: How do you detect an incident? The bulk of all detects will come from either sensor platforms or the things people just happen to notice. Sensors include firewalls, intrusion detection systems, and system logs, especially with log-watcher software. To increase your chance of detection, you may wish to consider burglar alarms sprinkled throughout your organization, including personal firewalls and intrusion detection systems.

Containment: The goal of containment is to keep the problem from getting worse. Before we fire, we really should take the time to aim! Try to do a decent survey and review of the situation before altering the system. When an incident handler first arrives on location, there is a chance that the system is pristine in terms of evidence and information. As soon as the handler starts to recover the system, there is a point at which the evidence starts to become contaminated. If at all possible, the system backup should come before this point so there is a copy of the unaltered system. Always let your management sponsor know that you are in incident mode, either via e-mail or, for a more serious incident, with a phone call or visit. If you do not have a formal incident team reporting structure, you should advise your manager and the security point of contact at a minimum.
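One concrete habit that supports the containment step: hash every image or log you collect before anyone touches the system further. If the hash matches later, the copy is demonstrably unaltered, which matters if the case goes to court. A minimal sketch (the filename in the comment is illustrative):

```python
import hashlib

def evidence_hash(path, algorithm="sha256"):
    """Hash a file in chunks so large disk images never need to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
            h.update(chunk)
    return h.hexdigest()

# Record the digest alongside the copy, e.g. on your containment checklist:
# print(evidence_hash("suspect-disk.img"))
```

The same result is available from command-line tools such as sha256sum; the point is to record the value in your documentation at collection time, not the specific tool.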
It takes time to mobilize people; as soon as the incident is identified you may wish to put them on alert.
Eradication: Now, with the bleeding stopped, the goal of the eradication phase is to get rid of the attacker's artifacts on the machine, including accounts, malicious code, pirated software, porn, or anything else the bad guy left behind. Recovering from a simple malware infection or worm may be as easy as restoring a known-good backup of the system, but a rootkit infection typically creates the need for a complete system rebuild. Reformatting and reinstalling the operating system from scratch may seem like a valuable shortcut in the handling process. While it is certainly true that total destruction of the contents of the disk will take care of any malevolent code, the opportunity for re-infection via the same channel still exists after you reload the operating system. There are many cases where handlers have taken systems down and reloaded the operating system only to have the box compromised again a few days later. The best course of action is to determine what the cause of the incident was, find the vector of infection, and take action to prevent it from happening again.

Recovery: The decision to place the system back into production falls upon the system owner. Keep in mind that after you, the incident handler, have touched the machine, everything that breaks will be blamed on you. Be sure to get the owner of the machine to sign off that it is back in full operation. Make every effort to ensure the system is working properly before leaving the scene. In some organizations, if some functionality is not present, the default stance is to blame the incident handling team. You need to proactively avoid such a situation by having the business unit test the machine before it goes back into production.

Lessons Learned: The only one who really can or will write the report is the on-site handler. The handler submits the draft to the head of the incident handling team.
The team chief edits the document and interacts with the handler to make sure the document reflects what actually occurred, in light of the organization's culture. We should allow everyone involved to review the draft, and have everyone involved in handling the incident sign off on the report, agreeing to its contents.
You heard the six primary steps in incident handling. They are preparation, identification, containment, eradication, recovery, and lessons learned. The steps serve the handler as a compass or a roadmap, a way to keep in mind what they are trying to do and the things they need to do next. The steady-state, day-to-day practices of most incident handlers are the first two steps: preparation and identification. We spend a lot of our time getting ready to fight the next battle, and looking for events that could be signs of trouble. Once we’ve identified an incident (that is, events that indicate harm or the attempt to do harm), we move into containment. Then, the general flow is down the page: you move from containment to eradication to recovery to lessons learned. Don’t skip steps! Also, a caution: try to complete an entire given step in the containment and later phases before moving to the next phase for a single incident. In other words, for one incident, don’t contain it partially on a few systems, and then move to eradication on those machines while containment on other systems begins. Do all of containment first, then move to eradication, and so on. You will likely get organizational push-back on such an approach, but it is really the best way to successfully handle incidents. Also, while the general flow of this process is down the page, sometimes you have to jump back up given changing circumstances. You might be in the midst of the recovery phase when your attacker or malicious code sneaks back in. You’ve got to be flexible enough to jump back and redo the containment phase, then eradication, and then return to recovery.
Deleted data, whether the deletion was accidental or malicious in nature, is often still intact on the file system. When a file is deleted, its physical location on the storage device is merely marked as free space. This means that the data itself remains intact until new files on the system begin to reuse the storage locations occupied by the previously deleted file. This demonstration shows how easy it can be to recover deleted data. Though the demo shows the recovery of deleted images on a memory card, the exact same methods can be used to forensically recover deleted files from a PC that is part of a criminal investigation. First, a bit-for-bit image of the storage device is taken so that both allocated and unallocated data can be analyzed and the original device is left intact. Next, a forensic analysis tool, Autopsy, is used to study the file system and look for deleted files. Those deleted files are recovered and viewed to demonstrate success. Many free bootable forensic Linux distributions exist, the most popular of which include Helix and Trinity. http://www.e-fense.com/helix/ http://trinityhome.org/
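The recovery step relies on exactly that fact: "deleting" only frees the blocks. Tools such as Autopsy walk the file-system structures, but the simplest form of recovery is carving: scanning the raw image for known file signatures. A toy sketch against a fake image built in memory (the JPEG magic bytes are real; the disk contents are made up for illustration):

```python
JPEG_SOI = b"\xff\xd8\xff"   # JPEG start-of-image signature
JPEG_EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(image):
    """Scan a raw disk image for JPEG start/end markers; return the spans found."""
    recovered, pos = [], 0
    while True:
        start = image.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = image.find(JPEG_EOI, start)
        if end == -1:
            break
        recovered.append(image[start:end + len(JPEG_EOI)])
        pos = end + len(JPEG_EOI)
    return recovered

# Fake "unallocated space": zeroed blocks surrounding a deleted photo's data.
fake_photo = JPEG_SOI + b"...pixels..." + JPEG_EOI
disk_image = b"\x00" * 64 + fake_photo + b"\x00" * 64
print(len(carve_jpegs(disk_image)))  # 1 -- the deleted file's bytes were still there
```

Real carvers (and file-system-aware tools like Autopsy) handle fragmentation and validate the recovered data, but the core idea is this simple, which is also why secure wiping, not deletion, is required before disposing of media.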
This is a list, in approximate chronological order, of the mistakes that are most likely to occur in the incident handling process. A good handler thinks a few steps ahead and tries to avoid the problems. Of course unexpected things happen. Don’t lose your cool if a re-infection occurs, or anything like that. It doesn’t mean you are not a good handler, but if you can avoid mistakes, you might well be able to get home, get that shower and jump into bed several hours earlier.
Law enforcement agents tell story after story of well-meaning system administrators who ruined the evidence – usually just a couple of minutes after the incident. You need to act, but take time to think. There is a critical point to this story. No one can run so fast that they can outrun a computer with a 3 GHz multi-core processor attached to Gigabit Ethernet. More importantly, when one is working as root, administrator, or supervisor, there are many operations that do not have an “undo”. To help you stick to the Six-Step Process, please use the forms available on the SANS website. They provide a template for the useful information you need to capture during an incident. The FREE forms at this site include: Incident Contact List, Identification Checklist, Survey, Containment Checklist, Eradication Checklist, and Communications Log. And, for further materials, NIST has developed a Computer Security Incident Handling Guide that covers the same bases we do here. It’s a solid read, and goes hand-in-glove with this material as well. You can get it at no charge from http://www.csrc.nist.gov/publications/nistpubs/800-61.pdf.
The attacker community cooperates with one another (albeit sometimes in an antisocial manner). They share hacked accounts, exploits, and tricks of the trade. Within the security community, we often don’t share. There is some idea that the fact that we came under attack is a big secret. This will not come as a surprise, but virtually everyone connected to the Internet comes under attack. Eventually your organization is bound to take a hit. You can learn from that and you can share, and by doing so others can learn. If your attackers share and you don’t, your organization is outnumbered – big time! So how can you share attack and incident information? You can post something to bugtraq at www.securityfocus.com, or submit information to the handlers’ list at the Internet Storm Center (isc.sans.org). The handlers’ list always has an experienced handler on duty, waiting for reports to come in. Each day, the handlers’ diary is updated with the latest information about computer attacks. You should check it out!
As a handler, you make the call on what is necessary in your incident handling processes and procedures. As you develop or refine your procedures, we just ask you to consider the information we provide on incident handling. If an incident occurred, would you be thankful if you had done, bought, or prepared a given countermeasure? Perhaps the rephrase of this thought is the best way to look at things, though: If an incident occurred and you had NOT done this, would you be really sorry? There is nothing in this information that is an absolute; the nature of incident handling requires us to be flexible and to adjust the processes to meet the circumstance. There are numerous different valid approaches to handling incidents and what applies in one case may not in another. These slides represent a synthesis of best practices.
The defense-in-depth material is from SEC401: Security Essentials Bootcamp Style, the penetration testing material is related to SEC560: Network Penetration Testing and Ethical Hacking, and the incident handling overview is from SEC504: Hacker Techniques, Exploits and Incident Handling. All three courses feature GIAC certifications.
This page intentionally left blank.
Why choose SANS courses and GIAC certifications? The SANS Institute is the leading training organization for system administration, audit, network, and security professionals. GIAC (Global Information Assurance Certification) provides certifications that validate the skills of security professionals.
Education and Community are the guiding principles of SANS and of GIAC. SANS’ goal for a number of years has been to provide the best technical training, delivered by the best instructors. In this, we have a proven track record. Many of the core SANS courses now form the basis of the GIAC certification program. In the past, our efforts have focused on “live” classroom training at conferences. While this provides an excellent educational forum, it limits us in both time (how often we can offer courses) and space (seating limitations). Another difference between SANS/GIAC and other programs is that SANS and GIAC are constantly evolving. SANS courses and GIAC objectives are not static – and therefore they don’t become dated. Information security (like technology in general) is a rapidly changing field. Our material is revised on an ongoing basis – generally, every few months. Student feedback and new technical developments lead to new consensus on best practices, which are incorporated into GIAC material through instructor revisions…and the cycle begins again. Courses are revised, exams updated to reflect new material, new practical assignments developed to build on earlier research. GIAC continues to raise the bar, setting new standards for excellence. In addition, GIAC has a very strong community focus. One of GIAC’s primary goals is to continually advance the defensive state of practice of information security. We do this not only through education, but also by sharing our research with others so that they too can continue to learn. Community consensus drives our curriculum and shapes the future direction of the program. Public disclosure on our web site – through GIAC and www.incidents.org, through consensus documents, through the research of GIAC certified professionals – provides free public information and education.
SANS and GIAC constantly update course and certification information to keep you on top of current threats and vulnerabilities. We use real-world, hands-on scenarios. While tools are an important part of the IT security toolbox, we teach you actual skills so you don’t have to rely on a tool. The SANS Promise: you will be able to apply our information security training the day you get back to the office.
GIAC offers a series of certification levels to assess the different degrees of knowledge mastery a student possesses in specific subject areas. Early in 2005, GIAC announced a major shift: a written practical assignment was no longer required to obtain any GIAC Certification. All of the base GIAC certifications assess knowledge through online multiple choice exams, and they assess industry standard practices and scenario based knowledge. The current GIAC exam system assesses a wider range of material than the original written practical. Students who scored at least 70 on their exams for their certification have earned GIAC SILVER. Please note that SANS Technology Institute students must score an 80 or above to receive STI credit. Those students who have earned a GIAC certification and want to take their learning to the next level have the option to apply for GIAC Gold. GIAC Gold requires the candidate to research and write a technical report based on a specific aspect of the core certification that would benefit the info-sec community. Students attempting GIAC Gold will have an advisor to work with throughout the development of their project. The GIAC Platinum series is the top of the line certification. The platinum level requires multiple GIAC certifications in a specific discipline and involves many days of additional testing. The platinum series ensures that an individual is a true subject area expert.
GIAC certifications verify that an individual has a working understanding of a specific information security discipline. GIAC certified individuals prove on a day-to-day basis that they can secure systems and apply the knowledge they purport to possess. Would you want someone without a driver’s license behind the wheel of your new car? The more qualified security professionals there are, the better protected our Internet neighborhoods become. It is much like having more police officers watching over us, or at the very least a really strong Neighborhood Watch group. Our “neighborhood” is worldwide, so we need a lot of qualified “police officers” to do the job right. Increased recognition of the importance of computer and information security in general and a growing recognition of the quality of the GIAC program have led to prominent recognition. Many large companies and government agencies (for example: State Farm, the National Security Agency, Northrop Grumman, Symantec, and the Department of Energy) now request or require GIAC certification for new job candidates. US Department of Defense directive 8570 is an enterprise-wide program to train, certify, and manage the DoD Information Assurance (IA) workforce, requiring technicians and managers to be trained and certified to a DoD baseline requirement. GIAC certifications serve as a benchmark for five of the six defined job levels within the DoD 8570 program. In addition to the personal benefit, a certification is also a manager’s tool. First, it is a way to verify the time and money you have invested in an employee’s education; your employee can walk away with something tangible to show for it. Second, it is a way for a new manager to know that an employee is capable, because they have the credentials to show they know what they are talking about.
This page intentionally left blank.
GIAC has been an industry leader in information security certifications for years. The number of certifications has grown with the demands of students, new threats, and new technologies. Each GIAC certification is designed to stand on its own, and represents a certified individual's mastery of a particular set of knowledge and skills. There is no particular "order" in which GIAC certifications must be earned, though we recommend that candidates master lower level concepts before moving on to more advanced topics.
SANS and GIAC offer a variety of free resources readily available on the web. The Internet Storm Center, or ISC, provides a free analysis and warning service to thousands of Internet users and organizations, and is actively working with Internet Service Providers to fight back against the most malicious attackers. Top 15 Malicious Spyware Actions – Spyware authors have ramped up their malicious code to invade users' privacy at unprecedented levels. The list on this page describes some of the most malicious activities of today's spyware, illustrating the need for solid antispyware defenses. SANS Security Policy Samples – a consensus research project of the SANS community. The ultimate goal of the project is to offer everything you need for rapid development and implementation of information security policies. The Internet Guide to Popular Resources on Information Security is an FAQ providing answers to common information requests about computer security, plus links to additional reading. More FAQs – you will also find FAQs regarding intrusion detection and malware. SCORE is a community of security professionals from a wide range of organizations and backgrounds working to develop consensus regarding minimum standards and best practice information, essentially acting as the research engine for CIS. Security Tool White Papers – a collection of white papers to help you research and find the security tools that best fit your needs. Glossary of Security Terms – a comprehensive list of terms used in computer security and intrusion detection.
Thanks for coming. We hope you have gained some valuable information from this presentation. Please let us know if you have any questions about SANS training or GIAC certifications. And do not forget to sign up for your free GIAC assessment!