Why Security?

The Internet today is a widespread information infrastructure. Because it is built around the free flow of information, it is inherently an insecure channel for exchanging information. Internet security is the set of rules and measures taken against attacks on that exchange. The National Institute of Standards and Technology (NIST) oversees many of these security issues and sets forth rules and regulations. Different methods have been used to protect the transfer of data over the Internet. These include encryption, in which data is transformed and a decryption key is provided to the receiver of the data (a brief illustration appears at the end of this section).

Security concerns are in some ways peripheral to normal business working, but they serve to highlight just how important it is that business users feel confident when using IT systems. Security will probably always be high on the IT agenda simply because cyber criminals know that a successful attack can be very profitable. This means they will always strive to find new ways to circumvent IT security, and users will consequently need to be continually vigilant. Whenever decisions need to be made about how to enhance a system, security must be held uppermost among its requirements.

Internet security professionals should be fluent in the four major aspects:

• Penetration testing
• Intrusion detection
• Incident response
• Legal / audit compliance

A hacker who compromises or impersonates a host will usually have access to all of its resources: files, storage devices, phone lines, and so on. From a practical perspective, some hackers are most interested in abusing the identity of the host, not so much to reach its dedicated resources as to launder further outgoing connections to other, possibly more interesting, targets. Others might actually be interested in the data on your machine, whether it is sensitive company material or government secrets. Machines with sensitive files may require extra levels of passwords or even file encryption. Computer security is not a goal; it is a means toward a goal: information security.

The figure shows two measures of the growth of the Internet. The top plot shows a count of hosts detected by automated sweeps of the Internet; the counts for recent years are certainly on the low side of the actual number, since there is no reliable technology available to count all the computers connected to a large internet. The lower plot shows the number of networks registered on NSFnet over the past few years. Note that the vertical scale on both charts is logarithmic: the growth is exponential.
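The passage above mentions encryption, in which data is transformed and the receiver uses a key to recover it. As a minimal, hedged sketch (not part of the original text), the following Python snippet uses the third-party cryptography package's Fernet symmetric scheme; how the shared key reaches the receiver is assumed to be handled by some separate, secure channel.

    # Minimal sketch: symmetric encryption of data in transit.
    # Assumes the third-party "cryptography" package is installed
    # (pip install cryptography); key distribution is out of scope here.
    from cryptography.fernet import Fernet

    # Sender and receiver share this key via some secure channel.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    plaintext = b"quarterly sales figures"   # data to protect
    token = cipher.encrypt(plaintext)        # transformed for transit

    # Receiver side: the same key decrypts the token.
    recovered = Fernet(key).decrypt(token)
    assert recovered == plaintext

In practice, protocols such as TLS combine a public-key key exchange with symmetric encryption of this kind.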
Picking a Security Policy

A security policy is the set of decisions that, collectively, determines an organization's posture toward security. More precisely, a security policy determines the limits of acceptable behavior and what the response to violations should be. Naturally, security policies differ from organization to organization. An academic department in a university has different needs than a corporate product development organization, which, in turn, differs from a military site. But every organization should have one, if only to let it take action when unacceptable events occur. The first step, then, is to decide what is and is not permitted.

A key decision in the policy is the stance of the firewall design. The stance is the attitude of the designers; it is determined by the cost of a failure of the firewall and the designers' estimate of the likelihood of such a failure. We have tried to give our firewalls a fail-safe design: if we have overlooked a security hole or installed a broken program, we believe our firewalls are still safe. Compare this approach to a simple packet filter. If the filtering tables are deleted or installed improperly, or if there are bugs in the router software, the gateway may be penetrated. This non-fail-safe design is an inexpensive and acceptable solution if your stance allows a somewhat looser approach to gateway security. For reasons spelled out below, we feel that firewalls are an important tool that can minimize the danger while providing most, but not necessarily all, of the benefits of a network connection.
Axiom 1 (Murphy): All programs are buggy.

Theorem 1 (Law of Large Programs): Large programs are even buggier than their size would indicate.
Proof: By inspection.

Corollary 1.1: A security-relevant program has security bugs.

Theorem 2: If you do not run a program, it does not matter whether or not it is buggy.
Proof: As in all logical systems, false → true ≡ true.

Corollary 2.1: If you do not run a program, it does not matter if it has security holes.

Theorem 3: Exposed machines should run as few programs as possible; the ones that are run should be as small as possible.
Proof: Follows directly from Corollaries 1.1 and 2.1.

Corollary 3.1 (Fundamental Theorem of Firewalls): Most hosts cannot meet our requirements: they run too many programs that are too large. Therefore, the only solution is to isolate them behind a firewall if you wish to run any programs at all.

Our math, though obviously not meant to be taken seriously, does lead to sound conclusions. Firewalls must be configured as minimally as possible, to minimize risk. We configure our firewalls to reject everything unless we have explicitly made the choice, and accepted the risk, to permit it. Taking the opposite tack, of blocking only known offenders, strikes us as extremely dangerous. If nothing else is said or implemented, the default policy is "anything goes." Needless to say, this stance is rarely acceptable in a security-conscious environment. If you do not make explicit decisions, you have made the default decision to allow almost anything (a sketch of the default-deny alternative appears below).

Strategies for a Secure Network
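One basic strategy is the default-deny stance just described. The following Python sketch is an added illustration, not from the original text: it models a packet filter that rejects everything not explicitly permitted. The Packet type, the allowed-service list, and the addresses are invented placeholders, not a real configuration.

    # Hedged sketch of a default-deny packet filter: everything is rejected
    # unless a rule explicitly permits it.
    from typing import NamedTuple

    class Packet(NamedTuple):
        src: str        # source address
        dst_port: int   # destination port
        proto: str      # "tcp" or "udp"

    # Explicit allow list: (protocol, destination port) pairs we have
    # consciously decided to permit, accepting the associated risk.
    ALLOWED = {
        ("tcp", 25),    # inbound mail to the gateway
        ("tcp", 443),   # HTTPS to the public web server
    }

    def filter_packet(pkt: Packet) -> str:
        """Return 'accept' only for explicitly permitted traffic."""
        if (pkt.proto, pkt.dst_port) in ALLOWED:
            return "accept"
        return "drop"   # the default decision is to refuse

    # Example: unsolicited telnet traffic is dropped by default.
    print(filter_packet(Packet("203.0.113.7", 23, "tcp")))    # drop
    print(filter_packet(Packet("198.51.100.2", 443, "tcp")))  # accept

The design choice mirrors the fail-safe argument above: if the allow list is empty or fails to load, the filter still refuses traffic rather than letting everything through.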
Virus Definition

A computer virus is a program written with the function and intent of copying and dispersing itself without the knowledge and cooperation of the owner or user of the computer. A common definition is "a program which modifies other programs to contain a possibly altered version of itself." This definition is generally attributed to Fred Cohen from his seminal research in the mid-1980s. Currently, viruses are generally held to attach themselves to some object, although the object may be a program, disk, document, e-mail message, computer system, or other information entity.

Computer viral programs are not a "natural" occurrence. They do not just appear through some kind of electronic evolution; viral programs are written, deliberately, by people. However, the definition of "program" may include many items not normally thought of in terms of programming, such as disk boot sectors, and Microsoft Office documents or data files that also contain macro programming.

Many people have the impression that anything that goes wrong with a computer is caused by a virus. From hardware failures to errors in use, everything is blamed on a virus. A virus is not just any damaging condition. Similarly, it is now popularly believed that any program that may do damage to your data or your access to computing resources is a virus. Viral programs are not simply programs that do damage; indeed, viral programs are not always damaging, at least not in the sense of being deliberately designed to erase data or disrupt operations.

The first e-mail virus was spread in 1987. The first virus hoax message (then termed a "metavirus") was proposed in 1988. Viruses are the largest class of malware. Because a virus spreads and uses the resources of the host, it affords to software a kind of power that parallel processors provide to hardware. Therefore, some have theorized that viral programs could be used for beneficial purposes, similar to the experiments in distributed processing that are testing the limits of cryptographic strength. A virus generally requires an action on the part of the user to trigger or aid reproduction and spread. That action is generally a common function, and the user generally does not realize the danger of the action or the fact that he or she is assisting the virus. There is no necessity that the virus carry a payload, although a number of viruses do.

FIRST GENERATION VIRUSES

First generation viruses tended to spread very slowly but hang around in the environment for a long time.

Boot Sector Viruses: Viruses are generally partly classified by the objects to which they attach. Most desktop computer operating systems have some form of boot sector, a specific location on disk that contains programming to bootstrap the startup of a computer. Boot sector infectors (BSIs) replace or redirect this programming in order to have the virus invoked, usually as the first program run on the computer. Some viral programs, such as the Stoned virus, always attack the first physical sector: the boot sector on floppy disks and the master boot record on hard disks. Thus viral programs that always attack the boot sector might be termed "pure" BSIs, whereas programs like Stoned might be referred to as the "MBR type" (master boot record) of BSI.

File-Infecting Viruses: A file infector infects program (object) files. System infectors that infect operating system program files (such as COMMAND.COM in DOS) are also file infectors. File infectors can attach to the front of the object file (prependers), attach to the back of the file and create a jump at the front of the file to the virus code (appenders), or overwrite the file or portions of it (overwriters). An example is Jerusalem.

Polymorphic Viruses: Polymorphism (literally "many forms") refers to a number of techniques that attempt to change the code string on each generation of a virus. These vary from using modules that can be rearranged to encrypting the virus code itself, leaving only a stub of code that can decrypt the body of the virus program when invoked. Polymorphism is sometimes also known as self-encryption or self-garbling, but these terms are imprecise and not recommended. Examples of viruses using polymorphism are Whale and Tremor. Many polymorphic viruses use standard "mutation engines" such as MtE. These pieces of code actually aid detection because they have a known signature.

Virus Creation Kits: The term "kit" usually refers to a program used to produce a virus from a menu or a list of characteristics. Use of a virus kit involves no skill on the part of the user. Fortunately, most virus kits produce easily identifiable code. Packages of antiviral utilities are sometimes referred to as tool kits, occasionally leading to confusion of the terms.

MACRO VIRUSES (SECOND GENERATION)

A macro virus uses the macro programming of an application such as a word processor. (Most known macro viruses use Visual Basic for Applications in Microsoft Word; some are able to cross between applications and function in, for example, a PowerPoint presentation and a Word document, but this ability is rare.) Macro viruses infect data files and tend to remain resident in the application itself by infecting a configuration template such as MS Word's normal.dot. Macro viruses can operate across hardware or operating system platforms as long as the required application platform is present. (For example, many MS Word macro viruses can operate on both the Windows and Macintosh versions of MS Word.) Examples are Concept and CAP. Melissa is also a macro virus, in addition to being an e-mail virus: it mailed itself around as an infected document.

E-MAIL VIRUSES (THIRD GENERATION)

With the addition of programmable functions to a standard e-mail user agent (usually Microsoft's Outlook), it became possible for viruses to spread worldwide in mere hours, as opposed to months. Melissa was far from the first e-mail virus; the first e-mail virus to spread successfully in the wild was Christma.exec, in the fall of 1987.

Trojans

Trojans, or Trojan horse programs, are the largest class of malware. However, the term is subject to much confusion, particularly in relation to computer viruses. A Trojan is a program that pretends to do one thing while performing another, unwanted action. The extent of the "pretense" may vary greatly. Early Trojans relied merely on the filename and a description on a bulletin board. "Log-in" Trojans, popular among university student mainframe users, mimicked the screen display and the prompts of the normal log-in program and could, in fact, pass the user name and password along to the valid log-in program at the same time as they stole the user data. More recently, Trojan programs have been distributed by mass e-mail campaigns, by posting on Usenet newsgroup discussion groups, or through automated distribution agents (bots) on Internet Relay Chat (IRC) channels. Since source identification in these communications channels can be easily hidden, Trojan programs can be redistributed in a number of disguises, and specific identification of a malicious program has become much more difficult. The term "Trojan" refers to a deliberately misleading or modified program that does not reproduce itself. A major aspect of Trojan design is the social engineering component. Remote access Trojans (RATs) are programs designed to be installed, usually remotely, after systems are installed and working.

Worms

A worm reproduces and spreads, like a virus and unlike other forms of malware. Worms are distinct from viruses, although they may have similar results. Most simply, a worm may be thought of as a virus with the capacity to propagate independently of user action. Worms do not rely on the (usually human-initiated) transfer of data between systems for propagation, but instead spread across networks of their own accord, primarily by exploiting known vulnerabilities in common software. The technical origin of the term "worm program" matched that of modern distributed processing experiments: a program with "segments" working on different computers, all communicating over a network. The first worm to garner significant attention was the Internet Worm of 1988. Code Red and a number of Linux programs (such as Lion) are modern examples of worms. (Nimda is an example of a worm, but it also spreads in a number of other ways.)

WORMS (FIRST AND THIRD GENERATIONS)

The Internet/UNIX/Morris Worm did not actually bring the Internet in general and e-mail in particular to the proverbial grinding halt. It was able to run and propagate only on machines running specific versions of the UNIX operating system on specific hardware platforms. The Worm used "trusted" links, password cracking, security "holes" in standard programs, standard and default operations, and, of course, the power of viral replication. It used standard password-cracking techniques such as simple variations on the name of the account and the user. It carried a dictionary of words likely to be used as passwords, and would also look for a dictionary on the new machine and attempt to use that as well. Examples: the Ramen worm and Lion.

PREVENTION AND PROTECTION TECHNIQUES

All antiviral technologies are based on the three classes outlined by Fred Cohen in his early research. The first type performs an ongoing assessment of the functions taking place in the computer. The second checks regularly for changes in the computer system where changes should occur only infrequently. The third examines files for known code found in previous viruses. (In the United States, it may be legal to write a virus, but illegal to release it.)

String Search (Signature-Based): Scanners examine files, boot sectors, and/or memory for evidence of viral infection. They generally look for viral signatures, sections of program code known to be in specific viral programs but not in most other programs. Because of this, scanning software will generally detect only known viruses and must be updated regularly. A minimal sketch of such a scanner appears at the end of this section.

Real-Time Scanning: Real-time, or on-access, scanning is not really a separate type of antivirus technology. It uses standard signature scanning, but attempts to deal with each file or object as it is accessed or comes into the machine. Because on-access scanning can affect the performance of the machine, vendors generally try to take shortcuts in order to reduce the delay when a file is read. Therefore, real-time scanning is significantly less effective at identifying virus infections than a normal signature scan of all files.

Heuristic Scanning: A recent addition to scanners is intelligent analysis of unknown code, currently referred to as heuristic scanning. It should be noted that heuristic scanning does not represent a new type of antiviral software. More closely akin to activity-monitoring functions than to traditional signature scanning, it looks for "suspicious" sections of code generally found in viral programs. Although it is possible for normal programs to want to "go resident," look for other program files, or even modify their own code, such activities are telltale signs that can help an informed user come to some decision about the advisability of running or installing a given new and unknown program.
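As a hedged illustration of the string-search approach described above (an addition, not part of the original text), the sketch below scans files for known byte sequences. The signature names and byte values are invented placeholders; real products rely on large, regularly updated signature databases and far more sophisticated matching.

    # Minimal signature-based scanner sketch. Signature bytes below are
    # made-up placeholders, not real virus signatures.
    import os

    SIGNATURES = {
        "ExampleVirus.A": bytes.fromhex("deadbeef4f6c6421"),
        "ExampleVirus.B": b"\x90\x90\xe8\x00\x00\x00\x00\x5b",
    }

    def scan_file(path: str) -> list[str]:
        """Return the names of any known signatures found in the file."""
        try:
            with open(path, "rb") as f:
                data = f.read()
        except OSError:
            return []
        return [name for name, sig in SIGNATURES.items() if sig in data]

    def scan_tree(root: str) -> None:
        """Walk a directory tree and report files matching known signatures."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for filename in filenames:
                path = os.path.join(dirpath, filename)
                hits = scan_file(path)
                if hits:
                    print(f"{path}: possible infection ({', '.join(hits)})")

    if __name__ == "__main__":
        scan_tree(".")   # scan the current directory as an example

Because the scanner only matches bytes it already knows, it also illustrates why signature scanning detects only known viruses and must be updated regularly.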
Permanent Protection: The ultimate objective, for computer users, is to find some kind of antiviral system that you can set and forget: one that will take care of the problem without further work or attention on your part. Unfortunately, as previously noted, it has been proved that such protection is impossible.

Vaccination: In the early days of antiviral technologies, some programs attempted to add change detection to every program on the disk. Unfortunately, these packages, frequently called vaccines, sometimes ran afoul of functions within normal programs that were designed to detect accidental corruption on disk. No program has been found that can fully protect a computer system in more recent operating environments. A sketch of stand-alone change detection appears after the guidelines below.

Activity Monitoring (Behavior-Based): An activity monitor performs a task very similar to an automated form of traditional auditing: it watches for suspicious activity. It may, for example, check for any calls to format a disk or attempts to alter or delete a program file while a program other than the operating system is in control.

Training and some basic policies can greatly reduce the danger of infection. A few guidelines that can really help in the current environment are the following:

• Do not double-click on attachments.
• When sending attachments, be really specific when describing them.
• Do not blindly use Microsoft products as a company standard.
• Disable Windows Script Host. Disable ActiveX. Disable VBScript.
• Disable JavaScript. Do not send HTML-formatted e-mail.
• Use more than one scanner, and scan everything.
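The change-detection class mentioned above (and behind the vaccination idea) can be illustrated without reference to any particular product. The following Python sketch, an addition not found in the original text, records SHA-256 hashes of files on a first run and reports files whose contents later differ; the baseline filename and the scanned directory are placeholders.

    # Hedged sketch of change detection: record cryptographic hashes of
    # files that should rarely change, then flag any later differences.
    import hashlib
    import json
    import os

    BASELINE = "baseline_hashes.json"   # placeholder database file

    def hash_file(path: str) -> str:
        """Return the SHA-256 digest of a file's contents."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def snapshot(root: str) -> dict[str, str]:
        """Hash every readable file under root."""
        hashes = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    hashes[path] = hash_file(path)
                except OSError:
                    pass   # unreadable files are skipped in this sketch
        return hashes

    def check(root: str) -> None:
        """Create a baseline on the first run; report changes on later runs."""
        current = snapshot(root)
        if not os.path.exists(BASELINE):
            with open(BASELINE, "w") as f:
                json.dump(current, f, indent=2)
            print("Baseline recorded.")
            return
        with open(BASELINE) as f:
            previous = json.load(f)
        for path, digest in current.items():
            if path in previous and previous[path] != digest:
                print(f"CHANGED: {path}")
            elif path not in previous:
                print(f"NEW FILE: {path}")

    if __name__ == "__main__":
        check(".")   # check the current directory as an example

A real integrity checker would also have to protect the baseline itself, since a change database that malware can rewrite offers little assurance.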