Digital Anti-Forensics: Emerging trends in data transformation techniques
Christian S.J. Peron & Michael Legary
Seccuris Labs
Abstract the integrity of certain datasets which are
commonly used to form evidence.
This paper explores two questions: What
methods can be used to deceive someone who is If an attacker manipulates the information in
in an investigative role into trusting an object those datasets such that anomalies, failures, or
which has been exploited? What kind of impact specific outcomes do not arise, an investigator
does operating system and application run-time will have no grounds to suspect the integrity of
linking have on live investigations? After the data. This could lead the investigation away
experimenting with dynamic object from the source criminal activity.
dependencies and kernel modules in the UNIX
environment, it is the opinion of the authors that This paper will review traditional methods used
run-time linking can be exploited to alter the by adversaries mainly within UNIX
execution of otherwise trusted objects. This can environments to mislead investigators
be accomplished without having to modify the performing online or live analysis of systems.
objects themselves. If an investigator trusts an Additionally, less mainstream methods used to
inherently un-trusted object, it can result in the deceive investigators will be examined along
possible misdirection of a digital investigation. with the manner in which they can be detected.
Keywords: anti-forensics run-time linker hijack 2 Anti-forensics
1 Introduction Over the past decade academic researchers, law
enforcement agencies, and investigative
Judgments we make ultimately affect our practitioners have worked together to create
decisions, which in turn affect our actions. In a processes around the evolving discipline of
similar sense, investigations are based on a set of Digital Forensics. This discipline has been
judgments which are influenced by technical and founded through traditional forensic study which
non-technical factors. If those factors are in any combines science and technology to investigate
way different from normalcy, a judgment will and establish facts in courts of law. As
occur and ultimately affect the decisions of an standardized processes and techniques are
investigator. established and adopted into the daily practices
of forensic analysis the techniques utilized by
Criminals who intend to remain hidden try to adversaries will change in an attempt to
take advantage of the factors which influence overcome detection.
judgments made during preliminary
investigations1. In regard to digital forensics, 2.1 Anti-forensic methods
those factors which affect judgment tend to be
The primary building blocks for any type of
digital evidence consist of electronic data. If an
adversary can limit the identification, collection,
1
Preliminary investigations in this context refer collation and validation of electronic data the
to the initial suspicions that a system has been ability for an investigator to develop clear
exploited which lead to online investigations or evidence will be hindered. To achieve this goal
analysis.
2. an adversary can destroy, hide, manipulate or
prevent the creation of evidence. During the regular transactions between an
adversary and an information system, a large
2.2 Destruction of data amount of potential evidence is created. If an
adversary can prevent the creation of relevant
The most basic set of techniques in anti- data before an investigator has the ability to
forensics consist of the physical and logical identify or collect it, they can improve their
destruction of data or evidence. Physical chances of success in bypassing detection.
destruction can be accomplished with the use of Direct prevention techniques include the use of
magnetic techniques such as the degaussing of Root Kits4 or the modification of system binaries
media or through the application of brute force. such that the ability to generate relevant and
credible data has been removed from the system.
Logical destruction includes techniques that This method will limit the creation of data, but
reinitialize media or significantly change the composition of data residing on the media^2. The goal of physical and logical destruction techniques is to significantly damage integrity, or remove traces of relevant data from a targeted media. Although destruction of data can be the most thorough and efficient means to limit the amount of evidence an investigator can identify and collect, it also limits the ability of the adversary to achieve their motives.

2.3 Hiding of data

Obfuscation and encryption of data give an adversary the ability to limit identification and collection of evidence by investigators while allowing access and use to themselves. Obfuscation techniques include the use of steganography^3, which allows the hiding of data in such a manner that the presence or contents of the data cannot be identified. Encryption techniques may not prevent the detection of data, but will limit access to data contents by modifying data in such a manner that it will be unintelligible to investigators.

2.4 Data creation prevention

Preventing the creation of data may also limit functionality of the system itself, in turn limiting the ability of an adversary to achieve their goals.

2.5 Emerging techniques

In an attempt to maintain functionality of a system while limiting the creation of relevant data, adversaries utilize data transformation techniques which maintain or re-establish an investigator's trust in a system. Transformation techniques are utilized by root kits^4 or rogue shared libraries, which can hijack system calls or run-time linkages to modify data during the creation process. Data identified and collected by an investigator that was created during the transformation process will not be relevant to the investigation.

This paper will focus on and discuss data transformation techniques adversaries utilize to prevent the creation of data, in an effort to limit an investigator's ability to identify direct or circumstantial evidence throughout an investigation. Conventional and advanced methods that adversaries utilize will be outlined along with traditional detection techniques investigators use. Finally, emerging techniques that are uncommon in standard adversary modus operandi will be introduced along with possible

^2 Many secure deletion utilities operate by performing a series of operations like setting every bit of a file to 1 (true), then setting every bit to 0 (false).
^3 Steganography is the practice of hiding one piece of information inside another. An example might be putting a message inside a .JPG image file to bypass text-based pattern matching.
^4 A root kit is generally a tool or collection of tools used on a compromised node to create back doors, and to capture or hide various types of information like passwords, network traffic or other resources.
detection strategies to identify the use of such techniques.

3 Conventional transformation methods

3.1 Initial system compromise

Processes or practices performed during investigations are generally in response to a security vulnerability which has been discovered, targeted, and exploited by an adversary. Most security-related incidents occur due to a lack of effective information technology governance and management within an organization. However, it should be noted that even with management direction and policy enforcement, breakdowns in technical implementation can result in servers, workstations, and other network devices remaining un-patched. This problem is exponentially worse if management policies do not exist or are not enforced.

3.2 Deception of security personnel

Once an adversary has compromised a component of a remote system, their primary goal is to retain control over the newly acquired resources for as long as possible. This is achieved through deceiving administrative and investigative individuals, ensuring that the adversary is not discovered.

An adversary is commonly discovered when the existence of certain unauthorized objects is verified. These objects can include: files, directories, processes, jobs, network connections, or listening services. Conventionally, these objects are hidden from regular operations through the modification or replacement of various system utilities made available in most UNIX environments.

Disk-related UNIX utilities, such as df, ls, and du, are normally used for displaying mounted file systems, disk consumption, and searching or displaying information about files and file meta-data. Modification of these types of utilities may help hide files, file hierarchies, and disk consumption.

Process-related UNIX utilities, such as ps, top, crontab, and w, are used for listing information about processes, threads or Light Weight Processes, or jobs that have been scheduled to run. Modification of these types of utilities could result in processes or jobs being hidden.

Network-related UNIX utilities, such as netstat, sockstat, fstat, and tcpdump, were designed to list information about network connections and sockets^5. When modified, these utilities could help to hide sockets, listening services, network connections, and general network activity existing on the system.

In addition, an adversary may introduce a backdoor or Trojan into a commonly used application or process. This provides a covert channel which enables an adversary to gain access to the system, bypassing any auditing or logging mechanisms put in place by the operating system.

4 Advanced transformation methods

4.1 OS kernel services

Most modern operating systems support multi-tasking or multi-programming environments. Each task or thread of execution is known as a process, which is uniquely identified by an integer-based Process ID (PID). Typically, a process executes in two different contexts: user-state and system or kernel-state. Manipulating data flow between these contexts can be instrumental to successfully hiding information from live investigation processes.

In kernel-space, operations like interrupt handling, credential management, operating system access control, and scheduler activations are performed. While the current execution context remains in kernel-space, manipulation of

^5 In general, network sockets are treated the same as file handles, except that rather than the end point being a file, it can be another host or node on a network.
system control registers and the execution of privileged instructions is permitted.

The kernel is responsible for servicing hardware and software interrupts. Software interrupts, or "system calls", are services or functions provided by the operating system. These software interrupts are a critical component in the day-to-day operations of any system. Among many other things, software interrupts are responsible for: opening files, handling disk and network I/O, credential management, and modification of file attributes.

System calls are generally represented by integers. The value of the system call is its index inside a global array within the kernel. These arrays are also known as software interrupt vectors. Most operating systems provide a facility which allows system administrators to load kernel modules. This enables programmers to execute code in the context of the kernel and allows system administrators to dynamically add and remove functionality from a running system. This ability also helps operating system developers to write new parts of the kernel without constantly rebooting the system.

Generally, these system calls are responsible for performing access control functionality. This ensures that a subject has the appropriate access level to interact with an object through credential management, control of file attributes, permissions, and ownerships. The integrity of these system calls is paramount to the security of the entire system.

4.2 Kernel modules and hijacking system calls

By enabling the dynamic linker facility within the kernel, the administrator provides an adversary the facilities to load modules which can re-initialize the elements or entries stored within the software interrupt vectors. This allows an adversary to effectively "hijack" software interrupts. Essentially, an adversary can load a kernel module and re-initialize a system call stored within the interrupt vector to reference a malicious block of code contained within the adversary's module.

By hijacking software interrupts, an adversary can manipulate information which is returned to user-space utilities, such as ps, netstat, or ls. This allows an adversary to hide files, processes, network connections, and other objects which could be exposed through a software interrupt, without actually modifying the user-space utilities which could otherwise expose them.

4.3 Detection of system call hijacking

Some operating system kernels provide the facilities for dynamic module loading, and in some cases, provide a facility to prevent it. For instance, some operating systems create different security levels which make it extremely difficult for an adversary to load dynamic functionality into the kernel. This can be accomplished by restricting access to certain system calls and to writing to kernel memory devices like /dev/mem^6.

Detection of a malicious or rogue kernel module can be very difficult. Depending on how complex the kernel module itself is, it is possible that the adversary can hijack the system calls responsible for the loading and unloading of kernel modules themselves. Otherwise, it may be possible to detect the presence of a kernel-level "root kit" which is hijacking software interrupts by comparing the address currently loaded into the slot against the address of the original system call.

Essentially, when the adversary loads their module and re-defines the system call, they change the address of the system call to their own. Therefore, if the address for the system call, which is stored in the interrupt vector, is different than the address of the proper system

^6 /dev/mem is a pseudo device which provides an interface to the physical memory of the system. Reading or writing to /dev/mem is the same as reading or writing the physical memory of the system.
call, it can be determined with a good degree of certainty that the system call has been hijacked (Figure 1).

/*--
 * In the FreeBSD operating system, "interrupt vectors" for system calls
 * are stored in arrays called "sysent arrays". The sysent data structure
 * contains a member called "sy_call" which stores the address of the
 * routine to execute when a software interrupt comes in.
 *
 * It is important to note that an operating system can have many different
 * interrupt vectors. For example, if the operating system supports many
 * emulations, typically each emulation will have its own interrupt
 * vector table.
 */
if (sysent[SYS_open].sy_call != open)
        panic("open system call has been hijacked");
if (sysent[SYS_write].sy_call != write)
        panic("write system call has been hijacked");

Figure 1: Code snippet for the FreeBSD [1] operating system which, when executed in the context of the kernel, could be used to detect the presence of a hijacked system call.
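The address-comparison premise behind Figure 1 can also be illustrated entirely in user space with an ordinary function-pointer table. The following is a minimal sketch, not FreeBSD kernel code: all names (sysent_sim, load_rogue_module, SIM_SYS_OPEN) are hypothetical, and the "module load" is simulated by overwriting a table slot.

```c
#include <assert.h>
#include <stddef.h>

/*
 * User-space simulation of a software interrupt vector. Each slot
 * stores the address of the routine that services one "system call".
 * A rogue module hijacks a call by overwriting the stored address
 * with the address of its own routine.
 */
typedef int (*sy_call_t)(const char *arg);

static int real_open(const char *arg)  { (void)arg; return 0; }  /* trusted service   */
static int rogue_open(const char *arg) { (void)arg; return 0; }  /* adversary routine */

#define SIM_SYS_OPEN 0
static sy_call_t sysent_sim[1] = { real_open };     /* the "interrupt vector" */

/* Simulates loading a module that re-initializes the vector entry. */
static void load_rogue_module(void)
{
    sysent_sim[SIM_SYS_OPEN] = rogue_open;
}

/* Detection as in Figure 1: compare the slot against the known-good address. */
static int open_is_hijacked(void)
{
    return sysent_sim[SIM_SYS_OPEN] != real_open;
}
```

Before the rogue module is loaded, the slot still holds the trusted address and the check passes; afterwards, the comparison exposes the hijack, which is exactly why an adversary with sufficient access will also try to subvert the code performing the comparison.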
5 Traditional transformation detection methods

In order to detect possible intrusion events, system administrators and investigators look for network and system anomalies, modification of system objects, and unusual system resource consumption.

5.1 Cryptographic hashing for data integrity

Cryptographic hashing implements concepts provided by mathematical one-way functions to verify the integrity of data. Using these methods, files can be uniquely identified through fingerprints. File fingerprints are generally calculated using cryptographic hashing functions like MD5 [2], SHA [3], and RIPE-MD [4].

The goal of these functions is to take some message or file of arbitrary length as input, and compute a fingerprint of constant length. Hash functions are also known as "one-way functions", meaning they should be trivial to calculate in one direction, but computationally infeasible to calculate in the other^7. Changing a single bit within the file should result in a drastically different fingerprint.

Most host-based intrusion detection systems (HIDS) or root-kit detection agents operate on the same premise. To verify the authenticity of a file, these systems calculate the fingerprint or digest of the original file in its trusted state, and store the fingerprint in a trusted location. A fingerprint is calculated on the current file and compared against the trusted fingerprint. If the two fingerprints conflict, it is safe to assume the current file has been modified (Figure 2).

% md5 ps.trusted
MD5 (ps.trusted) = 9501ef286ef3ab8687b7920ca4fee29f
% md5 /bin/ps
MD5 (/bin/ps) = 02b2f8087896314bafd4e9f3e00b35fb
%

Figure 2: Output of the "md5" utility. Hashes for the "ps.trusted" file and "/bin/ps" file are calculated based on the MD5 cryptographic hashing algorithm and compared.

^7 It is possible for two different files to result in the same checksum or hash. This is known as a hash collision. The chances that two file hashes can collide can be calculated using something called the "Birthday Paradox" [5].
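The store-then-compare workflow of a HIDS can be sketched in a few lines. Note the hedge: the digest below is FNV-1a, a fast non-cryptographic hash used here only as a stand-in so the example is self-contained; a real integrity checker would use MD5, SHA, or RIPE-MD as discussed above.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of the HIDS fingerprint premise. FNV-1a stands in for a real
 * cryptographic digest: hash the trusted copy once, store the result
 * in a trusted location, then re-hash the live file and compare.
 */
static uint64_t fingerprint(const unsigned char *data, size_t len)
{
    uint64_t h = 1469598103934665603ULL;    /* FNV-1a offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;              /* FNV-1a prime */
    }
    return h;
}

/* Non-zero when the current digest conflicts with the trusted one. */
static int file_modified(uint64_t trusted, const unsigned char *cur, size_t len)
{
    return fingerprint(cur, len) != trusted;
}
```

Even a one-byte change to the input produces a different digest, which is the property the comparison relies on; the later sections of this paper show why a matching binary fingerprint alone does not guarantee trustworthy execution.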
5.2 Process analysis

Processes can be thought of as threads of execution which are represented by data structures in memory. These entities contain objects like: open files (file descriptor table), memory maps or VM objects, ownership labels, and resource consumption statistics. In most UNIX environments, details about sockets, network connections and open file handles are normally located in a descriptor table stored within a process object.

There are several utilities that analyze information about these process data structures. These utilities can be used to locate various objects that an adversary would like to keep hidden.

The process objects can be retrieved through different portals, including /dev/mem, an interface to the system's physical memory, and procfs, which implements a view of the system process table and makes it accessible inside a special file system.

In some cases, the utilities themselves could be modified to not report information on certain processes. In this case, the use of cryptographic checksums, along with cross-referencing output against information stored in the process file system, can help to identify the source of any discrepancies.

5.3 Signature/pattern matching

Many adversaries use pre-written "root kits" to aid them in hiding or covering up objects they do not want investigators to find. If these root kits are made available to the public, samples or specific patterns can be extracted and used to discover the potential presence of the same kit on other nodes (Figure 3).

Detection agents typically contain a database of commonly known patterns and signatures. Using this database, detection agents perform routine scans of system memory and files, iterating through signatures looking for matches.

% file libtransform.so.1
libtransform.so.1: ELF 32-bit LSB shared object, Intel 80386, version 1 (FreeBSD), stripped
%

Figure 3: Output of the "file" utility on a shared object. The "file" utility attempts to determine the file type for a specified file. This is accomplished by performing a number of tests, including one which checks for particular fixed data formats, e.g. the ELF [6] executable file format.

5.4 Network monitoring

In some cases, intrusion events are detected by various network monitoring mechanisms. These mechanisms can include: network intrusion detection systems (NIDS), firewall monitoring, and bandwidth utilization trending. Many NIDS base their detection on the use of pattern matching and signatures. Sensors monitor network traffic looking for traces of known attack signatures and generate alerts based on positive matches.

Signatures can take on a number of forms. In many cases, signatures which are found in most common attacks are matched against binary sequences stored in network transmissions (Figure 4). In addition, when monitoring transmissions that are unencrypted, a network intrusion detection sensor can monitor for the output of utilities which require super-user access; for example, the output of the id(1) command returning UID 0.
Nov 10 21:59:06 <4.1> 172.16.1.20 snort: [1:466:1] SHELLCODE x86 stealth NOOP [Priority: 2]: {PROTO001} 10.0.1.125 -> 10.5.1.3

Figure 4: An example Snort [7] intrusion detection log entry which has detected the op-codes, or machine instructions, for a "stealth NOOP". This log contains information about the transmission source and destination IP addresses and the service in which the pattern was recognized.
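The kind of byte-sequence match that produced the alert above can be sketched as a linear scan of a buffer for a known signature. This is a deliberately naive illustration; production engines such as Snort use signature databases and faster multi-pattern algorithms (e.g. Boyer-Moore or Aho-Corasick variants) rather than a memcmp loop.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of the pattern-matching premise behind NIDS sensors and
 * root-kit detection agents: slide a known signature across a buffer
 * (a packet payload or a file image) and report the first match.
 */
static long find_signature(const unsigned char *buf, size_t buflen,
                           const unsigned char *sig, size_t siglen)
{
    if (siglen == 0 || siglen > buflen)
        return -1;
    for (size_t i = 0; i + siglen <= buflen; i++)
        if (memcmp(buf + i, sig, siglen) == 0)
            return (long)i;     /* offset of first match */
    return -1;                  /* no match */
}
```

A signature for the alert above might simply be a run of 0x90 (x86 NOP) bytes; the scan returns the offset where the run begins, or -1 if the payload is clean.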
On compromised hosts, processes can be created which listen on network sockets for instructions from the adversary, similar to a Trojan. These instructions can tell a process to launch a packet flood against a specific target. When the host initiates an attack against the target, it commonly generates a bandwidth utilization anomaly which can be detected through traffic trending. In some cases, adversaries can expose themselves by connecting to services which are not permitted by firewall devices (Figure 5).

% tcpdump -nett -i pflog0
listening on pflog0, link-type PFLOG (OpenBSD pflog file), capture size 96 bytes
1100221136.677441 rule 1/0(match): block in on sis0: IP 10.0.0.35.4646 > 205.11.11.11.445: S 552159036:552159036(0) win 64240 <mss 1460,nop,nop,sackOK>
1100221138.370423 rule 1/0(match): block in on sis0: IP 10.0.0.35.4646 > 205.11.11.11.445: S 552159036:552159036(0) win 64240 <mss 1460,nop,nop,sackOK>

Figure 5: An example of some packets which have been analyzed using the tcpdump [8] program on the OpenBSD [9] PF firewall, after the packets have been logged via the pflog virtual interface.
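The trending idea can be sketched as a simple count of blocked entries in pflog-style output. This is a toy illustration only: real trending tools aggregate by source, destination, port, and time window, and the abridged log lines below are modelled on Figure 5 rather than taken from a live system.

```c
#include <assert.h>
#include <string.h>

/*
 * Sketch of simple firewall-log trending: count how many log lines
 * record a blocked connection attempt. A burst of blocked packets
 * from one host toward a single destination port is the kind of
 * anomaly that can expose a compromised machine.
 */
static int count_blocked(const char *lines[], int n)
{
    int blocked = 0;
    for (int i = 0; i < n; i++)
        if (strstr(lines[i], "block") != NULL)
            blocked++;
    return blocked;
}
```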
6 Emerging transformation methods

6.1 Program execution

A new process is initially created when its parent process forks its execution using the fork system call. Once the resources and scheduler activations have been allocated and registered for the process object itself, another system call, "exec", gets called. The exec system call will replace the current execution image with the executable image associated with the new program.

These executable images, or machine instructions, are loaded into memory, where the kernel pre-emptively runs the task or machine code on a time-shared basis. In short, programs can be thought of as a set of functions which are executed in some logical order which is determined at run-time.

Routines or functions in a computer program can be thought of as entry points to a series of instructions located in computer memory. Calling function "X" is equivalent to jumping to a particular position in memory and executing the code located there. These entry points have an attribute called a "symbol", which uniquely identifies the routine. Processes maintain symbol tables which allow the kernel to locate instructions based on symbol name.

6.2 Run-time linker

Programs on a system often share the same functions. As the number of utilities and programs increased, the concept of building a shared library, where all programs can share functions rather than replicating the same code, started to surface. Most UNIX variants have a "Run-Time Linker", which provides this functionality. Commonly used functions, such as printf^8 and fgets^9, are stored in a shared object. Trends

^8 printf is a standard function defined in ISO/IEC ("ISO C90") to print formatted data to the screen.
^9 fgets is a standard function defined in ISO/IEC ("ISO C90") to read a line from a stream.
indicate that more and more operating systems are moving towards implementing dynamically linked programs by default (Table 1).

As the program executes and requires a particular shared function, the pages associated with that function will be mapped into the process' address space from the shared object using the mmap^10 system call. This process is also known as "on demand paging".

Operating System            Dynamically Linked Fundamental Utils.
Solaris 10 [10]             YES
Redhat Linux 9 [11]         YES
FreeBSD 5.x                 YES
FreeBSD 4.x                 NO
OpenBSD 3.5                 NO
NetBSD 1.6.X [12]           NO
NetBSD 2.0                  YES
Apple's Darwin 7.3.1 [13]   YES

Table 1: This table displays which operating systems ship with dynamic or run-time linking on fundamental system binaries by default. These binaries are typically found in the /bin and /sbin directories.

6.3 Preloading shared objects

The run-time linkers can be configured for a variety of different types of real-time linking operations. In some cases, the user may choose to avoid "on demand paging" and have the functions mapped into the address space immediately, improving run-time performance.

In other cases, the user may choose to preload another shared library before any other library gets loaded. This allows programmers to override certain functions that the application will execute, without actually modifying the application itself.

6.4 Hijacking of user space library calls

Depending on the level of privilege acquired by an adversary on a compromised machine, they could be in a position where they can manipulate run-time linking operations of otherwise trusted processes. This allows an adversary to change the execution behaviour of a process without actually modifying the binary or kernel interrupt vectors. This implies that investigators relying on cryptographic checksums to verify the integrity of a program's execution could be misled, or have a false sense of trust in the binary in question.

The following example will illustrate how an adversary might mislead investigators by utilizing the run-time linker, manipulating certain system logs which contain incriminating information without modifying system binaries, operating system interrupt vectors, or the log files themselves.

The adversary writes a shared library which defines two entry points or symbols: recvfrom^11 and write^12. These functions are utilized by processes like syslogd (a daemon which centrally manages logs used by most processes on the system) to read and write messages to/from logging or network sockets.

These newly defined entry points have an interesting characteristic: they are capable of performing transformations on the data before submitting it to the syslog process. The module opens a "recipe" file and loads in a series of "transformation instructions". These

^10 mmap is a function which maps files or devices into memory. This interface is not standard, but most operating systems implement it.
^11 recvfrom is an optional system call provided by most UNIX variants to receive messages from a socket.
^12 write is a standard system call defined by POSIX.1 for writing information to a descriptor. Descriptors may include handles to files or sockets.
transformation instructions determine what words or patterns are to be substituted with adversary-defined data.

This allows the adversary to change, delete, or transform system logs centrally, without actually editing the log files themselves. Even if the system has write operations to the file system being audited, the modifications to the log files will be dispatched as the syslogd process.

In a common scenario, the system might have a host-based intrusion detection agent running, which monitors commonly used system binaries for any kind of modification, ranging from content, permission, or attribute modifications. However, the adversary does not have to modify the binaries of syslogd or any of its usual shared objects in order to change the execution behaviour of the process.

One technique the adversary may use is to replace any occurrence of their IP address or subnet with an arbitrary address, or delete any log which contains their IP address or subnet pattern altogether. Commonly, logging operations could be tested and, in most cases, logs generated by investigators would be processed and left untouched by the adversary's transformation rules.

Since the adversary may have applied these transformation filters to any outgoing and incoming logs, logs from network-based intrusion detection sensors or firewall devices that are logging to the compromised device could also be modified. If the syslog server has been configured to log to a remote host, transformations will be applied to those messages as well. This could result in an external trusted logging server housing incorrect or modified logs.

It should be noted that this sort of attack can occur in several different ways: an adversary could implement these types of hooks in other commonly used functions, such as writes to STDOUT^13 via the printf function (which would manipulate the information returned from system utilities like ps, netstat or fstat), or writes to independent log files from other processes. Any executable files which depend on run-time linking can be affected by these types of attacks.

7 Emerging transformation detection methods

7.1 Shared library analysis

When extensive analysis on processes is being performed, it is recommended that a vmcore^14 be captured and transferred to a trusted system. Utilities such as lsof [14] can be used to analyze the process data structures within the vmcore. Any irregular shared library which has been opened by the process and mapped into the process's address space should be an indicator to the investigator that something suspicious is occurring.

^13 STDOUT is a standard output stream defined by ISO/IEC ("ISO C90") which is typically used for writing conventional output.
^14 A virtual memory core (vmcore) is a snapshot of the current memory. On some operating systems VM cores can be created by using the "halt" command. These cores can be used by utilities like ps to retrieve the current process state of the OS at the time the core was created.
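The transformation step applied by the rogue shared object described in Section 6.4 can be sketched as a pattern substitution over each log line before it reaches the real write routine. This is a simulation only: a real implementation would interpose the write/recvfrom symbols through the run-time linker (for example via a preloaded shared object) rather than call a helper directly, and transform_line is a hypothetical name.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of one "transformation instruction": every occurrence of a
 * pattern (for example, the adversary's IP address) in a log line is
 * replaced with adversary-defined data before the line is written.
 */
static void transform_line(const char *in, const char *pat,
                           const char *sub, char *out, size_t outlen)
{
    size_t patlen = strlen(pat);
    size_t sublen = strlen(sub);
    size_t o = 0;

    while (*in != '\0' && o + 1 < outlen) {
        if (patlen > 0 && strncmp(in, pat, patlen) == 0 &&
            o + sublen < outlen) {
            memcpy(out + o, sub, sublen);   /* substitute the pattern */
            o  += sublen;
            in += patlen;
        } else {
            out[o++] = *in++;               /* copy through unchanged */
        }
    }
    out[o] = '\0';
}
```

Log lines generated by an investigator would not match any recipe pattern and pass through unchanged, which is precisely why the transformed system can continue to look trustworthy during a live investigation.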
dev0# lsof -p 40374
COMMAND PID   USER FD  TYPE DEVICE     SIZE/OFF NODE    NAME
test    40374 root cwd VDIR 233,9      512      5888013 /usr/home/modulus/transform
test    40374 root rtd VDIR 233,4      1536     2       /
test    40374 root txt VREG 233,9      4795     5888022 /usr/home/modulus/transform/test
test    40374 root txt VREG 233,4      136276   553     /libexec/ld-elf.so.1
test    40374 root txt VREG 233,9      7130     5888027 /usr/home/modulus/transform/libtransform.so.1
test    40374 root txt VREG 233,4      869100   550     /lib/libc.so.6
test    40374 root 0u  VCHR 5,1        0t46554  93      /dev/ttyp1
test    40374 root 1u  VCHR 5,1        0t46554  93      /dev/ttyp1
test    40374 root 2u  VCHR 5,1        0t46554  93      /dev/ttyp1
test    40374 root 5u  PIPE 0xc190e000 16384            ->0xc190e0ac
dev0#

Figure 6: The "lsof" utility will retrieve process objects from a system core and process the file descriptor tables so an analyst can investigate any activities which require the use of open files.
This process contains an open file handle to an object called "libtransform.so.1". Further analysis indicates that alternate dynamic symbols or functions are being registered for the recvfrom and write entry points.

dev0# objdump -T libtransform.so.1 | grep text
0000083c l d  .text 00000000
00000b94 g DF .text 0000009a recvfrom
00000c30 g DF .text 00000099 _write
dev0#

Figure 7: Information from the objdump [15] utility is being passed into a grep command to filter out any irrelevant information. Objdump is responsible for displaying information about object files, and can be used to analyze details about specific aspects of an object file, such as which dynamic symbols it contains. Shared libraries are essentially special object files.
The results of objdump illustrate that alternate entry points have been registered for the recvfrom and write library functions. This may indicate that the adversary has hijacked these functions to manipulate the operating environment and avoid detection. Any information which was obtained or produced as a result of these calls cannot be trusted.

In many operating systems, the pre-loading of shared objects is accomplished through the use of process environment variables. Less complex implementations of this attack may be detectable by analyzing the process's environment variables. It should be noted that this is a less trusted technique, because an adversary is able to re-define environment variables from within their shared object.

8 Conclusions

An adversary can utilize run-time linking mechanisms to alter the execution logic of either an operating system or an application without having to modify the binary. Commonly, integrity checks are based on the ability to detect modifications to the binary itself. Objects which appear to be trustworthy possibly are not. This can result in compromised logging, audit, or information sources being trusted by investigative bodies, resulting in the possible avoidance of more thorough offline forensic analysis, or the misdirection of the investigative bodies themselves.
9 References

[1] Project FreeBSD. FreeBSD Home Page. http://www.FreeBSD.org. November, 1993.

[2] Rivest R. The MD5 Message-Digest Algorithm, IETF RFC 1321. ftp://ftp.rfc-editor.org/in-notes/rfc1321.txt. April, 1992.

[3] Burrows J. The Secure Hash Standard, FIPS PUB 180-1, IETF RFC 3174. http://csrc.nist.gov/cryptval/shs.html. 1993.

[4] Dobbertin Hans, Bosselaers Antoon, Preneel Bart. The RIPEMD-160 Cryptographic Hash Function. http://www.esat.kuleuven.ac.be/~bosselae/ripemd160.html. 1988-1992.

[5] Eric W. Weisstein. "Birthday Attack." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/BirthdayAttack.html.

[6] Unix System Laboratories. The Executable and Linking Format. http://x86.ddj.com/ftp/manuals/tools/elf.pdf.

[7] Project Snort. The Snort Home Page. http://www.snort.org. 2002.

[8] Project Tcpdump. The Tcpdump Home Page. http://www.tcpdump.org. 2002.

[9] Project OpenBSD. The OpenBSD Home Page. http://www.OpenBSD.org. 1996.

[10] Sun Microsystems. The Solaris Operating System, SunOS. http://www.sun.com. 1989.

[11] Red Hat Inc. The Redhat Linux Operating System. http://www.redhat.com. 1993.

[12] Project NetBSD. The NetBSD Home Page. http://www.netbsd.org. 1993.

[13] Apple Inc. The Darwin Operating System. http://www.apple.com, http://www.apple.com/opensource/. 2001.

[14] Abell V. The lsof (LiSt Open Files) Project. http://people.freebsd.org/~abe/.

[15] GNU Binutils. The Binutils Home Page. http://www.gnu.org/software/binutils/.