The document discusses performing digital forensics investigations using open source tools. It covers the major steps of the process: data acquisition, examination, and report preparation. For data acquisition, it describes how to gather volatile system data like memory dumps and network traffic, as well as disk images. For examination, it discusses forensic analysis software like The Sleuth Kit and Autopsy, analyzing memory dumps with Volatility, and examining network traffic with Wireshark. It also provides examples of timeline creation and registry analysis.
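As a minimal sketch of the acquisition step, a raw image can be taken with dd and its integrity recorded with a hash. A small test file stands in for a block device here, since imaging a real disk (e.g. /dev/sdb) requires root:

```shell
# Stand-in "evidence" file; on a real system the input would be
# a block device such as /dev/sdb (requires root).
printf 'evidence data' > source.bin

# Acquire a raw image; conv=sync,noerror keeps going past read
# errors on failing media and pads short blocks to the block size.
dd if=source.bin of=image.dd bs=512 conv=sync,noerror status=none

# Record a cryptographic hash so the image can be re-verified
# later in the chain of custody.
sha256sum image.dd > image.dd.sha256
sha256sum -c image.dd.sha256
```

The hash is taken over the acquired image itself, so any later modification of image.dd will make the verification step fail.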
The document discusses several targeted malware attacks and surveillance techniques. It describes malware programs like FinFisher, DarkComet RAT, and Hacking Team's RCS that are used to monitor devices, steal files and communications, and enable remote control. It also mentions vulnerabilities exploited in Microsoft Word and Adobe Reader to install malware. Targets discussed include Uyghurs, Tibetans, and activists. Sources cited include CitizenLab.org, Kaspersky, and Symantec reports on commercial spyware vendors and their government clients.
The document summarizes common hacking techniques:
1) Hackers perform reconnaissance like scanning public information, networks, and systems to find vulnerabilities.
2) This allows them to gain initial access, often by exploiting configuration or software errors.
3) They then use this initial access to get further system privileges or access additional machines.
Course Objectives:
• Help the student achieve a broad understanding of the main types of memory forensic data gathering and analysis.
• Serve as an introduction to the low-level concepts necessary for a proper understanding of memory forensics on Windows, Mac OS X, and Linux (incl. Android).
• Put the student in contact with different memory forensics tools and provide them with information on how to use the gathered forensic data to perform a wide range of investigations.
The "Networking in Linux" document discusses DNS-related commands in Linux. It begins by listing DNS concepts such as zones and records. It then demonstrates commands like nslookup, host, and dig to query DNS record types such as A, MX, NS, and SOA and to perform operations like reverse lookups. It shows how to use specific nameservers and how to change ports and timeouts. The document provides examples of using these tools to troubleshoot DNS issues such as propagation delays.
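A few representative queries in that style (the exact answers depend on the resolver and the zone; the dig invocations need network access, so they are shown as comments, while getent exercises the local resolver and works offline):

```shell
# Query just the A record answer (network required):
#   dig +short example.com A
# Ask a specific nameserver on an explicit port:
#   dig @8.8.8.8 -p 53 example.com MX
# Reverse lookup of an address:
#   dig -x 203.0.113.10 +short

# Offline-friendly check of local name resolution through the
# system resolver (NSS), useful when debugging /etc/hosts:
getent hosts localhost
```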
This document provides summaries of Linux commands that can be used to select and manipulate parts of files. It discusses the cat, head, tail, cut, split, sort, tac, uniq, and tr commands. Cat concatenates or displays files, head displays the first few lines of a file, and tail displays the last few lines. Cut extracts columns from a file, split divides a file into smaller parts, and sort sorts lines alphabetically or numerically. Tac displays the lines of a file in reverse order and uniq removes duplicate lines from a sorted file. Tr translates characters within a file, such as converting uppercase to lowercase.
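These commands compose naturally in pipelines. A small sketch (file name and contents are made up for the example):

```shell
# Build a sample colon-delimited file: name:score
printf 'bob:7\nalice:9\nbob:7\ncarol:3\n' > scores.txt

# First line only
head -n 1 scores.txt              # bob:7

# Extract the name column (field 1, ':' as delimiter)
cut -d: -f1 scores.txt

# Sort, then drop adjacent duplicates (uniq needs sorted input)
cut -d: -f1 scores.txt | sort | uniq

# Deduplicate with sort -u, then uppercase with tr
cut -d: -f1 scores.txt | sort -u | tr 'a-z' 'A-Z'
```

The last pipeline prints ALICE, BOB, and CAROL, one per line.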
In this workshop we give a brief introduction to the basics of networking: IP addresses, MAC addresses, DNS, and DHCP. Concepts such as router, gateway, and firewall are explained. Then we see in practice how to share files on a local network (NFS, Samba), establish an FTP connection, or log on to another (Linux) machine remotely (SSH, VNC, RDP). Finally, we review some useful networking tools such as ping, netstat, DNS lookup, port scanning, traceroute, and whois.
The document discusses the virtual filesystem (VFS) and the process of mounting the root filesystem during system bootup. It explains that the VFS acts as an interface between the kernel and actual filesystem implementations. During bootup, the following key steps occur:
1. The root filesystem is mounted
2. Directories are created based on the initramfs
3. Additional directories are created based on the ramdisk
4. Default mounts like tmpfs, devpts, proc and sysfs are performed
5. The init process mounts additional filesystems like data, system based on the init.rc file
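The result of these steps can be inspected on a running system: every active mount, including the pseudo-filesystems from step 4, is listed in /proc/mounts:

```shell
# Show the pseudo-filesystem mounts set up during boot.
# proc should appear on any running Linux system; sysfs, tmpfs,
# and devpts are present on typical configurations.
grep -wE 'proc|sysfs|tmpfs|devpts' /proc/mounts
```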
1. Tape backups allow for archiving large amounts of data in a sequential access format and are generally cheaper than hard disks. The tar command can be used to create and extract tape archives.
2. Compression tools like gzip, bzip2, zip, etc. compress files to reduce size and allow for more efficient storage and transfer. They each use different algorithms and result in varying file extensions and compression ratios.
3. Remote connection utilities like telnet, ftp, rlogin, rcp, and ssh allow users to access and transfer files between systems. Some like rlogin and rcp rely on .rhosts files for authentication without passwords between trusted hosts.
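A round trip with tar and gzip (points 1 and 2 above) looks like this; on a real tape the archive target would be a device such as /dev/st0 instead of a regular file:

```shell
mkdir -p data && printf 'hello\n' > data/a.txt

# c=create, z=gzip-compress, f=archive file (tape device or file)
tar czf backup.tar.gz data

# t=list contents without extracting
tar tzf backup.tar.gz

# x=extract into a separate directory, then verify the restore
mkdir -p restore && tar xzf backup.tar.gz -C restore
diff data/a.txt restore/data/a.txt && echo "restore OK"
```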
Hide and seek - interesting uses of forensics and covert channels (tkisason)
In this talk, we will discuss some interesting uses of forensic methods like memory extraction and carving in non-law enforcement scenarios. Also, some interesting methods for achieving covert channels will be covered with their detection possibilities.
Bio: Junior researcher at the Faculty of Organization and Informatics with interests in security, cryptography, and FLOSS.
Enumeration is the process of extracting user names, machine names, network resources, shares, and services from a system. This lab demonstrates how to enumerate a target network using Nmap to obtain lists of computers, open ports, operating systems, machine names, and network services. Specifically, it shows scanning a Windows Server 2008 virtual machine to discover open NetBIOS ports 135, 139, and 445. Nmap output reveals the target is running Windows 7/Vista/2008. Further enumeration using nbtstat extracts additional information like computer names and user names from the target network.
This document provides an overview of various networking tools in Linux, including commands for network configuration (ifconfig, route), connectivity testing (ping, traceroute), name resolution (host, nslookup), port and protocol inspection (netstat, tcpdump), and secure remote access (SSH, PuTTY). It also covers tools for firewall management (ufw), network mapping (Nmap), raw socket programming (netcat), link status (ethtool), and more. Examples are given for common tasks like viewing routing tables, capturing packets, remotely controlling systems, and accessing services over Telnet versus SSH. A references section at the end provides additional learning resources.
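On current distributions, several of these classic tools have iproute2 successors; a quick sketch of the modern equivalents, using the loopback interface so it works without external connectivity:

```shell
# ifconfig -> ip addr: show interface addresses (loopback here)
ip -o addr show lo

# route -> ip route: display the routing table
ip route show

# netstat -> ss: list listening TCP sockets
ss -ltn
```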
This presentation explains how to manage networking and connectivity in Debian-based Linux flavors. You will learn how to monitor network resources, view Ethernet and wireless adapter information, check name resolution, download files with wget, etc.
Kernel Recipes 2019 - GNU poke, an extensible editor for structured binary data (Anne Nicolas)
GNU poke is a new interactive editor for binary data. Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them. Once a user has defined a structure for binary data (usually matching some file format) she can search, inspect, create, shuffle and modify abstract entities such as ELF relocations, MP3 tags, DWARF expressions, partition table entries, and so on, with primitives resembling simple editing of bits and bytes. The program comes with a library of already written descriptions (or “pickles” in poke parlance) for many binary formats.
GNU poke is useful in many domains. It is very well suited to aid in the development of programs that operate on binary files, such as assemblers and linkers. This was in fact the primary inspiration that brought me to write it: easily injecting flaws into ELF files in order to reproduce toolchain bugs. Also, due to its flexibility, poke is also very useful for reverse engineering, where the real structure of the data being edited is discovered by experiment, interactively. It is also good for the fast development of prototypes for programs like linkers, compressors or filters, and it provides a convenient foundation to write other utilities such as diff and patch tools for binary files.
This talk (unlike Gaul) is divided into four parts. First I will introduce the program and show what it does: from simple bits/bytes editing to user-defined structures. Then I will show some of the internals, and how poke is implemented. The third block will cover the way of using Poke to describe user data, which is to say the art of writing “pickles”. The presentation ends with a status of the project, a call for hackers, and a hint at future works.
Jose E. Marchesi
This document discusses how eBPF (extended Berkeley Packet Filter) can be used for kernel tracing. It provides an overview of BPF and eBPF, how eBPF programs are compiled and run in the kernel, the use of BPF maps, and how eBPF enables new possibilities for dynamic kernel instrumentation through techniques like Kprobes and ftrace.
Kernel Recipes 2019 - Faster IO through io_uring (Anne Nicolas)
io_uring provides a new asynchronous I/O interface in Linux that aims to address limitations with existing interfaces like aio and libaio. It uses a ring-based model for submission and completion queues to efficiently support asynchronous I/O operations with low latency and high throughput. Though initially skeptical, Linus Torvalds ultimately merged io_uring into the Linux kernel due to improvements in missing features, ease of use, and efficiency over alternatives.
The log file documents repeated errors from the com.xerox.scan.phaser6128mfp.ButtonListenerAgent process on gen-pc07. Specifically, the process is throwing errors and referencing the PThreadUtils.h file at line 58, and this message is repeated at 30 minute intervals throughout the day.
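A repetition pattern like this is easy to confirm with standard text tools. A sketch over a made-up excerpt (the line format and PID below are invented for illustration; only the process name, host, file, and line number come from the log description):

```shell
cat > sample.log <<'EOF'
10:00:01 gen-pc07 ButtonListenerAgent[212]: error at PThreadUtils.h:58
10:30:02 gen-pc07 ButtonListenerAgent[212]: error at PThreadUtils.h:58
11:00:01 gen-pc07 ButtonListenerAgent[212]: error at PThreadUtils.h:58
EOF

# Count occurrences of the repeated error
grep -c 'PThreadUtils.h:58' sample.log

# Extract just the timestamps to eyeball the 30-minute interval
awk '/PThreadUtils.h:58/ { print $1 }' sample.log
```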
This document provides an overview of common Linux networking commands such as ifconfig, route, traceroute, nslookup, arp, dig, and netstat that are used to configure network interfaces, display routing tables, trace network routes, lookup domain names, manage address resolution, query DNS servers, and view network statistics. It also discusses how to use ifconfig to assign IP addresses to interfaces, route to view routing tables, arp to manage the address resolution cache, and dig for more powerful DNS lookups than nslookup.
Computers are connected in a network to exchange information and resources with each other. Two or more computers are connected through network media to form a computer network.
A number of network devices and media are involved in forming a computer network.
A computer running the Linux operating system can also be part of a network, whether small or large, thanks to its multitasking and multi-user nature.
Keeping the system and network up and running is part of a system/network administrator's job. In this article we review frequently used network configuration and troubleshooting commands in Linux.
Fire in the Sky: An Introduction to Monitoring Apache Spark in the Cloud with... (Spark Summit)
Writing intelligent cloud native applications is hard enough when things go well, but what happens when there are performance and debugging issues that arise during production? Inspecting the logs is a good start, but what if the logs don’t show the whole picture? Now you have to go deeper, examining the live performance metrics that are generated by Spark, or even deploying specialized microservices to monitor and act upon that data. Spark provides several built-in sinks for exposing metrics data about the internal state of its executors and drivers, but getting at that information when your cluster is in the cloud can be a time consuming and arduous process. In this presentation, Michael McCune will walk through the options available for gaining access to the metrics data even when a Spark cluster lives in a cloud native containerized environment. Attendees will see demonstrations of techniques that will help them to integrate a full-fledged metrics story into their deployments. Michael will also discuss the pain points and challenges around publishing this data outside of the cloud and explain how to overcome them. In this talk you will learn about: Deploying metrics sinks as microservices, Common configuration options, and Accessing metrics data through a variety of mechanisms.
(120513) #fitalk an introduction to linux memory forensics (INSIGHT FORENSIC)
This document discusses Linux memory forensics and provides an overview of tools and techniques for acquiring and analyzing memory. It begins by covering live forensics commands and then discusses memory forensics in more depth. Several open source tools for dumping physical memory are described, including fmem and LiME, as well as tools for analyzing memory images like Foriana and Volatilitux. Commercial memory analysis solutions are also briefly mentioned.
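Before taking a full memory dump, live triage usually starts with the /proc pseudo-filesystem, which exposes much of the state a memory image would contain. A small sketch (using the shell's own process; for a real target, substitute its PID for "self"):

```shell
# Memory mappings of a process, straight from the kernel
head -n 3 /proc/self/maps

# Command line of the process (NUL-separated, so translate to spaces)
tr '\0' ' ' < /proc/self/cmdline; echo

# Process name and resident memory from the status file
grep -E '^(Name|VmRSS)' /proc/self/status
```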
Netcat (nc) is a networking utility that can be used to transfer files, run commands remotely, and scan ports on remote systems. It allows establishing TCP and UDP connections to ports on remote systems. The document provides examples of using nc to scan ports, transfer files between systems, set up reverse shells, and perform basic network tasks and administration. Google dorking techniques are also presented for searching websites and finding specific pages or files using keywords, titles, and URLs. The Whois tool is demonstrated to query registration records for domain names and obtain information like registrar, IP address, and name servers.
101 4.2 maintain the integrity of filesystems (Acácio Oliveira)
The document discusses maintaining the integrity of Linux filesystems. It describes tools like fsck, e2fsck, and tune2fs for checking and repairing filesystems, as well as df, du, and debugfs for monitoring filesystem usage and exploring ext filesystem internals. The document provides examples of using these tools to check filesystems by label, UUID, or device node; view free space and inode information; and attempt undeletion of files.
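fsck and debugfs need an unmounted device and root privileges, but the monitoring side can be sketched safely on any filesystem:

```shell
# Free space for the filesystem holding the current directory,
# then the same report in inode counts (-i)
df -h .
df -i .

# Disk usage of a directory tree, summarized and human-readable
mkdir -p demo && printf 'x' > demo/f
du -sh demo
```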
This document describes a new method for exploiting PL/SQL injection without needing to create functions or procedures. It involves injecting a pre-compiled cursor using the DBMS_SQL package to execute arbitrary SQL. The attacker can use this to grant privileges to themselves or create their own functions without any system privileges beyond CREATE SESSION. It provides an example exploiting the SDO_DROP_USER_BEFORE trigger in Oracle to gain DBA privileges in this way without needing CREATE PROCEDURE permission.
By using specially crafted parameters in double quotes, it is possible to bypass the input validation of the Oracle dbms_assert package and inject SQL code. This allows dozens of already patched Oracle vulnerabilities to be exploited again across versions 8.1.7.4 to 10.2.0.2. The researcher notified Oracle of the problem in April 2006. To mitigate risks, privileges like CREATE PROCEDURE should be revoked to prevent injection of malicious functions or procedures.
The document describes Windows Credentials Editor (WCE), a tool that manipulates Windows logon sessions to dump and modify credentials in memory. WCE has two main features - it can dump in-memory credentials like usernames, domains, and NTLM hashes from current, future, and terminated logon sessions; and it supports pass-the-hash by allowing changes to NTLM credentials or creation of new logon sessions with arbitrary credentials. The document discusses two methods WCE could use - directly calling authentication package APIs, which requires running code in LSASS; or reading LSASS memory to locate logon session and credential structures and decrypt credentials without injecting code.
The document discusses database forensics and analysis techniques. It introduces current challenges, available tools, and new approaches using external tables to preserve metadata when collecting evidence. Typical patterns seen in database objects like SYS.USER$ are shown, like multiple accounts with login attempts or similar lock times indicating password guessing. Timeline creation is demonstrated to combine data from different sources.
The document discusses how Windows Credentials Editor (WCE) can be used to obtain credentials stored in memory on Windows systems, allowing an attacker to steal usernames and hashes to perform pass-the-hash attacks without cracking passwords. WCE enables bypassing common pre-exploitation techniques by directly using harvested credentials. Leaving logon sessions disconnected rather than logged off can leave credentials exposed in memory as "zombie sessions".
Santorini is a popular Greek island located in the Cyclades islands in the southern Aegean Sea. It formed from volcanic explosions that left the island with steep cliffs surrounding a central caldera filled with water. The island is home to around 7,000 residents spread across 10 villages and has a temperate climate. Santorini has archaeological sites from the Minoan era and was devastated by a massive volcanic eruption around 1600 BC. Tourism is now the main industry, attracting thousands of visitors each year to see the scenic caldera views and sunset from towns like Oia and Fira.
This document provides an overview of database security platforms and the evolution of this market. Some key points:
- Database security platforms have evolved beyond just monitoring database activity and now incorporate features like vulnerability assessment, user rights management, data discovery/filtering, and blocking capabilities.
- The increased scope of monitoring coverage and additional security features mean "Database Activity Monitoring" is no longer an accurate term - these solutions are now more appropriately called "Database Security Platforms."
- These platforms consolidate multiple database security tools into a single solution and can monitor both relational and non-relational databases as well as multiple database types.
- Vendors are beginning to differentiate their database security platforms based on primary use cases
This document provides an overview of database security platforms and the evolution of this market. Some key points:
- Database security platforms have evolved beyond just monitoring database activity and now incorporate features like vulnerability assessment, user rights management, data discovery/filtering, and blocking capabilities.
- The increased scope of monitoring coverage and additional security features mean "Database Activity Monitoring" is no longer an accurate term - these solutions are now more appropriately called "Database Security Platforms."
- These platforms consolidate multiple database security tools into a single solution and can monitor both relational and non-relational databases as well as multiple database types.
- Vendors are beginning to differentiate their database security platforms based on primary use cases.
Time Sensitive Networking in the Linux Kernel - henrikau
Time Sensitive Networking provides mechanisms for sending data across the network with very low latency, low jitter, and minimal frame drops, opening up a whole range of new applications.
This talk primarily focuses on media, but the driver should be interesting for industrial applications and automotive as well.
Methods and Instruments for the new Digital Forensics Environments - piccimario
This document discusses Mario Piccinelli's PhD thesis focused on digital forensics of modern devices like the iPhone, eBook readers, and voyage data recorders. It summarizes that these devices store digital data that may be required for investigations but have not been well-studied forensically. It describes analyzing backups of iOS devices and eBook readers to extract useful metadata and build timelines of usage. For voyage data recorders, it explains recovering raw data files and building tools to parse and visualize the data in NMEA format for use in accident investigations.
The document discusses various digital forensic techniques for analyzing a Windows system to uncover evidence. It outlines 10 different avenues of analysis including NTFS attributes, the registry, prefetch files, print spool files, the recycle bin, thumbs.db, event logs, internet history files, shortcut files, and system restore points. Each technique is briefly described with examples of how they can be used to answer questions about "who, when, why and how" on a system.
Lecture 09 - Memory Forensics - smile790243
Lecture 09 - Memory Forensics
LECTURE 9
BY: DR. IBRAHIM BAGGILI
Memory Forensic Analysis
PART 1
RAM overview
Volatility overview
Understanding RAM
• Two main types of RAM
– Static
• Not refreshed
• Is still volatile
– Dynamic
• Modern computers
• Made up of a collection of cells
• Each cell contains a transistor and a capacitor
• Capacitors charge and discharge (1 and zeros)
• Periodically refreshed
RAM logical organization
• Programs run on computers
• Programs are made up of processes
– Processes are a set of resources used when executing an
instance of a program
– Processes do not generally access the physical memory directly
– Each process has a "virtual memory space"
• Allows operating system to stay in control of allocating memory
– Virtual memory space is made up of
• Pages (default size 4K)
• References (used to map virtual address to physical address)
• May also have a reference to data on the disk (Page file) – used to
free up RAM memory
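The page/offset split the bullets above describe can be sketched in a few lines. This is a toy model with 4K pages (the page-table dictionary and addresses are illustrative, not real Windows structures):

```python
PAGE_SIZE = 4096          # default page size mentioned in the slides
OFFSET_BITS = 12          # 2**12 == 4096

def translate(virtual_addr, page_table):
    """Resolve a virtual address to a physical one.

    page_table maps virtual page numbers to physical frame numbers;
    a missing entry models a page that is not resident (e.g. paged out
    to the page file on disk)."""
    vpn = virtual_addr >> OFFSET_BITS          # virtual page number
    offset = virtual_addr & (PAGE_SIZE - 1)    # position inside the page
    if vpn not in page_table:
        raise LookupError(f"page {vpn:#x} not resident (check the page file)")
    return (page_table[vpn] << OFFSET_BITS) | offset

# Toy page table: virtual page 0x12 lives in physical frame 0x7a.
table = {0x12: 0x7A}
print(hex(translate(0x12345, table)))  # -> 0x7a345
```

Memory-analysis tools must perform exactly this kind of translation (per process, using that process's page tables) to turn the virtual addresses in kernel structures into offsets in a physical memory dump.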
RAM logical organization
• Each process is represented by an _EPROCESS block:
Normal memory
• Each process is represented by an _EPROCESS block.
• Contained within each _EPROCESS block is both a pointer to the next process
(fLink – Forward Link) and a pointer to the previous process (bLink – Back Link).
• When the OS is operating, the _EPROCESS blocks and their pointers come
together to resemble a chain, which is known as a doubly-linked list.
• Chain is stored in kernel memory and is updated every time a process is
launched or terminated.
• Windows API walks this list from head to tail when enumerating processes via
Task Manager, for example.
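The head-to-tail walk described above can be modeled in Python. This is a toy stand-in (real _EPROCESS blocks live in kernel memory and carry many more fields); the process names are examples:

```python
class Eprocess:
    """Toy stand-in for a Windows _EPROCESS block: a name plus the
    forward/backward links that chain processes into a list."""
    def __init__(self, name):
        self.name = name
        self.flink = None   # forward link to the next process
        self.blink = None   # back link to the previous process

def link(processes):
    """Chain the blocks into a circular doubly-linked list."""
    n = len(processes)
    for i, p in enumerate(processes):
        p.flink = processes[(i + 1) % n]
        p.blink = processes[(i - 1) % n]
    return processes[0]

def enumerate_processes(head):
    """Follow flink pointers until the walk loops back to the head -
    effectively what the Windows API does for Task Manager."""
    names, cur = [], head
    while True:
        names.append(cur.name)
        cur = cur.flink
        if cur is head:
            return names

head = link([Eprocess(n) for n in ("System", "lsass.exe", "explorer.exe")])
print(enumerate_processes(head))  # ['System', 'lsass.exe', 'explorer.exe']
```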
Not so normal
• Hides processes from the Windows API
• Known as Direct Kernel Object Manipulation (DKOM)
• Involves manipulating the list of _EPROCESS blocks to "unlink" a
given process from the list
• By changing the forward link of process 1 to point to the third process,
and changing the "bLink" of process 3 to point to process 1, the
attacker's process is no longer part of the list of _EPROCESS blocks.
• Since the Windows API uses this list to enumerate processes, the
malicious process will be hidden from the user but still able to operate
normally.
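The unlink trick can be shown on the same kind of toy list (illustrative Python, not actual kernel manipulation; process names are examples):

```python
class Proc:
    """Minimal process block: name plus forward/back links."""
    def __init__(self, name):
        self.name, self.flink, self.blink = name, None, None

def link(procs):
    # Chain into a circular doubly-linked list, as the kernel does.
    for i, p in enumerate(procs):
        p.flink = procs[(i + 1) % len(procs)]
        p.blink = procs[(i - 1) % len(procs)]

def walk(head):
    # The API-style enumeration: follow flink until we return to head.
    names, cur = [], head
    while True:
        names.append(cur.name)
        cur = cur.flink
        if cur is head:
            return names

def dkom_unlink(victim):
    """The DKOM trick from the slide: point the neighbours at each other so
    list walks skip the victim. The victim itself keeps running."""
    victim.blink.flink = victim.flink
    victim.flink.blink = victim.blink

procs = [Proc(n) for n in ("System", "evil.exe", "explorer.exe")]
link(procs)
head, evil = procs[0], procs[1]
print(walk(head))       # ['System', 'evil.exe', 'explorer.exe']
dkom_unlink(evil)
print(walk(head))       # ['System', 'explorer.exe'] - hidden from the list walk
# A pool scan (e.g. Volatility's psscan) inspects every allocation instead of
# following links, so it would still find the unlinked block.
```

This asymmetry is exactly why memory-forensics tools compare list-walking output against pool-scanning output to spot hidden processes.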
PART 2
Introduction to Memory
Forensics
Before & Now
• Traditionally
• We have always been told to "pull the plug" on a live system
• This is done so that the reliability of the digital evidence is not
questioned
• Now
• People are considering live memory forensics
– Data relevant to the investigation may lie in memory
– Whole Disk Encryption…
Challenges in traditional method
• High volume of data (Adelstein, 2006)
– Increases the time in an investigation
– Increases storage capacity needed for forensic images
– Number of machines that could be included in th ...
Forensic artifacts in modern Linux systems - Gol D Roger
This document provides an overview of a workshop on forensic artifacts in modern Linux systems. It will cover topics such as partitions and filesystems, boot loaders, the Linux file system layout, systemd configuration for services and scheduled tasks, installed software and packages, log files and the systemd journal. The workshop format will involve both presentations and demonstrations of analyzing disk images and artifacts. It is aimed at forensic investigators and showing what can be discovered from compromised, criminal, or seized Linux systems.
Live data collection from Windows systems - Maceni Muse
This document discusses techniques for collecting volatile data and performing a live response investigation on a Windows system. It provides a list of tools to create a response toolkit and obtain information such as running processes, open ports, logged on users, and network connections. The document recommends using these tools to review the event logs and registry for evidence, obtain passwords from the SAM database, and dump system memory for a more in-depth investigation.
This document discusses digital evidence and its analysis methodology. Digital evidence includes information stored on electronic devices like computers, cell phones, hard drives, etc. It must be properly seized, secured and analyzed to avoid contamination. A bit-stream image of storage devices should be created and verified using hashing. Files, slack space and unallocated space are analyzed for keywords. File dates, names and anomalies are documented. The Information Technology Act of 2000 covers various cybercrimes and penalties.
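The hash-verification step for a bit-stream image can be sketched with Python's standard library (the filename `disk.img` is a placeholder for the evidence copy):

```python
import hashlib

def file_hash(path, algo="sha256", chunk_size=1 << 20):
    """Hash a (possibly huge) disk image in 1 MiB chunks so the whole
    image never has to fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Acquisition: hash the image right after creating it and record the value.
# acquisition_hash = file_hash("disk.img")
# Verification: recompute before analysis and compare - any difference
# means the image no longer matches what was acquired.
# assert file_hash("disk.img") == acquisition_hash, "image altered!"
```

Recording the digest at acquisition time and re-verifying before every examination is what lets an examiner demonstrate that analysis was done on an unmodified copy.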
The document provides an overview of the UNIX operating system through a seminar presentation. It discusses the history of UNIX from the 1970s to the 2000s, defines what UNIX is, describes common UNIX commands and the file system structure, and covers topics like memory management, interrupts, reasons for using UNIX, and some applications of UNIX like storage consulting and middleware/database administration. The presentation is intended to educate about the key aspects and functionality of the UNIX operating system.
Debian Linux as a Forensic Workstation - Vipin George
This document discusses using Debian GNU/Linux as a forensic workstation. It begins with an introduction to digital forensics and defines it as the gathering and analysis of digital information for use as legal evidence. It then discusses why Debian is suitable as a forensic workstation due to its stability, large set of forensic tools, and ability to avoid infecting evidence. The rest of the document outlines the stages of a forensic investigation and various tools that can be used at each stage, including acquiring disk images, examining disk images, collecting volatile memory data, and network forensics.
The document discusses the key aspects of computer forensics including the goals, processes, rules, software, and reporting. The goal of computer forensics is to perform a structured investigation while maintaining evidence to determine what happened on a computer and who was responsible. The typical stages are preparation, search and seizure, acquisition, analysis and reporting. Key rules include never mishandling evidence, never trusting the subject system, documenting everything, and never working on original evidence. Common software used includes FTK Imager, Stegsuite, and Helix. Important files that can be analyzed include SAM, SYSTEM32, index, and NTLDER. Sites listed provide access to forensics tools, software, and resources.
This document discusses operating systems and computer security. It defines operating systems as software that coordinates activities between computer hardware resources. It describes common operating system functions like booting up a computer, managing memory, running programs, and connecting to networks. The document also discusses types of operating systems like DOS, Windows, and Linux. It notes that computer security is important to protect private information exchanged over the internet from hackers.
This document is a report on cyber and digital forensics submitted by three students from G.H. Raisoni College of Engineering in Nagpur, India. The report discusses digital forensic methodology, tools used in digital analysis like Backtrack and Nuix, techniques such as live analysis and analyzing deleted files, analyzing USB device history from the Windows registry, and concludes that digital forensics is an evolving field with no set standards yet and constant updates are needed to investigate modern cyber crimes.
The document discusses computer forensics, which involves gathering digital evidence from computers to investigate security incidents and cybercrimes. It outlines the goals, stages, rules, and tools used in the digital investigation process. Key aspects include maintaining a documented chain of custody, never modifying original evidence, using forensic software to acquire disk images while calculating hash values for authentication. The summary provides an overview of the key topics covered in the document.
02 Types of Computer Forensics Technology - Notes - Kranthi
The document discusses various types of computer forensics technology used by law enforcement, military, and businesses. It describes the Computer Forensics Experiment 2000 (CFX-2000) which tested an integrated forensic analysis framework to determine motives and identity of cyber criminals. It also discusses specific computer forensics software tools like SafeBack for creating evidence backups and Text Search Plus for quickly searching storage media for keywords. The document provides details on different types of computer forensics technology used for remote monitoring, creating trackable documents, and theft recovery.
The document discusses indicators of compromise from a cyber attack. It describes the various stages an attacker goes through from initial access to installing malware and establishing command and control. The summary analyzes the host to find malware samples, network connections, and extracted files. It also looks for indicators in network traffic, such as tools downloaded and data uploaded to attacker infrastructure. The document concludes with monitoring effectiveness of security tools and ongoing attribution of attacks.
Computer forensics is the process of applying scientific and analytical techniques to determine potential legal evidence from computers and digital storage devices. It involves lawfully establishing evidence and facts found digitally. There are different types of digital evidence like persistent data that remains when a computer is turned off and volatile data that is lost. Common tools used in computer forensics include Blacklight, Internet Evidence Finder, and SIFT. The standard methodology involves making a copy of the digital evidence, analyzing the copy, and documenting any findings. Computer forensics is used in criminal prosecutions, civil litigation, and corporate investigations.
OS Forensics is one of the categories in digital forensics. As MS Windows is the most popular OS in the world, we focus on Windows forensics and some important methods in this presentation.
NYC4SEC - An Introduction to the Microsoft exFAT File System (Draft 2.01) - overcertified
This document provides a technical summary of the Microsoft Extended File Allocation Table (exFAT) file system format. It discusses exFAT's background and history in relation to other FAT file system versions. It also notes that exFAT support was added to forensic analysis tools like The Sleuth Kit and that exFAT is designed for use with removable media storage due to limitations of NTFS for such use cases. The document provides technical details on exFAT specifications, limitations, and terminology used in accordance with Microsoft's published exFAT specifications.
This document discusses a vulnerability in Oracle databases that allows privilege escalation from CREATE USER privileges to SYSDBA privileges. It provides code examples demonstrating how a user with CREATE USER privileges can create a function with the same name as a built-in SYS function to override the namespace and elevate their privileges when SYS executes the function. The document outlines best practices for prevention, including not logging in as SYS, closely monitoring CREATE USER privileges, and using a tool like Sentrigo Hedgehog for advanced monitoring and alerts. It also provides recommendations for forensic response if privilege escalation occurs.
1. The document discusses SSH tricks and configuration tips for securing SSH connections and servers. It provides examples of SSH client-side one-liners and ways to quickly set up an SSH server.
2. SSH is a secure network protocol for exchanging data between networked devices. The document outlines ways to lock down SSH servers and clients through configuration files and access controls.
3. The document shows examples of SSH port forwarding, tunnels, and other one-liners that can enable remote access or administration through SSH connections.
The document discusses a Layer 7 DDOS attack called an HTTP POST attack. It works by sending legitimate HTTP POST requests to a server but slowly sending the content over an extended period, tying up server resources. This attack is more effective than the HTTP GET Slowloris attack as it fully sends the HTTP headers immediately, bypassing defenses against Slowloris. The attack code example shows how it generates random content lengths and sends payload bytes slowly over time to perform the DDOS attack.
This document summarizes optimizations to TLS/SSL including False Start, Snap Start, and defenses against the BEAST attack. False Start allows the client to send application data before receiving the server's Finished message to reduce latency. Snap Start uses cached handshake parameters to further reduce latency. However, both introduce security risks. The BEAST attack exploits TLS CBC encryption and IV reuse, but can be prevented by changing the encryption mode or adding padding.
The document provides an overview of practical cryptography and the GPG/PGP encryption tools. It discusses symmetric and public key cryptography theory. It then demonstrates how to use GPG/PGP to generate keys, encrypt and decrypt files, digitally sign documents, verify signatures, and distribute public keys through a key server. It also discusses how the web of trust model works to validate identities through in-person key signing after carefully verifying a user's identity.
Kyle Young presents on SSH tricks and configuration tips. He discusses the history and uses of SSH, how to securely connect to SSH servers by verifying fingerprints, and ways to lock down SSH servers and clients through configuration files like sshd_config and ssh_config. He also shares some useful SSH client-side one-liners.
This document describes padding oracle attacks on cryptographic hardware devices that allow encrypted keys to be imported. It presents two types of attacks: 1) An improved Bleichenbacher attack that exploits RSA PKCS#1v1.5 padding to reveal an imported private key in an average of 49,000 oracle queries. 2) An adaptation of the Vaudenay CBC attack to reveal keys encrypted with CBC and PKCS#5 padding. It demonstrates these attacks on commercial security tokens, smartcards, and electronic ID cards to reveal stored cryptographic keys.
The document discusses proper password hashing methods for securely storing passwords. It begins by stating that most websites currently do not store passwords properly, keeping them either in plaintext or as a single unsalted hash, which is irresponsible. It then covers proper approaches: adding a salt and using key derivation functions such as PBKDF2, ARC4PBKDF2, and bcrypt. PBKDF2 works by repeatedly hashing the password together with a salt, while ARC4PBKDF2 additionally encrypts the password and hashes it with an evolving ARC4 stream for added complexity. Bcrypt is likewise an adaptive function, similar in spirit to PBKDF2 but with a more complicated construction.
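A minimal sketch of the salted, iterated approach using PBKDF2 from Python's standard library (the iteration count and salt size here are illustrative choices, not values from the document):

```python
import hashlib, hmac, os

def hash_password(password, salt=None, iterations=600_000):
    """Salted PBKDF2-HMAC-SHA256: repeatedly hashes password+salt so each
    guess costs the attacker `iterations` hash computations."""
    salt = salt or os.urandom(16)            # unique random salt per password
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk              # store all three, never the password

def verify_password(password, salt, iterations, expected):
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)  # constant-time comparison

salt, iters, dk = hash_password("hunter2")
print(verify_password("hunter2", salt, iters, dk))   # True
print(verify_password("letmein", salt, iters, dk))   # False
```

The per-password salt defeats precomputed rainbow tables, and the iteration count is the tunable work factor that bcrypt achieves through its own adaptive construction.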
This document proposes a new method for improving the cryptanalytic time-memory trade-off technique. The original technique, introduced by Hellman in 1980, precomputes ciphertexts to reduce cryptanalysis time at the cost of memory usage. The new method reduces the number of calculations needed during cryptanalysis by a factor of two compared to the existing approach using distinguished points. As an example, the new method can crack 99.9% of Windows password hashes in 13.6 seconds using 1.4GB of precomputed data, much faster than the 101 seconds taken by the existing approach.
This document provides an introduction and overview of threading and concurrency in Perl. It begins with definitions of threads and concurrency basics. It then discusses Perl's implementation of threads since version 5.6, noting that global variables are non-shared by default and sharing must be explicit. The document outlines various threading primitives and synchronization mechanisms in Perl like locks, condition variables, and shows examples of building thread-safe data structures like queues. It concludes with best practices and implementing other common synchronization primitives.
The document is a series of lines repeatedly stating "Author: Bill Buchanan". It does not contain any other substantive information in the content. The author of the document is Bill Buchanan, as his name is listed on every line.
This document discusses various network security concepts including firewalls, proxies, NAT, and VPNs. It provides examples of network infrastructures using different types of firewalls such as packet filtering, stateful, and proxy firewalls. It also discusses standard and extended access control lists (ACLs) used with firewalls to filter traffic. Finally, it covers network address translation (NAT) and port address translation (PAT) which help hide private network addresses from the public internet.
Snort is an open source intrusion detection and prevention system that uses rules written in its own language to inspect network traffic in real-time, detect anomalous activity, and generate alerts. It works by matching packets against signatures in its rules database to identify attacks and exploits, and can detect protocol anomalies, custom signatures, and payload analysis. Snort rules allow it to detect specific patterns in network traffic including payload signatures, TCP flags, and port numbers to identify malicious activity.
The document discusses various types of denial of service (DoS) attacks including layer 4 distributed denial of service (DDoS) attacks using botnets, layer 7 attacks that can be carried out by a single attacker, and link local attacks using fraudulent IPv6 router advertisements. It also profiles various hacktivist groups that have carried out such attacks and outlines defenses against DoS attacks like ModSecurity, load balancing, and router advertisement guard.
This document is the user's manual for sqlmap, an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. The manual provides information on installing and using sqlmap, including requirements, basic usage, supported features, techniques, and numerous configuration options for optimization, injection, detection, enumeration and brute forcing capabilities.
The document is a report from Arbor Networks that analyzes data from a survey of over 500 network operators regarding infrastructure security threats in 2011. Some key findings include:
- Distributed denial-of-service (DDoS) attacks were considered the most significant operational threat. Application-layer DDoS attacks using HTTP floods were most common.
- The largest reported DDoS attacks exceeded 100 Gbps in bandwidth. Major online gaming and gambling sites were frequently targeted.
- Most respondents experienced multiple DDoS attacks per month and detected increased awareness of the DDoS threat over the previous year.
- Network traffic detection, classification, and event correlation tools were commonly used to identify attacks and trace sources.
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
1. Performing
Digital Forensics
with Open Source tools
Dimitris Glynos
{ dimitris at census-labs.com }
Census, Inc.
FOSSCOMM 2011, Patras
Digital Forensics with Open Source Tools :: FOSSCOMM 2011 :: Census, Inc.
2. Overview
Introduction
Data Acquisition
Data Examination
Report Preparation
Conclusions
3. Introduction
4. Digital Forensics
Electronic transactions leave digital trails
A Digital Forensics investigator follows these trails
searching for evidence
This evidence may later be used in court to combat
crimes such as cyber-attacks, digital fraud, corporate
espionage and others
5. When to Perform a Digital Forensics Investigation
A crime has been committed and related evidence
must be presented in court
An incident has occurred and the IT department
needs more information in order to perform proper
service recovery
Upper management needs inside information on the
actions of a rogue employee
6. Incident Response
Find out what you will be allowed to examine
Gather as much volatile information as possible
Processes
Drivers
Sockets
Network traffic
Use statically compiled tools (busybox?) and execute
these from external media
Collect disk data
Look for traces of known malware
Analyze captured data
Create a short report to assist service recovery
Work on longer report
7. Data Acquisition
8. The Data Acquisition Process
Gather information about the host
Collect volatile data (memory, network dumps,
mounted decrypted filesystems)
Collect disk data
Gather other related media (logfiles, documents,
CDROMs, images of flash drives etc.)
Acquired data are hashed
Fill in Chain of Evidence document
9. Acquiring Volatile Data
Dump the RAM
Through Firewire
Windows
No OSS solution available that works for a good set
of Windows releases.
Lots of freeware alternatives.
Linux
No more /dev/mem, /dev/kmem
Dump RAM using a kernel module (fmem)
Capture network traffic (pcap format)
tcpdump
wireshark
ettercap
10. Acquiring Disk Data
The Linux kernel supports a large number of disk
controllers
Boot from Linux CD but don’t mount anything!
Create HDD images using a known good version of
dcfldd
An enhanced version of dd
Developed at Dept. of Defense Comp. Forensics Lab
Hashes data while copying them from the input
device
If you encounter a faulty drive use ddrescue
Watch out for Host Protected Areas (HPA) and
Device Configuration Overlays (DCO)
You will need RAID support to capture RAID
volumes
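The hash-while-copying idea behind dcfldd can be sketched in a few lines of Python. This is a toy illustration only, not a replacement for the real tool; the function name and the in-memory "device" are invented for the example:

```python
import hashlib
import io

def image_stream(src, dst, block_size=4096):
    """Copy src to dst in fixed-size blocks, hashing the data on the
    fly (the essence of what dcfldd does while imaging a drive)."""
    md5 = hashlib.md5()
    while True:
        block = src.read(block_size)
        if not block:
            break
        md5.update(block)
        dst.write(block)
    return md5.hexdigest()

# tiny demo on an in-memory "device" standing in for /dev/sdX
evidence = io.BytesIO(b"\x00" * 8192 + b"secret")
image = io.BytesIO()
digest = image_stream(evidence, image)
```

The digest recorded at acquisition time lets anyone later verify that the working copy of the image is bit-identical to what was taken from the drive.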
11. Data Examination
12. Forensic Analysis Software
First there was TCT (The Coroner’s Toolkit)
Then came the Sleuthkit
Autopsy provided a web front-end for Sleuthkit
Now there’s a plethora of new software around, with
pyflag being perhaps the most promising one
supports AFF format
stores computed/extracted metadata in database
allowing for faster queries
performs log analysis
supports network forensic analysis
supports memory forensic analysis
13. Memory Dump Analysis
The Volatility framework analyzes memory dumps
from Windows XP SP2/SP3 and some GNU/Linux
(beta) systems
Identifies running processes
Identifies open sockets and connections
Performs process memory space analysis (memory
maps, loaded libraries, list of open files)
# python2.6 volatility connections -f /tmp/xp-NIST-sample
Local Address Remote Address Pid
127.0.0.1:1056 127.0.0.1:1055 2160
127.0.0.1:1055 127.0.0.1:1056 2160
192.168.2.7:1077 64.62.243.144:80 2392
192.168.2.7:1082 205.161.7.134:80 2392
192.168.2.7:1066 199.239.137.200:80 2392
14. Network Traffic Analysis
Wireshark is your friend!
Identify talking hosts
Identify abnormal traffic
15. Image Analysis and File Recovery
DEMO
16. Looking for Data
The forensic equivalent of grep on a file
17. Linux Log Recovery
Most logs in /var/log are text based
Syslog prepends a timestamp to each log entry
You can search for a time prefix that matches log
entries that have been deleted!
Jan 12.*servername
Locate the longest version of a log excerpt (you may
encounter more than one!)
Join together the log excerpts found on different disk
locations
...great fun! (sic)
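The search for deleted syslog lines can be automated with a small regex scan over the raw image, generalizing the "Jan 12.*servername" pattern above. An illustrative sketch (the pattern details and function name are my own):

```python
import re

# Deleted syslog lines often survive in unallocated space; scan raw
# bytes for a "Mon DD HH:MM:SS host ..." prefix to recover them.
SYSLOG_RE = re.compile(
    rb"(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) "
    rb"[ \d]\d \d\d:\d\d:\d\d "   # day and time
    rb"[!-~]+ "                   # hostname
    rb"[ -~]*")                   # rest of the printable line

def carve_syslog(raw):
    """Return candidate syslog lines found anywhere in a raw image."""
    return [m.group(0).decode("ascii", "replace")
            for m in SYSLOG_RE.finditer(raw)]

blob = (b"\x00garbage\x00"
        b"Jan 12 03:14:15 servername sshd[42]: Accepted password"
        b"\x00more")
hits = carve_syslog(blob)
```

Matches stop at the first non-printable byte, which is exactly where a recovered log fragment typically ends on disk.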
18. Building a Timeline from Filesystem Events
Gather file activity events from structures of existing
and deleted files and encode in mactime format
Use Sleuthkit’s fls tool
Create a timeline by sorting the events in
chronological order
Use Sleuthkit’s mactime tool
Filesystem m a c b
Ext2/3 Modified Accessed Changed N/A
FAT Written Accessed N/A Created
NTFS File Modified Accessed MFT Modified Created
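In essence, mactime merely sorts per-file events into chronological order. A minimal sketch of that step (the sample timestamps and helper name are invented for illustration, not real mactime internals):

```python
from datetime import datetime, timezone

# One (unix-timestamp, macb-flags, path) record per file event,
# as fls would emit them; sorting them is what mactime does.
events = [
    (1304336735, "m.c.", "/etc/passwd"),
    (1304336425, ".a..", "/etc/shadow"),
    (1304336585, "mac.", "/var/log/lastlog"),
]

def timeline(events):
    """Sort events chronologically and render mactime-style rows."""
    rows = []
    for ts, macb, path in sorted(events):
        stamp = datetime.fromtimestamp(ts, tz=timezone.utc)
        rows.append(f"{stamp:%a %b %d %Y %H:%M:%S} {macb} {path}")
    return rows
```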
19. Quiz #1: What Do You See Here?
Mon May 02 2011 13:45:35 .a.. /etc/protocols
.a.. /etc/hosts.allow
.a.. /etc/hosts.deny
.a.. /etc/ssh/moduli
Mon May 02 2011 13:45:37 .a.. /etc/pam.d/sshd
Mon May 02 2011 13:45:38 .a.. /etc/shadow
Mon May 02 2011 13:45:39 .a.. /lib/terminfo/x/xterm
Mon May 02 2011 13:46:25 mac. /var/log/lastlog
Mon May 02 2011 13:46:29 .a.. /home/john
Mon May 02 2011 13:48:04 .a.. /etc/pam.d/su
Mon May 02 2011 13:50:27 m.c. /etc/passwd
20. Quiz #2: What Do You See Here?
15:13:29 .a.. /tmp/...
15:13:40 .a.. /etc/wgetrc
.a.. /usr/bin/wget
15:14:02 ..c. /tmp/.../la.c
15:14:40 .a.. /tmp/.../la.c
.a.. /usr/include/stdio.h
.a.. /usr/lib/gcc/i486-linux-gnu/4.3/cc1
15:14:41 .a.. /usr/include/pcap/pcap.h
15:14:42 .a.. /usr/bin/as
.a.. /usr/lib/crt1.o
15:14:43 m.c. /tmp/.../t
15:14:48 .a.. /tmp/.../t
21. Quiz #3: What Do You See Here?
10:04:01 macb C:/Documents and Settings/john/
Local Settings/Temporary
Internet Files/Content.IE5/XXXXXXXX/
ABCDE8FG
10:04:05 .a.. C:/Program Files/Adobe/Acrobat 9.0/
Acrobat/plug_ins/PfuSsPCapPI/
PfuSsPCapPI.api
10:04:12 m.c. C:/Documents and Settings/john/
Local Settings/Temporary
Internet Files/Content.IE5/XXXXXXXX/
sexy.pdf
10:05:00 .a.. C:/Documents and Settings/john/
Local Settings/Temp/foo.bat
22. Windows Registry Timeline
Windows keeps an MTIME record for each registry
key
We can browse Windows registry files with
reglookup
..and sort them in chronological order with
reglookup-timeline
# reglookup-timeline /mnt/WINDOWS/system32/config/system
MTIME,FILE,PATH
2010-09-23 06:55:20,system,/WPA/MediaCenter
2010-09-23 07:07:44,system,/WPA/SigningHash-XXXXXXXXXXXXX
2010-09-23 07:07:49,system,/WPA/Key-YYYYYYYYYYYYYYYYY
...
23. File Identification
Check
with databases of known file hashes
with databases of known file patterns
information entropy
contents manually
24. NSRL DB
NIST’s National Software Reference Library
Hash values of known files
md5 & sha1
file origin information (filename, system)
7.4GB as of June 2010 (updated every 3 months)
They are admissible as evidence by US courts
All data is traceable to its origin
NIST keeps copies at secure facility
Sleuthkit’s hfind searches an indexed NSRL DB
$ hfind NSRLFile.txt 5f7eaaf5d10e2a715d5e305ac992b2a7
5f7eaaf5d10e2a715d5e305ac992b2a7 CHKDSK.EXE
5f7eaaf5d10e2a715d5e305ac992b2a7 chkdsk.exe
### time: real 0m0.003s, user 0m0.004s, sys 0m0.000s
25. The File Utility
The magic database associates data with a file type,
based on known patterns, e.g.
0 string MZ
>0x18 leshort <0x40 MS-DOS executable
The file utility consults the magic database and
reports the type of a file
$ file /tmp/obj
/tmp/obj: PE32 executable for MS Windows (GUI)
Intel 80386 32-bit
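The two-line magic rule above can be expressed directly in Python with struct. A hedged sketch (the function name is invented; the offsets come straight from the magic entry):

```python
import struct

def looks_like_dos_exe(data):
    """Apply the magic rule shown above: 'MZ' at offset 0, and a
    little-endian short below 0x40 at offset 0x18, indicate an
    MS-DOS style executable header."""
    if len(data) < 0x1a or data[:2] != b"MZ":
        return False
    (reloc_off,) = struct.unpack_from("<H", data, 0x18)
    return reloc_off < 0x40
```

The file utility is simply this kind of test repeated over thousands of magic entries until one matches.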
26. Antivirus Check
Antiviruses use signatures (content hashes and
pattern-matching) to identify malicious software
ClamAV is an Open Source Antivirus Engine
It detects Trojans, Viruses, Malware and other
(possibly) unwanted applications regardless of their
filename
# freshclam
ClamAV update process started at Wed Apr 27 ...
bytecode.cld updated (version: 143, sigs: 40, ...)
Database updated (952543 signatures) from
db.local.clamav.net
$ clamscan --detect-pua /tmp/obj2
/tmp/obj2: PUA.Script.PDF.EmbeddedJS FOUND
27. Sorting Files
File sorting allows the investigator:
to filter out files that are known and good
to focus the investigation on files of a certain type
(e.g. Microsoft Word documents)
Sleuthkit’s sorter sorts allocated and unallocated
files according to both NSRL-type and magic-type
databases
It also identifies files that have an extension
mismatch!
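The extension-mismatch check can be illustrated with a few hard-coded signatures. This is a sketch in the spirit of sorter, not its actual logic; the signature table and function name are my own:

```python
# Map of leading magic bytes to the file type they imply.
SIGNATURES = {
    b"\xff\xd8\xff": "jpg",
    b"%PDF": "pdf",
    b"MZ": "exe",
}

def extension_mismatch(filename, data):
    """Return the content-derived type when it disagrees with the
    file extension, or None when they agree / nothing matches."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return None if ext == kind else kind
    return None  # unknown signature: nothing to report
```

A renamed executable posing as an image is a classic find for this check.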
28. Sorting Files
sorter example on a tiny ext2 image with two present
files and one deleted file
$ sorter -d . -s /tmp/img
$ tree
.
|-- documents
| ‘-- mpi-12.pdf
|-- documents.txt
|-- images
| |-- mpi-13.jpg
| ‘-- mpi-14
|-- images.txt
‘-- sorter.sum
29. Sorting Files
$ cat images.txt
name.jpg
JPEG image data, EXIF standard
Image: /tmp/mpi Inode: 13
Saved to: images/mpi-13.jpg
$OrphanFiles/OrphanFile-14
JPEG image data, JFIF standard 1.01
Image: /tmp/mpi Inode: 14
Saved to: images/mpi-14
30. Checking File Metadata
Look at a file’s internal metadata to obtain
information about the environment it was created in
exifprobe
pdfinfo
...
Do you suspect that steganography is taking place?
Check with tools like stegdetect
Check your sample data against various
steganography decoding tools
31. Information Entropy
Measuring the information entropy of a file may give
us a hint as to whether a file contains:
compressed data
random data
encrypted data (well, not always)
ent to the rescue!
measures entropy
performs the χ² test
calculates the arithmetic mean
calculates a Monte Carlo value for π
measures the serial correlation coefficient
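Shannon entropy itself is easy to compute by hand, which helps when interpreting ent's output. An illustrative sketch:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Bits of entropy per byte: close to 8 for random, compressed
    or encrypted data; noticeably lower for text and machine code."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A value near 8 bits/byte on a file that claims to be, say, a plain document is a strong hint of packing or encryption.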
32. Information Entropy
Entropy Compr. χ² Exceed
urandom 7.996433 0% 256.63 50%
calc.exe 6.003569 24% 1661018.85 0.01
calc.zip 7.992996 0% 487.11 0.01
calc.gpg 7.996440 0% 257.08 50%
Mean Monte Carlo π MC error Serial corr.
urandom 127.2937 3.102924246 1.23 -0.005558
calc.exe 102.2017 3.080255310 1.95 0.379018
calc.zip 128.2233 3.114373668 0.87 -0.005195
calc.gpg 127.3222 3.142988717 0.04 -0.002486
AES256 encrypted data (calc.gpg) look very much
like random data!
33. Manual File Inspection
Use a hex editor to inspect the file structure
hd
Extract any strings available
strings file
extracts ASCII strings
strings -e l file
extracts UTF-16 little endian strings
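What strings does for ASCII can be reproduced with a short regex. A sketch; the minimum run length of 4 matches the strings default:

```python
import re

def extract_strings(data, min_len=4):
    """Return runs of at least min_len printable ASCII bytes,
    roughly what `strings` prints for a binary file."""
    pattern = re.compile(rb"[ -~]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(data)]
```

UTF-16 little-endian strings (the `-e l` case) would instead need every other byte to be NUL, which is why strings treats them as a separate encoding.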
34. Reverse Engineering
static / runtime analysis in a protected environment
(e.g. in a qemu guest)
for Windows binaries
pefile / peid
ndisasm
winedbg / zerowine
metasm / radare
for Linux binaries
readelf
objdump
strace / ltrace
metasm / radare / elfsh
35. File Carving
Use signatures to locate files within raw data
Search for a particular file
Search for a particular file type
Structural information is useful in determining the
exact length of a file
foremost is a file carver
supports a wide variety of file types
the user can add more types through the
configuration file
$ foremost -v -t jpg -i image -o outdir
Num Name (bs=512) Size File Offset
0: 00000134.jpg 33 KB 68608
1: 00000204.jpg 28 KB 104448
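For JPEGs, the signature-based idea reduces to scanning for the SOI and EOI byte sequences. A naive sketch only (real carvers like foremost also use structural information, as the slide notes):

```python
# JPEG files start with FF D8 FF (SOI) and end with FF D9 (EOI).
SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"

def carve_jpegs(raw):
    """Return (offset, data) pairs for candidate JPEGs in raw data."""
    found, pos = [], 0
    while True:
        start = raw.find(SOI, pos)
        if start == -1:
            break
        end = raw.find(EOI, start)
        if end == -1:
            break
        found.append((start, raw[start:end + len(EOI)]))
        pos = end + len(EOI)
    return found
```

Stopping at the first EOI can truncate files that embed thumbnails (which contain their own EOI), which is exactly why structural parsing matters for determining the true file length.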
36. Windows Log Recovery
Windows logs are stored in a record-based binary
format (!)
Part of the textual description of each entry lies
within DLL files (!?)
grokevt can parse Windows (evt) logs and turn them
into their textual counterparts
It resolves the textual descriptions from the
corresponding DLLs for logs of known type
It can also locate Windows log entries within raw
disk images (carving!)
15367,Error,2011-02-02 10:00:08, Symantec AntiVirus, HOST,
Security Risk Found! Bloodhound.SONAR.1 in File: c:\nc.exe
by: TruScan scan. Action: Leave Alone succeeded.
37. Evidence Correlation
How do you know if a piece of information is
trustworthy evidence?
Was it found on a tamper-proof medium?
Was it produced by a trusted source?
Does other evidence also support it?
Always look for related events
A remote login event (a log entry?) may also be
supported by Access Time changes to the user’s files.
Combine the evidence under a single timeline
Use log2timeline to join different types of logs
Watch for clock skew between hosts
Watch for logs that keep time in UTC or other formats
A wall clock reference (time of acquisition?) is always
useful!
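Normalizing host-local timestamps to UTC and correcting a measured clock skew can be done with the standard library alone. A sketch; the offset and skew values below are invented examples:

```python
from datetime import datetime, timedelta, timezone

def normalize(local_str, utc_offset_hours, clock_skew_seconds=0):
    """Convert a host-local log timestamp to UTC, correcting a
    measured clock skew (positive skew = host clock runs fast)."""
    naive = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
    tz = timezone(timedelta(hours=utc_offset_hours))
    aware = naive.replace(tzinfo=tz)
    return (aware - timedelta(seconds=clock_skew_seconds)).astimezone(timezone.utc)

# a host in UTC+2 whose clock ran 90 seconds fast
t = normalize("2011-05-02 13:46:25", 2, clock_skew_seconds=90)
```

Only after all sources sit on one clock can events from different hosts be placed on a single timeline with confidence.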
38. Report Preparation
39. Keeping Notes
Document all steps of the investigation process
Independent investigators must be able to follow all
of your steps (and reach the same conclusions!)
Many GUI forensic analysis tools provide a
notes-keeping functionality
40. Preparing the Report
What usually happens
First draft of report goes to client and legal
representative
Investigator collects feedback (detached notes)
Revised copy is sent to client
The client doesn’t edit the report directly, so the
investigator is free to use the editing suite of his
choice!
OpenOffice / LibreOffice
XeLaTeX
...
Tool output is presented in the Appendix
You can pretty-print this using scripts + XSLT.
41. Example of an Application-Generated Report
42. Conclusions
43. Conclusions
Open Source Landscape: A growing arsenal of
forensic tools!
Many of the tools were created
on an "as-needed" basis (by professionals / others)
as part of calls in conferences (by academia)
as part of a certification process (by investigators)
Some of them have been recognized as the “de facto”
standard (e.g. dcfldd)
You might find that the tool development process
and related research is much more exciting than the
actual investigation process itself... :-)
44. And Some Rants...
Need for better coordination between filesystem
community and forensic community
e.g. once a new filesystem is released, both filesystem
and forensic tools should have access to its internal
data structures through a common library.
We’ve lost a lot (of evidence) in the race towards
efficiency
Administrators should have the option to switch a
filesystem (or logging mechanism) to a more
“forensic-friendly” mode.
45. Questions?
Image courtesy of South Park Studios.