The document summarizes research on analyzing the memory of the Dalvik virtual machine used in Android. It describes acquiring memory from Android devices, locating key data structures in memory like loaded classes and their fields, and analyzing specific Android applications to recover data like call histories, text messages, and location information. The goal is to develop forensics capabilities for investigating Android devices through memory analysis.
Forensic Memory Analysis of Android's Dalvik Virtual Machine (Source Conference)
This document summarizes a presentation on analyzing the memory of the Dalvik virtual machine used in Android. It discusses acquiring Android phone memory, locating key data structures in Dalvik like loaded classes and instance fields, and analyzing specific Android applications to recover data like call histories, text messages, browser sessions, and wireless network information. The goal is to extract runtime information and data from Android apps through memory forensics.
This document summarizes Linux memory analysis capabilities in the Volatility framework. It discusses general plugins that recover process, network, and system information from Linux memory images. It also describes techniques for detecting rootkits by leveraging kmem_cache structures and recovering hidden processes. Additionally, it covers analyzing live CDs by recovering the in-memory filesystem and analyzing Android memory images at both the kernel and Dalvik virtual machine levels.
The document discusses memory forensic analysis of Windows systems with EnCase. It covers acquiring memory images with tools like WinEn and the MoonSols Windows Memory Toolkit, then analyzing those images with EnScripts to extract information on processes, modules, and open files, using techniques such as traversing kernel data structures and searching for object fingerprints. Hands-on examples demonstrate acquiring memory from a system and analyzing it with EnScripts to identify processes and to compare the process list before and after terminating a process.
Android Mind Reading: Android Live Memory Analysis with LiME and Volatility (Joe Sylve)
This document discusses acquiring and analyzing volatile memory from Android devices. It provides an overview of traditional Linux memory forensics and the challenges with analyzing Android memory. It introduces the tools LiME and Volatility for acquiring memory from a running Android device and then analyzing that memory image. It demonstrates how to use these tools to extract useful forensic artifacts like processes, network connections, and files.
This document provides an overview of the Volatility memory forensics framework. It discusses the Volatility Foundation, which supports development of the open-source Volatility tool. It highlights new features in Volatility 2.4, including improved support for Windows, Linux, MacOS, and analyzing application artifacts from programs like Chrome, Firefox, and Notepad. It outlines the Volatility roadmap and concludes by discussing the 2014 Volatility Plugin Contest winners, whose new plugins will enhance Volatility's capabilities.
De-Anonymizing Live CDs through Physical Memory Analysis (Andrew Case)
- The document discusses analyzing physical memory to recover information from live CDs that normally evades digital forensics investigations.
- It presents research on developing algorithms to enumerate the in-memory file system structure of live CDs like TAILS to recover file metadata and contents directly from RAM.
- The goal is to apply traditional investigative techniques like timeline analysis that are usually impossible for live CDs by reconstructing the complete in-memory file system and contents through memory analysis.
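The timeline technique these bullets describe can be sketched minimally: given file records recovered from a live CD's in-memory filesystem (the paths and timestamps below are hypothetical stand-ins for carved metadata), sort them into a chronological activity list.

```python
from datetime import datetime, timezone

def build_timeline(records):
    """Sort recovered (path, mtime_epoch) records into a chronological timeline.

    records: iterable of (path, unix_timestamp) tuples, e.g. file metadata
    carved from a live CD's in-memory filesystem.
    """
    timeline = []
    for path, mtime in sorted(records, key=lambda r: r[1]):
        ts = datetime.fromtimestamp(mtime, tz=timezone.utc)
        timeline.append(f"{ts:%Y-%m-%d %H:%M:%S} UTC  M  {path}")
    return timeline

# Hypothetical records carved from RAM:
recs = [("/home/amnesia/notes.txt", 1300000200),
        ("/tmp/download.bin", 1300000100)]
for line in build_timeline(recs):
    print(line)
```

Real tooling walks the kernel's in-memory filesystem structures to obtain these records; the sorting-and-rendering step afterward is this simple.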
OMFW 2012: Analyzing Linux Kernel Rootkits with Volatility (Andrew Case)
The document summarizes the analysis of several Linux rootkits using the Volatility memory forensics framework. It describes how the Average Coder rootkit hides processes, modules and users by hooking various file operations. It also details how the KBeast rootkit hides its module, hooks system calls and network connections. Finally, it discusses how the Jynx rootkit operates by preloading a shared library to hook filesystem and network functions and implement a backdoor. The document demonstrates how Volatility plugins can detect these rootkits and recover hidden data.
(120513) #fitalk an introduction to linux memory forensics (INSIGHT FORENSIC)
This document discusses Linux memory forensics and provides an overview of tools and techniques for acquiring and analyzing memory. It begins by covering live forensics commands and then discusses memory forensics in more depth. Several open source tools for dumping physical memory are described, including fmem and LiME, as well as tools for analyzing memory images like Foriana and Volatilitux. Commercial memory analysis solutions are also briefly mentioned.
Workshop - Linux Memory Analysis with Volatility (Andrew Case)
Slides from my 3-hour workshop at Black Hat Vegas 2011. Covers using Volatility to perform Linux memory analysis investigations, as well as Linux kernel internals.
This document discusses computer memory forensics. It explains that memory forensics involves acquiring volatile memory contents from RAM and preserving them for later forensic analysis. The document outlines the different types of forensic analysis that can be performed on memory contents, including storage, file system, application, and network analysis. It also discusses the challenges of memory forensics, such as anti-forensic techniques used by malware to hide processes, drivers, and other artifacts in memory.
Volatility is an open-source memory forensics tool that analyzes RAM dumps to detect malware. It supports Windows, Linux, Mac, and Android, and uses plugins to parse memory images and extract useful information. Among other things, Volatility can detect injected code, unpacked files, hooks, and kernel driver modifications left in memory by malware. It allows analyzing ransomware-locked systems, unpacking encrypted files, and dumping the executable content of running processes for further reverse engineering.
The document provides an overview of memory forensics and the Rekall memory analysis tool. It discusses why memory forensics is useful, describes how Rekall supports multiple operating systems through profiles, and covers memory imaging, virtual memory concepts, and analyzing live memory. Rekall's interfaces like the command line, console, notebook, and web console are also introduced.
This document discusses memory forensics and summarizes a presentation about investigating memory artifacts. It introduces dracOs, an open-source Linux distribution for security research. It describes the current and planned forensics tools being integrated into dracOs. It then explains the basics of memory forensics, including why it is important, how memory is organized, common memory acquisition formats, and analysis tools like Volatility and Rekall. The presentation aims to provide knowledge about digital forensic analysis of memory dumps.
The document analyzes and classifies 123 PlugX malware samples into 7 groups based on their configurations and relationships to known targeted attacks. The largest group is "Starter" with 24 samples, followed by "*Sys" with 20 samples. Various techniques are used to associate samples, including matching registry values, domains, IP addresses, debug strings, and network ranges. Many samples in the "*Sys" and "WS" groups are found to share the same owner, domains, or network.
Live Memory Forensics on Android devices (Nikos Gkogkos)
This presentation deals with some RAM forensics on the Android OS using the LiME tool for getting a RAM dump and the Volatility framework for the analysis part!
Unmasking Careto through Memory Forensics (video in description) (Andrew Case)
My presentation from SecTor 2014 on analyzing the sophisticated Careto malware with memory forensics & Volatility
Video here: http://2014.video.sector.ca/video/110388398
One-Byte Modification for Breaking Memory Forensic Analysis (Takahiro Haruyama)
The document proposes a one-byte modification method to potentially abort memory forensic analysis tools without impacting the running system or hiding specific objects. It identifies three sensitive operations in memory analysis: 1) virtual address translation in kernel space, 2) guessing the OS version and architecture, and 3) getting kernel objects like processes. For each operation, it outlines how top tools like Volatility and Memoryze perform the operation and identifies specific "abort factors", or one-byte values that could be modified to abort the analysis without direct detection. Modifying these factors could stop analysis tools from functioning properly without blue screening the system or hiding specific objects.
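The "abort factor" idea can be illustrated with a toy scanner (not Volatility's or Memoryze's actual code): a tool that locates kernel objects by a fixed signature stops finding anything once a single byte of that signature is flipped, even though the object's data is otherwise intact.

```python
PROC_TAG = b"Proc"  # toy stand-in for a pool-tag-style object signature

def find_objects(mem: bytes):
    """Return offsets of every signature match in a raw memory buffer."""
    hits, i = [], mem.find(PROC_TAG)
    while i != -1:
        hits.append(i)
        i = mem.find(PROC_TAG, i + 1)
    return hits

mem = bytearray(b"\x00" * 16 + b"Proc" + b"process data" + b"\x00" * 16)
print(find_objects(bytes(mem)))   # scanner locates the object

mem[16] ^= 0x01                   # attacker flips one byte of the tag
print(find_objects(bytes(mem)))   # scanner finds nothing; object data untouched
```

The paper's contribution is identifying which single bytes the real tools depend on in this way, for address translation, OS fingerprinting, and object enumeration alike.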
REMnux tutorial-2: Extraction and decoding of Artifacts (Rhydham Joshi)
PowerPoint presentation describing the tools and techniques used for extracting and decoding artifacts from malicious files, forensic discipline in handling infected disk drives, and recovering files from infected images.
Winnti is malware used by a Chinese threat actor for cybercrime and cyber espionage since 2009. The behavior of Winnti components is well described in a past analysis report by Novetta, but there are now many more variants whose behavior differs from it. I will share my RE findings not explained in public reports, including:
- a Winnti worker component supporting the SMTP protocol,
- Winnti as a loader for other malware families,
- a rootkit driver creating covert channels by hooking NDIS TCP/IP protocol handlers, and
- hack tools using the same API hash calculation as Winnti components.
The configuration data of Winnti is important for threat intelligence because it includes campaign IDs indicating the organizations or countries the actor is targeting. Moreover, as Kaspersky pointed out in a blog post, the 64-bit kernel drivers are sometimes signed with stolen certificates; the certificates are also useful for identifying already-compromised targets. I checked about 170 Winnti samples to extract their configurations and certificates. Based on that work, I will show that Winnti targets not only the game and pharmaceutical industries, but also the chemical, e-commerce, electronics, and telecommunications industries.
The document discusses computer forensics in the context of investigating a Windows system. It outlines the process of gathering volatile data like memory contents and network connections using tools run from a trusted CD. Non-volatile data like the filesystem is acquired by imaging the entire disk. Timeline analysis uses data from files, registry keys and logs to determine when files and events occurred. The goal is to methodically identify and preserve digital evidence while following forensic standards.
The document discusses NTFS forensics and the structure of the NTFS file system. Some key points:
1) NTFS stores metadata about files and folders in the Master File Table ($MFT) using file records and attributes like $FILE_NAME and $DATA.
2) Files can be recovered by finding their data runs stored in the $MFT entry and reading the data from disk.
3) Additional forensic artifacts can be found in hidden internal files like $USNJRNL, $LogFile, and $Bitmap that contain metadata about file operations and deletions.
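The data-run decoding mentioned in point 2 follows a simple byte layout: each run begins with a header byte whose low nibble gives the size of the length field and whose high nibble gives the size of the signed cluster-offset field (relative to the previous run), and a 0x00 byte terminates the list. A minimal decoder:

```python
def decode_runs(runlist: bytes):
    """Decode an NTFS $DATA run list into (cluster_count, absolute_lcn) pairs."""
    runs, pos, lcn = [], 0, 0
    while pos < len(runlist) and runlist[pos] != 0:   # 0x00 terminates the list
        header = runlist[pos]
        len_size, off_size = header & 0x0F, header >> 4
        pos += 1
        count = int.from_bytes(runlist[pos:pos + len_size], "little")
        pos += len_size
        # offsets are signed and relative to the previous run's LCN
        delta = int.from_bytes(runlist[pos:pos + off_size], "little", signed=True)
        pos += off_size
        lcn += delta
        runs.append((count, lcn))
    return runs

# Commonly cited example: one run of 0x18 clusters starting at LCN 0x5634
print(decode_runs(bytes.fromhex("2118345600")))  # → [(24, 22068)]
```

This sketch ignores sparse runs' special handling in real parsers, but it is exactly the walk a recovery tool performs before reading the clusters from disk.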
44CON London 2015: NTFS Analysis with PowerForensics (Jared Atkinson)
This workshop was given by Jared Atkinson on September 11th 2015 at 44CON London. The purpose of this workshop was to introduce participants to NTFS Internals and PowerForensics, an open source PowerShell digital forensics platform.
Leveraging NTFS Timeline Forensics during the Analysis of Malware (tmugherini)
This document provides an overview of leveraging NTFS timeline forensics in malware analysis. It discusses analyzing the Master File Table (MFT) of the NTFS file system to establish a timeline and location of system changes. The document analyzes a rogue antivirus malware sample to demonstrate the technique, extracting information from the MFT, prefetch folder, registry hives, and recovering deleted files. While useful, MFT analysis must be corroborated with other methods due to potential for attribute manipulation.
The document introduces Autopsy, an open source digital forensics platform. It provides an overview of Autopsy's features which allow users to efficiently analyze hard drives and smartphones through a graphical interface. Key capabilities include timeline analysis, keyword searching, web and file system artifact extraction, and support for common file systems. The document includes screenshots and references for additional information on Autopsy's functions and use in digital investigations.
The document provides an overview of disk forensics concepts and tools used for analyzing disk images. It discusses locating the NTFS partition, inspecting the master boot record and partition table. It also covers the NTFS file system structures like the master file table, file attributes, alternate data streams, and methods for recovering deleted files. Timestamp analysis and registry forensics are also briefly introduced. Various forensic tools like The Sleuth Kit, Autopsy, and samdump2 are demonstrated.
This is a draft presentation of a video lesson taken from the course "Digital forensics with Kali Linux" published by Packt Publishing in May 2017: https://www.packtpub.com/networking-and-servers/digital-forensics-kali-linux
In this presentation we are going to cover the recovery of deleted files from a disk image using three CLI file carving tools pre-installed on Kali Linux: Foremost, Scalpel and Photorec.
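The header/footer carving these tools perform can be sketched in a few lines (a simplification: real carvers also handle fragmentation, maximum file sizes, and many formats). A JPEG, for instance, starts with FF D8 FF and ends with FF D9:

```python
JPEG_HEADER, JPEG_FOOTER = b"\xff\xd8\xff", b"\xff\xd9"

def carve_jpegs(image: bytes):
    """Carve JPEG candidates from a raw disk image by header/footer matching."""
    carved, start = [], image.find(JPEG_HEADER)
    while start != -1:
        end = image.find(JPEG_FOOTER, start + len(JPEG_HEADER))
        if end == -1:
            break
        carved.append(image[start:end + len(JPEG_FOOTER)])
        start = image.find(JPEG_HEADER, end)
    return carved

# Synthetic "disk image": junk, one JPEG-like blob, more junk
disk = b"\x00" * 32 + JPEG_HEADER + b"\xe0fakejpegdata" + JPEG_FOOTER + b"\x00" * 32
files = carve_jpegs(disk)
print(len(files), files[0][:3])
```

Foremost and Scalpel generalize this with configurable header/footer signatures per file type; Photorec adds filesystem-aware heuristics on top.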
Android supports playing and capturing a variety of common audio and video formats. The MediaPlayer class is used for playback and MediaRecorder for capture. MediaPlayer can play resources, files, and streams, while MediaRecorder requires setting permissions, audio/video sources, and format/encoder before starting recording. The VideoView widget simplifies playing video by wrapping MediaPlayer and a SurfaceView.
- Memory management for Android apps has changed with Gingerbread and Honeycomb, increasing heap sizes and improving garbage collection to reduce pauses
- Understanding log messages and using heap dumps can help debug memory issues like leaks caused by lingering references to activities or contexts
- The Eclipse Memory Analyzer tool allows inspection of heap dumps to identify dominator trees and retained objects causing leaks
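The leak-hunting idea behind dominator trees can be shown with a toy reference graph (not the Eclipse Memory Analyzer's implementation): everything reachable from a lingering reference stays live, so computing the reachable set from a suspect root reveals what a leaked Activity is dragging along.

```python
from collections import deque

def retained_from(root, refs):
    """All objects reachable from `root` via the reference graph `refs`
    (a dict: object -> list of referenced objects). In a leak, everything
    reachable only through a lingering Activity reference stays live."""
    seen, queue = {root}, deque([root])
    while queue:
        for child in refs.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# Toy heap: a static field keeps an Activity alive, which drags its views along
refs = {"StaticCache": ["LeakedActivity"],
        "LeakedActivity": ["Window", "ViewRoot"],
        "Window": ["ViewRoot"]}
print(sorted(retained_from("StaticCache", refs)))
```

MAT refines this into dominator analysis, attributing to each object only the memory that would actually be freed if it were collected, but reachability from a suspect reference is the core question.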
This document discusses various techniques for optimizing Android for low-RAM devices, including:
1. Tuning apps to release memory more aggressively, the Dalvik VM to use less memory, and the ActivityManager to better manage processes.
2. Configuring Dalvik VM properties like heap sizes and disabling JIT compilation to reduce memory usage.
3. Enabling kernel features like KSM for memory sharing and adjusting lowmemkiller parameters to more aggressively free memory.
4. Using tools like dumpsys, procrank, and meminfo to monitor memory usage and identify optimization opportunities.
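The monitoring step in point 4 usually boils down to ranking processes by PSS. A sketch of parsing procrank-style output (the column layout below is an assumption for illustration; real output varies by Android version):

```python
def top_pss(procrank_output: str, n=3):
    """Rank processes by PSS from procrank-style output (format assumed here:
    'PID  Vss  Rss  Pss  Uss  cmdline', sizes in kB with a 'K' suffix)."""
    rows = []
    for line in procrank_output.strip().splitlines()[1:]:   # skip header row
        fields = line.split()
        pss_kb = int(fields[3].rstrip("K"))
        rows.append((pss_kb, fields[-1]))
    return sorted(rows, reverse=True)[:n]

sample = """\
  PID      Vss      Rss      Pss      Uss  cmdline
  123  120000K   80000K   60000K   50000K  com.example.heavy
  456   40000K   30000K   20000K   15000K  system_server
"""
print(top_pss(sample))
```

PSS (proportional set size) is the right column to rank by on low-RAM devices because it splits shared pages fairly across the processes mapping them.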
Dynamic Languages in Production: Progress and Open Challenges (bcantrill)
This document discusses dynamic languages in production and open challenges. It summarizes the history and rise of dynamic languages like Lisp, Java, and Node.js. It then describes tools developed at Joyent to debug Node.js applications in production environments, including mdb_v8 for post-mortem debugging using core dumps, and integrating Node.js with DTrace for live tracing and profiling. These tools allow inspecting runtime state, identifying memory leaks, and debugging non-reproducible issues.
The document introduces a plan to cover Java and Android basics, Android UI topics, and Android in action over three parts. Part I provides an introduction to Java concepts like the JVM, garbage collection, threads, and an overview of the Android architecture and building blocks. It also demonstrates an NDK example. The document outlines the topics that will be covered in each part of the training.
Blocks are a cool concept and much needed for performance improvements and responsiveness. GCD helps run blocks effortlessly by scheduling them on a desired queue, with a chosen priority, and lots more.
This document provides an overview of the Oxford Common File Layout (OCFL) specification for digital preservation. It summarizes the goals of OCFL, which are to enable the completeness, parsability, robustness, and storage of digital objects on a variety of infrastructures. The key components of an OCFL system include OCFL objects, which group content files and metadata into versioned directories, and OCFL storage roots, which provide a hierarchical storage structure for multiple OCFL objects.
CNIT 126: 10: Kernel Debugging with WinDbg (Sam Bowne)
Slides for a college course at City College San Francisco. Based on "Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software", by Michael Sikorski and Andrew Honig; ISBN-10: 1593272901.
Instructor: Sam Bowne
Class website: https://samsclass.info/126/126_F18.shtml
This document provides an introduction to big data and NoSQL databases. It begins with an introduction of the presenter. It then discusses how the era of big data came to be due to limitations of traditional relational databases and scaling approaches. The document introduces different NoSQL data models including document, key-value, graph and column-oriented databases. It provides examples of NoSQL databases that use each data model. The document discusses how NoSQL databases are better suited than relational databases for big data problems and provides a real-world example of Twitter's use of FlockDB. It concludes by discussing approaches for working with big data using MapReduce and provides examples of using MongoDB and Azure for big data.
This document discusses Dalvik memory analysis on Android devices. It describes using LiME to acquire physical memory dumps and Volatility plugins to parse Dalvik Virtual Machine (DVM) objects and instances from memory. The author details how their tools locate and parse loaded classes, fields, strings, arrays and objects to analyze running apps without needing the app code or DEX files. Their Dalvik Inspector GUI allows searching parsed DVM structures from memory dumps. Open problems discussed include creating portable LiME kernels and automating profile generation for different device configurations and libdvm.so versions.
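The "locate loaded classes" step can be hinted at with a toy scan (the real plugins walk DvmGlobals and ClassObject structures in libdvm.so rather than string-matching): Dalvik identifies classes by descriptors like `Lcom/example/app/MainActivity;`, which are easy to spot in a raw dump.

```python
import re

# Dalvik class descriptors look like "Lcom/example/app/MainActivity;"
DESCRIPTOR = re.compile(rb"L[a-zA-Z_$][a-zA-Z0-9_$/]*;")

def find_class_descriptors(dump: bytes):
    """String-scan a raw memory dump for Dalvik-style class descriptors.
    This only shows what the strings being located look like; the real
    tooling resolves them via the VM's own class tables."""
    return sorted({m.group().decode() for m in DESCRIPTOR.finditer(dump)})

dump = b"\x00junkLcom/android/mms/ui/ComposeMessageActivity;\x00Ljava/lang/String;\x00"
print(find_class_descriptors(dump))
```

Structure-walking is what lets the author's plugins reach each class's fields and instances, rather than just its name.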
Exploring Java Heap Dumps (Oracle Code One 2018)Ryan Cuprak
Memory leaks are not always simple or easy to find. Heap dumps from production systems are often gigantic (4+ gigs) with millions of objects in memory. Simple spot checking with traditional tools is woefully inadequate in these situations, especially with real data. Leaks can be entire object graphs with enormous amounts of noise. This session will show you how to build custom tools using the Apache NetBeans Profiler/Heapwalker APIs. Using these APIs, you can read and analyze Java heaps programmatically to ask really hard questions. This gives you the power to analyze complex object graphs with tens of thousands of objects in seconds.
The document provides information about Java, including:
- Java is an object-oriented programming language that is platform independent and can be used to create applications for web, desktops, mobile devices, and more.
- Java was originally developed in the early 1990s by James Gosling at Sun Microsystems for use in set-top boxes, but became popular for building web applications and is now widely used.
- The Java Development Kit (JDK) includes tools like javac, java, javadoc and others needed to develop, compile, run and document Java programs, as well as class libraries and documentation. The JVM executes compiled Java code.
CNIT 126: 10: Kernel Debugging with WinDbgSam Bowne
Slides for a college course at City College San Francisco. Based on "Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software", by Michael Sikorski and Andrew Honig; ISBN-10: 1593272901.
Instructor: Sam Bowne
Class website: https://samsclass.info/126/126_F19.shtml
Introduction to Android Development and SecurityKelwin Yang
This document provides an introduction to Android development and security. It begins with a brief history of Android and overview of its architecture. It then discusses the Android development environment and process, including key tools and frameworks. It also outlines Android security features like application sandboxing, permissions, and encryption. Finally, it introduces a series of Android security labs that demonstrate exploits like parameter manipulation, insecure storage, and memory attacks. The goal is to provide hands-on examples of common Android vulnerabilities.
This document provides an overview of object-oriented programming concepts in Java including abstraction, encapsulation, inheritance, and polymorphism. It discusses key Java concepts like classes, objects, methods, and access specifiers. It also covers Java fundamentals like variables, data types, operators, control flow statements, comments, and arrays. Additionally, it describes the Java runtime environment, how to set up a Java development environment, compile and run a simple Java program. The document is intended as an introduction to object-oriented programming and the Java programming language.
Everything You Need To Know About Persistent Storage in KubernetesThe {code} Team
This document discusses Kubernetes persistent storage options for stateful applications. It covers common use cases that require persistence like databases, messaging systems, and content management systems. It then describes Kubernetes persistent volume (PV), persistent volume claim (PVC), and storage class objects that are used to provision and consume persistent storage. Finally, it compares deployments with statefulsets and covers other volume types like emptyDir, hostPath, daemonsets and their use cases.
When Node.js Goes Wrong: Debugging Node in Production
The event-oriented approach underlying Node.js enables significant concurrency using a deceptively simple programming model, which has been an important factor in Node's growing popularity for building large scale web services. But what happens when these programs go sideways? Even in the best cases, when such issues are fatal, developers have historically been left with just a stack trace. Subtler issues, including latency spikes (which are just as bad as correctness bugs in the real-time domain where Node is especially popular) and other buggy behavior often leave even fewer clues to aid understanding. In this talk, we will discuss the issues we encountered in debugging Node.js in production, focusing upon the seemingly intractable challenge of extracting runtime state from the black hole that is a modern JIT'd VM.
We will describe the tools we've developed for examining this state, which operate on running programs (via DTrace), as well as VM core dumps (via a postmortem debugger). Finally, we will describe several nasty bugs we encountered in our own production environment: we were unable to understand these using existing tools, but we successfully root-caused them using these new found abilities to introspect the JavaScript VM.
The document discusses developing Groovy scripts securely and productively in the cloud for Oracle Application Developer Framework (ADF). It outlines using Groovy AST transformations to add debugging capabilities and runtime security checks when executing scripts in the cloud. Caching is also discussed to improve performance of compiling thousands of scripts across many applications. The implementation transforms the AST to wrap method calls and inject breakpoints while limiting access to restricted APIs.
Kata Container & gVisor provide approaches to securely isolate containers by keeping them out of the direct kernel space. Kata Container uses virtual machines with lightweight kernels to isolate containers, while gVisor uses a userspace kernel implemented in Go to provide isolation. Both aim to protect the host kernel by preventing containers from accessing kernel resources directly. Kata Container has a larger memory footprint than gVisor due to its use of virtual machines, but provides stronger isolation of containers.
This document provides an overview of advanced C# concepts, including:
- C# can be used to create various types of applications like console apps, Windows forms, web services, and ASP.NET MVC apps.
- Assemblies are deployment units that contain code and metadata. They can be EXEs or DLLs.
- Types in C# can contain fields, methods, properties, and events. Methods are not virtual by default. Access modifiers include private, protected, internal, and public.
- Objects are allocated in memory and cleaned up through constructors, finalizers, and the garbage collector. Exceptions provide a way to handle errors.
The document discusses the fundamentals of object-oriented programming and Java. It covers key concepts like abstraction, encapsulation, inheritance and polymorphism. It also describes the basic structure of a Java program, including classes, objects, methods and variables. It explains how to set up a Java development environment, compile and run a simple Java program.
Similar to Memory Analysis of the Dalvik (Android) Virtual Machine (20)
Proactive Measures to Defeat Insider ThreatAndrew Case
This presentation was delivered at RSA 2016 and discussed measures to defeat insider threat. It focused on real investigations that I have performed and how the victim companies could have prevented the associated harm.
OMFW 2012: Analyzing Linux Kernel Rootkits with VolatlityAndrew Case
This document discusses analyzing Linux rootkits using Volatility, an open source memory forensics framework. It analyzes several Linux rootkits including Average Coder, KBeast, and Jynx/Jynx 2. For each rootkit, it describes the rootkit's techniques for hiding processes, files, network connections and how Volatility plugins like linux_check_fop, linux_check_modules, linux_check_syscall, and linux_check_afinfo can detect the rootkit by validating file operations structures, the kernel module list, system call tables, and network operations structures. It also shows how Volatility can recover hidden files, processes, network connections, and shared libraries loaded by the root
My Keynote from BSidesTampa 2015 (video in description)Andrew Case
This is the slides from keynote presentation at BSidesTampa 2015. A recording of the talk can be found at: https://www.youtube.com/watch?v=751bkSD2Nn8&t=1m35s
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Must Know Postgres Extension for DBA and Developer during MigrationMydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
QA or the Highway - Component Testing: Bridging the gap between frontend appl...zjhamm304
These are the slides for the presentation, "Component Testing: Bridging the gap between frontend applications" that was presented at QA or the Highway 2024 in Columbus, OH by Zachary Hamm.
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
GlobalLogic Java Community Webinar #18 “How to Improve Web Application Perfor...GlobalLogic Ukraine
Під час доповіді відповімо на питання, навіщо потрібно підвищувати продуктивність аплікації і які є найефективніші способи для цього. А також поговоримо про те, що таке кеш, які його види бувають та, основне — як знайти performance bottleneck?
Відео та деталі заходу: https://bit.ly/45tILxj
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc...DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
From Natural Language to Structured Solr Queries using LLMsSease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or “cognitive” gap) remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
Memory Analysis of the Dalvik (Android) Virtual Machine
1. Memory Analysis of the Dalvik (Android) Virtual Machine
Andrew Case
Digital Forensics Solutions
2. Who Am I?
• Security Analyst at Digital Forensics Solutions
– Also perform wide-ranging forensics investigations
• Volatility Developer
• Former Blackhat and DFRWS speaker
3. Agenda
• What is Dalvik / why do we care?
• Brief overview of memory forensics
• Extracting allocated and historical data from Dalvik instances
• Target specific Android applications
4. What is Dalvik?
• Dalvik is the software VM for all Android applications
• Nearly identical to the Java Virtual Machine (JVM) [1]
• Open source, written in C / Java
5. Why do we care?
• Android-based phones are leading the US mobile market
– Which makes for many phones to investigate
• Memory forensics capabilities against Android applications have numerous uses/implications
• The entire forensics community (LEO, .gov, private firms) is already urging development of such capabilities
6. Memory Forensics Introduction
• Memory forensics is vital to the orderly recovery of runtime information
• Unstructured methods (strings, grep, etc.) are brittle and only recover superficial info
• Structured methods allow for recovery of data structures, variables, and code from memory
• Previous work at the operating system level led to recovery of processes, open files, network connections, etc. [4,5]
7. Memory Analysis Process
• First, need to acquire memory
– Acquisition depends on environment [6]
• Next, requires locating information in memory and interpreting it correctly
– Also requires re-implementing functionality offline
• Then it needs to be displayed in a useful way to the investigator
9. Acquiring Memory – Approach 1
• The normal method is to acquire a complete capture of physical RAM
• Works well when analyzing kernel data structures, as their pages are not swapped out
• Allows for recovery of allocated and historical processes, open files, network connections, and so on
10. Approach 1 on Android
• Without /dev/mem support, need an LKM to read memory
• No current module works for Android (ARM)
• We developed our own (mostly by @jtsylve)
• Benefits of full capture:
– Can target any process (including its mappings)
– Can recover information from unmapped pages in processes
11. Acquiring Memory – Approach 2
• Memory can be acquired on a per-process basis
• Ensures that all pages of the process will be acquired
• Easiest to perform with memfetch [8]
– After a few small changes, it was statically compiled for ARM
• No unmapped pages will be recovered, though
– The heap and GC don’t munmap immediately
12. Analyzing C vs Java
• Most previous forensics research has had the “luxury” of analyzing C
– Nearly 1:1 mapping of code/data to in-memory layout
• Declaration of a C “string”:
– char buffer[] = "Hello World";
• Memory layout (xxd):
– 4865 6c6c 6f20 576f 726c 6400  Hello World.
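That 1:1 mapping can be sanity-checked in a couple of lines of Python (the language Volatility plugins are written in):

```python
# A C string is stored as its raw bytes plus a NUL terminator; hex-dumping
# those bytes reproduces the xxd output above.
buf = b"Hello World\x00"
print(buf.hex())  # 48656c6c6f20576f726c6400
```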
13. A Dalvik String in Memory
• First, need the address of the “StringObject”
• Next, need the offsets of the “java/lang/String” value and byte offset members
• StringObject + value offset leads us to an “ArrayObject”
• ArrayObject + byte offset leads to a UTF-16 array of characters
• … finally we have the string (in Unicode)
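The pointer chase above can be sketched in Python against a flat memory image. Note that all three offsets below are hypothetical placeholders; the real values come from the libdvm.so profile described later:

```python
import struct

# Hedged sketch of resolving a Dalvik StringObject. The three offsets are
# illustrative assumptions; real values come from the dwarfparse.py profile.
VALUE_OFFSET = 0x8    # java/lang/String 'value' field (ArrayObject pointer)
COUNT_OFFSET = 0x14   # character count (assumption)
CONTENTS_OFF = 0x10   # ArrayObject contents offset (assumption)

def read_u32(mem, addr):
    """Read a little-endian 32-bit word from the memory image."""
    return struct.unpack_from("<I", mem, addr)[0]

def read_dalvik_string(mem, string_obj):
    """StringObject -> ArrayObject -> UTF-16 characters -> Python str."""
    array_obj = read_u32(mem, string_obj + VALUE_OFFSET)
    count = read_u32(mem, string_obj + COUNT_OFFSET)
    start = array_obj + CONTENTS_OFF
    return bytes(mem[start:start + count * 2]).decode("utf-16-le")
```

With a real dump, `mem` would be the process memory capture and `string_obj` an address found by walking class instances.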
14. Now for the memory analysis…
• The real goal of the research was to be able to locate arbitrary class instances and fields in memory
• Other goals included replicating commonly used features of the Android debugging framework
15. Locating Data Structures
• The base of Dalvik loads as a shared library (libdvm.so)
• Contains global variables that we use to locate classes and other information
• Also contains the C structures needed to parse and gather the evidence we need
16. Gathering libdvm’s Structures
1) Grab the shared library from the phone (adb)
2) Use Volatility’s dwarfparse.py:
• Builds a profile of C structures along with members, types, and byte offsets
• Records offsets of global variables
3) Example structure definition:
'ClassObject': [ 0xa0, {        # class name and size
    'obj': [0x0, ['Object']],   # member name, offset, and type
17. Volatility Plugin Sample
• Accessing structures is as simple as knowing the type and offset:
intval = obj.Object("int", offset=intOffset, ...)
• Volatility code to access 'descriptor' of an 'Object':
o = obj.Object("Object", offset=objectAddress, ...)
c = obj.Object("ClassObject", offset=o.clazz, ...)
desc = linux_common.get_string(c.descriptor)
18. gDvm
• gDvm is a global structure of type DvmGlobals
• Holds info about a specific Dalvik instance
• Used to locate a number of structures needed for analysis
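As a sketch: once the load address of libdvm.so is known from the process's memory maps, gDvm sits at a fixed offset from it. Both offsets below are made-up stand-ins for the profile-derived values:

```python
import struct

# Hedged sketch: gDvm lives at (libdvm.so base + symbol offset). Both
# offsets here are hypothetical; dwarfparse.py supplies the real ones.
GDVM_SYM_OFFSET = 0x6a000       # offset of gDvm inside libdvm.so (assumption)
LOADED_CLASSES_OFF = 0x2c       # DvmGlobals.loadedClasses offset (assumption)

def gdvm_address(libdvm_base):
    """gDvm's virtual address in the target process."""
    return libdvm_base + GDVM_SYM_OFFSET

def loaded_classes_table(mem, libdvm_base):
    """Follow DvmGlobals.loadedClasses to its hash table address."""
    addr = gdvm_address(libdvm_base) + LOADED_CLASSES_OFF
    return struct.unpack_from("<I", mem, addr)[0]
```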
19. Locating Loaded Classes
• gDvm.loadedClasses is a hash table of ClassObjects for each loaded class
• Hash table is stored as an array
• Analysis code walks the backing array and handles active entries
– Inactive entries are NULL or have a pointer value of 0xcbcacccd
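The walk over the backing array might look like the following sketch. The entry layout (tableSize, pEntries, 8-byte entries of hash + data pointer) is an assumption; only the 0xcbcacccd sentinel is taken from the slide:

```python
import struct

# Hedged sketch of walking gDvm.loadedClasses in a memory image.
TABLE_SIZE_OFF = 0x0    # HashTable.tableSize (assumption)
ENTRIES_OFF = 0xc       # HashTable.pEntries (assumption)
ENTRY_SIZE = 0x8        # sizeof(HashEntry): hash value + data pointer
TOMBSTONE = 0xcbcacccd  # marker Dalvik leaves in removed slots

def walk_loaded_classes(mem, table_addr):
    """Yield the address of every live ClassObject in the hash table."""
    size = struct.unpack_from("<I", mem, table_addr + TABLE_SIZE_OFF)[0]
    entries = struct.unpack_from("<I", mem, table_addr + ENTRIES_OFF)[0]
    for i in range(size):
        data = struct.unpack_from("<I", mem, entries + i * ENTRY_SIZE + 4)[0]
        if data not in (0, TOMBSTONE):  # skip free and removed slots
            yield data
```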
20. Information Per Class
• Type and (often) name of the source code file
• Information on the backing DexFile
– The DexFile stores everything Dalvik cares about for a binary
• Data fields
– Static
– Instance
• Methods
– Name and type
– Location of instructions
21. Static Fields
• Stored once per class (not per instance)
• Pre-initialized if known
• Stored in an array with element type StaticField
• Leads directly to the value of the specific field
22. Instance Fields
• Per instance of a class
• Fields are stored in an array of element type InstField
• Offset of each field is stored in the byteOffset member
– Relative offset from the start of the object instance
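Putting the two field slides together, instance-field recovery can be sketched as: read byteOffset for each InstField in the class's array, then add it to the instance's address. The InstField size and member offset below are assumptions standing in for profile values:

```python
import struct

# Hypothetical InstField layout; real sizes/offsets come from the profile.
INSTFIELD_SIZE = 0x14    # sizeof(InstField) (assumption)
BYTEOFFSET_OFF = 0x10    # offset of InstField.byteOffset (assumption)

def instance_field_offsets(mem, ifields_addr, count):
    """Collect byteOffset for each InstField in a class's field array."""
    return [struct.unpack_from(
                "<i", mem,
                ifields_addr + i * INSTFIELD_SIZE + BYTEOFFSET_OFF)[0]
            for i in range(count)]

def read_instance_field(mem, object_addr, byte_offset):
    """A field's raw 32-bit value lives at instance address + byteOffset."""
    return struct.unpack_from("<I", mem, object_addr + byte_offset)[0]
```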
24. Analyzing Methods
• We can enumerate all methods (direct, virtual) and retrieve names and instructions
• Not really applicable to this talk
• Can be extremely useful for malware analysis, though
– If the .apk is no longer on disk or if code was changed at runtime
25. Methods in Memory vs on Disk
• Dalvik makes a number of runtime optimizations [1]
• Example: when class members are accessed (iget, iput), the field table index is replaced with the direct byte offset
• Would likely need to undo some of the optimizations to get complete baksmali output
27. Recovery Approach
• Best approach seems to be locating data structures of UI screens
– UI screens represented by uniform (single type) lists of displayed information
– Data for many views are pre-loaded
28. Finding Data Structures
• Can save substantial time by using adb’s logcat (next slide)
– Shows the classes and often the methods involved in handling UI events
• Otherwise, need to examine source code
– Some applications are open source
– Others can be “decompiled” with baksmali [9]
29. logcat example
The following is a snippet of output when clicking on the text message view:
D/ConversationList(12520): onResume Start
D/ComposeMessageActivity(12520): onConatctInfoChange
D/RecipientList(12520): mFilterHandler not null
D/RecipientList(12520): get recipient: 0
D/RecipientList(12520): r.name: John Smith
D/RecipientList(12520): r.filter() return result
D/RecipientList(12520): indexOf(r)0
D/RecipientList(12520): prepare set, index/name: 0/John Smith
30. Phone Call History
• The call history view is controlled through a DialerContactCard$OnCardClickListener
• Each contact is stored as a DialerContactCard
• Contains the name, number, conversation length, and photo of the contact
31. Per-Contact Call History
• Can (sometimes) retrieve call history per contact
• Requires the user to actually view a contact’s history before it is populated
32. Text Messages
• Recovery through ComposeMessageActivity & TextMessageView
• Complete conversations can be recovered
• Not pre-populated
33. Voicemail
• The audio file is open()’ed
• Not mapped contiguously into the process address space
• No method to recover deleted voicemails
34. Browser (Opera Mini)
• Opera Mini is the most used mobile browser
• Can recover some session information
– The history file is always mapped in memory (including information from the current session)
• HTTP requests and page information are (possibly) recoverable
– Can recover <title> information
– Stored in Opera Binary Markup Language
– Not publicly documented?
35. Recovering Wireless Information
• Screenshot on the right shows the results of a scan for wireless networks
• Recovery of this view provides the SSID, MAC address, and encryption type for routers found
• Recovery of “Connected” routers shows which networks the device was associated with
36. Other Wireless Information
• Potentially interesting information:
– Wireless keys
– Connection stats
• These are not controlled by Dalvik
– Keys are only initially entered through Dalvik, but then saved
• Stored by the usual Linux applications
– wpa_supplicant, dhcpd, in-kernel stats
37. Location Recovery
• Associating location & time is not always important
– But makes for better slides *hint*
• Interesting for a number of reasons
– Forensics & privacy concerns
– Not part of a “standard” forensics investigation
38. Google Maps
• Did not do source code analysis
– Most phones won’t be using Google Maps while being seized
– Wanted to find ways to get historical data cleanly
• Found two promising searches:
– mTime=TIME,mLatitude=LAT,mLongitude=LON
– point: LAT,LON … lastFix: TIME
• TIME is the last location fix; extra work needed to verify
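The first of those searches translates naturally into a regex scan over the raw image. The pattern mirrors the slide; field order and formatting in real dumps may vary:

```python
import re

# Hedged sketch: scan a raw memory image for Google Maps location fixes
# using the mTime/mLatitude/mLongitude string pattern from the slide.
FIX_RE = re.compile(rb"mTime=(\d+),mLatitude=(-?[\d.]+),mLongitude=(-?[\d.]+)")

def find_location_fixes(mem):
    """Return (time, lat, lon) tuples for every match in the image."""
    return [(int(m.group(1)), float(m.group(2)), float(m.group(3)))
            for m in FIX_RE.finditer(mem)]
```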
39. “Popular” Weather Application
• The weather application uses your location to give you relevant information
• http://vendor.site.com/widget/search.asp?lat=LAT&lon=LON&nocache=TIME
40. More GPS Fun
• All of the following applications do not clear GPS data from memory, and all send their lat/lon using GET with HTTP:
– Urban Spoon
– Weather Channel
– WeatherBug
– Yelp
– Groupon
– Movies
41. Implementation
• Recovery code written as Volatility [7] plugins
– The most popular memory analysis framework
– Has support for all Windows versions since XP and 2.6 Intel Linux
– Now also supports ARM Linux/Android
• Makes rapid development of memory analysis capabilities simple
• Can also be used for analyzing other binary formats
42. Testing
• Tested against an HTC EVO 4G
– No phone-specific features used in analysis
– Only a few HTC-specific packages were analyzed
• Visually tested against other Dalvik versions
– No drastic changes in core Dalvik functionality
43. Research Applications
• Memory forensics (obviously)
• Testing of privacy assurances
• Malware analysis
– Can enumerate and recover methods and their instructions
44. Future Avenues of Research
• Numerous applications with potentially interesting information
– Too much to manually dig through
– Need automation
– Baksmali/Volatility/logcat integration?
• Automated determination of interesting evidence across the whole system
– Combining work done in [2] and [3]