The document discusses various techniques for manipulating file formats in unconventional ways, including:
1. Embedding multiple files or data streams within a ZIP archive by ignoring specified parsing rules.
2. Hiding data in optional PDF layers that may not be displayed by all PDF readers.
3. Creating ambiguous BMP files that can display different pixel data depending on which header offsets are followed.
The document provides an introduction to event-driven programming and forms using Delphi. It discusses files and dialogs, including file processing routines for input and output, common file types, and examples of using files. It also covers media players, open and save dialogs, and the TFileRun component for executing files. Key points include how to assign, read from, write to, and close files, as well as how open and save dialogs can be used to select files without directly opening or saving them.
This document discusses input/output streams and files in Java. It describes input streams, which read data from sources like keyboards or files, and output streams, which write data to destinations like monitors or files. It then focuses on reading and writing files using FileInputStream, FileOutputStream, FileReader and FileWriter classes. Constructors and methods of each class are listed, with examples provided of reading and writing files byte-by-byte and character-by-character. The key difference between byte and character streams is also summarized.
The document discusses generating documentation from POD files using various Perl modules. It shows commands to:
1) Convert a POD file to HTML using pod2html and encounters an unknown directive error.
2) Batch convert all POD files in a lib directory to HTML using Pod::Simple::HTMLBatch.
3) Configure Pod::Simple::HTMLBatch to use the Pod::Simple::XHTML renderer.
4) Generate documentation projects from POD files using pod2projdocs.
5) Create a Pod site from lib POD files using podsite.
The document discusses FOCA 2.5, a tool for fingerprinting organizations using collected archives. It can extract metadata, hidden info, and lost data from files like documents, PDFs, images. It searches Google and Bing to find publicly available documents, then analyzes the metadata to identify users, systems, networks associated with an organization. FOCA 2.5 includes new features like network discovery, information gathering, DNS cache snooping, and a reporting module. It demonstrates analyzing documents from sites like fbi.gov and whitehouse.gov, using the metadata to map internal networks and systems. The document provides information on downloading and using FOCA for organizational fingerprinting and metadata analysis from publicly available files.
The document provides examples of how Python is used in different domains such as websites, desktop applications, science, embedded systems, and more. It also discusses why Python is popular due to its readability, ease of learning, rich libraries, and ability to be sped up with tools like Numba and Cython. The document outlines topics for learning Python including primitives, control flow with if/while statements, composites like lists and dictionaries, and for loops. It recommends continuing to learn through tutorials, documentation, and communities.
The document discusses the origins and development of JSON (JavaScript Object Notation). It describes how Douglas Crockford discovered JSON in 2001 and published the first JSON specification in 2002. It outlines some of the key events in the early adoption of JSON, including its use for browser/server communication and as an alternative to XML.
This document discusses Python file handling and operations. It covers opening, reading, writing, closing, and modifying files. Some key points include:
- The open() function is used to open a file and returns a stream object. This object has methods like read(), write(), seek() to interact with the file.
- Files can be opened in read, write, append, and binary modes. The default is read mode.
- To read a file, the stream object's read() method is used. seek() allows changing the read position.
- Writing requires opening in write or append mode and using write() on the stream.
- It is important to close files to free resources using the close() method (a minimal sketch of these operations follows below).
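A minimal sketch of the operations listed above (the file name is a placeholder, not from the original document):

```python
# Minimal sketch of open/read/write/seek/close (the file name is a placeholder).
with open("notes.txt", "w") as f:       # write mode creates/truncates the file
    f.write("first line\n")

with open("notes.txt", "a") as f:       # append mode adds to the end
    f.write("second line\n")

with open("notes.txt") as f:            # the default mode is read ("r")
    print(f.read())                     # read the whole file
    f.seek(0)                           # move the read position back to the start
    print(f.readline())                 # read just the first line again

# the with-statement closes the file automatically; otherwise call f.close()
```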
This document discusses files in Python. It begins by defining what a file is and explaining that files enable persistent storage on disk. It then covers opening, reading from, and writing to files in Python. The main types of files are text and binary, and common file operations are open, close, read, and write. It provides examples of opening files in different modes, reading files line by line or in full, and writing strings or lists of strings to files. It also discusses searching files and handling errors when opening files. In the end, it presents some exercises involving copying files, counting words in a file, and converting decimal to binary.
This document provides an overview of file systems and storage technologies, including Unix System 5, log-structured file systems, ZFS, RAID, flash memory, and garbage collection. It discusses how files are represented and accessed in different systems. The key aspects covered are:
- How Unix System 5 represents files using inodes and disk blocks
- How log-structured file systems write files sequentially to avoid overwriting and better suit flash memory
- Techniques used in modern file systems like ZFS to provide redundancy, detect errors, and improve performance
- Challenges of flash memory like limited write cycles and how file systems address these
- Garbage collection methods used in log-structured file systems to reclaim free space
At the end of this lecture, students should be able to:
Define the C standard functions for managing file input/output.
Apply the taught concepts to write programs.
File handling and Dictionaries in Python — nitamhaske
This document provides an introduction to file handling and dictionaries in Python. It discusses what files are and how they are used to store large amounts of data outside of RAM. Files are organized in a tree structure with paths to identify locations. There are two main types of files - text files which store character data and binary files which can store any type of data. The document outlines various functions for working with files, including open() to create a file object, close() to finish with the file, and attributes of the file object like name and mode. It also covers accessing a file, reading/writing data, and different modes for opening files.
This document discusses the Plan 9 operating system and network programming in Plan 9. It provides an overview of Plan 9's origins from UNIX and its networking APIs and model, including the use of file descriptors to represent network connections. It also demonstrates examples of echo clients and servers implemented using these networking APIs.
The document discusses file input/output in C++. It covers the header file fstream.h, stream classes like ifstream and ofstream for file input/output, opening and closing files, reading/writing characters and objects to files, detecting end of file, moving file pointers for random access, and handling errors. Functions like open(), close(), get(), put(), read(), write(), seekg(), seekp(), tellg(), tellp(), eof(), fail(), bad(), good(), and clear() are described.
1. The document discusses file handling in C++, including opening and closing files, stream state member functions, and different types of file operations.
2. Key classes for file input/output in C++ include ifstream for reading files, ofstream for writing files, and fstream for reading and writing. These classes inherit from iostream and allow file access using insertion and extraction operators.
3. The document covers opening and closing files, checking for errors, reading and writing basic data types to files, binary file operations using read() and write(), and random access in files using seekp(), seekg(), and tellp(). It provides examples of reading from and writing to both text and binary files.
This document presents an overview of file operations and data parsing in Python. It covers opening, reading, writing, and closing files, as well as using regular expressions to parse text data through functions like re.search(), re.findall(), re.split(), and re.sub(). Examples are provided for reading and writing files, manipulating file pointers, saving complex data with pickle, and using regular expressions to match patterns and extract or replace substrings in texts. The document aims to introduce Python tools for working with files and parsing textual data.
The document discusses how the Go runtime handles network namespaces when using Docker and how Go version 1.10 addressed issues with incorrect interface information detected by goroutines. Specifically, it notes that in earlier versions, goroutines could inherit incorrect interface state from already running threads, but Go 1.10 introduced template threads to isolate goroutines and ensure each starts with a clean network namespace state. The document provides examples of the runtime behavior before and after 1.10 and recommends using Go 1.10 or higher when creating and managing Linux network namespaces.
File handling is used in the C language to store data permanently on a computer.
Using file handling, you can store your data on the hard disk.
http://www.tutorial4us.com/cprogramming/c-file-handling
The document discusses working with files in C++. It explains that files are used to store large amounts of data on storage devices like hard disks. Files contain related data organized in a specific area. Programs can perform read and write operations on files using file streams as an interface. There are three main file stream classes - ifstream for input, ofstream for output, and fstream for both. The document outlines how to open, read from, write to, and close files, and manipulate file pointers to control reading and writing locations within a file.
Files in Python represent sequences of bytes stored on disk for permanent storage. They can be opened in different modes like read, write, append etc using the open() function, which returns a file object. Common file operations include writing, reading, seeking to specific locations, and closing the file. The with statement is recommended for opening and closing files to ensure they are properly closed even if an exception occurs.
This document summarizes key concepts about file input/output in C++. It discusses what files are, how they are named and opened, and the process of reading from and writing to files. Specific functions and operators covered include open(), close(), << to write data, and >> to read data. It also discusses checking for open errors, formatting output, and detecting the end of a file. Program examples demonstrate how to open, read from, write to, and close files using C++.
The document discusses file handling in C++. It explains that files store data permanently on storage devices and can be opened for input or output by programs. Streams act as an interface between files and programs, representing the flow of data. The predefined stream classes like ifstream, ofstream, and fstream allow reading from and writing to files. The document outlines the general steps to work with files, describes file modes and pointers, and provides examples of reading from and writing to both text and binary files in C++.
The document discusses files and streams in C++. It defines files as sequences of bytes that end with an end-of-file marker. Streams are used to connect programs to files for input and output. There are standard input and output streams (cin and cout) as well as file streams that use classes like ifstream for input and ofstream for output. Files can be accessed sequentially or randomly - sequential files are read from start to finish while random access files allow direct access to any record.
The document discusses various types of files in UNIX/Linux systems such as regular files, directory files, device files, FIFO files, and symbolic links. It describes how each file type is created and used. It also covers UNIX file attributes, inodes, and how the kernel manages file access through system calls like open, read, write, and close.
Basic file operations in C++ involve opening, reading from, and writing to files. The key classes for input/output with files are ofstream for writing, ifstream for reading, and fstream for both reading and writing. A file must first be opened before performing any operations on it. Common operations include writing data to files with put() or write(), reading data from files with get() or read(), and closing files after completion. Proper opening modes and error handling should be used to ensure successful file input/output.
The document provides an overview of Mercurial version control system concepts and commands. It discusses the .hgrc configuration file, imagining Mercurial as a stack of patches with downward links, basic commands like add, commit, log and diff, branching and updating branches, and pushing and pulling changes between repositories. Key points covered include using .hgrc for usernames, extensions, and aliases, visualizing the commit history as a stack of patches, and the relationship between commits, branches, and unique identifiers.
This document summarizes a talk about abusing file format parsers to cause different parsing behaviors, known as "schizophrenia". It describes techniques used across various formats like ZIP, BMP, PDF, GIF and PE files that can result in files being parsed or interpreted differently depending on factors like the parsing order, which part of a program does the parsing, or which specifications are followed. The goal is to fool parsers without causing failures by leveraging ambiguity and flexibility in file specifications.
This document summarizes Ange Albertini's talk on "Funky file Formats". The talk discusses how files can take on multiple formats by exploiting ambiguities and tolerance in file specifications. Examples are given of files that are valid images, archives, documents, and encrypted files simultaneously. The talk also covers steganography techniques like hiding files within other file formats by manipulating metadata or unused portions of file specifications. Overall, the talk illustrates the concept of "format polymorphism" where single files can masquerade as multiple file types to evade detection or trigger different parser behaviors.
1. File formats are complex with many stakeholders who interpret specifications differently, leading to divergent implementations over time.
2. Specifications are often incomplete, unclear, non-free, or do not reflect reality, making it difficult to determine what a valid file is.
3. Relying on specifications alone is not sufficient - one must also analyze sample files and code to understand how file formats work in practice.
The document discusses the author's perspectives on file formats after over 30 years of experience working with computers and digital preservation. The author believes specifications are imperfect and do not fully define what constitutes a valid file, as implementations can interpret specifications differently and become outdated. The author has experimented with creating extreme files that push the boundaries of specifications in order to understand formats better and find potential issues.
Presented at Troopers 2016.
When Infosec and Digipres share interests...
TL;DR
- Attack surface with file formats is too big.
- Specs are useless (just a nice ‘guide’), not representing reality.
- We can't deprecate formats because we can't preserve them, and we can't define how they really work.
- We need good open libraries to simplify the landscape, and a corpus that expresses the reality of file formats, which gives us real "documentation".
- Then we can preserve and deprecate older formats, which reduces the attack surface.
- From then on, we can focus on making the present more secure.
- We don't need new formats: reality will diverge from the specs anyway - we need 'alive' (up to date, traceable) specs.
The document discusses different archive formats and their relationships. It begins with an introduction to the presenter and then covers the zlib, gzip, and ZIP file formats. Zlib and gzip both wrap Deflate compression, but in different ways, so while the compressed data can be transferred between them, the formats are not directly compatible. ZIP can use Deflate but also other compression methods, potentially a different one for each file. In conclusion, Deflate is a common algorithm while the various formats wrap it with different headers and metadata.
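The zlib/gzip/ZIP relationship is easy to see with Python's zlib module, where the wbits parameter selects which wrapper goes around the same Deflate bitstream; a small illustrative sketch (not from the talk itself):

```python
# Sketch: the same Deflate algorithm with three different wrappers,
# selected via zlib's wbits parameter.
import zlib

payload = b"the same deflate bitstream, different wrappers " * 20

def deflate(data, wbits):
    c = zlib.compressobj(level=9, method=zlib.DEFLATED, wbits=wbits)
    return c.compress(data) + c.flush()

raw = deflate(payload, -15)   # raw Deflate (what a ZIP entry stores)
zl  = deflate(payload, 15)    # zlib wrapper: 2-byte header + Adler-32 trailer
gz  = deflate(payload, 31)    # gzip wrapper: 10-byte header + CRC-32/size trailer

# Only the framing differs; all three decompress back to the same payload.
assert zlib.decompress(raw, wbits=-15) == payload
assert zlib.decompress(zl) == payload
assert zlib.decompress(gz, wbits=31) == payload
print(len(raw), len(zl), len(gz))     # raw is smallest, gzip has the biggest framing
```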
Simple Data Engineering in Python 3.5+ — Pycon.DE 2017 Karlsruhe — Bonobo ETL — Romain Dorgueil
Simple data engineering in Python 3.5+ using Bonobo ETL, with a real-world example using Django 2 and DBpedia.
https://www.bonobo-project.org/
Presentation from Pycon.DE 2017 in Karlsruhe
This document discusses binary file formats and creating visual documentation. It notes that specifications are imperfect and there are security consequences. Formats have diverse properties like headers, signatures, offsets. Visual docs should be self-contained, for a defined audience, and remove unnecessary details. The goal is creating useful documentation based on reality. Questions are welcome.
Welcome to International Journal of Engineering Research and Development (IJERD) — IJERD Editor
The document describes the design and implementation of a FLAC audio decoder system using an ARM920T embedded development platform with an S3C2440 chip. FLAC is a lossless audio format that compresses audio without loss of quality. The authors built the decoder system using an ARM9 embedded board and implemented the FLAC decoder using the board's IIS bus in Linux with an open source player and firmware. The results showed that FLAC audio could be played back well through the decoder system.
Clustered and distributed storage with commodity hardware and open source ... — Phil Cryer
An overview of the state of the Biodiversity Heritage Library's first storage cluster. It covers the basics of building clustered and distributed storage with commodity hardware and open source software, and also details such as the working software used to maintain synchronization with other global partners. Presented to the Biodiversity Heritage Library Europe's Technical Architecture board at the Natural History Museum, London on August 25, 2010.
Kernel Recipes 2016 - Kernel documentation: what we have and where it’s going — Anne Nicolas
The Linux kernel features an extensive array of, to put it kindly, somewhat disorganized documentation. A significant effort is underway to make things better, though. This talk will review the state of kernel documentation, cover the changes that are being made (including the adoption of a new system for formatted documentation), and discuss how interested developers can help.
Jonathan Corbet, LWN.net
OSDC 2016 - Ingesting Logs with Style by Pere Urbon-Bayes — NETWAYS
Log shipping has been with us for a long time: from syslog and rsyslog to today's Fluentd, Flume and Logstash. Logstash has been pushing hard to introduce new features that make the experience better for everyone. At the end of the day, a healthy shipper means a happy sysadmin. The latest Logstash includes persistence to reduce the chance of data loss, monitoring to see how everything is going, and configuration management to make your life a lot easier. But wait, there's more! Offline support, improved shutdown semantics, etc. - features that will get your logs shipped and make you a rested sysadmin.
In this talk we'll see these features in action through a real live sensor-monitoring example. By the end of the session, you will be able to use the full power of Logstash in your own deployments.
This document discusses the internal representation of files in a file system. It describes inodes, which contain metadata about files, including file type and size. Inodes exist as disk inodes and in-core inodes in memory. Path names are converted to inodes using the namei algorithm. Directories store file names and inode numbers. The iget and iput algorithms manage caching of in-core inodes. New files are assigned inodes from the free inode list using ialloc. Blocks are allocated for file data non-contiguously using indirect pointers in the inode.
bup is a git-based backup system that provides fast, efficient, and scalable backups. It can back up entire filesystems over 1TB in size, including large virtual machine disk images over 100GB, with millions of files. It uses sub-file incrementals and deduplication to back up data in time proportional to the size of the changed data. Backups can be made incrementally to remote computers without keeping a local copy.
Part 4 of 'Introduction to Linux for bioinformatics': Managing data — Joachim Jacob
This is part 4 of the training session 'Introduction to Linux for bioinformatics'. It shows the basics of data management, and tips for handling big data effectively. Interested in following this training session? Please contact me at http://www.jakonix.be/contact.html
The document discusses the author's experience with malware and file formats over 13 years, noting how specifications are often outdated and incomplete which can lead to misunderstandings. It advocates for better tools to analyze, document, and validate file formats to improve understanding of their current usage and behaviors. The author has created several open source projects focused on file format analysis and validation.
"Technical challenges"? More like horrors!
Let's explore first the technical debt of old file formats,
with the evolution of the "MP3" format.
Then we go through more recent forms of file format abuses and tools:
polyglots, polymocks, and crypto-polyglots.
Last, an overview of recent collisions and other forms of art with MD5.
They say that with file formats, "specs are enough".
Should we laugh, cry or run away screaming?
Presented at Digital Preservation Coalition's CyberSec & DigiPres event.
This document is a slide presentation about hash collisions and generating polyglot files that have the same hash but different content. It discusses existing attacks on hashes like MD5 and SHA1 that allow two files to be generated with the same hash. It then explains how collisions can be generated for ZIP and TAR.GZ files by manipulating the ZIP file format in a way that maintains compatibility with ZIP parsers but results in different files with the same hash. Examples of colliding file pairs are shown with identical prefixes and suffixes and differing collision blocks in the middle.
You are *not* an idiot ~ or maybe we're all idiots.
Keynote at NorthSec 2021.
Talking about school, failure, success, diplomas, impostor syndrome, manipulators, burnout, suicide, and how to deal with them.
The talk delivery was more personal, the slides are kept generic.
The recording is available @ https://youtu.be/Iu70J49bPlE?t=20869 (starts at 5:47:49)
Demystifying hash collisions.
Pass the Salt, 1st July 2019.
video @ https://passthesalt.ubicast.tv/videos/kill-md5-demystifying-hash-collisions/
Hack.Lu, 22 October 2019.
video @ https://www.youtube.com/watch?v=JXazRQ0APpI
Beyond your studies ~ You studied X at Y. now what?
HackPra, July 2018
A student's life ago, the author somehow managed to graduate.
On the way, he made a lot of mistakes -- and he still does.
A few people since called him 'successful', but LOL, if only they knew....
And now, the author will make another (big!) mistake:
instead of hiding in shame as he probably should,
he'll share his mistakes with anyone bored enough to attend,
in the hope that he's the last person to ever look that dumb to commit such mistakes.
If you're a genius and you know what to do in life, please skip this. Seriously.
If, like the author at the time, you wonder WTF is going on with graduation, professional work and life, then hopefully you learn a few things. Maybe.
Btw the author is 42 (WTF - old!).
Maybe that will help to provide a few answers.
This document provides an introduction and overview of Inkscape, an open-source vector graphics editor. It discusses Inkscape's features such as its use of Scalable Vector Graphics (SVG), tools for drawing objects and manipulating nodes, layers, transformations, and more. The document also includes tutorials for tasks like tracing an image, creating a poster, and converting code snippets to SVG. Throughout, it emphasizes that Inkscape is non-destructive and files remain editable, while also noting some limitations like unsupported gradients along paths.
This document contains the table of contents for an issue of PoC||GTFO, a journal for sharing technical content in unconventional ways. It lists over 60 articles across various topics including hardware hacking, firmware reverse engineering, embedded exploitation, and unusual file formats. The sections are numbered and titled with references to hacking, unconventional thinking, and sharing knowledge in new ways.
Game developers are able to create better video games than what the limitations of computers allow by understanding how things truly work at a detailed level. They discovered tricks to get around limitations, such as updating colors rapidly to display more than the limited palette or changing sounds quickly to generate new voices. Understanding the underlying systems allows developers to creatively solve problems like drawing huge animated monsters that surpass the small allowed object sizes. This knowledge of how things really function provides advantages beyond initial restrictions.
The document provides a step-by-step guide to writing a basic "Hello World" PDF file. It explains the overall PDF file structure and key elements like the file body, cross-reference table, trailer, and objects. Objects are used to define things like the catalog, pages, and a single page. The guide demonstrates creating three objects - one for the catalog that refers to a pages object, which in turn refers to a page object defining a single page.
This document discusses potential leaks that can occur from PDF documents, specifically from text, images, and drawings embedded in the pages. Even if text is invisible, images are not displayed, or drawings are covered, this information can still be extracted from the PDF. Importing or copying parts of a PDF does not necessarily limit the content, as the full document is often brought in and only a "limiting view" is applied. The only fully reliable way to prevent leaks is to convert the PDF pages to individual image files. In general, the PDF format makes preventing leaks difficult and poses a large attack surface due to embedded metadata.
video https://www.youtube.com/watch?v=vg7LPcFUxg8
audio / HD video download http://media.ccc.de/browse/congress/2014/31c3_-_5997_-_en_-_saal_6_-_201412282030_-_preserving_arcade_games_-_ange_albertini.html
complete animated presentation + extras (~1Gb):
https://archive.org/details/arcade31c3
more infos @ https://code.google.com/p/corkami/wiki/Arcade
The document draws analogies between file formats and animals. It discusses how files can be identified by metadata or branding like cattle, but these can be faked. It also discusses how the same data can be parsed differently by experts in different fields. Files can contain extra or foreign data and still be valid, like if a cow swallowed a microSD card. The document also mentions polyglot files that contain multiple file types and chimera files with multiple bodies or heads.
by Axelle Apvrille & Ange Albertini
presented at BlackHat Europe 2014, in Amsterdam
PoC: https://github.com/cryptax/angeapk
AngeCryption: http://corkami.googlecode.com/svn/trunk/src/angecryption/
This document discusses encrypting and manipulating PNG files while maintaining a valid file structure. It explains that encrypting a PNG breaks the signature and structure. However, by controlling the initialization vector and pre-decrypting target chunks, one can encrypt parts of the file while keeping it valid. Custom chunks can be added to ignore encrypted data, resulting in an encrypted file that is still valid when decrypted.
The document discusses hiding and revealing secrets in PDF documents. It provides an overview of the PDF file format, including the structure of objects, streams, filters and parsing. Examples are given to demonstrate how text and images can be encoded in streams and embedded within a PDF. The goal is to learn the internals of PDF so content can be hidden or revealed through the use of encoding.
2. Gynvael Coldwind
Security researcher, Google
Dragon Sector captain
likes hamburgers
http://gynvael.coldwind.pl/
All opinions expressed during this presentation are mine and mine alone.
They are not the opinions of my lawyer, my barber, and especially not my employer.
9. file names in ZIP
a couple of files with the same name?
update:
for an awesome example see:
Android: One Root to Own Them All
Jeff Forristal / Bluebox
(https://media.blackhat.com/us-13/US-13-Forristal-Android-One-Root-to-Own-Them-All-Slides.pdf)
11. Let's start with simple stuff -
the ZIP format
A ZIP file begins with letters PK.
12. Let's start with simple stuff -
the ZIP format
A ZIP file begins with letters PK.
WRONG
13. ZIP - second attempt :)
[Diagram: a .zip file; the "header" (PK56...) is "somewhere" within the last 65557 bytes of the file]
14. ZIP - "somewhere" ?!
4.3.16 End of central directory record:
  end of central dir signature                                      4 bytes (0x06054b50)
  number of this disk                                               2 bytes
  number of the disk with the start of the central directory       2 bytes
  total number of entries in the central directory on this disk    2 bytes
  total number of entries in the central directory                 2 bytes
  size of the central directory                                     4 bytes
  offset of start of central directory with respect to the
  starting disk number                                              4 bytes
  .ZIP file comment length                                          2 bytes
  .ZIP file comment                                                 (variable size)
You begin ZIP parsing from this; it MUST be at the end of the file.
Comment length: $0000-$FFFF (0-65535). Fixed part of the record: 22 bytes.
Total: from 22 to 65557 bytes
(aka: PK56 magic will be somewhere between EOF-65557 and EOF-22)
15. ZIP - looking for the "header"?
"From the START"
Begin at EOF-65557,
and move forward.
"From the END"
(ZIPs usually don't have comments)
Begin at EOF-22,
and move backward.
17. ZIP Format - LFH
4.3.7 Local file header:
local file header signature 4 bytes (0x04034b50)
version needed to extract 2 bytes
general purpose bit flag 2 bytes
compression method 2 bytes
last mod file time 2 bytes
last mod file date 2 bytes
crc-32 4 bytes
compressed size 4 bytes
uncompressed size 4 bytes
file name length 2 bytes
extra field length 2 bytes
file name (variable size)
extra field (variable size)
file data (variable size)
[Diagram: random stuff, then PK34... (LFH + data)]
Each file/directory in a ZIP has LFH + data.
18. ZIP Format - CDH
[central directory header n]
central file header signature 4 bytes (0x02014b50)
version made by 2 bytes
version needed to extract 2 bytes
general purpose bit flag 2 bytes
compression method 2 bytes
last mod file time 2 bytes
last mod file date 2 bytes
crc-32 4 bytes
compressed size 4 bytes
uncompressed size 4 bytes
file name length 2 bytes
extra field length 2 bytes
file comment length 2 bytes
disk number start 2 bytes
internal file attributes 2 bytes
external file attributes 4 bytes
relative offset of local header 4 bytes
file name (variable size)
extra field (variable size)
file comment (variable size)
[Diagram: PK21... (CDH); similar stuff to the LFH]
Each file/directory has a CDH entry in the Central Directory
Thanks to the redundancy you can recover the LFH using the CDH, or the CDH using the LFH.
19. ZIP - a complete file
[Diagram: PK34... LFH + data | PK21... CDH | PK56... EOCD: files (header+data), then the list of files (and pointers)]
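Once the EOCD is located, the "list of files (and pointers)" can be walked directly. A rough sketch (single-disk archive, no ZIP64; find_eocd is the helper sketched earlier):

    import struct

    def central_directory(data: bytes, eocd: int):
        total_entries = struct.unpack_from("<H", data, eocd + 10)[0]
        pos = struct.unpack_from("<I", data, eocd + 16)[0]   # offset of the CD
        for _ in range(total_entries):
            assert data[pos:pos + 4] == b"PK\x01\x02"        # CDH signature
            nlen, elen, clen = struct.unpack_from("<HHH", data, pos + 28)
            lfh_offset = struct.unpack_from("<I", data, pos + 42)[0]
            name = data[pos + 46:pos + 46 + nlen].decode("cp437")
            yield name, lfh_offset          # pointer back to the LFH + data
            pos += 46 + nlen + elen + clen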
20. ZIP - a complete file (continued)
[Diagrams: two arrangements of PK34... LFH + data, PK21... CDH, PK56... EOCD]
If the list of the files has pointers to files...
... the ZIP structure can be more relaxed.
21. ZIP - a complete file (continued)
[Diagram: PK56... EOCD whose file comment (variable size) contains PK21... CDH and PK34... LFH + data]
You can even do an "inception"
(some parsers may allow EOCD(CDH(LFH)))
22. And now back
to our show!
(we were looking
for the EOCD)
Larch
Something completely different
23. ZIP - looking for the "header"?
"stream"
Let's ignore EOCD!
(it's sometimes faster)
(99.9% of ZIPs out there can be parsed this way)
PK34... LFH + data PK34... LFH + data PK34... LFH + data
(single "files" in an archive)
PK56...
(who cares...)
24. ZIP - looking for the "header"?
"aggressive stream"
We ignore the "garbage"!
(forensics)
PK34... LFH + data PK34... LFH + data PK34... LFH + data
(single "files" in an archive)
PK56...
(who cares...)
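A rough sketch of the "stream" approach in Python (names are mine; assumes stored or deflated entries and no data descriptors): walk PK34 Local File Headers from offset 0 and never look at the EOCD.

    import struct, zlib

    def stream_parse(data: bytes):
        pos = 0
        while data[pos:pos + 4] == b"PK\x03\x04":
            method, = struct.unpack_from("<H", data, pos + 8)
            csize, usize = struct.unpack_from("<II", data, pos + 18)
            nlen, elen = struct.unpack_from("<HH", data, pos + 26)
            name = data[pos + 30:pos + 30 + nlen].decode("cp437")
            body = data[pos + 30 + nlen + elen:pos + 30 + nlen + elen + csize]
            if method == 8:                     # deflate
                body = zlib.decompress(body, -15)
            yield name, body
            pos += 30 + nlen + elen + csize
        # "aggressive stream" mode would instead keep scanning for the next
        # PK\x03\x04 and ignore any "garbage" in between.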
47. “Optional Content Configuration”
● principles
○ define layered content via various /Forms
○ enable/disable layers on viewing/printing
● no warning when printing
● “you can see the preview!”
○ bypass preview by keeping page 1 unchanged
○ just do a minor change in the file
PDF Layers 1/2
48. ● it’s Adobe only
○ what’s displayed varies with readers
○ could be hidden via previous schizophrenic trick
● it was in the specs all along
○ very rarely used
○ can be abused
PDF Layers 2/2
50. [Diagram: FILE HEADER (offset 0) → INFO HEADER → PIXEL DATA (at offset N, pointed to by bfOffBits)]
bfOffBits: Specifies the offset, in bytes, from the BITMAPFILEHEADER structure to the bitmap bits (MSDN)
51. [Diagram: FILE HEADER (offset 0) → INFO HEADER → PIXEL DATA → PIXEL DATA (secondary, at offset N, pointed to by bfOffBits)]
bfOffBits: Specifies the offset, in bytes, from the BITMAPFILEHEADER structure to the bitmap bits (MSDN)
● Some image viewers ignore bfOffBits and look for data immediately after the headers.
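A minimal sketch of building such an ambiguous 24-bit BMP in Python (function and parameter names are mine; assumes both pixel blocks are the same size, with rows already padded to 4 bytes): the decoy block sits right after the headers, while bfOffBits points past it at the second block.

    import struct

    def ambiguous_bmp(pixels_a: bytes, pixels_b: bytes, width: int, height: int) -> bytes:
        # BITMAPINFOHEADER: 24 bpp, BI_RGB (uncompressed)
        info = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                           0, len(pixels_b), 2835, 2835, 0, 0)
        # bfOffBits skips the decoy block that follows the headers
        off_bits = 14 + len(info) + len(pixels_a)
        file_size = off_bits + len(pixels_b)
        file_hdr = struct.pack("<2sIHHI", b"BM", file_size, 0, 0, off_bits)
        # viewers honoring bfOffBits show pixels_b; viewers that read
        # "immediately after the headers" show pixels_a
        return file_hdr + info + pixels_a + pixels_b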
53. BMP
Trick 2
Something I've learnt about because it spoiled my steg100
task for a CTF (thankfully during testing).
54. BMP compression & palette
Run-Length Encoding (each box is 1 byte):
  [Length > 0] [Palette Index (color)]
  [Length = 0] [0: End of Line]
  [Length = 0] [1: End of Bitmap]
  [Length = 0] [2: Move Cursor] [X offset] [Y offset]
  [Length = 0] [RAW Length > 2] [Palette Index (color)] [Palette Index (color)] ...
55. BMP compression & palette
Question: If the opcodes below allow jumping over pixels and setting no data, how will the pixels look?
Hint: Please take a look at the presentation title :)
  [Length = 0] [0: End of Line]
  [Length = 0] [1: End of Bitmap]
  [Length = 0] [2: Move Cursor] [X offset] [Y offset]
56. Option 1
The missing data will be filled with background color.
(index 0 in the palette)
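A rough sketch of an RLE8 decoder behaving like "Option 1" (hypothetical helper; row orientation and bounds handling are simplified): any pixel skipped by End-of-Line, End-of-Bitmap, or Move-Cursor keeps the default fill, palette index 0.

    def decode_rle8(rle: bytes, width: int, height: int):
        pixels = [[0] * width for _ in range(height)]   # "Option 1" fill: index 0
        x = y = i = 0
        while i + 1 < len(rle):
            count, value = rle[i], rle[i + 1]
            i += 2
            if count > 0:                                # encoded run
                for _ in range(count):
                    if x < width and y < height:
                        pixels[y][x] = value
                    x += 1
            elif value == 0:                             # End of Line
                x, y = 0, y + 1
            elif value == 1:                             # End of Bitmap
                break
            elif value == 2:                             # Move Cursor
                x += rle[i]
                y += rle[i + 1]
                i += 2
            else:                                        # RAW run, length > 2
                for j in range(value):
                    if x < width and y < height:
                        pixels[y][x] = rle[i + j]
                    x += 1
                i += value + (value & 1)                 # RAW runs are padded to 2 bytes
        return pixels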
64. Relocations on relocations
  Type 4:  HIGH_ADJ: -- / -- / ✓
  Type 9:  MIPS_JMPADDR16 (32 bit) / IA64_IMM64 (64 bit) / MACHINE_SPEC_9: ✗
  Type 10: DIR64: ✓ / ✓ / ✓
  (as seen in PoC||GTFO)
68. GIF
GIF can be made of many small images.
If "frame speed" is defined, these are frames instead
(and the first frame is treated as background).
69. GIF
Certain parsers (e.g. browsers) treat "images" as "frames" even when "frame speed" is not defined.
Frame 1 Frame 2 Frame 3
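A rough sketch of checking for this in Python (my own helper; well-formed GIF87a/89a files assumed): walk the blocks, count the Image Descriptors, and note whether a Graphic Control Extension defines a delay ("frame speed") before each one.

    def gif_image_delays(data: bytes):
        flags = data[10]                        # Logical Screen Descriptor packed byte
        pos = 13
        if flags & 0x80:                        # skip the global color table
            pos += 3 * (2 << (flags & 0x07))
        delay = None
        while pos < len(data):
            b = data[pos]
            if b == 0x3B:                       # trailer
                break
            elif b == 0x21:                     # extension block
                label = data[pos + 1]
                pos += 2
                if label == 0xF9:               # Graphic Control Extension
                    delay = int.from_bytes(data[pos + 2:pos + 4], "little")
                while data[pos] != 0:           # skip the data sub-blocks
                    pos += 1 + data[pos]
                pos += 1
            elif b == 0x2C:                     # Image Descriptor
                yield delay                     # None/0 means no "frame speed"
                delay = None
                local = data[pos + 9]
                pos += 10
                if local & 0x80:                # skip the local color table
                    pos += 3 * (2 << (local & 0x07))
                pos += 1                        # LZW minimum code size
                while data[pos] != 0:           # skip the image data sub-blocks
                    pos += 1 + data[pos]
                pos += 1
            else:                               # unknown byte: bail out
                break

Several yielded entries with no delay set is exactly the ambiguous case above: one parser composes the images into a single picture, another plays them as frames.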
75. it was too simple
● WinRar: different behavior when viewing or extracting
○ opening/failing
○ opening/’nothing’
● Adobe: viewing ⇔ printing
○ well, it’s a feature
78. Failures / Ideas / WIP
● screen ⇔ printer
○ embedded color profiles?
● JPG
○ IrfanView vs the world
● Video
○ FLV: video fails but still plays sound ?
81. Conclusion
● such a mess
○ specs are messy
○ parsers don’t even respect them
● no CVE/blaming for parsing errors?
○ no security bug if no crash or exploit :(
PoCs and slides: http://goo.gl/Sfjfo4
84. Flash (SWF) vs Prezi
vs
Bonus Round
(not a fully schizophrenic problem in popular
parsers, that's why it's here)
85. Prezi SWF sanitizer
Prezi allows embedding SWF files.
But it first sanitizes them.
It uses one of two built-in SWF parsers.
There was a problem in one of them:
● It allowed huge chunk sizes.
● It just "jumped" (seeked) over these chunks...
● ...which resulted in an integer overflow (see the toy sketch after this list)...
● ...and this led to schizophrenia.
● As the sanitizer saw a good SWF...
● ...Adobe Flash got its evil twin brother.
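A toy model of the overflow (this is not Prezi's actual code, just an illustration of the arithmetic): a 32-bit offset plus an attacker-controlled huge chunk size wraps around, so the "seek" lands earlier in the file and the sanitizer validates different bytes than the ones Flash will actually parse.

    def next_chunk_offset(pos: int, declared_size: int) -> int:
        # a parser that keeps offsets in 32 bits and trusts the declared size
        return (pos + declared_size) & 0xFFFFFFFF

    pos = 0x100                                    # somewhere inside the SWF
    evil_size = 0xFFFFFF50                         # huge attacker-chosen chunk size
    print(hex(next_chunk_offset(pos, evil_size)))  # 0x50: the cursor went backwards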
86. Prezi SWF sanitizer
"good" SWF sent to sanitizer
and its evil twin brother
kudos to the sanitizer!
Fixed in Q1 2014. For details see:
"Integer overflow into XSS and other fun stuff - a case study of a bug bounty"
http://gynvael.coldwind.pl/?id=533