This was a presentation on the exFAT file system given back in September 2010 at the HTCIA conference in Atlanta, GA. This presentation is effectively superseded by a new presentation deck that was uploaded to SlideShare on June 6, 2014.
The document discusses the Linux booting process. It begins with the BIOS loading the master boot record (MBR) from the hard disk. The MBR then loads a boot loader such as GRUB or LILO. The boot loader loads the Linux kernel into memory and passes control to it. The kernel initializes hardware and loads drivers. It then launches the init process, which starts essential system processes and moves the system to the default runlevel based on the /etc/inittab file.
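For reference, on classic SysV-init systems that runlevel selection is a single line in /etc/inittab; a typical (illustrative) entry looks like this:

```
# /etc/inittab: boot to runlevel 3 (full multi-user, no GUI)
id:3:initdefault:
```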
The document discusses the boot process of Linux operating systems. When a computer with Linux is turned on, the boot code in ROM loads and starts the kernel. The kernel then probes the system hardware and spawns the system init process. The Linux system can boot automatically or manually. In automatic mode, the complete boot procedure occurs without input, while manual mode involves operator intervention at a certain point before initialization scripts run. The typical Linux boot process involves loading and initializing the kernel, detecting and configuring devices, creating kernel threads, optional operator intervention, running system startup scripts, and achieving multiuser operation.
Memory management involves binding instructions and data to memory spaces using logical and physical addresses. The CPU uses base and limit registers to map the logical address space to the physical address space. Logical addresses are converted to physical addresses by adding the base register value. If a logical address is larger than the limit, an error occurs. Swapping and paging are techniques to manage memory fragmentation. Page tables implement paging by mapping logical page numbers to physical page frames. Task Manager displays memory usage and the working set of processes. NVRAM support and PFN locking help optimize memory usage. NUMA architectures scale multiprocessing by grouping CPUs and memory into nodes to reduce access latency.
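To make the base/limit and paging descriptions concrete, here is a minimal Python sketch; the register values and page-table contents are invented for illustration:

```python
PAGE_SIZE = 4096

def base_limit_translate(logical: int, base: int, limit: int) -> int:
    """Contiguous relocation: physical = base + logical; trap if logical >= limit."""
    if logical >= limit:
        raise MemoryError("addressing error: logical address exceeds limit register")
    return base + logical

def paged_translate(logical: int, page_table: dict[int, int]) -> int:
    """Paging: split the address into (page, offset) and map page -> frame."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(hex(base_limit_translate(0x0042, base=0x40000, limit=0x1000)))  # 0x40042
print(hex(paged_translate(0x1042, {0: 7, 1: 3})))  # page 1 -> frame 3: 0x3042
```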
Introduction to Linux Kernel TCP/IP Protocol Stack (Monad Bobo)
This document provides an introduction and overview of the networking code in the Linux kernel source tree. It discusses the different layers including link (L2), network (L3), and transport (L4) layers. It describes the input and output processing, device interfaces, traffic directions, and major developers for each layer. Config and benchmark tools are also mentioned. Resources for further learning about the Linux kernel networking code are provided at the end.
The document discusses Microsoft file structures. It explains that clusters contain sectors and store file data, with cluster size varying by file system. It describes the master boot record containing partition tables and the file allocation table (FAT) or master file table (NTFS) storing file metadata. Finally, it notes that deleted files have metadata changed in FAT/NTFS and may remain recoverable from slack space.
This document discusses Linux kernel debugging. It provides an overview of debugging techniques including collecting system information, handling failures, and using printk(), KGDB, and debuggers. Key points covered are the components of a debugger, how KGDB can be used with gdb to debug interactively, analyzing crash data, and some debugging tricks and print functions.
This document discusses 10 techniques for persistence on macOS beyond using LaunchAgents. It provides an overview of techniques such as modifying shell startup files, using Pluggable Authentication Modules (PAM), Hammerspoon, preference panes, screen savers, color pickers, periodic scripts, Terminal preferences, emond (the event monitor daemon), and folder actions. For each technique, it describes how it works and how it could be detected and monitored for malicious use. The conclusion is that while malware most commonly uses LaunchAgents, there are many unexplored persistence options that blue teams need to prepare for.
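Since most of these techniques boil down to files in well-known locations, a blue team can start with a simple inventory script; here is a minimal Python sketch (the path list is illustrative, not exhaustive):

```python
from pathlib import Path

# A few of the persistence locations discussed above (illustrative, not exhaustive).
LOCATIONS = [
    "/Library/LaunchAgents", "~/Library/LaunchAgents",   # the classic mechanism
    "/Library/LaunchDaemons",
    "/etc/periodic/daily", "/etc/periodic/weekly",       # periodic scripts
    "~/.zshenv", "~/.bash_profile",                      # shell startup files
    "~/Library/PreferencePanes",                         # preference panes
    "~/Library/Screen Savers",                           # screen savers
    "~/Library/ColorPickers",                            # color pickers
]

for loc in LOCATIONS:
    p = Path(loc).expanduser()
    if p.is_dir():
        for child in sorted(p.iterdir()):
            print(f"{p}/{child.name}")
    elif p.is_file():
        print(f"{p} (present, {p.stat().st_size} bytes)")
```

A real monitoring setup would baseline these paths and alert on changes rather than just listing them once.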
Recently our team researched various ntos subsystem attack vectors, and we will present one of the outputs in our talk. DeathNote is our internal code name for this component, which resides in the Microsoft Windows kernel, hiding behind different interfaces and exposed to users in different ways.
What can go wrong with it?
Basically two kinds of problems. The first is syscall handling via direct user interaction. We will describe how to obtain a basic understanding of what is going on, how the component interacts with other components, and what its purpose is. With that knowledge, we will dig deeper into building more complex fuzzing logic that causes enough chaos to end up in unexpected behaviors in the Windows kernel, and demonstrate some of them.
As for the second, as the title hints, this module does a bit of data parsing, so we will dive deep into its internals, point out some available materials, and move on to reverse-engineered structures and internal mechanisms. We will show how some tricks can lead to various results, and how a structured approach can expose more problems than expected.
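The talk does not disclose which syscalls the component exposes, but the general shape of a naive user-mode syscall fuzzer looks something like the hypothetical Python sketch below; the target function and argument layout are placeholders, and by design a loop like this can hang or crash a test machine:

```python
import ctypes
import random

ntdll = ctypes.WinDLL("ntdll")
# Placeholder target: a real fuzzer would drive the specific (undisclosed)
# interfaces of the component with semi-valid structures, not pure noise.
target = ntdll.NtQuerySystemInformation

for _ in range(1000):
    info_class = random.randrange(256)
    size = random.choice([0, 8, 64, 4096])
    buf = ctypes.create_string_buffer(size)
    ret_len = ctypes.c_ulong(0)
    status = target(info_class, buf, size, ctypes.byref(ret_len))
    if status:  # nonzero NTSTATUS: log the inputs that provoked it
        print(f"class={info_class} size={size} -> {status & 0xFFFFFFFF:#010x}")
```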
MySQL Parallel Replication: inventory, use-case and limitations (Jean-François Gagné)
Booking.com uses MySQL parallel replication extensively with thousands of servers replicating. The presentation summarized MySQL and MariaDB parallel replication features including: 1) MySQL 5.6 uses schema-based parallel replication but transactions commit out of order. 2) MariaDB 10.0 introduced out-of-order parallel replication using write domains that can cause gaps. 3) MariaDB 10.1 includes five parallel modes including optimistic replication to reduce deadlocks during parallel execution. Long transactions and intermediate masters can limit parallelism.
The document provides an overview of Logical Volume Management (LVM) in Linux. It discusses what LVM is, its main components like physical volumes, volume groups, logical volumes, and how they relate. It then gives steps to use LVM by creating a physical volume, volume group and logical volume. It also discusses how LVM allows expanding logical volumes and live resizing of file systems.
NTFS is a file system introduced by Microsoft in 1993 for Windows NT operating systems. It improved on previous file systems with features like larger storage capacity support, redundancy, security, and performance improvements important for businesses. NTFS formats volumes with system files including the Master File Table that stores metadata for all files and folders. It provides security, compression, encryption and other advanced features through file attributes. NTFS also supports features like sparse files, recoverability, and alternate data streams.
Seastore: Next Generation Backing Store for Ceph (ScyllaDB)
Ceph is an open source distributed file system addressing file, block, and object storage use cases. Next generation storage devices require a change in strategy, so the community has been developing crimson-osd, an eventual replacement for ceph-osd intended to minimize cpu overhead and improve throughput and latency. Seastore is a new backing store for crimson-osd targeted at emerging storage technologies including persistent memory and ZNS devices.
What is an operating system? The operating system and its functions. Advantages and disadvantages of major OSs. History of GNU/Linux. Features of the Linux OS. The Indianized version of the GNU/Linux OS: BOSS (Bharat Operating System Solutions). Directory structure of the Linux OS and Windows OS.
The document discusses compaction in RocksDB, an embedded key-value storage engine. It describes the two compaction styles in RocksDB: level style compaction and universal style compaction. Level style compaction stores data in multiple levels and performs compactions by merging files from lower to higher levels. Universal style compaction keeps all files in level 0 and performs compactions by merging adjacent files in time order. The document provides details on the compaction process and configuration options for both styles.
Cosco: An Efficient Facebook-Scale Shuffle Service (Databricks)
Cosco is an efficient shuffle-as-a-service that powers Spark (and Hive) jobs at Facebook warehouse scale. It is implemented as a scalable, reliable and maintainable distributed system. Cosco is based on the idea of partial in-memory aggregation across a shared pool of distributed memory. This provides vastly improved efficiency in disk usage compared to Spark's built-in shuffle. Long term, we believe the Cosco architecture will be key to efficiently supporting jobs at ever larger scale. In this talk we'll take a deep dive into the Cosco architecture and describe how it's deployed at Facebook. We will then describe how it's integrated to run shuffle for Spark, and contrast it with Spark's built-in sort-based shuffle mechanism and SOS (presented at Spark+AI Summit 2018).
The root file system is the file system contained on the same partition on which the root directory is located; it is the file system on which all other file systems are mounted.
Kdump and the kernel crash dump analysis (Buland Singh)
Kdump is a kernel crash dumping mechanism that uses kexec to load a separate crash kernel to capture a kernel memory dump (vmcore file) when the primary kernel crashes. It can be configured to dump the vmcore file to local storage or over the network. Testing involves triggering a kernel panic using SysRq keys which causes the crash kernel to load and dump diagnostic information to the configured target path for analysis.
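Testing such a setup usually means panicking the kernel on purpose via the magic SysRq trigger; a one-line sketch (root required, and only sensible on a test box with kdump armed):

```python
# Equivalent to `echo c > /proc/sysrq-trigger`: forces a crash on purpose
# so the crash kernel loads and writes the vmcore to the configured target.
with open("/proc/sysrq-trigger", "w") as f:
    f.write("c")
```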
BlueStore: a new, faster storage backend for Ceph (Sage Weil)
BlueStore is a new storage backend for Ceph that provides faster performance compared to the existing FileStore backend. BlueStore stores metadata in RocksDB and data directly on block devices, avoiding double writes and improving transaction performance. It supports multiple storage tiers by allowing different components like the RocksDB WAL, database and object data to be placed on SSDs, HDDs or NVRAM as appropriate.
Linux device drivers act as an interface between hardware devices and user programs. They communicate with hardware devices and expose an interface to user applications through system calls. Device drivers can be loaded as kernel modules and provide access to devices through special files in the /dev directory. Common operations for drivers include handling read and write requests either through interrupt-driven or polling-based I/O.
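Because a driver's interface is a special file, plain user-space file I/O reaches it; a minimal sketch reading 16 bytes from the kernel's random-number character device:

```python
# /dev/urandom is a character device; open()/read() on it are served by the
# driver's file operations rather than by a filesystem's data blocks.
with open("/dev/urandom", "rb") as dev:
    print(dev.read(16).hex())
```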
1. Memory hierarchy takes advantage of spatial and temporal locality by keeping frequently used data closer to the CPU.
2. Caches store the most recently used data from main memory and are faster but smaller than main memory.
3. If a memory request is found in the cache it is a "hit" and fast to serve; if not, it is a "miss" and the data must be fetched from slower main memory (a toy simulation follows below).
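To see hit/miss behavior in action, here is a toy direct-mapped cache simulator in Python; the sizes and access pattern are arbitrary:

```python
import random

LINES, BLOCK = 16, 64          # 16 cache lines of 64-byte blocks (toy sizes)
cache = [None] * LINES         # each line remembers the tag of the block it holds
hits = misses = 0

# A small working set revisited many times: temporal locality produces hits.
working_set = [random.randrange(4096) for _ in range(10)]
for addr in working_set * 100:
    block = addr // BLOCK
    line, tag = block % LINES, block // LINES
    if cache[line] == tag:
        hits += 1              # served from the cache
    else:
        misses += 1            # fetch from "main memory" and fill the line
        cache[line] = tag

print(f"hits={hits} misses={misses} hit rate={hits/(hits+misses):.1%}")
```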
A register is a group of flip-flops that can each store one bit of information. A processor uses registers to hold instructions, addresses, and data for manipulating information. The document lists several common computer registers - the Data Register stores 16-bit operands from memory, the Address Register holds 12-bit memory addresses, the Accumulator is a general purpose 16-bit processing register, and the Program Counter contains the 12-bit address of the next instruction. Temporary and input/output registers are also used to store intermediate data and user input/output respectively.
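The widths quoted above (16-bit data registers, 12-bit address registers) resemble the classic single-accumulator teaching machine; a tiny Python sketch that models such a register file, truncating every store to the register's width:

```python
WIDTHS = {"DR": 16, "AR": 12, "AC": 16, "PC": 12}   # widths from the summary
regs = {name: 0 for name in WIDTHS}

def store(name: str, value: int) -> None:
    regs[name] = value & ((1 << WIDTHS[name]) - 1)  # keep only the low `width` bits

store("PC", 0xFFF)
store("PC", regs["PC"] + 1)   # a 12-bit PC wraps from 0xFFF back to 0x000
store("AC", 0x1BEEF)          # a 16-bit accumulator truncates to 0xBEEF
print({k: hex(v) for k, v in regs.items()})
```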
Building Robust ETL Pipelines with Apache Spark (Databricks)
Stable and robust ETL pipelines are a critical component of the data infrastructure of modern enterprises. ETL pipelines ingest data from a variety of sources and must handle incorrect, incomplete or inconsistent records and produce curated, consistent data for consumption by downstream applications. In this talk, we’ll take a deep dive into the technical details of how Apache Spark “reads” data and discuss how Spark 2.2’s flexible APIs, support for a wide variety of data sources, state-of-the-art Tungsten execution engine, and ability to provide diagnostic feedback to users make it a robust framework for building end-to-end ETL pipelines.
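As one concrete instance of handling incorrect records, Spark's permissive reader mode quarantines malformed rows instead of failing the job; a PySpark sketch (the schema and input path are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("robust-etl").getOrCreate()

# Hypothetical schema; the extra string column receives the raw text of bad rows.
schema = StructType([
    StructField("user_id", LongType()),
    StructField("event", StringType()),
    StructField("_corrupt_record", StringType()),
])

df = (spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")                 # keep bad rows, don't fail
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("s3://example-bucket/raw/events/")      # hypothetical path
      .cache())                                     # cache before querying the corrupt column

quarantined = df.filter(df._corrupt_record.isNotNull())
curated = df.filter(df._corrupt_record.isNull()).drop("_corrupt_record")
print(curated.count(), quarantined.count())
```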
EMR Spark tuning involves configuring Spark and YARN parameters like executor memory and cores to optimize performance. The default Spark configurations depend on the deployment method (Thrift, Zeppelin etc). YARN is used for resource management in cluster mode, and allocates resources to containers based on minimum and maximum thresholds. When tuning, factors like available cluster resources, executor instances and cores should be considered to avoid overcommitting resources.
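The sizing logic this summary alludes to is mostly arithmetic; a back-of-the-envelope sketch in Python with made-up node sizes:

```python
# Hypothetical worker node handing 16 vCPUs and 64 GiB of RAM to YARN.
node_cores, node_mem_gib = 16, 64
cores_per_executor = 5        # common rule of thumb for good HDFS throughput
overhead = 0.10               # spark.executor.memoryOverhead: max(10%, 384 MiB)

executors_per_node = (node_cores - 1) // cores_per_executor  # reserve a core for OS/daemons
mem_per_executor = int(node_mem_gib / executors_per_node / (1 + overhead))

print(f"spark.executor.cores  = {cores_per_executor}")
print(f"executors per node    = {executors_per_node}")   # 3
print(f"spark.executor.memory = {mem_per_executor}g")    # ~19g
```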
The document discusses Linux file systems. It begins with an overview of file system architecture, including inodes, dentries, superblocks, and how data is never erased but overwritten. It then covers various local file systems like Ext2, Ext3, Ext4, ReiserFS, and XFS. Next it discusses log-structured and pseudo file systems. It also covers network file systems like NFS and CIFS. Finally it summarizes cluster, distributed, and Hadoop file systems. The document provides a technical overview of Linux file system types, structures, features and capabilities.
HTCIA: An Introduction to the Microsoft exFAT File System 1.01 Final (overcertified)
This is a presentation on the Microsoft exFAT file system, given at the HTCIA International Conference 2014, held in Austin, Texas, at the Hyatt Regency Lost Pines Resort and Spa.
Introduction to the Microsoft Extended File System (exFAT)
This session will examine the internals of the Microsoft Extended FAT file system (nicknamed FAT64) which was designed for use with removable storage devices and is the exclusive file system of the new SDXC digital media standard. This new format creates many challenges for the forensics examiner. With minimal documentation on the internals of exFAT, and with exFAT experiencing a very high adoption rate, the forensics examiner needs guidance on how to navigate the filesystem. This session will explain the various internal tables and directory formats and show the differences from previous legacy forms of FAT, such as FAT12/16 and FAT32.
The document provides an overview of operating systems, including their definition, types, history and a comparison of Linux and Windows. It discusses how an operating system manages computer hardware and software resources, provides common services and acts as an intermediary between programs and hardware. Examples of popular operating systems are given for different types including real-time, multi-user, multi-tasking, distributed and embedded. A brief history of operating system development from the 1940s to modern times is also provided. The document concludes with a table comparing key aspects of Linux and Windows such as cost, manufacturer, usage, development and security considerations.
In this presentation, you can understand how FAT works, and how we can effectively boost your online business through a proper strategic plan, structure, and online presence. If you want to know more, visit our website http://www.fatthecreatives.com/.
Data backup involves copying files and data to external or online storage so they are preserved if the original files are lost or damaged. Reasons for data loss include hardware failures, viruses, file corruption, and disasters. The main purpose of data backup is to avoid data loss of important financial, customer, and company information that would be difficult to replace. Backup options include external drives, internal drives, department servers, online backup sites, and cloud storage services.
This document discusses various topics related to installing and configuring the Windows operating system, including:
1) Types of installation such as typical, full, and custom installations.
2) Basic options for new Windows installations like clean installs, upgrades, and multi-boot options.
3) Pre-installation checklist items such as backing up data and uninstalling incompatible applications.
4) Functions of the Disk Management utility for managing partitions, volumes, and file systems.
PowerPoint presentation on backup and recovery.
A good presentation covering all topics.
For any other type of PPTs or PDFs to be created on demand, contact dhawalm8@gmail.com,
mob. no. 7023419969
The document discusses and compares different file systems, including FAT, FAT32, NTFS, and their key features and limitations. FAT is the oldest file system and was designed for small disks and simple structures. It uses a file allocation table to organize files. NTFS is proprietary to Windows and offers improvements like larger volume sizes, security features like encryption, compression and quotas. It also has better performance, especially on large volumes.
This document provides an overview of SCSI drives and file systems. It describes SCSI interfaces and cables, how SCSI devices are connected in a daisy chain configuration, and SCSI standards including SCSI-1, SCSI-2, and SCSI-3. It also summarizes the FAT and NTFS file systems used in Windows, how they allocate disk space and store file information differently, and the advantages of NTFS. The document concludes with a brief explanation of how disk compression works to save space.
This document discusses best practices for backup and recovery planning. It covers common backup and recovery topics like different backup methods and topologies, the backup process, and managing backups. It also provides an overview of a typical backup application and the importance of backup reports and catalogs. The document is made up of multiple lessons intended to describe backup and recovery concepts and considerations.
The document discusses data backup and recovery strategies. It defines data recovery as retrieving files that have been deleted, forgotten passwords, or recovering damaged hard drives. It discusses challenges with backups like network bandwidth, backup windows, and lack of resources. It also covers backup storage technologies and strategies to improve backups like incremental and block-level backups. The document recommends automating recovery, testing recovery plans, and using tools like BMC's Back-up and Recovery Solution to manage the backup process and improve recovery outcomes.
The document discusses backup strategies and recovery procedures. It begins by asking questions about backup strategies, including the types of backups (full, differential, incremental), backup media, testing backups, and developing backup procedures. It then discusses the importance of data backups, noting that many organizations that lose their data go out of business. Key aspects of developing an effective backup strategy include assessing risks, testing recovery procedures, and ensuring continuity of services. RAID systems and disaster recovery plans (DRP) can help with the backup strategy and continuity of services.
This document discusses various topics related to backup and recovery of computer systems, including common threats to systems, types of viruses, importance of regular backups, different backup strategies and methods, and ensuring continuity of service. It provides details on full, incremental, and differential backups and recommends backing up data regularly to external storage drives or online backup services. The document stresses having clear backup procedures, assigning responsibility, and being prepared to recover from data loss through training and alternative plans.
A file system is used to control how data is stored and retrieved.
A filesystem is the methods and data structures that an operating system uses to keep track of files on a disk or partition; that is, the way the files are organized on the disk.
A file allocation table (FAT) is a table that an operating system maintains on a hard disk that provides a map of the clusters (the basic units of logical storage on a hard disk) that a file has been stored in.
File Allocation Table (FAT) is a computer file system architecture and a family of industry-standard file systems utilizing it. The FAT file system is a legacy file system which is simple and robust.
Today, FAT file systems are still commonly found on floppy disks, USB sticks, flash and other solid-state memory cards and modules, and many portable and embedded devices.
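Because the FAT really is just that map, reading a file back means walking a chain of table entries; a minimal Python sketch with a fabricated table (real FAT variants use specific end-of-chain marker values, e.g. 0xFFF8-0xFFFF on FAT16):

```python
EOC = 0xFFFF  # end-of-chain marker (the exact value depends on the FAT variant)

# Toy FAT: index = cluster number, value = next cluster in the file's chain.
fat = {2: 3, 3: 4, 4: 7, 7: EOC}   # fabricated example chain: 2 -> 3 -> 4 -> 7

def cluster_chain(first_cluster: int) -> list[int]:
    chain, cur = [], first_cluster
    while cur != EOC:
        chain.append(cur)
        cur = fat[cur]
    return chain

print(cluster_chain(2))  # [2, 3, 4, 7]
```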
The document provides an overview of database backup, restore, and recovery. It describes various types of failures that may occur including statement failures, user process failures, instance failures, media failures, and user errors. It emphasizes the importance of defining a backup and recovery strategy that considers business requirements, operational requirements, technical considerations, and disaster recovery issues to minimize data loss and downtime in the event of failures.
The document provides an overview of file systems, including their purpose of organizing and storing information on storage devices. It discusses key aspects of file systems such as how they separate information into individual files and directories, use metadata to store attributes about files, allocate storage space in a granular manner (which can result in unused space), become fragmented over time, and use various utilities and structures to implement these functions while maintaining integrity of data and restricting access. File systems are a critical component of operating systems that allow for efficient organization, retrieval and updating of user data on different types of storage media and devices.
The document discusses ServiceMaster's disaster recovery program called 866 RECOVER. It provides an overview of ServiceMaster's resources and experience in disaster recovery. Key services include fire, flood and water damage restoration, drying, dehumidification, building stabilization, reconstruction, and management services. The program offers priority response, centralized billing, and pre-loss agreements. Case studies highlight ServiceMaster's work with Iowa State University and Biloxi Regional Medical Center after disasters.
Disaster Recovery & Data Backup Strategies (Spiceworks)
This document discusses data backup strategies and planning. It emphasizes that backups are critical for businesses to protect their data and recover from data loss. The document outlines planning considerations like identifying critical systems and data, recovery objectives, and capacity needs. It then covers various backup methods and factors to consider when developing a backup plan such as repository type, media type, and testing procedures. Regularly monitoring and testing backups is key to ensuring the plan is effective.
Presentation on backup and recovery (Tehmina Gulfam)
The document provides an overview of backup strategies and technologies. It discusses different types of backups including full, differential, and incremental backups. It covers backup architecture including backup clients, servers, and storage nodes. Key aspects of the backup process and restore process are outlined. Different backup topologies of direct attached, LAN-based, and SAN-based backups are described. Options for backup technology include backing up to tape or disk. Features of Acronis backup software are briefly mentioned.
Sun produces a range of server products including x64 servers, SPARC Enterprise servers, and CMT products. The document discusses Sun's T-series of CMT servers, which use chip multithreading (CMT) technology to improve processor performance through thread-level parallelism. The highest-end model is the T5440 server, which supports up to 256 threads and 512GB of memory in a redundant 4RU chassis.
NYC4SEC - An Introduction to the Microsoft exFAT File System (Draft 2.01) (overcertified)
This document provides a technical summary of the Microsoft Extended File Allocation Table (exFAT) file system format. It discusses exFAT's background and history in relation to other FAT file system versions. It also notes that exFAT support was added to forensic analysis tools like The Sleuth Kit and that exFAT is designed for use with removable media storage due to limitations of NTFS for such use cases. The document provides technical details on exFAT specifications, limitations, and terminology used in accordance with Microsoft's published exFAT specifications.
The document provides information about the HPE ProLiant DL360 Gen10 Server, including:
- An overview of the server platform and its features such as support for Intel Xeon Scalable processors with up to 28 cores and 3TB of memory.
- Details on standard server models, configurations, chassis types, processor options, memory support, I/O and storage.
- Specifications for pre-configured models, SMB models, factory integrated configuration information, and additional options.
Introduce: IBM Power Linux with PowerKVM (Zainal Abidin)
This document provides an overview of PowerKVM, an open source virtualization option for Linux systems on IBM Power servers. It discusses PowerKVM and PowerVM virtualization, highlighting that PowerKVM supports only Linux guests while PowerVM supports AIX, IBM i and Linux. Management options for PowerKVM include open source tools while PowerVM supports proprietary tools and PowerVC for both virtualization platforms. The document also presents performance benchmarks showing Power8 significantly outperforming Intel Xeon processors.
The nanoFlash is a small, lightweight video recorder and player that can be used to record higher quality video than many cameras by unlocking higher bitrates and 4:2:2 color sampling. It supports various formats and codecs, has long battery life, works with many cameras, and provides a simple workflow for editing directly from compact flash cards. Using the nanoFlash can significantly improve video quality compared to cameras with limited internal codecs.
This document provides technical details about PostgreSQL WAL (Write Ahead Log) buffers. It describes the structure and purpose of WAL segments, WAL records, and their components. It also explains how the WAL is used to safely recover transactions after a server crash by replaying the log.
The document summarizes IBM's BladeCenter PS700, PS701, and PS702 POWER7 blades. The PS700 is a single-wide 4-core blade, the PS701 is a single-wide 8-core blade, and the PS702 is a double-wide 16-core blade. The blades support various operating systems including AIX, Linux, and IBM i. They provide memory, storage, networking, and expansion card options. PowerVM virtualization, NPIV, and systems management features are also discussed.
IEI is one of the largest suppliers of products for industrial computer systems. IEI supplies hundreds of different boards, systems, and components for a wide range of applications in industrial automation, defense, medical, infotainment, and mobile use. Forward-looking solutions give you, the customer, a shorter design cycle so you can maintain and even extend your lead over the competition.
The document discusses hard drive partitioning and file systems. It covers:
1) The master boot record (MBR) located in sector 0 that contains a partition table defining partitions on the drive. Hidden and extended partitions are possible by editing the MBR.
2) The structure of partition table entries in the MBR and an example showing how to decode the entries (a decoding sketch in Python follows this list).
3) File allocation table (FAT) file systems, including the boot sector, BPB, FAT tables, root directory and clusters.
4) Features of FAT12, FAT16 and FAT32 like cluster size and FAT table entry sizes and meanings.
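For item 2, each of the four primary partition table entries is a fixed 16-byte record at offsets 446, 462, 478, and 494 of sector 0; here is a minimal Python decoder (the disk image name is hypothetical):

```python
import struct

def parse_partition_entry(entry: bytes) -> dict:
    """Decode one 16-byte MBR partition table entry."""
    status, chs_first, ptype, chs_last, lba_start, num_sectors = struct.unpack(
        "<B3sB3sII", entry)
    return {
        "bootable": status == 0x80,
        "type": hex(ptype),          # e.g. 0x0c = FAT32 LBA, 0x07 = NTFS/exFAT
        "lba_start": lba_start,
        "sectors": num_sectors,
        "size_mb": num_sectors * 512 // (1024 * 1024),
    }

with open("disk.img", "rb") as f:      # hypothetical raw disk image
    mbr = f.read(512)
assert mbr[510:512] == b"\x55\xaa"     # MBR boot signature
for i in range(4):                     # four entries, 16 bytes each, from offset 446
    print(parse_partition_entry(mbr[446 + 16 * i: 462 + 16 * i]))
```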
Future of server market. Advantech servers for industrial compute and storage applications. 10/26/17 Advantech Solution Day presentation by Frank Chiang, Industrial Server PM.
Event details: http://www.advantech-eautomation.com/eMarketingPrograms/Server_SolutionDay/
This document compares specifications for current generation blade servers from Cisco, Dell, HP, and IBM. It provides details on the maximum number of servers and CPUs supported, CPU types and cores, internal drive bays, memory capacity, and networking capabilities for each vendor's blade server models. Key specifications are highlighted for configurations using a standard 42U rack with either 7 single-width or 4 double-wide blade chassis.
Geniatech is a technology company that specializes in the development and production of innovative digital TV, video, and IoT products. The company was founded in 1997 and is headquartered in China. Geniatech's product line includes TV tuners, Android-based TV boxes, single board computer, system on module, embedded devices, and IoT solutions. The company is committed to providing high-quality and cost-effective products to meet the needs of its customers. Geniatech has a global presence with offices and distributors in Europe, North America, and Asia.
The document discusses the structure and layout of the NTFS file system. It describes the key components including the MBR/GPT disk layout, NTFS boot sector, master file table, MFT records and attributes. The MFT stores metadata for all files and directories. MFT records contain standard and file name attributes that provide metadata like timestamps, sizes and file paths. Attributes can be resident or non-resident, stored in or outside the MFT entry.
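As a small illustration of the MFT record layout described above, here is a hedged Python sketch decoding a few fixed header fields of a record (it skips the update-sequence fixups a real parser must also apply):

```python
import struct

def parse_mft_record_header(record: bytes) -> dict:
    """Decode a few fixed header fields of an NTFS MFT record."""
    if record[0:4] != b"FILE":
        raise ValueError("missing FILE signature: not a valid MFT record")
    seq, hard_links, attrs_off, flags = struct.unpack_from("<4H", record, 16)
    return {
        "sequence": seq,
        "hard_links": hard_links,
        "first_attribute_offset": attrs_off,  # attributes start here
        "in_use": bool(flags & 0x01),
        "is_directory": bool(flags & 0x02),
    }
```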
This document provides specifications for two ASUS Zenbook Ultrabook models, the UX21 and UX31. It lists the processors, operating systems, dimensions, memory, displays, I/O ports, storage, battery life, and other key specifications of the two models. The UX31 has a larger 13.3 inch screen compared to the 11.6 inch screen of the UX21. Both models are thin and light with quick resume capabilities and long battery life.
The document provides details about various components of a computer system including motherboard components, RAM types, CPU types, and BIOS settings. It discusses the purposes and properties of motherboard components such as CPU slot, RAM slots, expansion card slots, and ports. It also compares different RAM types such as SRAM, DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, and DDR3 SDRAM in terms of features and specifications. The document provides information about configuring and applying BIOS settings to change boot options, set passwords, and configure hardware settings.
Solid, high-class design
Outstanding performance; runs quietly and powerfully
Numerous and diverse connection ports
Equipped with modern security features
Easy to upgrade
Equipped with the Wi-Fi 6 connectivity standard
Suited to professional graphic designers, especially architects who need to travel frequently
Source: https://laptops.vn/san-pham/thinkpad-p1-gen-2/
The document describes the 1I386H, a super compact wafer input/output single board computer with an Intel Bay Trail dual-core CPU. It measures 55x90mm and has features like 2GB RAM, mSATA storage, VGA output, Gigabit Ethernet, USB ports, and works in temperatures from -20C to 70C. It is targeted towards space-constrained and thermally limited embedded applications. The document also lists optional extension modules that can add more ports and capabilities.
Presentation for IoT workshop at Sinhagad University (Feb 4, 2016) - 2/2 (Bhavin Chandarana)
This is the second part of the presentation used for the workshop I conducted at Sinhagad University on Thursday 4th Feb, 2016. A lot of the content has been taken from freely available existing sources and these slides are just for reference for those who attended the workshop
Similar to Demystifying the Microsoft Extended FAT File System (exFAT)
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... (Jason Yip)
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
Introduction of Cybersecurity with OSS at Code Europe 2024 (Hiroshi SHIBATA)
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
What is an RPA CoE? Session 1 – CoE Vision (DianaGray10)
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency (ScyllaDB)
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... (Alex Pruden)
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
61. File Directory Entry

Offset  0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
0000   85 04 D4 92 20 00 00 00 44 62 86 3B F1 62 BA 3A
0010   44 62 86 3B A8 00 EC EC EC 00 00 00 00 00 00 00

Annotations: Type; # Secondary Entries; Set Checksum (0x92D4); Attributes (0x0020 = Archive); Create, Modified, and Accessed timestamps; Create 10ms; Modified 10ms; TZ Offset C/M/A (EC = GMT-5)
62. Formatted File Directory Entry

Root Entry Type Read is: 85 Directory Entry Record
Checksum: 92D4 (Calculated Checksum is: 92D4)
Size of Directory Set (bytes): 160
Secondary Count: 004
File Attributes: 0020 Archive
Create Timestamp: 3B866244 12/06/2009 12:18:08
Last Modified Timestamp: 3ABA62F1 05/26/2009 12:23:34
Last Accessed Timestamp: 3B866244 12/06/2009 12:18:08
10 ms Offset Create: A8 (168)
10 ms Offset Modified: 00 (0)
Time Zone Create: EC (236), Value of tz is: GMT -05:00
Time Zone Modified: EC (236), Value of tz is: GMT -05:00
Time Zone Last Accessed: EC (236), Value of tz is: GMT -05:00
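As a cross-check on the two slides above, here is a minimal C sketch that decodes the same 32-byte record. The field offsets are my reading of the exFAT 1.00 patent/specification, not something stated on the slide itself.

#include <stdint.h>
#include <stdio.h>

static uint16_t le16(const uint8_t *p) { return (uint16_t)(p[0] | (p[1] << 8)); }
static uint32_t le32(const uint8_t *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(void) {
    /* the 32-byte record from the slide 61 hex dump */
    const uint8_t e[32] = {
        0x85,0x04,0xD4,0x92,0x20,0x00,0x00,0x00,0x44,0x62,0x86,0x3B,0xF1,0x62,0xBA,0x3A,
        0x44,0x62,0x86,0x3B,0xA8,0x00,0xEC,0xEC,0xEC,0x00,0x00,0x00,0x00,0x00,0x00,0x00 };

    printf("Entry type         : %02X\n", e[0]);          /* 0x85 = file entry   */
    printf("Secondary count    : %u\n", e[1]);            /* 4 -> set of 5       */
    printf("Set checksum       : %04X\n", le16(e + 2));   /* 0x92D4              */
    printf("Attributes         : %04X\n", le16(e + 4));   /* 0x0020 = Archive    */
    printf("Create timestamp   : %08X\n", le32(e + 8));   /* 0x3B866244          */
    printf("Modified timestamp : %08X\n", le32(e + 12));  /* 0x3ABA62F1          */
    printf("Accessed timestamp : %08X\n", le32(e + 16));  /* 0x3B866244          */
    printf("Create 10ms        : %u\n", e[20]);           /* 168 -> adds 1.68 s  */
    printf("TZ byte (create)   : %02X\n", e[22]);         /* 0xEC = GMT-5        */
    return 0;
}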
63.
64. Stream Extension Directory Entry

Offset  0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
0000   C0 03 00 28 AD 3C 00 00 1F 46 1D 01 00 00 00 00
0010   00 00 00 00 05 00 00 00 1F 46 1D 01 00 00 00 00

Annotations: Entry Type; Flags (Alloc Possible / FAT Invalid); Length of File Name (0x28 = 40); Name Hash (0x3CAD); First Cluster (5); Data Length (0x011D461F = 18,695,711)
65. Parameters for Samples

Bytes Per Sector: 2^9 = 512
Sectors Per Cluster: 2^8 = 256
Bytes Per Cluster: 131072 (128K)
66. Formatted Stream Extension

Root Entry Type Read is: C0 Directory Entry Record, Stream Extension
Secondary Flags: 03 (Flag Bit 0: Allocation Possible; Flag Bit 1: FAT Chain Invalid)
Length of Unicode Filename is: 40
Name Hash Value is: AD3C
Stream Extension First Cluster: 5 (Cluster 5 is Allocated)
Stream Extension Data Length: 18695711 Bytes (Slack: 83487; Clusters Used: 143)
Stream Extension Valid Data Length: 18695711 Bytes (Slack: 83487; Clusters Used: 143)
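Again as a cross-check, a minimal C sketch that decodes the stream extension record and derives the cluster count. The field offsets are my reading of the spec; the 128K cluster size comes from slide 65.

#include <stdint.h>
#include <stdio.h>

static uint16_t le16(const uint8_t *p) { return (uint16_t)(p[0] | (p[1] << 8)); }
static uint32_t le32(const uint8_t *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
static uint64_t le64(const uint8_t *p) {
    return (uint64_t)le32(p) | ((uint64_t)le32(p + 4) << 32);
}

int main(void) {
    /* the 32-byte record from the slide 64 hex dump */
    const uint8_t e[32] = {
        0xC0,0x03,0x00,0x28,0xAD,0x3C,0x00,0x00,0x1F,0x46,0x1D,0x01,0x00,0x00,0x00,0x00,
        0x00,0x00,0x00,0x00,0x05,0x00,0x00,0x00,0x1F,0x46,0x1D,0x01,0x00,0x00,0x00,0x00 };

    uint64_t bytes_per_cluster = 512ULL * 256;     /* 2^9 * 2^8, from slide 65 */
    uint64_t data_len = le64(e + 24);              /* 0x011D461F = 18,695,711  */

    printf("Flags         : %02X\n", e[1]);        /* bit0 alloc possible, bit1 FAT invalid */
    printf("Name length   : %u UTF-16 chars\n", e[3]);
    printf("Name hash     : %04X\n", le16(e + 4));
    printf("First cluster : %u\n", (unsigned)le32(e + 20));
    printf("Data length   : %llu bytes\n", (unsigned long long)data_len);
    printf("Clusters used : %llu\n",               /* ceil(18,695,711 / 131,072) = 143 */
           (unsigned long long)((data_len + bytes_per_cluster - 1) / bytes_per_cluster));
    return 0;
}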
exFAT is specifically designed for removable media, but can be used for fixed media as well. NTFS is not recommended for removable media, especially because of the lazy-write problem. exFAT also gets faster I/O through lower file system overhead.
You need to be able to locate the evidence in general, you also need to know the places where it can be hidden, and you need to validate that what you found is correct, in order, and complete.
If the OS can’t recognize the file system, then it thinks the media is not formatted.
Little to nothing is available in these areas. Exception: Tuxera is the first independent software vendor to sign an exFAT development agreement with Microsoft. Linux and open source are used a lot; commercial tools are lacking. EnCase 6.14.3 started logical support in Dec 2009, with some issues reported. FTK 3.2 – maybe? There is little documentation or publication on exFAT internals.
Microsoft published a patent that included the exFAT 1.00 specification. This presentation and the paper attempt to stick to the terminology used in the patent/specification as closely as possible. Links to the patent and my paper will be on a later slide, and references to the paper will also be on my blog.
In some cases you might see ZB or ZiB; technically they are different (ZB is 10^21 bytes, ZiB is 2^70 bytes), but they are close in magnitude.
You rarely see a sector size other than 512 bytes, and everyone just assumes that it is the only option. The 4096-byte size is special, there to support a device that is used for paging and supports 4K pages. With the standard format, though, you can't adjust the sector size. Clusters (or blocks) are 32K max in FAT32. 64ZiB is the potential capacity, but the FAT can't support 64ZiB in its current configuration. The volume label and file names are all 16-bit Unicode, and filenames run to a maximum of 255 characters.
Microsoft's KB for Windows XP support indicated a capacity of up to 64ZiB and a maximum file size of 64ZiB. In reality, the file system can only support volumes up to 128PiB and files up to 16EiB. The volume size is limited by the 32-bit FAT and the 25-bit cluster size, giving a 57-bit addressable volume size; the file size is limited by the 8-byte (64-bit) number that holds the file size.
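Those two real-world limits fall straight out of the field widths just described:

$2^{32}\ \text{FAT entries} \times 2^{25}\ \text{bytes per cluster} = 2^{57}\ \text{bytes} = 128\ \text{PiB}$

$2^{64}\ \text{bytes} = 16\ \text{EiB (the 8-byte file size field)}$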
With TexFAT there will be 2 FATs and 2 bitmaps. With exFAT 1.00 – which does not have TexFAT (Transactional FAT) support – there is only 1 FAT and 1 bitmap, where previous FAT versions had 2 FATs.
Any FS is limited, even FAT32 and NTFS. This is Windows only; we are not talking about the GUID Partition Table (GPT). Although an MBR uses a 4-byte sector count, remember that the FS can be larger if you make the sectors larger (512 vs. 4096), and this causes a lot of confusion about how big a FS fits.
Windows would not format FAT32 beyond 32GB; it required using a FAT32 format on a different OS. Some Windows utilities did not work properly with volumes greater than 32GB, but you can mount a device that is greater than 32GB. Limitations of the FAT32 File System: http://support.microsoft.com/kb/184006
SDXC's predecessor (SDHC) had a max spec of 32GB; SDXC picks up from 32GB. SD 4.0 Specification – 300MB/s I/O speeds: http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2009/20090813_S204_Lin_Yee.pdf
Starting at 104 megabytes per second, and later 300 megabytes per second: http://www.letsgodigital.org/en/20985/sdxc-cards/
SDXC media will not be backward compatible. Cameras and other devices have been announced, but I haven't actually seen any devices yet, so it sounds like media is being announced and shipped with nothing that can read it.
New devices may accept SDHC, but older devices will not accept SDXC.
With Sony adopting exFAT for the Memory Stick XC, plus the SD market, exFAT covers almost 90% of the market today.
There are reports of exFAT media created on a Vista or Windows 7 machine that can't be seen on Vista. This is usually a case of creating the media on a machine with exFAT support and then trying to read it on a machine without exFAT support. The common mistake is creating the file system on removable media with a Vista SP1 (or higher) machine and trying to read it on a machine with Vista RTM.
exFAT uses 16-bit Unicode strings.
It is important to note that Pentium processors use the little-endian format, so numbers stored in the file system are stored little-endian. This is significant because you need to reverse the order of the bytes in order to read the values from a hex dump.
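A minimal C illustration of that byte reversal, using the create timestamp bytes from slide 61:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint8_t raw[4] = { 0x44, 0x62, 0x86, 0x3B };   /* as they appear in the dump */
    uint32_t value = (uint32_t)raw[0] |
                     ((uint32_t)raw[1] << 8) |
                     ((uint32_t)raw[2] << 16) |
                     ((uint32_t)raw[3] << 24);
    printf("0x%08X\n", value);                           /* prints 0x3B866244 */
    return 0;
}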
Current implementations use exFAT 1.00, but if a later version of exFAT is in use, the driver will check the version number and not mount the FS unless it can support it. Checksums protect against corruption and viruses. If there is a problem with critical directory entries, the FS should not mount.
FAT32 required a minimum of 65,525 clusters. exFAT does not have this restriction.
4 regions are defined on the volume; the FAT tables reside outside the cluster heap.
Details follow in the next slides
If there were no restriction, then the size of a cluster could be as large as 2^255 bytes (the shift value is stored in a single byte).
If the sector size is > 512 bytes, the remaining space on the first sector of the VBR (Main Boot Sector) is not used.
Unlike the first sector, the other 8 boot sectors can use the entire sector, and the signature marker is moved to the last 8 bytes of the sector.
The checksum repeats over and over again through the sector; 4 bytes = a 32-bit checksum. It can be used to determine whether the VBR was modified. 3 bytes in the VBR are not included in the checksum calculation. This sector does not have a signature.
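The specification describes the calculation as a rotate-right-and-add over the boot region, skipping the three volatile bytes. A hedged C sketch of my reading of it (the skipped offsets, 106, 107, and 112, are the VolumeFlags and PercentInUse fields):

#include <stddef.h>
#include <stdint.h>

/* checksum over the first 11 sectors of the boot region;
   num_bytes would be 11 * bytes-per-sector */
uint32_t vbr_checksum(const uint8_t *boot_region, size_t num_bytes) {
    uint32_t checksum = 0;
    for (size_t i = 0; i < num_bytes; i++) {
        if (i == 106 || i == 107 || i == 112)
            continue;                          /* the 3 bytes left out of the sum */
        checksum = ((checksum & 1) ? 0x80000000u : 0) + (checksum >> 1)
                 + (uint32_t)boot_region[i];
    }
    return checksum;                           /* written repeatedly across the checksum sector */
}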
The bitmap is used to track cluster allocation, and the FAT is only required for re-assembling the original file. If the original file is contiguous, then the FAT isn't needed for that file. We will see later that a flag in the directory record is used to tell the FS whether the FAT should be used or ignored.
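For a fragmented file, the FAT is walked as a simple linked list of cluster numbers. A minimal sketch, assuming 32-bit little-endian FAT entries and the 0xFFFFFFFF end-of-chain marker as I read them in the spec:

#include <stdint.h>
#include <stdio.h>

#define EXFAT_END_OF_CHAIN 0xFFFFFFFFu

static void walk_chain(const uint32_t *fat, uint32_t first_cluster) {
    /* each FAT entry, indexed by cluster number, names the next cluster */
    for (uint32_t c = first_cluster; c != EXFAT_END_OF_CHAIN; c = fat[c])
        printf("cluster %u\n", c);
}

int main(void) {
    uint32_t fat[16] = { 0 };                 /* toy FAT: 5 -> 6 -> 9 -> end */
    fat[5] = 6; fat[6] = 9; fat[9] = EXFAT_END_OF_CHAIN;
    walk_chain(fat, 5);
    return 0;
}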
Because there is no floppy support, there is only one possible media descriptor value. Clusters 0 and 1 are not defined, so FAT entries 0 and 1 are not significant (same as legacy FAT). Since the FAT is no longer used for cluster allocation, 0 (zero, which used to mean unused) is no longer significant.
The 3 main critical records – Allocation Bitmap, Up-Case Table, and Root Directory – will use FAT chains. The Root Directory can grow, and since it is dynamic in its growth, it will most likely fragment. The Up-Case Table and Allocation Bitmap should be static and not grow or change, although theoretically they could be relocated and moved somewhere else on the volume. The locations (cluster addresses) of the 3 special metadata files may change; the ones shown here are based on one formatting, and in reality these files could end up in any cluster.
If there are 2 FATs in a TexFAT (transaction-safe exFAT) environment, then each FAT is paired with an allocation bitmap. The allocation bitmap is pointed to by a 0x81 directory entry.
This is an eye chart, but the idea is to show how to get to the bitmap: you start at the VBR (BPB), go to the root directory, look up the 0x81 entry to get the cluster address, and then go into the bitmap table.
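In code, that walk looks roughly like the sketch below. The VBR field offsets (ClusterHeapOffset at byte 88, FirstClusterOfRootDirectory at byte 96) and the 0x81 entry layout (FirstCluster at +20, DataLength at +24) are my reading of the spec, not values shown on the slide:

#include <stddef.h>
#include <stdint.h>

/* cluster numbering starts at 2, so cluster n begins at this sector */
static uint64_t cluster_to_sector(uint64_t cluster_heap_offset_sectors,
                                  uint32_t sectors_per_cluster, uint32_t n) {
    return cluster_heap_offset_sectors + (uint64_t)(n - 2) * sectors_per_cluster;
}

/* scan a directory's bytes for the allocation bitmap entry (type 0x81) */
static const uint8_t *find_bitmap_entry(const uint8_t *dir, size_t dir_bytes) {
    for (size_t off = 0; off + 32 <= dir_bytes; off += 32)
        if (dir[off] == 0x81)
            return dir + off;   /* FirstCluster at +20, DataLength at +24 */
    return NULL;
}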
We will see details of the directory entry construction later, including what we mean by an entry type.
The first byte of every directory entry is the “entry type” and describes the directory entry.
When a file set is not in use, it is usually (but not always) a deleted file. When a volume label entry is not in use, it means there is no volume label. Only files have secondary entries so far. Missing benign entries usually won't prevent the file system from being mounted. 0x80 is not defined.
Primary and Critical
Since we use 16-bit Unicode without string termination, we need the length of the volume label – in Unicode characters.
Primary and Critical. If the FS can’t find the BITMAP table, it can’t mount the FS
This was a small volume; 63 bytes can support a maximum of 63 x 8 = 504 clusters.
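Testing a given cluster against the bitmap is a one-liner. A minimal sketch, assuming (per my reading of the spec) that bit 0 of byte 0 maps to cluster 2, the first data cluster:

#include <stdint.h>

int cluster_is_allocated(const uint8_t *bitmap, uint32_t cluster) {
    uint32_t index = cluster - 2;                 /* cluster numbering starts at 2 */
    return (bitmap[index / 8] >> (index % 8)) & 1;
}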
Filenames are stored case-insensitively, so when a search is done, the filenames are converted to upper case (folded). The Up-Case Table is used to convert the filename to all uppercase.
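The name hash stored in the stream extension (0x3CAD in slide 64) is computed over the up-cased name. A hedged C sketch of the hash as the spec describes it; the ASCII-only up_case() here is a stand-in for a real Up-Case Table lookup:

#include <stdint.h>

static uint16_t up_case(uint16_t c) {
    /* stand-in: real code folds through the volume's Up-Case Table */
    return (c >= 'a' && c <= 'z') ? (uint16_t)(c - 32) : c;
}

uint16_t name_hash(const uint16_t *name, uint8_t name_length) {
    uint16_t hash = 0;
    for (uint8_t i = 0; i < name_length; i++) {
        uint16_t c = up_case(name[i]);
        /* rotate right, then add the low byte and the high byte in turn */
        hash = (uint16_t)(((hash & 1) ? 0x8000 : 0) + (hash >> 1) + (c & 0xFF));
        hash = (uint16_t)(((hash & 1) ? 0x8000 : 0) + (hash >> 1) + (c >> 8));
    }
    return hash;
}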
The Up-Case Table is less than 6K. Imagine if it sat in a 32K cluster, then imagine if it sat in a 32MB cluster: that is the amount of available slack space.
A File Entry Set would have a File entry, a Stream Extension, and up to 17 File Name Extensions, for a total of 19. Later, when a new exFAT version comes out, the ACL will be another secondary entry, bringing this up to 20. As more secondary entry types are added – say, one for encryption – this can grow to a maximum of 255 secondaries.
Attributes and timestamps are covered in later slides. The checksum is computed across the primary and all secondaries in the set.
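A hedged C sketch of that set checksum as the spec describes it: a 16-bit rotate-right-and-add over every 32-byte entry in the set, skipping the two bytes that hold the checksum itself. For the 5-entry (160-byte) set in slides 61-64 this is what produces the 0x92D4 value:

#include <stdint.h>

uint16_t entry_set_checksum(const uint8_t *entries, uint8_t secondary_count) {
    uint16_t num_bytes = (uint16_t)((secondary_count + 1) * 32);
    uint16_t checksum = 0;
    for (uint16_t i = 0; i < num_bytes; i++) {
        if (i == 2 || i == 3)
            continue;                       /* skip the stored checksum field */
        checksum = (uint16_t)(((checksum & 1) ? 0x8000 : 0) + (checksum >> 1)
                              + entries[i]);
    }
    return checksum;
}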
Modified, Accessed, and Created: the timestamps are NOT stored in this order, but MAC is a common acronym in the literature. Timestamps are not one single field like NTFS's 64-bit value; exFAT combines pieces to make a UTC value. The TZ offset is absent in Vista SP1 and does not appear in the exFAT 1.00 spec.
The standard DOS date/time, also used in previous FAT versions, does not count seconds but double seconds; to get whole seconds, a 33-bit number would have been needed.
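A minimal C sketch unpacking the packed 32-bit date/time (the bit layout assumed here is the classic DOS one). Decoding the create timestamp 0x3B866244 from slide 62 reproduces 12/06/2009 12:18:08; note the low 5 bits count 2-second units:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t t = 0x3B866244;                     /* create timestamp, slide 62 */
    printf("%02u/%02u/%04u %02u:%02u:%02u\n",
           (t >> 21) & 0x0F,                     /* month       : 12   */
           (t >> 16) & 0x1F,                     /* day         : 6    */
           ((t >> 25) & 0x7F) + 1980,            /* year        : 2009 */
           (t >> 11) & 0x1F,                     /* hour        : 12   */
           (t >> 5) & 0x3F,                      /* minute      : 18   */
           (t & 0x1F) * 2);                      /* double secs : 8    */
    return 0;
}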
FAT and exFAT timestamp behavior varies, but last-accessed in particular is just not reliable.
These are pretty much the same as in previous FAT versions. Since we have a separate volume label entry, there is no attribute for it, and since we don't have 8.3 support, there is no LFN (Long File Name) attribute either.
The update behavior of the 10ms Modified field is also not predictable; sometimes it is just set to zero. Note that the create time is really 3B866244 (reversed because of little-endian).
In order to validate the analysis while reverse engineering the FS, I had to write a C program to format the directory entries; this is an example of its output. All the timestamps are even because of the double seconds, but since the create 10ms value is 168, the create time was really 12:18:09.68. The secondary count is 4, meaning that this file set is 5 entries: 1 File, 1 Stream, and 3 File Name.
There are 2 file lengths: one is the file length, and the other is the amount of data actually written into the file so far. The length of the name is needed because there is no string termination, and the file name (max 255 characters) may require multiple directory entries (as we will see later). This is also where the FS indicates whether the FAT is used: if the FAT Invalid flag is set, the FAT is ignored.
Since these values can vary based on the format parameters, for reference this is what the samples in this presentation are using.
Another output from the C program. "Allocation possible" indicates that the directory entry specifies a cluster address field; "FAT invalid" indicates that this file does not use the FAT. This file is 18MB and required 143 clusters to store. As we said before, there are 3 filename entries (each holds 15 characters of the filename), and as we see above, the filename is 40 characters in length.
"Allocation not possible" indicates that there is no cluster address in the entry; "FAT Invalid" then has no meaning.
Filename is 40 characters (80 bytes) and takes 3 entries to store it.
When the entries are not in use, some may be overwritten and some may not. This means that a complete set may not exist.
I need followers
My paper on exFAT and the Microsoft Patent that exposes the specification