The document summarizes files and directories from both the programmer and operating system perspectives. It discusses the file system abstraction, file naming conventions, file structures like streams of bytes and records, common file operations, file organization methods, access rights and permissions, and directory structures. It provides examples of different file systems like UNIX and highlights key concepts like relative and absolute pathnames.
Linux uses a unified, hierarchical file system to organize and store data on disk partitions. It places all partitions under the root directory by mounting them at specific points. The file system is case sensitive. The Linux kernel manages hardware resources and the file system, while users interact through commands interpreted by the shell. Journaling file systems like ext3 and ReiserFS were developed to improve robustness over ext2 by logging file system changes to reduce the need for integrity checks after crashes. Ext4 further improved on this with features like larger maximum file sizes and delayed allocation.
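The write-ahead idea behind journaling file systems like ext3 can be sketched in a few lines. This is a minimal illustration of the concept, not kernel code; all names here are invented for the example.

```python
# Minimal sketch of write-ahead journaling (the idea behind ext3/ext4).
# All names are illustrative, not kernel APIs.

def commit(journal, disk, txn):
    """Append the transaction to the journal before touching the disk."""
    journal.append(dict(txn))   # 1. log the intended changes first
    disk.update(txn)            # 2. then apply them to the main structures

def recover(journal, disk):
    """After a crash, replay logged transactions; replay is idempotent."""
    for txn in journal:
        disk.update(txn)

journal, disk = [], {}
commit(journal, disk, {"inode_7": "allocated", "block_42": "data"})

# Simulate a crash where the disk update was lost but the journal survived:
crashed_disk = {}
recover(journal, crashed_disk)
```

Because the log is written first, a crash between the two steps leaves enough information to redo the change, which is why a full integrity check is unnecessary after an unclean shutdown.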
The document discusses the UNIX file management system and how it uses a virtual file system (VFS) and virtual nodes (vnodes) to provide a common interface for accessing different file systems, with the VFS handling file system independent operations and vnodes representing files and delegating file system specific operations to each file system type's implementation. It also describes how UNIX uses buffer caches and disk schedulers to improve performance by reducing disk access through caching of recently or frequently used disk blocks in memory.
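The VFS/vnode split described above is essentially interface-based dispatch: common code calls a uniform interface, and each file system type supplies its own implementation. A hypothetical sketch (class names are illustrative, not real kernel types):

```python
# Sketch of the VFS/vnode idea: file-system-independent code dispatches
# through a common interface to file-system-specific implementations.

class VNode:
    """File-system-independent handle; concrete types supply read()."""
    def read(self):
        raise NotImplementedError

class Ext2VNode(VNode):
    def read(self):
        return "data via ext2 block pointers"

class NFSVNode(VNode):
    def read(self):
        return "data fetched over the network"

def vfs_read(vnode: VNode) -> str:
    # The VFS layer only sees the common interface; it never needs to
    # know which concrete file system backs the vnode.
    return vnode.read()
```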
This document provides an overview of the Linux file system including:
1. It defines the main directories and their contents according to the Filesystem Hierarchy Standard (FHS), with "/" as the root directory and support for multiple partitions and filesystems.
2. It describes the different types of files like ordinary files, directories, and special files as well as file permissions for reading, writing, and executing files and directories.
3. It explains how to change file permissions using the chmod command and how to navigate the file system using commands like pwd, cd, and ls, with examples that use options, wildcards, and navigation.
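The permission changes made by chmod can also be performed programmatically. A small sketch using Python's standard library, equivalent to `chmod 640 file` on the command line (assumes a POSIX system for the exact resulting bits):

```python
# Set POSIX permission bits on a temporary file, equivalent to `chmod 640`.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# rw- for the owner, r-- for the group, nothing for others (0o640).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = stat.S_IMODE(os.stat(path).st_mode)
os.remove(path)
```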
Linux is a free, open-source operating system based on UNIX with a modular kernel. It uses processes, threads, virtual memory, and file systems. Device drivers allow access to hardware via the block I/O system. Interprocess communication includes signals, pipes, shared memory, and semaphores. Security features include authentication via PAM and access control through permissions tied to user and group IDs.
XFS is a file system designed for large storage needs and high performance. It supports large files and directories through its use of extents to track file data locations. XFS provides features like dynamic inode allocation, extended attributes, disk quotas, and crash recovery through write-ahead logging to enable quick recovery of metadata after an unclean shutdown.
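The extent mapping that XFS uses can be sketched as a list of runs, each mapping a contiguous range of file blocks to a contiguous range of disk blocks. The numbers and function below are illustrative, not the on-disk XFS format:

```python
# Sketch of extent-based mapping (the idea XFS uses): each extent maps a
# contiguous run of file blocks to a contiguous run of disk blocks.
# Format: (file_block_start, disk_block_start, length). Values illustrative.

extents = [
    (0, 1000, 4),   # file blocks 0-3  -> disk blocks 1000-1003
    (4, 5000, 8),   # file blocks 4-11 -> disk blocks 5000-5007
]

def file_block_to_disk(extents, file_block):
    """Translate a logical file block to its disk block, or None for a hole."""
    for start, disk_start, length in extents:
        if start <= file_block < start + length:
            return disk_start + (file_block - start)
    return None
```

Tracking whole runs instead of individual blocks keeps the metadata small for large contiguous files, which is why extents suit large storage well.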
The document provides an overview of file systems, including their purpose of organizing and storing information on storage devices. It discusses key aspects of file systems such as how they separate information into individual files and directories, use metadata to store attributes about files, allocate storage space in a granular manner (which can result in unused space), become fragmented over time, and use various utilities and structures to implement these functions while maintaining integrity of data and restricting access. File systems are a critical component of operating systems that allow for efficient organization, retrieval and updating of user data on different types of storage media and devices.
The document discusses Linux file systems. It describes that Linux uses a hierarchical tree structure with everything treated as a file. It explains the basic components of a file system including the boot block, super block, inode list, and block list. It then covers different types of file systems for Linux like ext2, ext3, ext4, FAT32, NTFS, and network file systems like NFS and SMB. It also discusses absolute vs relative paths and mounting and unmounting filesystems using the mount and umount commands.
Disk and File System Management in Linux (Henry Osborne)
This document discusses disk and file system management in Linux. It covers MBR and GPT partition schemes, logical volume management, common file systems like ext4 and XFS, mounting file systems, and file system maintenance tools. It also discusses disk quotas, file ownership, permissions, and the umask command for setting default permissions.
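The umask mentioned above works by clearing bits from a base mode: new files start from 0o666 (directories from 0o777) and the umask bits are removed. The arithmetic can be shown directly, with no files created:

```python
# How the umask determines default permissions: the umask bits are
# cleared from the base mode (0o666 for files, 0o777 for directories).

def default_mode(base: int, umask: int) -> int:
    return base & ~umask

file_mode = default_mode(0o666, 0o022)   # typical umask of 022
dir_mode = default_mode(0o777, 0o022)
```

With umask 022, new files default to 644 (rw-r--r--) and new directories to 755 (rwxr-xr-x).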
This document discusses file systems and their components. It covers topics like file processing, file organizations and access methods, directories, mounting file systems, file protection, disk space allocation, interfaces between file systems and IOCS, file sharing semantics, reliability of file systems, and journaling file systems. The document provides details on how files are organized, stored, accessed and shared in operating systems.
Acronis True Image 9.1 Enterprise Server allows creating disk images of servers including the operating system, applications, and configurations. It minimizes downtime and ensures 24/7 uptime by automating full system backups. The product supports various operating systems and can back up data to different storage locations.
Lesson 2: Understanding Linux File System (Sadia Bashir)
The document provides an overview of Linux file systems and file types. It discusses:
1) The main types of files in Linux including directories, special files, links, sockets and pipes.
2) The standard Linux directory structure and the purpose of directories like /bin, /sbin, /etc, and /usr.
3) Common Linux file extensions and hidden files that begin with a dot.
4) Environment variables and how they can be used to customize a system.
5) Symbolic links and how they create references to files without copying the actual file.
The document discusses the motivation and design of file system implementations. It describes how file systems map the logical structure to physical storage, using various on-disk and in-memory data structures. These include boot blocks, superblocks, directories, inodes/file control blocks, buffer caches, open file tables, and more. Common operations like creating, opening, reading and closing files are also outlined.
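The per-file metadata kept in an inode (or file control block) is directly observable through the stat interface. A short sketch, assuming a POSIX-style system:

```python
# Reading per-file metadata that the file system keeps in the inode
# (or its equivalent): size, type, and the inode number itself.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

info = os.stat(path)
size = info.st_size                     # bytes of file data
inode = info.st_ino                     # inode number, where supported
is_regular = stat.S_ISREG(info.st_mode)

os.remove(path)
```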
The document discusses file systems and their components. It covers directory organization, allocation schemes, file attributes, operations, structures and access methods. It also compares different file systems like FAT, FAT32 and NTFS in terms of their compatibility, volume size limits, fault tolerance and other advantages/disadvantages.
The document discusses and compares different file systems, including FAT, FAT32, NTFS, and their key features and limitations. FAT is the oldest file system and was designed for small disks and simple structures. It uses a file allocation table to organize files. NTFS is proprietary to Windows and offers improvements like larger volume sizes, security features like encryption, compression and quotas. It also has better performance, especially on large volumes.
The document summarizes Linux file systems and input/output. It discusses:
1) Linux file systems arrange files on disk storage in a structured collection. Common file systems include Ext2, Ext3, Ext4, JFS, ReiserFS, XFS, and Btrfs.
2) Linux uses two caches for input/output - a page cache that is unified with virtual memory, and a buffer cache for metadata.
3) Devices are classified as block, character, or network. Block devices allow random access to fixed blocks, while character devices don't need to support regular file functionality. Network devices use the kernel's networking subsystem.
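The block/character classification above is encoded in a file's mode bits, the same bits `ls -l` uses to print 'b', 'c', 'd', or '-'. A sketch using constructed mode values rather than real device nodes, so it runs anywhere:

```python
# Classify a file type from its st_mode bits. The modes below are
# constructed values for illustration, not read from real devices.
import stat

def classify(mode: int) -> str:
    if stat.S_ISBLK(mode):
        return "block"
    if stat.S_ISCHR(mode):
        return "character"
    if stat.S_ISDIR(mode):
        return "directory"
    return "regular"

block_mode = stat.S_IFBLK | 0o600   # e.g. a disk like /dev/sda
char_mode = stat.S_IFCHR | 0o666    # e.g. a terminal or /dev/null
```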
This document provides an overview of the Linux filesystem, including its structure, key directories, and concepts like mounting. It describes the Filesystem Hierarchy Standard which defines the main directories and their contents. Key points covered include that everything in Linux is treated as a file, the top-level root directory is "/", essential directories like /bin, /dev, /etc, /home, /lib, /proc, /sbin, /usr, /var are explained, and mounting additional filesystems is described.
The document discusses different file systems including FAT, FAT12, FAT16, FAT32, and NTFS. FAT (File Allocation Table) was created by Microsoft in 1977 and tracks pieces of files that may be fragmented across disks. FAT8 supported 8-inch floppies. FAT12 was used for floppy disks. FAT16 supported volumes up to 2GB. FAT32 supports drives up to 2TB. NTFS (New Technology File System) was introduced in 1993 and supports features like large partition and file sizes, security, reliability, and compression.
NTFS is a file system introduced by Microsoft in 1993 for Windows NT operating systems. It improved on previous file systems with features like larger storage capacity support, redundancy, security, and performance improvements important for businesses. NTFS formats volumes with system files including the Master File Table that stores metadata for all files and folders. It provides security, compression, encryption and other advanced features through file attributes. NTFS also supports features like sparse files, recoverability, and alternate data streams.
The document summarizes the standard directory structure and purposes of the main directories in a Linux file system. The root directory (/) contains all other directories and files on the system. Key directories include /bin for essential executable binaries, /dev for device files, /etc for system configuration files, /home for user files, /lib for shared libraries, /sbin for system administration binaries, /tmp for temporary files, /usr for user programs and documentation, and /var for files that change frequently like logs.
This document provides an overview of FAT and NTFS filing systems. It discusses key terms like files, directories, sectors, and clusters. FAT was first developed by Bill Gates in 1976 and includes versions like FAT12, FAT16, and FAT32. NTFS was developed later by Microsoft for Windows NT. It provides more security, reliability, and efficiency compared to FAT. The document outlines the advantages and disadvantages of both filing systems.
Linux uses a logical file system hierarchy standard to organize files across multiple directories and file systems. The root directory is at the top level and is represented by a forward slash. Key directories include /bin for executable commands, /lib for shared libraries, /etc for configuration files, and /var for dynamic data. Common file systems in Linux include ext2, ext3, ReiserFS, tmpfs, and proc.
The document discusses Microsoft file structures. It explains that sectors are grouped into clusters to store files, and that clusters are numbered sequentially starting at 0 in NTFS and 2 in FAT. The Master Boot Record stores partition information. Files deleted in FAT have the first byte of their directory entry replaced with 0xE5 (often displayed as a sigma), while files deleted in NTFS are removed from the Master File Table listing. The Master File Table in NTFS tracks file metadata in records containing attribute IDs.
A presentation on the Ext4 file system and the evolution of the ext filesystem family in the Linux operating system. It notes that Linux uses a virtual filesystem (VFS) layer and provides a comparison of the ext filesystem generations.
The document discusses the FAT32 file system. It describes FAT32 as a file allocation table that stores files and locates them on a hard drive using 32-bit values instead of 16-bit like the original FAT. FAT32 supports larger volume sizes than FAT16 while maintaining compatibility. It is commonly used for removable drives and supports file sizes up to 4GB. The document also describes the volume boot record structure of a FAT32 system including fields like bytes per sector, sectors per cluster, and total number of sectors.
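The volume boot record fields named above live at fixed offsets in the first sector and are stored little-endian. A sketch that parses a few of them; the buffer is hand-built test data at the standard BIOS Parameter Block offsets, not read from a real disk:

```python
# Parse a few FAT32 BIOS Parameter Block fields from a boot sector.
# The buffer is synthetic test data; offsets follow the FAT32 layout
# (bytes/sector at 11, sectors/cluster at 13, total sectors at 32).
import struct

boot_sector = bytearray(512)
struct.pack_into("<H", boot_sector, 11, 512)        # bytes per sector
struct.pack_into("<B", boot_sector, 13, 8)          # sectors per cluster
struct.pack_into("<I", boot_sector, 32, 2_097_152)  # total sectors

bytes_per_sector = struct.unpack_from("<H", boot_sector, 11)[0]
sectors_per_cluster = struct.unpack_from("<B", boot_sector, 13)[0]
total_sectors = struct.unpack_from("<I", boot_sector, 32)[0]
volume_bytes = bytes_per_sector * total_sectors     # 1 GiB here
```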
- FAT (File Allocation Table) was the original file system developed by Microsoft for MS-DOS and early versions of Windows to organize files on disks. It stored metadata in a file allocation table and used a linked-list data structure.
- NTFS (New Technology File System) was developed later to replace FAT as disk sizes increased. NTFS uses more advanced data structures like B-trees and provides features like security, compression, encryption, and journaling.
- In NTFS, files are stored in clusters across the disk. The master file table stores metadata about every file and directory, including attributes like security and extended properties. System files also store information to enable features like recoverability.
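The linked-list structure mentioned for FAT can be sketched concretely: the table entry for each cluster holds the number of the next cluster in the file. The table values below are made up for illustration:

```python
# Follow a FAT cluster chain: fat[cluster] gives the next cluster,
# forming a linked list. Values are illustrative; real FAT32 marks
# end-of-chain with values >= 0x0FFFFFF8 and reserves clusters 0 and 1.

EOC = -1  # simplified end-of-chain marker

fat = {2: 5, 5: 6, 6: 9, 9: EOC}

def cluster_chain(fat, first_cluster):
    """Return the ordered list of clusters belonging to one file."""
    chain, cluster = [], first_cluster
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain
```

Walking the chain is why reading a badly fragmented FAT file requires one table lookup per cluster.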
The Linux file system structure contains a root directory (/) that contains subdirectories for essential system files and programs. Some key subdirectories are /bin and /sbin for essential binaries, /etc for configuration files, /dev for device files, /proc for process information, /var for variable and log files, /tmp for temporary files, /usr for user-installed programs, /home for user home directories, /boot for boot loader files, and /lib for library files supporting binaries. Additional subdirectories include /opt for optional software, /mnt and /media for temporarily mounting file systems, and /srv for server-specific files.
Physical file organization techniques include heap files, sorted sequential access method (SAM) files, and indexed sequential access method (ISAM) files. SAM files organize records sequentially by key for efficient sequential retrieval but slow direct access. ISAM files use an index for fast direct retrieval by key as well as sequential access. Hashed or direct files provide very fast direct access but inefficient sequential access. Secondary indexes like linked lists, inverted lists, and B-trees provide access to records based on non-key fields.
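The trade-off described for hashed (direct) files can be sketched in a few lines: the key hashes straight to a bucket, giving fast direct access but no useful sequential order. The record values are invented for the example:

```python
# Sketch of a hashed/direct file organization: hash the key to pick a
# bucket, then scan the (short) bucket. Direct access is O(1) on average;
# sequential access by key order is inefficient because buckets are unordered.

NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(key, record):
    buckets[hash(key) % NUM_BUCKETS].append((key, record))

def lookup(key):
    for k, record in buckets[hash(key) % NUM_BUCKETS]:
        if k == key:
            return record
    return None

insert(1042, "Smith")
insert(2077, "Jones")
```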
This document discusses different file organization methods including sequential files, indexed sequential files, indexed files, and direct/hashed files. Sequential files store records in the order they are entered with each record having a fixed format. Indexed sequential files add an index to allow random access by key fields while maintaining sequential ordering. Indexed files use multiple indexes on different keys to allow searching by different fields. Direct/hashed files directly access records by key values using hashing techniques for fast random access.
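An indexed sequential (ISAM-style) organization can be sketched as records kept in key order with a key index searched first. This toy version uses an in-memory list and binary search in place of on-disk index blocks:

```python
# Sketch of indexed sequential access: records stay in key order, and a
# key index is binary-searched to locate a record directly, while the
# ordered records still support efficient sequential scans.
import bisect

records = [(10, "a"), (20, "b"), (30, "c"), (40, "d"), (50, "e")]
keys = [k for k, _ in records]   # the index over the key field

def find(key):
    """Binary-search the index, then fetch the record at that position."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return records[i][1]
    return None
```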
The document discusses file system internals and how operating systems abstract physical storage. It covers common file systems, allocation strategies like contiguous, dynamic, and linked list allocation. Issues like external fragmentation and metadata management are discussed. Directory implementation using inodes and attributes is also summarized.
Mutual exclusion is required when multiple threads or processes access shared resources concurrently to prevent race conditions. Critical regions define sections of code where shared resources are accessed exclusively by one thread at a time. Semaphores and monitors provide synchronization mechanisms for processes to coordinate access to shared resources and critical regions in a way that prevents race conditions and ensures progress.
Yn 2016 komt de nije digitale metoade Frysk foar it pû beskikber. Yn dizze workshop jouwe wy in oersjoch fan de ferskillende learlinen dêr’t skoallen aanst út kieze kinne. Wy litte foarbylden sjen fan hoe’t mei help fan ICT in passend learmiddelenoanbod gearstald wurde kin, ôfhinklik fan it ferlet fan skoallen en de taaldoelen foar it (fak) Frysk.
The document discusses the basis and development of music-related English terms. It notes that many Italian musical terms originated in Italy as the center of music and religion, and birthplace of opera. It also discusses English becoming a global lingua franca, and the etymological origins and processes of generalization, specialization, meaning transfer, and addition that have influenced musical English vocabulary over time and across cultures. The conclusion is that musical English words have developed through syntax, morphology and other changes influenced by historical and cultural contexts.
This document provides an overview of file system topics. It begins with an introduction to file systems and their relationship to operating system architecture. It then discusses the Virtual File System (VFS) interface and key metadata components like super blocks, inodes, and directory entries. The document reviews common file system optimizations based on memory hierarchy and storage characteristics. Examples of specific file systems are given, including Ext4, NTFS, ZFS, NFS, and Google File System. The document concludes by soliciting any questions.
File systems organize and store data on various storage media like hard drives. They consist of structures like directories and files to track allocated space, file names and locations. Key functions include managing free space, directories, and file storage locations. Common file systems include FAT, NTFS, disk, flash, tape, database, network and special purpose file systems. File systems use inodes, directories, block allocation maps and other metadata to organize and track files.
The document provides an overview of distributed file systems, including NFS, AFS, Lustre, and others. It discusses key aspects like scalability, consistency, caching, replication, and fault tolerance. Lustre is highlighted as an example of a distributed file system that aims to remove bottlenecks and achieve high scalability through an object-based design with separate metadata and storage servers.
The document discusses Linux file systems. It describes that Linux uses a hierarchical tree structure with everything treated as a file. It explains the basic components of a file system including the boot block, super block, inode list, and block list. It then covers different types of file systems like ext2, ext3, ext4, FAT32, NTFS, and network file systems like NFS and SMB. It also discusses the physical structure of files on disk and basic Linux file system commands like mount, unmount, ls, and file paths.
This document provides an introduction and overview of FreeNAS 8.3.1. It outlines the core features of FreeNAS including the ability to create storage volumes using UFS or the ZFS filesystem. It describes how ZFS provides data integrity and flexibility through features like RAIDZ, snapshots, and clones. The document also covers installing and configuring plugins to extend FreeNAS functionality, and an overview of the new GELI full disk encryption support. Additional resources for FreeNAS support are provided at the end.
This document provides an overview of file systems. It discusses the physical structure of a hard drive and how the master boot record partitions it into sections. File systems then organize data on those partitions by creating structures like directories, files and metadata. Common file systems include FAT, Ext, and others. File systems in Windows and Unix/Linux are compared. Journaling is introduced as a way to prevent data corruption.
This document provides an overview of file systems. It discusses the physical structure of a hard drive and how the master boot record partitions it into sections. File systems then organize data on those partitions by creating structures like directories, files and metadata. Common file systems include FAT, Ext, and others. File systems in Windows and Unix/Linux are described and compared. Journaling is introduced as a way to prevent data corruption. Resources for further reading are also included.
This document provides an overview of the Linux filesystem, including its structure, key directories, and concepts like mounting. It describes the Filesystem Hierarchy Standard which defines the main directories and their contents. Key points covered include that everything in Linux is treated as a file, the top-level root directory is "/", essential directories like /bin, /dev, /etc, /home, /lib, /proc, /sbin, /usr, /var are explained, and mounting additional filesystems is described.
The document discusses digital forensics and evidence extraction from NTFS computers. It describes how NTFS works, including features like the master file table, alternate data streams, volume shadow copies, and more. It explains that deleted or hidden data can potentially be uncovered by examining registry entries, volume shadow copies, unallocated space in the MFT, and clusters marked as bad.
This document provides an overview of Linux basics including:
- A brief history of Linux and how it originated from UNIX.
- An overview of Linux components including the kernel, userspace programs, shells, and how they interact.
- Instructions for installing Linux distributions like Red Hat, Debian, and SuSE.
- How to use basic Linux commands and work with files, directories, and processes.
- Where to find Linux documentation using commands like man and info.
It serves as an introductory guide to getting started with the Linux operating system.
The document discusses file management and directories. It describes block management strategies like contiguous allocation and linked lists. It discusses reading and writing byte streams which involves packing and unpacking blocks of data. It also covers supporting high-level file abstractions like structured sequential files and indexed sequential files. Finally, it discusses directories and their structures like hierarchical and graph-based organizations.
The document discusses Python FUSE (Filesystem in Userspace) which allows users to create their own filesystems in userspace without modifying the kernel. It provides an overview of filesystem concepts, a brief history of filesystem development, and introduces FUSE as a way to develop filesystems using Python and other languages. Code examples are provided demonstrating how to create a basic hash table based filesystem in Python using the FUSE API.
This document provides an overview and summary of key concepts related to file management and low-level file systems. It discusses the structure of hard disks and how they are organized into blocks, tracks, and cylinders. It describes low-level file systems as implementing byte-stream files and stream-block translation. Common block management strategies for organizing file data blocks on storage devices include contiguous allocation, linked lists, and indexed allocation. The document also summarizes key file system calls like open, read, write, and close used in low-level file systems.
This document provides an overview of Linux file systems for beginners. It discusses disks and partitions, including common disk types like SATA, IDE, SCSI, and RAID configurations. Logical Volume Management (LVM) is introduced as a way to flexibly manage storage space using physical volumes, volume groups, and logical volumes. Common file system formats are also covered, such as FAT, NTFS, HFS+, Ext3, Ext4, and Btrfs. Key file system utilities are explained for viewing disk usage and managing mounts.
This document provides an overview of Linux file systems for beginners. It discusses disks and partitions, including common disk types like SATA, IDE, SCSI, and RAID configurations. Logical Volume Management (LVM) is introduced as a way to flexibly manage storage space using physical volumes, volume groups, and logical volumes. Common file system formats are also covered, such as FAT, NTFS, HFS+, Ext3, Ext4, and Btrfs. Key file system utilities are explained for viewing disk usage and managing mounts. The document aims to build practical understanding of configuring and using storage on Linux systems.
The document discusses various concepts related to files in a UNIX system. It defines files as building blocks of an operating system and describes the different types of files like regular files, directory files, device files, FIFO files, etc. It explains key concepts like inodes, file attributes, directory structure, hard links and symbolic links. The document provides detailed information about each file type and how they are represented and used in a UNIX file system.
Disk areas allocation in flash disks include:
1) Boot sector which contains information about other areas sizes
2) FRT area which stores file records
3) FAT area which consists of FAT sectors for allocation data
4) Data area which contains actual data sectors of files
5) Transaction journal area which caches modified sectors during disk transactions
This presentation is from the ZFS Tutorial presented at the USENIX LISA09 Conference at Baltimore, Maryland in November 2009.
Later versions are available on slideshare.net, too.
This document discusses problems involving modulation techniques and signal detection. Problem 1 asks to sketch modulated waves for 8-ary ASK, 4-ary PSK, and 4-ary FSK modulation of a binary sequence. Problem 2 provides a formula for probability of error for 4-ary PAM and asks to calculate average power. Problem 3 does similarly for 4-ary QAM. Problem 4 describes a binary system with an integrate-and-dump detector and asks to analyze the detector and find minimum error probability.
This document contains 4 problems related to digital and analog communications techniques:
1) It describes Delta Modulation and how it works to encode analog signals, including issues with slope overload. It also introduces Adaptive Delta Modulation as an improvement.
2) It compares Differential Pulse Code Modulation to standard Pulse Code Modulation.
3) It provides exercises on encoding a binary sequence using BPSK, DPSK, and 4-ary ASK modulation. It also covers removing phase ambiguity in DPSK.
4) It presents a problem on time division multiplexing analog and digital sources at different data rates.
The document discusses problems related to analogue and digital communications. It contains 5 problems:
1) Drawing the spectrum of a message signal sampled at different rates and specifying the cutoff frequency to fully recover the original signal from its sampled version.
2) Sketching the resulting pulse code modulation (PCM) wave for one cycle of an input signal that is quantized using a 4-bit binary system.
3) Determining the number of quantizing levels, quantizer step size, average quantizing noise power, and output signal-to-noise ratio for a sinusoidal modulating wave quantized using an n-bit code word.
4) Finding the Nyquist sampling rate for two signals.
This document provides a tutorial on analogue and digital communications. It contains 5 questions about Fourier transforms and frequency domain analysis of signals. Question 1 asks about the Fourier transform and spectra of a continuous time signal. Question 2 finds the Fourier transform and spectra of another signal. Question 3 analyzes a signal composed of two sinusoidal components. Questions 4 and 5 determine the Fourier transforms of rectangular and truncated sinusoidal signals.
This document contains two questions about FM signals and phase-locked loops (PLL). Question 1 asks about the instantaneous frequency, modulation index, effective bandwidth using Carson's rule and universal rule, and Fourier transform of an FM signal. Question 2 provides a block diagram of a PLL and asks the student to show equations for the voltage controlled oscillator output, input to the VCO, and how the PLL locks into the input signal's phase. It also asks how the input to the VCO would be related to the message signal for an FM input.
1) The document describes digital signal detection techniques at the receiver of a digital communication system.
2) It discusses the maximum a posteriori probability (MAP) and maximum likelihood (ML) detection criteria. The ML criterion reduces to choosing the signal that minimizes the Euclidean distance between the received signal vector and possible transmitted signals.
3) Detection errors occur when the received signal, distorted by noise, falls inside the decision region of another signal. The probability of error depends on the noise distribution around the actual transmitted signal.
1. This document discusses M-ary modulation techniques, which allow more than two amplitude, phase, or frequency levels to transmit more bits per symbol. This increases transmission rate or reduces bandwidth compared to binary modulation.
2. M-ary modulation techniques discussed include M-ASK, M-PSK, M-FSK, and M-QAM. M-ASK maps k bits to one of M amplitude levels. M-PSK maps k bits to one of M phase shifts of the carrier. M-QAM combines M-ASK with quadrature carriers to modulate both amplitude and phase.
3. Higher order modulation like M-QAM can significantly increase transmission rate but requires more transmission power and complex
The document provides an overview of digital passband modulation techniques. It discusses binary modulation schemes including amplitude-shift keying (ASK), frequency-shift keying (FSK), and phase-shift keying (BPSK). It also covers differential phase-shift keying (DPSK), which removes phase ambiguity in BPSK using differential encoding and decoding. Key aspects like signal representation, spectrum, and detection methods are described for each technique.
This document discusses an integrate-and-dump detector used in digital communications. It describes the operation of the integrate-and-dump detector, showing how it integrates the received signal plus noise over each symbol interval. The output of the integrator is used to detect whether a 1 or 0 was transmitted. An expression is derived for the probability of detection error in terms of the signal amplitude, noise power spectral density, and symbol interval. An example is also provided to calculate the error probability for a given binary signaling scheme and system parameters.
This document discusses quantization in analog-to-digital conversion. It describes how an analog signal is sampled, quantized by representing samples with discrete levels, and encoded into a digital signal. Quantization introduces noise that can be reduced by using more quantization levels or a smaller step size between levels. Non-uniform quantization allocates more levels to signal amplitudes that occur more frequently to improve efficiency compared to uniform quantization.
This document discusses multiplexing techniques used in communications systems. It describes frequency division multiplexing (FDM), time division multiplexing (TDM), and digital multiplexing used for digital telephone systems. FDM combines signals by assigning each a unique portion of the frequency spectrum. TDM combines signals by assigning each a unique time slot within a repeating time frame. Digital telephone systems use TDM to combine 24 digitized voice channels into a 193-bit frame transmitted every 125 microseconds.
This document discusses pulse code modulation (PCM) for analog to digital conversion. PCM involves sampling an analog signal, quantizing the sample values, and encoding the quantized levels into binary codes. The sampling rate must be at least twice the bandwidth of the analog signal. More bits per code provide higher quality but require more bandwidth. PCM is used in telephone systems with 8-bit coding at 8 kHz sampling, and in compact discs with 16-bit coding at 44.1 kHz sampling.
This document discusses delta modulation and adaptive delta modulation techniques for analogue to digital conversion.
Delta modulation encodes the difference between the input signal and a reference signal into a single bit per sample, creating a staircase-like approximation of the original signal. Adaptive delta modulation varies the step size according to the input signal level to prevent slope overload. Differential pulse code modulation encodes the difference between the current and predicted sample values, sending this difference value instead of absolute sample amplitudes.
This document discusses quantization in analog-to-digital conversion. It describes how an analog signal is sampled, quantized by representing samples with discrete levels, and encoded into a digital signal. Quantization introduces noise that can be reduced by using more quantization levels or a smaller step size between levels. Non-uniform quantization allocates more levels to signal amplitudes that occur more frequently to more efficiently represent the signal.
This document discusses quantization in analog-to-digital conversion. It describes how an analog signal is sampled, quantized by representing samples with discrete levels, and encoded into a digital signal. Quantization introduces noise that can be reduced by using more quantization levels or a smaller step size between levels. Non-uniform quantization allocates more levels to signal amplitudes that occur more frequently to more efficiently represent the signal.
This document discusses pulse modulation techniques in communications. It begins by reviewing continuous-wave modulation techniques studied previously, such as amplitude modulation and angle modulation. It then previews that pulse modulation will be studied next, including analog pulse modulation where a pulse feature varies continuously with the message, and digital pulse modulation using a sequence of coded pulses. The document provides explanations and equations regarding sampling of continuous-time signals, the sampling theorem, and recovery of the original analog signal from its samples. It also introduces pulse amplitude modulation (PAM) using natural and flat-top sampling, as well as pulse duration modulation (PDM) and pulse position modulation (PPM).
This document describes the operation of a phase-locked loop (PLL) circuit for demodulating frequency modulated (FM) signals. It contains 6 pages describing:
1) The basic block diagram of a PLL including a voltage controlled oscillator, multiplier, loop filter, and voltage controlled oscillator.
2) The mathematical equations showing how the PLL locks the phase and frequency of the voltage controlled oscillator to that of the incoming FM signal.
3) When the PLL is in phase lock and near phase lock based on the phase error between the signals.
4) How the PLL can be used to demodulate an FM signal and recover the original message signal by matching the phase and frequency of the voltage controlled
This document discusses the transmission of frequency modulated (FM) waves. It explains that the bandwidth of an FM wave is theoretically infinite but is effectively limited in practice. Carson's rule provides an estimate of the bandwidth but is not always accurate. The document also describes Armstrong's method for generating FM waves using narrowband phase modulation and frequency multiplication, and demodulating FM waves using envelope detection after taking the derivative of the received signal.
This document discusses angle modulation techniques for communications. It begins by defining phase modulation (PM) and frequency modulation (FM), where the carrier angle or frequency varies with the modulating signal. Key differences between PM and FM are outlined. Properties of angle modulation like constant transmitted power and irregular zero crossings are described. Narrowband FM is analyzed using sinusoidal modulation. Generation of narrowband FM using an integrator is also shown.
This document discusses vestigial sideband (VSB) modulation, which is a modulation technique that lies between single sideband (SSB) and double sideband suppressed carrier (DSB-SC). VSB modulation involves transmitting a vestige, or portion, of the sideband to reduce bandwidth usage compared to DSB-SC. The key aspects covered are:
1) VSB modulation achieves a transmission bandwidth of BW + fv, where fv is the vestige bandwidth which is typically 25% of the message bandwidth BW.
2) A VSB modulator uses a product modulator and VSB shaping filter to generate the modulated signal.
3) VSB demodulation involves product
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
1. File Management
Tanenbaum, Chapter 4
COMP3231
Operating Systems
Leonid Ryzhyk
Kevin Elphinstone
2. Outline
• Files and directories from the programmer (and user) perspective
• File and directory internals – the operating system perspective
3. Summary of the FS abstraction

  User's view                              Under the hood
  Uniform namespace                        Heterogeneous collection of storage devices
  Hierarchical structure                   Flat address space
  Arbitrarily-sized files                  Fixed-size blocks
  Symbolic file names                      Numeric block addresses
  Contiguous address space inside a file   Fragmentation
  Access control                           No access control
                                           Tools for formatting, defragmentation,
                                           backup, consistency checking
4. A brief history of file systems
• Early batch processing systems
– No OS
– I/O from/to punch cards
– Tapes and drums for external storage, but no FS
– Rudimentary library support for reading/writing tapes and drums
IBM 709 [1958]
5. A brief history of file systems
• The first file systems were single-level (everything in one directory)
• Files were stored in contiguous chunks
  – Maximal file size must be known in advance
• Now you can edit a program and save it in a named file on the tape!
PDP-8 with DECTape [1965]
6. A brief history of file systems
• Time-sharing OSs
  – Required full-fledged file systems
• MULTICS
  – Multilevel directory structure (keep files that belong to different users separate)
  – Access control lists
  – Symbolic links
Honeywell 6180 running MULTICS [1976]
7. A brief history of file systems
• UNIX
  – Based on ideas from MULTICS
  – Simpler access control model
  – Everything is a file!
PDP-7
8. OS storage stack
The layers, from application down to hardware:

  Application
  Syscall interface (creat, open, read, write, ...)
  FD table
  OF table
  VFS
  FS
  Buffer cache
  Disk scheduler
  Device driver
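The syscall interface at the top of the stack (creat, open, read, write) can be sketched with Python's os module, which wraps the same POSIX calls; the file name and contents here are arbitrary:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# creat() is equivalent to open() with O_CREAT | O_WRONLY | O_TRUNC.
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, storage stack\n")
os.close(fd)

# Re-open for reading; the kernel hands back a small integer file
# descriptor that indexes the per-process FD table.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
print(data)  # b'hello, storage stack\n'
```

Every call crosses the syscall interface and descends through the layers listed above before any disk block is touched.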
9. OS storage stack
• Hard disk platters: tracks, sectors
10. OS storage stack
• Disk controller:
  – Hides disk geometry, bad sectors
  – Exposes a linear sequence of blocks, 0 to N
11. OS storage stack
• Device driver:
  – Hides the device-specific protocol
  – Exposes a block-device interface (linear sequence of blocks, 0 to N)
12. OS storage stack
• File system:
  – Hides the physical location of data on the disk
  – Exposes: directory hierarchy, symbolic file names, random-access files, protection
13. OS storage stack
• Optimisations:
  – Buffer cache: keep recently accessed disk blocks in memory
  – Disk scheduler: schedule disk accesses from multiple processes for performance and fairness
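The buffer-cache optimisation can be sketched as a toy least-recently-used cache. The `BufferCache` class, the `fake_disk` stand-in, and the two-block capacity are invented for the illustration; a real buffer cache also handles writes, dirty blocks, and write-back:

```python
from collections import OrderedDict

class BufferCache:
    """Keep recently accessed disk blocks in memory (LRU eviction)."""
    def __init__(self, capacity, read_block_from_disk):
        self.capacity = capacity
        self.read_block_from_disk = read_block_from_disk
        self.blocks = OrderedDict()  # block number -> block data

    def read(self, block_no):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)   # mark most recently used
            return self.blocks[block_no]        # cache hit: no disk access
        data = self.read_block_from_disk(block_no)  # cache miss
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

disk_reads = []
def fake_disk(block_no):
    disk_reads.append(block_no)
    return b"block-%d" % block_no

cache = BufferCache(2, fake_disk)
cache.read(0)
cache.read(1)
cache.read(0)        # hit: no new disk read
cache.read(2)        # miss: evicts block 1, the least recently used
print(disk_reads)    # [0, 1, 2]
```

Only three of the five reads reached the "disk", which is exactly the saving the slide describes.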
14. OS storage stack
• Virtual FS: unified interface to multiple FSs
  – e.g. FS and FS2, each with its own disk scheduler and device driver below the shared buffer cache
15. OS storage stack
• File descriptor and open file tables:
  – Keep track of files opened by user-level processes
  – Implement the semantics of FS syscalls
16. OS storage stack
Application
FD table
OF table
VFS
FS
Buffer cache
Disk scheduler
Device driver
16
17. File Names
• File system must provide a convenient naming
scheme
– Textual Names
– May have restrictions
• Only certain characters
– E.g. no ‘/’ characters
• Limited length
• Only certain format
– E.g. DOS, 8 + 3
– Case (in)sensitive
– Names may obey conventions (.c files or C files)
• Interpreted by tools (UNIX)
• Interpreted by operating system (Windows)
17
19. File Structure
• Three kinds of files
– byte sequence
– record sequence
– key-based, tree structured
• e.g. IBM’s indexed sequential access method (ISAM)
19
20. File Structure
• Stream of Bytes
– OS considers a file to be unstructured
– Simplifies file management for the OS
– Applications can impose their own structure
– Used by UNIX, Windows, most modern OSes
• Records
– Collection of bytes treated as a unit
• Example: employee record
– Operations at the level of records (read_rec, write_rec)
– File is a collection of similar records
– OS can optimise operations on records
20
21. File Structure
• Tree of Records
– Records of variable length
– Each has an associated key
– Record retrieval based on key
– Used on some data processing systems (mainframes)
• Mostly incorporated into modern databases
21
22. File Types
• Regular files
• Directories
• Device Files
– May be divided into
• Character Devices – stream of bytes
• Block Devices
• Some systems distinguish between regular file types
– ASCII text files, binary files
• At minimum, all systems recognise their own executable
file format
– May use a magic number
22
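Magic-number recognition can be sketched in a few lines. As an illustration (not tied to any particular OS's implementation), ELF executables on Linux begin with the four bytes 0x7f 'E' 'L' 'F'; the helper name `looks_like_elf` is ours:

```python
# Sketch: recognising an executable by its magic number.
# ELF binaries start with the 4 bytes 0x7f 'E' 'L' 'F'.
ELF_MAGIC = b"\x7fELF"

def looks_like_elf(path):
    """Return True if the file starts with the ELF magic number."""
    with open(path, "rb") as f:
        return f.read(4) == ELF_MAGIC
```

Real loaders check more of the header (word size, byte order, machine type); the first four bytes are only the minimum test.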
23. File Access Types
• Sequential access
– read all bytes/records from the beginning
– cannot jump around, could rewind or back up
– convenient when medium was mag tape
• Random access
– bytes/records read in any order
– essential for data base systems
– read can be …
• move file pointer (seek), then read or
– lseek(location,…);read(…)
• each read specifies the file pointer
– read(location,…)
23
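The two random-access styles above can be sketched with the raw POSIX-style calls that Python's os module exposes (the file name is illustrative):

```python
import os

# Create a small file to read from (name is illustrative)
fd = os.open("demo.dat", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"abcdefgh")

# Style 1: move the file pointer with lseek, then read
os.lseek(fd, 4, os.SEEK_SET)   # jump to byte 4
chunk = os.read(fd, 4)         # reads b"efgh"

# Style 2: specify the location with the read itself.
# pread reads at an offset without moving the file pointer.
first = os.pread(fd, 4, 0)     # reads b"abcd"

os.close(fd)
os.remove("demo.dat")
```

Style 2 is also what makes concurrent readers simple: each read carries its own offset, so no shared file pointer is updated.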
28. File Organisation and Access
Programmer’s Perspective
• Given an operating system supporting
unstructured files that are a stream-of-bytes,
how can one organise the contents of the files?
28
29. File Organisation and Access
Programmer’s Perspective
• Performance considerations:
– File system performance affects overall system performance
– Organisation of the file system on disk affects performance
– File organisation (data layout inside file) affects performance
• indirectly determines access patterns
• Possible access patterns:
– Read the whole file
– Read individual blocks or records from a file
– Read blocks or records preceding or following the current one
– Retrieve a set of records
– Write a whole file sequentially
– Insert/delete/update records in a file
– Update blocks in a file
29
30. Classic File Organisations
• There are many ways to organise a file’s
contents, here are just a few basic
methods
– Unstructured Stream (Pile)
– Sequential Records
– Indexed Sequential Records
– Direct or Hashed Records
30
31. Criteria for File Organization
Things to consider when designing file layout
• Rapid access
– Needed when accessing a single record
– Not needed for batch mode
• read from start to finish
• Ease of update
– File on CD-ROM will not be updated, so this is not a concern
• Economy of storage
– There should be minimal redundancy in the data
– Redundancy can be used to speed access such as an index
• Simple maintenance
• Reliability
31
32. Unstructured Stream
• Data are collected in
the order they arrive
• Purpose is to
accumulate a mass of
data and save it
• Records may have
different fields
• No structure
• Record access is by
exhaustive search
32
33. Unstructured Stream Performance
• Update
– Same size record -
okay
– Variable size - poor
• Retrieval
– Single record - poor
– Subset – poor
– Exhaustive - okay
33
34. The Sequential File
• Fixed format used for
records
• Records are the same
length
• Field names and lengths
are attributes of the file
• One field is the key field
– Uniquely identifies the
record
– Records are stored in key
sequence
34
35. The Sequential File
• Update
– Same size record -
good
– Variable size – No
• Retrieval
– Single record - poor
– Subset – poor
– Exhaustive - okay
35
36. Indexed Sequential File
• Index provides a lookup capability to quickly reach the
vicinity of the desired record
– Contains the key field and a pointer to (location in) the
main file
– Index is searched to find the highest key value that is
equal to or less than the desired key value
– Search continues in the main file at the location indicated
by the pointer
(Diagram: an index of key/file-pointer pairs referencing
records in the main file)
36
37. Indexed Sequential File
• Update
– Same size record - good
– Variable size - No
• Retrieval
– Single record - good
– Subset – poor
– Exhaustive - okay
37
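The lookup rule above (find the highest index key that is equal to or less than the desired key, then scan the main file from the indicated position) can be sketched over an in-memory list standing in for the sequential file; all names here are ours:

```python
import bisect

# Main file: fixed-format records stored in key sequence
main_file = [(10, "alice"), (20, "bob"), (35, "carol"),
             (40, "dave"), (55, "erin"), (70, "frank")]

# Sparse index: a subset of keys, each with its position in the main file
index_keys = [10, 35, 55]
index_ptrs = [0, 2, 4]

def lookup(key):
    """Highest index key <= key, then sequential scan of the main file."""
    i = bisect.bisect_right(index_keys, key) - 1
    if i < 0:
        return None                 # key precedes every indexed key
    for k, record in main_file[index_ptrs[i]:]:
        if k == key:
            return record
        if k > key:                 # records are in key order: key absent
            return None
    return None
```

The index keeps the scan short: only the run of records between two index entries is read, instead of the whole file.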
38. File Directories
• Contains information about files
– Attributes
– Location
– Ownership
• Directory itself is a file owned by the
operating system
• Provides mapping between file names and
the files themselves
38
40. Hierarchical, or Tree-Structured
Directory
• Files can be located by following a path
from the root, or master, directory down
various branches
– This is the absolute pathname for the file
• Can have several files with the same file
name as long as they have unique path
names
40
41. Current Working Directory
• Always specifying the absolute pathname
for a file is tedious!
• Introduce the idea of a working directory
– Files are referenced relative to the working
directory
• Example: cwd = /home/leonid
.profile = /home/leonid/.profile
41
42. Relative and Absolute
Pathnames
• Absolute pathname
– A path specified from the root of the file system to the file
• Relative pathname
– A pathname specified from the cwd
• Note: ‘.’ (dot) and ‘..’ (dotdot) refer to current and parent
directory
Example: cwd = /home/leonid
../../etc/passwd
/etc/passwd
../leonid/../.././etc/passwd
Are all the same file
42
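The equivalence of those three names can be checked mechanically: resolving a relative pathname amounts to joining it onto the cwd and collapsing '.' and '..' components. A sketch that ignores symbolic links (the helper name `resolve` is ours):

```python
import posixpath

cwd = "/home/leonid"

def resolve(name):
    """Join a relative name onto cwd and collapse '.' and '..'.

    Ignores symbolic links; a real kernel resolves component by
    component, following links as it goes."""
    if not name.startswith("/"):        # relative pathname
        name = posixpath.join(cwd, name)
    return posixpath.normpath(name)
```

With symlinks present, purely textual collapsing of '..' can give the wrong answer, which is why os.path.realpath exists.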
44. Nice properties of UNIX naming
• Simple, regular format
– Names referring to different servers, objects,
etc., have the same syntax.
• Regular tools can be used where specialised tools
would otherwise be needed.
• Location independent
– Objects can be distributed or migrated, and
continue with the same names.
44
45. An example of a bad naming
convention
• From Rob Pike and Peter Weinberger,
“The Hideous Name”, Bell Labs TR
UCBVAX::SYS$DISK:[ROB.BIN]CAT_V.EXE;13
45
46. File Sharing
• In a multiuser system, allow files to be
shared among users
• Two issues
– Access rights
– Management of simultaneous access
46
47. Access Rights
• None
– User may not know of the existence of the file
– User is not allowed to read the user directory
that includes the file
• Knowledge
– User can only determine that the file exists
and who its owner is
47
48. Access Rights
• Execution
– The user can load and execute a program but
cannot copy it
• Reading
– The user can read the file for any purpose,
including copying and execution
• Appending
– The user can add data to the file but cannot
modify or delete any of the file’s contents
48
49. Access Rights
• Updating
– The user can modify, delete, and add to the
file’s data. This includes creating the file,
rewriting it, and removing all or part of the
data
• Changing protection
– User can change access rights granted to
other users
• Deletion
– User can delete the file
49
50. Access Rights
• Owners
– Has all rights previously listed
– May grant rights to others using the following
classes of users
• Specific user
• User groups
• All users, for public files
50
51. Case Study:
UNIX Access Permissions
total 1704
drwxr-x--- 3 kevine kevine 4096 Oct 14 08:13 .
drwxr-x--- 3 kevine kevine 4096 Oct 14 08:14 ..
drwxr-x--- 2 kevine kevine 4096 Oct 14 08:12 backup
-rw-r----- 1 kevine kevine 141133 Oct 14 08:13 eniac3.jpg
-rw-r----- 1 kevine kevine 1580544 Oct 14 08:13 wk11.ppt
• First letter: file type
– d for directories
– - for regular files
• Three user categories
user, group, and other
51
52. UNIX Access Permissions
total 1704
drwxr-x--- 3 kevine kevine 4096 Oct 14 08:13 .
drwxr-x--- 3 kevine kevine 4096 Oct 14 08:14 ..
drwxr-x--- 2 kevine kevine 4096 Oct 14 08:12 backup
-rw-r----- 1 kevine kevine 141133 Oct 14 08:13 eniac3.jpg
-rw-r----- 1 kevine kevine 1580544 Oct 14 08:13 wk11.ppt
• Three access rights per category
read, write, and execute
d rwx rwx rwx
(type) user group other
52
53. UNIX Access Permissions
total 1704
drwxr-x--- 3 kevine kevine 4096 Oct 14 08:13 .
drwxr-x--- 3 kevine kevine 4096 Oct 14 08:14 ..
drwxr-x--- 2 kevine kevine 4096 Oct 14 08:12 backup
-rw-r----- 1 kevine kevine 141133 Oct 14 08:13 eniac3.jpg
-rw-r----- 1 kevine kevine 1580544 Oct 14 08:13 wk11.ppt
• Execute permission for directory?
– Permission to access files in the directory
• To list a directory requires read permissions
• What about drwxr-x--x?
53
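The rwx string printed by ls maps to nine mode bits; a sketch decoding them with the standard stat module (the helper name `rwx_string` is ours):

```python
import stat

def rwx_string(mode):
    """Render the nine permission bits the way ls -l does."""
    bits = [(stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
            (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
            (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x")]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)
```

For example, a directory with mode 0o751 renders as rwxr-x--x: others hold execute (search) permission but not read, so they can open a file in the directory if they already know its name, but cannot list the directory.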
54. UNIX Access Permissions
• Shortcoming
– The three user categories are rather coarse
• Problematic example
– Joe owns file foo.bar
– Joe wishes to keep his file private
• Inaccessible to the general public
– Joe wishes to give Bill read and write access
– Joe wishes to give Peter read-only access
– How????????
54
55. Simultaneous Access
• Most OSes provide mechanisms for users to
manage concurrent access to files
– Example: lockf(), flock() system calls
• Typically
– User may lock entire file when it is to be updated
– User may lock the individual records during the
update
• Mutual exclusion and deadlock are issues for
shared access
55
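Whole-file locking can be sketched with flock() via Python's fcntl wrapper. Two separate opens conflict even within one process, because flock locks belong to the open file description, not the process; the file name here is temporary and illustrative:

```python
import fcntl
import tempfile

# Two independent opens of the same file
path = tempfile.NamedTemporaryFile(delete=False).name
f1 = open(path, "w")
f2 = open(path, "w")

# First opener takes an exclusive whole-file lock
fcntl.flock(f1, fcntl.LOCK_EX)

# The second opener would block, so LOCK_NB makes the attempt
# fail immediately instead
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    conflict = False
except OSError:
    conflict = True        # lock is held: mutual exclusion works

fcntl.flock(f1, fcntl.LOCK_UN)   # release; f2 could now lock
```

flock() is advisory: it constrains only processes that also call flock(), which is exactly why deadlock between cooperating lockers remains the programmer's problem.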