1. File allocation methods include contiguous, linked, and indexed allocation. Contiguous allocation stores files in contiguous blocks but can lead to fragmentation. Linked allocation stores non-contiguous blocks through pointers but has overhead. Indexed allocation uses index blocks to point to data blocks.
2. File systems manage free space through data structures like bit vectors, linked lists, and space maps. Bit vectors require extra space but allow contiguous allocation. Linked lists have no wasted space but non-contiguous allocation. Space maps divide devices into metaslabs for efficient free space management.
3. Performance depends on allocation algorithms, metadata handling, buffer caching, and write policies. Techniques like read-ahead and free-behind optimize sequential access.
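As a minimal illustration of the indexed allocation mentioned above, the Python sketch below shows how an index block gives direct access to any data block. The block size, block numbers, and contents are invented for the example, not taken from the document:

```python
# Minimal sketch of indexed allocation: an index block holds pointers
# to a file's data blocks, so any logical block can be reached directly.
class Disk:
    def __init__(self, num_blocks, block_size=512):
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

class IndexedFile:
    def __init__(self, disk, index_block):
        self.disk = disk
        self.index_block = index_block  # list of physical block numbers

    def read_block(self, logical_block):
        # Direct access: one lookup in the index, then one disk read.
        physical = self.index_block[logical_block]
        return self.disk.blocks[physical]

disk = Disk(num_blocks=64)
disk.blocks[10] = b"hello".ljust(512, b"\x00")
f = IndexedFile(disk, index_block=[7, 10, 3])
print(f.read_block(1)[:5])  # logical block 1 maps to physical block 10
```

The cost of direct access is the extra index block per file, which is the "higher overhead" trade-off noted in the summary.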
The document discusses various file allocation methods and disk scheduling algorithms. There are three main file allocation methods: contiguous, linked, and indexed allocation. Contiguous allocation suffers from fragmentation but allows fast sequential access; linked allocation avoids external fragmentation but is slower; indexed allocation supports direct access at the cost of higher overhead. For disk scheduling, algorithms such as FCFS, SSTF, SCAN, C-SCAN, and LOOK aim to minimize seek time, rotational latency, and response time by reordering requests. SSTF yields the lowest average seek time, while SCAN and C-SCAN provide higher throughput at the cost of longer wait times for some requests.
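The elevator behavior of SCAN can be sketched as follows. The request list and head position are the classic textbook example, not values from this document; a LOOK variant would stop at the last request (183) instead of sweeping to the end cylinder:

```python
# Sketch of the SCAN (elevator) disk-scheduling algorithm: the head
# services requests in one direction until it reaches the end of the
# disk, then reverses and services the remaining requests.
def scan_schedule(requests, head, max_cylinder, direction="up"):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    if direction == "up":
        # Sweep to the last cylinder before reversing (LOOK would not).
        tail = [max_cylinder] if up and up[-1] != max_cylinder else []
        return up + tail + down
    tail = [0] if down and down[-1] != 0 else []
    return down + tail + up

# Classic example: head at cylinder 53 on a 0-199 cylinder disk.
reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_schedule(reqs, head=53, max_cylinder=199))
```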
This document summarizes a lecture on file system implementation. It discusses:
- File system structure including logical file system, file organization module, basic file system, and device drivers.
- Key data structures including file control blocks, mount table, directory cache, open file tables.
- Allocation methods like contiguous, linked, and indexed allocation.
- Free space management techniques like bit vectors, linked lists, and grouping.
- Caching strategies to improve performance like disk caching, read-ahead, and memory mapping.
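As an illustration of the bit-vector free-space technique listed above, here is a minimal sketch; the block count is invented, and a Python list of bits stands in for the packed bitmap a real file system would keep:

```python
# Sketch of bit-vector free-space management: bit i is 1 when block i
# is free. Allocation scans for the first set bit; freeing sets it back.
class BitVector:
    def __init__(self, num_blocks):
        self.bits = [1] * num_blocks  # all blocks start out free

    def allocate(self):
        for i, free in enumerate(self.bits):
            if free:
                self.bits[i] = 0
                return i
        raise RuntimeError("no free blocks")

    def free(self, block):
        self.bits[block] = 1

fs = BitVector(8)
first = fs.allocate()   # lowest free block
second = fs.allocate()
fs.free(first)
third = fs.allocate()   # reuses the block just freed
print(first, second, third)
```

Always returning the lowest free block is what makes it easy to find runs of contiguous free blocks, which is the advantage the summary attributes to bit vectors.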
1. Sanjivani Rural Education Society’s
Sanjivani College of Engineering, Kopargaon-423 603
(An Autonomous Institute, Affiliated to Savitribai Phule Pune University, Pune)
NAAC ‘A’ Grade Accredited, ISO 9001:2015 Certified
Department of Computer Engineering
(NBA Accredited)
Prof. A. V. Brahmane
Assistant Professor
E-mail : brahmaneanilkumarcomp@sanjivani.org.in
Contact No: 91301 91301 Ext :145, 9922827812
Subject- Operating System and Administration (CO2013)
Unit IV- Introduction to the File System
2. Content
• Internal representation of the files,
• i-node,
• structure of regular files,
• directories,
• conversion of pathnames to i-node, Superblock,
• i-node assignments to new files,
• Allocation of disk blocks
• Pathnames,
• File system,
• Mounting and unmounting,
• The organization of the File Tree, File Types, File Attributes, Access Control lists.
• Case Study – Open Source Automation Red Hat Ansible, Introduction, Overview and setup, How
Ansible works, Playbooks, Variables, Advanced execution.
DEPARTMENT OF COMPUTER ENGINEERING, Sanjivani COE, Kopargaon
3. Internal Representation of Files
• Every file in a UNIX system has a unique inode. Processes interact with files through well-defined system calls: a user specifies a file with a character string, the file's pathname, and the system maps that pathname to the inode that corresponds to the file.
• The algorithms described below sit in the layer above the buffer cache.
5. Inodes
• Inodes exist in a static form on the disk. The kernel reads them into
in-core inodes and modifies them.
6. Disk inodes consist of the following fields:
• Owner information: ownership is divided into an individual owner and a group of users. The root user has access to all files.
• File type: states whether the file is a regular file, a directory, a block or character special (device) file, or a FIFO (pipe).
• File access permissions: there are 3 permission classes: owner, group, and others, each with separate read, write, and execute permissions. Since execute permission is not applicable to a directory, execute permission on a directory grants the right to search the directory.
• Access times: the times at which the file was last accessed and last modified, and the time at which the inode itself was last modified.
• Number of links: the number of directory entries that refer to the file.
7. • Array of disk block addresses: although users see a logically sequential stream of data in a file, the actual data may be scattered across the disk. This array records the addresses of the disk blocks that hold the data.
• File size: addressing within the file begins at offset 0 relative to the start of the file, and the size of the file is the maximum offset written plus 1. For example, if a user creates a file and writes one byte at offset 999, the size of the file is 1000 bytes.
9. The in-core inodes
• The in-core inode contains the following fields in addition to the fields of the disk inode:
• Status of the inode
• Locked.
• A process is (or many processes are) waiting for it to be unlocked.
• The data in the inode differs from the disk inode due to change in the inode
data.
• The data in the inode differs from the disk inode due to change in the file
data.
• The file is a mount point (discussed later).
• The device number of the logical device of the file system on which the file
resides.
10. • Inode number: the disk inodes are stored in an array, so an inode's number is simply its index into that array. That is why the disk copy does not need to store the inode number.
• Pointers to other in-core inodes: just like the buffer cache, the in-core inodes are a cache of disk inodes (plus some extra, recomputable state). In-core inodes are kept on hash queues and on a free list, which behave much like the buffer lists: each inode has next and previous pointers for its hash queue and for the free list, and the hash function is based on the device number and inode number.
• Reference count: the number of currently active references to the file.
11. Accessing Inodes
• The algorithm iget allocates an in-core copy of an inode.
• If the inode is not found on a hash queue, it allocates an inode from the free list
and reads the disk copy into the in-core inode.
• It already knows the inode number and device number.
• It calculates the logical block number on which the disk inode resides according
to how many inodes fit into one disk block.
15. Important Calculation
• The formula for calculating the logical block number is:
• block number = ((inode number - 1) / number of inodes per block) + start block of inode list
• where the division is integer division (the remainder is discarded).
• To find the byte offset of the inode within that block, the following formula is used:
• byte offset = ((inode number - 1) % number of inodes per block) * size of disk inode
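As a sketch, the two formulas translate directly into C. The sample values used below (8 inodes per block, 64-byte disk inodes, inode list starting at block 2) are illustrative assumptions, not values fixed by the file system:

```c
#include <assert.h>

/* Compute the logical block holding a disk inode. Inode numbers start
 * at 1; integer division discards the remainder, as in the formula. */
int inode_block(int inum, int inodes_per_block, int start_block) {
    return (inum - 1) / inodes_per_block + start_block;
}

/* Compute the inode's byte offset within that block. */
int inode_offset(int inum, int inodes_per_block, int inode_size) {
    return ((inum - 1) % inodes_per_block) * inode_size;
}
```

With these sample values, inode 13 lives in block (13 - 1) / 8 + 2 = 3, at byte offset (12 % 8) * 64 = 256.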
16. Algorithm for allocation of an in-core inode - iget
17. Releasing Inodes – iput
19. Structure of a Regular File
• In UNIX, the data in a file is not necessarily stored in contiguous blocks on disk.
• If it were stored contiguously, the inode would only need to record a starting address and a size, but file sizes could not grow flexibly without heavy fragmentation.
• Instead, the inode stores the numbers of the disk blocks that hold the data.
• But with a naive table of block numbers, a file whose data spanned 1000 blocks would need 1000 block numbers in its inode, and the size of the inode would vary with the size of the file.
23. Direct and Indirect Blocks in Inode
24. • To keep the inode a constant size and yet allow large files, indirect addressing is used.
• Each inode has an array of 13 entries for storing block numbers; the number of entries is independent of how much data the file holds.
• The first 10 entries are "direct addresses": they hold the block numbers of actual data blocks.
• The 11th entry is "single indirect": it holds the number of a block that contains direct addresses.
• The 12th entry is "double indirect": it holds the number of a block of single indirect entries.
• The 13th entry is "triple indirect": it holds the number of a block of double indirect entries.
• The strategy could be extended to "quadruple" or "quintuple" indirect addressing.
25. • If a logical block on the file system holds 1K bytes and a block number is addressable by a 32-bit integer, then a block can hold up to 256 block numbers. The maximum file size with the 13-entry block array is:
• 10 direct blocks of 1K bytes each = 10K bytes
• 1 single indirect block with 256 direct blocks = 256K bytes
• 1 double indirect block with 256 single indirect blocks = 64M bytes
• 1 triple indirect block with 256 double indirect blocks = 16G bytes
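These figures can be checked with a few lines of C (assuming, as above, 1K logical blocks and 256 block numbers per indirect block):

```c
#include <assert.h>

/* Number of 1K data blocks reachable through the 13-entry block array:
 * 10 direct + single indirect + double indirect + triple indirect. */
long max_file_blocks(void) {
    long n = 256;                       /* block numbers per indirect block */
    return 10 + n + n * n + n * n * n;
}
```

The total, 16,843,018 blocks of 1K each, is the 10K + 256K + 64M + 16G quoted above; in practice a 32-bit file size field caps files well below this.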
27. • To access byte offset 9000: the first 10 direct blocks hold 10K bytes, so offset 9000 lies in a direct block. 9000 / 1024 = 8, so it is in the 8th block (counting from 0), and 9000 % 1024 = 808, so the byte offset into that block is 808 (block 367 in the figure).
• To access byte offset 350000: the first 10 direct blocks hold 10K bytes (350000 - 10240 = 339760), and the single indirect block covers another 256K bytes (339760 - 262144 = 77616), so the double indirect block (block 9156 in the figure) must be used. Each single indirect block within it covers 256K bytes, so the data lies in the 0th single indirect block (block 331 in the figure). Each direct block holds 1K bytes, and 77616 / 1024 = 75, so the data is in the 75th (counting from 0) direct block (block number 3333 in the figure), at byte offset 77616 % 1024 = 816.
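The index arithmetic can be sketched as a simplified bmap-style computation in C (1K blocks, 256 entries per indirect block, as assumed above; triple indirect is omitted for brevity):

```c
#include <assert.h>

#define BLKSIZE 1024
#define NADDR   256   /* block numbers per indirect block */
#define NDIRECT 10

/* Simplified bmap index computation: given a byte offset into a file,
 * fill in which entry to follow and the byte offset within the final
 * data block. Returns the indirection level (0 = direct, 1 = single
 * indirect, 2 = double indirect). */
int bmap_indices(long off, long *entry, long *byte_in_block) {
    long blk = off / BLKSIZE;
    *byte_in_block = off % BLKSIZE;
    if (blk < NDIRECT) {            /* one of the 10 direct blocks */
        *entry = blk;
        return 0;
    }
    blk -= NDIRECT;
    if (blk < NADDR) {              /* entry in the single indirect block */
        *entry = blk;
        return 1;
    }
    blk -= NADDR;                   /* via the double indirect block */
    *entry = blk % NADDR;           /* index within a single indirect block */
    return 2;
}
```

Offset 9000 yields level 0, entry 8, byte 808; offset 350000 yields level 2, entry 75 within the single indirect block, byte 816.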
28. bmap Algorithm
• The algorithm bmap is used to convert logical byte offset of a file to a physical
disk block:
30. Directories
• A directory file contains entries for the files and subdirectories that reside in it.
• Each entry maps a file name to its inode number.
• One directory entry takes 16 bytes:
• 14 bytes for the name of the file and 2 bytes for the inode number.
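As a sketch, the 16-byte entry corresponds to this C struct (field names follow the historical layout; treat them as illustrative):

```c
#include <assert.h>

#define DIRSIZ 14   /* maximum file name length in a directory entry */

/* Classic UNIX directory entry: a 2-byte inode number followed by a
 * NUL-padded 14-byte name, 16 bytes in total. An inode number of 0
 * marks the entry as free (a deleted file). */
struct direct {
    unsigned short d_ino;
    char           d_name[DIRSIZ];
};
```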
31. Directory layout for /etc
32. • The entry . holds the inode number of the directory file itself.
• The entry .. holds the inode number of the parent directory; for the root directory, both . and .. hold the inode number of the root itself.
• Entries with inode number 0 are empty (i.e. deleted files).
• The kernel has exclusive rights to write to a directory.
• The access permissions for a directory mean different things: read permission allows reading the contents of the directory, write permission gives the right to create and remove files in the directory, and execute permission gives the right to search the directory.
33. Conversion of Path Name to an Inode
• Algorithm namei (name to inode) is used for converting a path to an inode
number.
• The kernel parses the path by accessing each inode in the path and finally
returning the inode of the required file.
• Every process has a current directory. The current directory of process 0 is the
root directory.
• For every other process, it is the current directory of its parent process. Later the
process can change the current directory with the system call chdir.
• The inode of the current directory is stored in the u-area.
34. • Searches start from the current directory unless the pathname begins with the / component, which indicates that the search should start from the root directory.
• The inode number of the root is available as a global variable. If a process changes its root with chroot (more on this later), its current root inode number is stored in its u-area.
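The parsing step of namei can be sketched in C. This only splits a pathname into the components namei would consume, each truncated to the 14-character name limit; the per-component directory lookup and permission checks are omitted:

```c
#include <assert.h>
#include <string.h>

/* Split a pathname into components, writing up to max of them into
 * out[] (each at most 14 chars plus a NUL, matching the classic
 * directory entry). Returns the component count. A leading '/' means
 * the lookup starts at the root inode, not the current directory. */
int split_path(const char *path, char out[][15], int max) {
    int n = 0;
    while (*path && n < max) {
        while (*path == '/') path++;           /* skip separators */
        if (!*path) break;
        size_t len = strcspn(path, "/");       /* component length */
        size_t copy = len < 14 ? len : 14;     /* truncate to DIRSIZ */
        memcpy(out[n], path, copy);
        out[n][copy] = '\0';
        n++;
        path += len;
    }
    return n;
}
```

For example, "/etc/passwd" splits into the components "etc" and "passwd", looked up one after another starting from the root.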
38. Superblock
• The contents of the super block are:
• size of the file system.
• number of free blocks in the file system.
• list of free blocks in the file system.
• pointer to the next free block in the free blocks list
• size of the inodes list.
• number of free inodes in the file system.
• list of free inodes in the file system.
• pointer to the next free inode in the free inodes list.
• lock fields for the free blocks and free inodes list.
39. Superblock cont….
• a field indicating whether the super block has changed.
• The kernel periodically writes the superblock to the disk if it had been modified
so that it is consistent with the data on the disk.
40. Inode Assignment to a New File
41. ialloc
• If the list of free inode numbers in the super block is not empty, the kernel assigns the next inode number, allocates a free in-core inode for the newly assigned disk inode with algorithm iget, copies the disk inode into the in-core copy, initializes the inode's fields, and returns the locked inode. It updates the disk inode to indicate that it is in use.
• If the super block list of free inodes is empty, the kernel searches the disk and
places as many free inode numbers as possible into the super block. The kernel
reads the inode list on disk, block by block, and fills the super block list of inode
numbers to capacity, remembering the highest-numbered inode that it finds
("remembered" inode). It is the last one saved in the super block. The next time
the kernel searches the disk for free inodes, it uses the remembered inode as its
starting point, thereby assuring that it wastes no time reading disk blocks where
no free inodes should exist.
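A toy in-memory model of this superblock cache logic (the sizes and names below are invented for illustration; the real ialloc also handles locking and allocates the in-core inode via iget):

```c
#include <assert.h>

#define NINODES 32   /* inodes on a toy "disk", numbered 1..NINODES */
#define SBCACHE 4    /* capacity of the super block free-inode list */

int inode_is_free[NINODES + 1];   /* nonzero = free */
int sb_list[SBCACHE];
int sb_count;
int remembered = 1;               /* where the next disk search starts */

/* Reset the toy disk: mark every inode free, empty the cache. */
void mark_all_free(void) {
    for (int i = 1; i <= NINODES; i++) inode_is_free[i] = 1;
    sb_count = 0;
    remembered = 1;
}

/* Refill the super block list by scanning the disk inode list from the
 * remembered inode, keeping the highest-numbered inode found. */
void refill_inode_list(void) {
    for (int i = remembered; i <= NINODES && sb_count < SBCACHE; i++)
        if (inode_is_free[i]) {
            sb_list[sb_count++] = i;
            remembered = i;       /* last one saved in the super block */
        }
}

/* Assign a free inode number, as ialloc does; returns 0 if none left. */
int ialloc_num(void) {
    if (sb_count == 0) refill_inode_list();
    if (sb_count == 0) return 0;
    int ino = sb_list[--sb_count];
    inode_is_free[ino] = 0;       /* mark the disk inode in use */
    return ino;
}
```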
45. Algorithm for freeing inodes (ifree)
46. • If the super block is locked, the algorithm simply returns: the inode number is not placed on the super block's free list.
• The inode can still be found later when the kernel searches the disk for free inodes. If the super block's free inode list is full and the freed inode number is less than the remembered inode number, the kernel replaces the remembered inode number with the freed one, so that the next disk search starts from there.
• Ideally, there should never be free inodes whose number is less than the remembered inode number, but exceptions are possible: if an inode is freed while the super block is locked, a free inode can end up with a number lower than the remembered inode number.
47. Placing Free Inode Numbers into Super Block
48. Allocation of Disk Blocks
• When data is written on a file, the kernel must allocate disk blocks from the file
system (for direct data blocks or sometimes, indirect data blocks).
• The file system super block contains an array that is used to cache the numbers of
free disk blocks in the file system.
• The utility program mkfs (make file system) organizes the data blocks of a file
system in a linked list, such that each link of the list is a disk block that contains an
array of free disk block numbers, and one array entry is the number of the next
block of the linked list.
52. Algorithm for allocation of disk blocks (alloc)
53. • The program mkfs tries to organize the original linked list of free block numbers so that
block numbers dispensed to a file are near each other.
• This helps performance, because it reduces disk seek time and latency when a process
reads a file sequentially.
• The kernel makes no attempt to sort block numbers on the free list.
• The algorithm free for freeing a block is the reverse of the one for allocating a block.
• If the super block list is not full, the block number of the newly freed block is placed on
the super block list.
• If, however, the super block list is full, the newly freed block becomes a link block; the
kernel writes the super block list into the block and writes the block to disk.
• It then places the block number of the newly freed block in the super block list: That
block number is the only member of the list.
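The freeing path can be modeled in a few lines of C (the toy "disk" and the sizes are invented for illustration; the real kernel also handles locking and actually writes the link block to disk):

```c
#include <assert.h>

#define SBFREE  4    /* capacity of the super block free-block list */
#define NBLOCKS 64   /* blocks on a toy "disk" */

/* A link block on the free list: an array of free block numbers. */
struct linkblk { int nums[SBFREE]; int count; };

struct linkblk disk[NBLOCKS];     /* toy disk, indexed by block number */
int sb_free[SBFREE];
int sb_count;

/* Free a disk block: push it onto the super block list if there is
 * room; otherwise the freed block itself becomes a new link block
 * holding the current list, and then the sole member of the list. */
void free_block(int blkno) {
    if (sb_count < SBFREE) {
        sb_free[sb_count++] = blkno;
    } else {
        for (int i = 0; i < SBFREE; i++)      /* copy list into the block */
            disk[blkno].nums[i] = sb_free[i];
        disk[blkno].count = SBFREE;
        sb_free[0] = blkno;                   /* block becomes the list */
        sb_count = 1;
    }
}
```

Freeing four blocks fills the super block list; freeing a fifth turns that fifth block into a link block that stores the previous four numbers.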
54. An example of how the super block free data blocks list works
55. • Free inode numbers and free disk block numbers are managed differently.
• All free disk block numbers are kept in a linked list of arrays, but not all free inode numbers are recorded: once the super block's free inode list is filled to capacity, the remaining free inode numbers are not stored anywhere.
• There are 3 reasons for this different treatment:
• The kernel can determine whether an inode is free by inspecting its file type. However, there is no way to tell whether a disk block is free by looking at the data in it.
• Disk blocks lend themselves to linked lists: a disk block easily holds a large list of free block numbers, but inodes have no convenient place for bulk storage of a large list of free inode numbers.
• Users consume free disk blocks faster than free inodes, so the performance lag from searching would be more noticeable for disk blocks than for inodes.
56. Other File Types
• The UNIX system supports two other file types: pipes and special files. A pipe,
sometimes called a fifo (first-in-first-out), differs from a regular file in that its data
is transient; once data is read from a pipe, it cannot be read again. Also, the data
is read in the order that it was written to the pipe, and the system allows no
deviation from that order. The kernel stores data in a pipe the same way it stores
data in an ordinary file, except that it uses only the direct blocks, not the indirect
blocks. Pipes are described later.
• The other file types in the UNIX system are special files, including block device special files and character device special files. Both types designate devices, so their inodes do not reference any data. Instead, the inode contains two numbers known as the major and minor device numbers: the major number indicates a device type, such as terminal or disk, and the minor number indicates the unit number of the device.