Unit 2.2. Buffer Cache.pptx (Introduction to Buffer Cache)
1. Sanjivani Rural Education Society’s
Sanjivani College of Engineering, Kopargaon-423 603
(An Autonomous Institute, Affiliated to Savitribai Phule Pune University, Pune)
NAAC ‘A’ Grade Accredited, ISO 9001:2015 Certified
Department of Computer Engineering
(NBA Accredited)
Prof. A. V. Brahmane
Assistant Professor
E-mail : brahmaneanilkumarcomp@sanjivani.org.in
Contact No: 91301 91301 Ext :145, 9922827812
Subject- Operating System and Administration (CO2013)
Unit II- Buffer Cache
2. Content
• Buffer cache
• Buffer Headers
• Structure of the Buffer Pool
• Scenarios for retrieval of a buffer
• Reading and Writing Disk Blocks
• Advantages and disadvantages of buffer cache
3. The Buffer Cache
• The kernel could read and write directly to and from the disk for all file system accesses, but system response time and throughput would be poor because of the slow disk transfer rate.
• The kernel therefore attempts to minimize the frequency of disk access by keeping a pool of data buffers, called the buffer cache, which contains data from recently used disk blocks.
• Architecturally, it is positioned between the file subsystem and the device drivers.
4. Buffer Headers
• During system initialization, the kernel allocates space for a number of buffers, configurable according to memory size and performance constraints.
• A buffer has two parts:
1. a memory array that contains data from the disk.
2. a buffer header that identifies the buffer.
• Data in a buffer corresponds to data in a logical disk block on a file system. A disk block can never map into more than one buffer at a time.
6. • The device number field specifies the logical file system (not the physical device), and the block number field gives the block number of the data on disk; together these two numbers uniquely identify the buffer. The status field summarizes the current status of the buffer. The pointer to the data area points to a memory area whose size must be at least as big as a disk block.
• The status of a buffer is a combination of the following conditions:
• Buffer is locked/busy
• Buffer contains valid data
• Kernel must write the buffer contents to disk before reassigning the buffer; called "delayed write"
• Kernel is currently reading or writing the contents of the buffer to disk
• A process is currently waiting for the buffer to become free
• The two sets of pointers in the header are used for traversal of the buffer queues (doubly linked circular lists).
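A minimal C sketch of a buffer header carrying the fields just described; the struct and flag names are illustrative, not the exact System V declarations:

    struct buffer {
        int             b_dev;      /* logical file system (device) number */
        int             b_blkno;    /* block number of the data on disk */
        int             b_status;   /* combination of the status flags below */
        char           *b_data;     /* pointer to the data area (>= one disk block) */
        struct buffer  *b_hash_fwd, *b_hash_back;   /* hash queue pointers */
        struct buffer  *b_free_fwd, *b_free_back;   /* free list pointers */
    };

    /* status flags: a buffer's status is a combination of these */
    #define B_BUSY    0x01   /* buffer is locked/busy */
    #define B_VALID   0x02   /* buffer contains valid data */
    #define B_DWRITE  0x04   /* "delayed write": flush before reassigning */
    #define B_INIO    0x08   /* kernel is reading/writing the contents to disk */
    #define B_WANTED  0x10   /* a process is waiting for the buffer to become free */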
7. Structure of the Buffer Pool
• The kernel follows a least recently used (LRU) policy for the buffer pool.
• The kernel maintains a free list of buffers that preserves least recently used order.
• A dummy buffer header marks the beginning and end of the list.
• All buffers are put on the free list when the system is booted.
• When the kernel wants any buffer, it takes one from the head of the free list.
• It can also take a specific buffer from the middle of the list.
• Used buffers, when they become free, are attached to the end of the list; hence the buffers toward the head of the list are the least recently used ones.
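A sketch of this free-list discipline using the illustrative struct buffer above; the list is circular and doubly linked through a dummy header, which must be initialized with both pointers referring to itself:

    static struct buffer freelist;   /* dummy header: freelist.b_free_fwd is the head */

    /* remove and return the least recently used buffer (head of the free list) */
    struct buffer *take_from_head(void)
    {
        struct buffer *bp = freelist.b_free_fwd;
        if (bp == &freelist)
            return 0;                               /* free list is empty */
        bp->b_free_back->b_free_fwd = bp->b_free_fwd;
        bp->b_free_fwd->b_free_back = bp->b_free_back;
        return bp;
    }

    /* attach a freed buffer at the tail, preserving LRU order */
    void attach_at_tail(struct buffer *bp)
    {
        bp->b_free_back = freelist.b_free_back;
        bp->b_free_fwd  = &freelist;
        freelist.b_free_back->b_free_fwd = bp;
        freelist.b_free_back = bp;
    }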
8. Free list of buffers
9. • When the kernel accesses a disk block, it searches for the buffer with the appropriate device and block number combination.
• Rather than search the entire buffer pool, it organizes the buffers into separate queues, hashed as a function of the device and block numbers.
• The hash queues are also doubly linked circular lists.
• A hashing function that distributes the buffers uniformly across the lists is used.
• The function must also be simple, so that performance does not suffer.
10. Buffers on the Hash Queues
11. Buffers on the Hash Queues
• The hash function shown in the figure depends only on the block number; real hash functions depend on the device number as well.
• Every disk block in the buffer pool exists on one and only one hash queue, and only once on that queue.
• However, the presence of a buffer on a hash queue does not mean that it is busy; it may also be on the free list if its status is free.
• Therefore, if the kernel wants a particular buffer, it searches for it on the appropriate hash queue; if it wants any buffer, it removes one from the free list. A buffer is always on a hash queue, but it may or may not be on the free list.
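A minimal sketch of the kind of hash function the slides call for, keyed on both the device and block numbers; NHASH, the function name, and the queue array are illustrative:

    #define NHASH 64   /* number of hash queues; illustrative */

    static struct buffer *hash_queue[NHASH];   /* one doubly linked circular list per slot */

    /* map a (device, block number) pair to one of NHASH hash queues:
       simple and cheap, yet spreading buffers across the lists */
    int bhash(int dev, int blkno)
    {
        return (dev + blkno) % NHASH;
    }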
12. Scenarios for Retrieval of a Buffer
• The algorithms for reading and writing disk blocks use the algorithm getblk to allocate buffers from the pool. There are five typical scenarios the kernel may follow in getblk to allocate a buffer for a disk block:
• 1. The block is found on its hash queue and its buffer is free.
• 2. The block cannot be found on the hash queue, so a buffer from the free list is allocated.
• 3. The block cannot be found on the hash queue, and when allocating a buffer from the free list, a buffer marked "delayed write" is allocated. The kernel must then write the "delayed write" buffer to disk and allocate another buffer.
• 4. The block cannot be found on the hash queue, and the free list of buffers is empty.
• 5. The block is found on the hash queue, but its buffer is currently busy.
13. Algorithm for Buffer allocation
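The slide shows the full getblk algorithm as a figure, not reproduced here; below is a simplified C-style sketch of its control flow covering the five scenarios above. All helper names (search_hash, sleep_on, and so on) are assumptions for illustration:

    struct buffer *getblk(int dev, int blkno)
    {
        struct buffer *bp;

        for (;;) {
            if ((bp = search_hash(dev, blkno)) != 0) {
                if (bp->b_status & B_BUSY) {        /* scenario 5: buffer busy */
                    bp->b_status |= B_WANTED;
                    sleep_on(bp);                   /* wait, then retry from the top */
                    continue;
                }
                bp->b_status |= B_BUSY;             /* scenario 1: found and free */
                remove_from_free_list(bp);
                return bp;
            }
            if ((bp = take_from_head()) == 0) {     /* scenario 4: free list empty */
                sleep_until_any_buffer_free();
                continue;
            }
            if (bp->b_status & B_DWRITE) {          /* scenario 3: delayed write */
                start_disk_write(bp);               /* flush asynchronously, look again */
                continue;
            }
            remove_from_hash_queue(bp);             /* scenario 2: reassign the buffer */
            bp->b_dev    = dev;
            bp->b_blkno  = blkno;
            bp->b_status = B_BUSY;                  /* locked; data not yet valid */
            insert_on_hash_queue(bp);
            return bp;
        }
    }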
14. Algorithm for releasing a buffer
• When using a buffer, the kernel always marks it as busy so that no other process can access it.
• When the kernel finishes using the buffer, it releases it according to the algorithm brelse.
15. Buffer releasing.....
• Buffer contents are "old" only if the buffer is marked "delayed write"; in that case, and in the case where the data is not valid (for example, due to I/O corruption), the buffer is put at the beginning of the free list, since its data is old or invalid.
• Otherwise the data is valid, and the buffer is put at the end of the list, following the LRU strategy.
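A correspondingly simplified sketch of brelse, assuming a B_ERROR status flag for invalid data and an attach_at_head helper mirroring attach_at_tail; the real algorithm also manipulates processor interrupt levels, which is omitted here:

    void brelse(struct buffer *bp)
    {
        wakeup_waiters(bp);                      /* wake processes waiting for this or any buffer */
        if (bp->b_status & (B_DWRITE | B_ERROR))
            attach_at_head(bp);                  /* old or invalid contents: head of free list */
        else
            attach_at_tail(bp);                  /* valid data: tail, preserving LRU order */
        bp->b_status &= ~(B_BUSY | B_WANTED);    /* unlock the buffer */
    }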
16. Scenario 1
• The states of the hash queues for the different scenarios are shown in the following figures:
17. Scenario 2
• Here the buffer is not on the hash queue, so a buffer is removed from the free list and its device and block numbers are changed.
18. Scenario 3
• Cannot find the block on the hash queue => allocate a buffer from the free list; but the buffer on the free list is marked "delayed write" => flush the "delayed write" buffer to disk and allocate another buffer.
19. Scenario 4
• Cannot find the block on the hash queue, and the free list of buffers is also empty.
20. Scenario 5
• Block in the hash queue, but buffer is busy.
21. Reading and Writing Disk Blocks
22. Reading and writing disk blocks....
23. Writing a disk block
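Slides 21 through 23 show the algorithms for reading and writing disk blocks as figures; below is a simplified sketch of bread and bwrite in the same illustrative style, with the I/O primitives assumed:

    /* bread: read a disk block through the cache */
    struct buffer *bread(int dev, int blkno)
    {
        struct buffer *bp = getblk(dev, blkno);
        if (bp->b_status & B_VALID)       /* cache hit: no disk I/O needed */
            return bp;
        start_disk_read(bp);              /* initiate the read */
        sleep_until_io_done(bp);          /* block until the data arrives */
        bp->b_status |= B_VALID;
        return bp;
    }

    /* bwrite: write a buffer to disk; 'sync' selects synchronous I/O */
    void bwrite(struct buffer *bp, int sync)
    {
        start_disk_write(bp);             /* initiate the disk write */
        if (sync) {
            sleep_until_io_done(bp);      /* synchronous: wait for completion */
            brelse(bp);                   /* then release the buffer */
        }
        /* asynchronous: the buffer is released when the I/O completes; a
           flushed "delayed write" buffer then goes to the head of the free list */
    }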
24. Advantages of the buffer cache
• Uniform disk access => system design simpler
• Copying data from user buffers to system buffers => eliminates the need for special alignment of user buffers.
• Use of the buffer cache can reduce the amount of disk traffic.
• A single image of disk blocks contained in the cache => helps ensure file system integrity.
25. Disadvantages of the buffer cache
• Delayed write => vulnerable to crashes that leave disk data in an incorrect state.
• An extra data copy when reading from and writing to user processes => slows down transfers of large amounts of data.
26. • Thank you …