The objectives of these slides are:
- To provide a detailed description of various ways of organizing memory hardware
- To discuss various memory-management techniques, including paging and segmentation
- To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging
The document discusses memory management techniques used by operating systems. It describes contiguous and non-contiguous memory allocation. Contiguous allocation allocates processes a single contiguous block of memory, which can lead to fragmentation issues. Non-contiguous allocation allows a process to use multiple non-adjacent blocks of memory, reducing fragmentation. Specific allocation schemes like first-fit and best-fit are also covered.
The document discusses several topics related to process scheduling and memory management in operating systems, including:
- Process scheduling using a round-robin algorithm with prioritization of processes into different queues.
- System calls for getting and setting the time, including calls that schedule alarms for individual processes.
- The clock interrupt handler which performs functions like process scheduling, profiling, and keeping track of time.
- Memory management policies including swapping entire processes in and out of memory and demand paging which swaps individual pages in and out on demand. The System V kernel uses both techniques together to avoid thrashing.
A process needs to be in memory to be executed.
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
The total memory space of all processes can exceed the available physical memory.
Backing store – a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
Magnetic disks provide most secondary storage and are relatively simple. Each disk contains one or more flat, circular platters coated with magnetic material. Disks are logically divided into tracks and sectors for reading and writing data. Disks spin rapidly and have read/write heads that can move to different tracks on the platters. Disk scheduling algorithms like SSTF aim to minimize access times by prioritizing requests located near the heads' current position. Disks can be attached directly via I/O ports or over a network using NAS or SAN storage. Disk management includes formatting, handling bad blocks, and using swap space on disk as an extension of main memory.
This document discusses memory management techniques in operating systems. It begins by defining memory management and discussing the concepts of logical and physical address spaces. It then explains different memory management schemes including swapping, multiple partitions with fixed and variable sizes, and paging and segmentation. The key points are that the OS must facilitate sharing of main memory among multiple processes and that memory management schemes determine how logical addresses are mapped to physical addresses.
Main memory is where programs are loaded to run on the CPU. There are several techniques for managing memory allocation and binding programs to addresses in memory, including compile-time, load-time, and execution-time binding. Memory management is needed to map logical addresses used by programs to physical addresses in memory. Paging is a memory management technique that divides memory into pages to allow non-contiguous allocation and reduce fragmentation.
Operating System, Topic: Memory Management, for Btech/Bsc (C.S)/BCA...
Memory management is the functionality of an operating system that handles or manages primary memory. Memory management keeps track of each and every memory location, whether it is allocated to some process or free. It determines how much memory is to be allocated to processes and decides which process will get memory at what time. Whenever memory is freed or unallocated, it updates the status accordingly.
The document discusses different memory management strategies:
- Swapping allows processes to be swapped temporarily out of memory to disk, then back into memory for continued execution. This improves memory utilization but incurs long swap times.
- Contiguous memory allocation allocates processes into contiguous regions of physical memory using techniques like memory mapping and dynamic storage allocation with first-fit or best-fit. This can cause external and internal fragmentation over time.
- Paging permits the physical memory used by a process to be noncontiguous by dividing memory into pages and mapping virtual addresses to physical frames, allowing more efficient use of memory but requiring page tables for translation.
This document provides an overview of memory management techniques in operating systems. It discusses contiguous memory allocation, segmentation, paging, and swapping. Contiguous allocation allocates processes to contiguous sections of memory which can lead to fragmentation issues. Segmentation divides memory into logical segments defined by segment tables. Paging divides memory into fixed-size pages and uses page tables to map virtual to physical addresses, avoiding external fragmentation. Swapping moves processes between main memory and disk to allow more processes to reside in memory than will physically fit. The document describes the hardware and data structures used to implement these techniques.
This document discusses different approaches to memory management in operating systems. It begins by describing monoprogramming without swapping or paging, where one program uses all available memory at a time. It then describes multiprogramming using fixed memory partitions, either with separate queues for each partition or a single queue. The challenges of relocation and protection when programs are loaded at different addresses are also covered. Finally, it introduces the concepts of swapping and virtual memory for handling situations where not all active processes fit in main memory.
The document discusses memory management techniques used by operating systems. It explains that memory management involves allocating and swapping blocks of data between main memory and secondary storage due to the slow speed of memory I/O compared to CPUs. Key requirements for memory management are relocation, protection, and sharing of memory. Early techniques like fixed and dynamic partitioning had limitations that were addressed by paging and segmentation which partition memory into small fixed-size chunks. Virtual memory management allows processes to be larger than physical memory by swapping pages or segments between main and secondary storage on demand.
Operating systems use main memory management techniques like paging and segmentation to allocate memory to processes efficiently. Paging divides both logical and physical memory into fixed-size pages. It uses a page table to map logical page numbers to physical frame numbers. This allows processes to be allocated non-contiguous physical frames. A translation lookaside buffer (TLB) caches recent page translations to improve performance by avoiding slow accesses to the page table in memory. Protection bits and valid/invalid bits ensure processes only access their allocated memory regions.
The kernel saves the context of a process during interrupts, system calls, and context switches by:
1. Saving the current register context and creating a new context layer.
2. Handling the interrupt and then restoring the previous context before resuming execution.
3. Changing the process execution mode to kernel during a system call before invoking the system call handler.
4. Ensuring consistency of internal data structures before switching between processes and restoring the saved context of the next process.
Modern processors are faster than memory, so processors may waste time accessing memory.
The purpose of cache memory is to make main memory appear to the processor to be much faster than it actually is.
Memory Management in Operating Systems for all (VSKAMCSPSGCT)
The document discusses memory management techniques used in computer systems. It describes the memory hierarchy from fast registers to slower main memory and disk. Memory management aims to efficiently allocate memory for multiple processes while providing protection, relocation, sharing and logical organization. Techniques include contiguous allocation, fixed and dynamic partitioning, paging using page tables, segmentation using segment tables, and swapping processes in and out of memory. Hardware support through relocation registers, memory management units, translation lookaside buffers and associative memory help map logical to physical addresses efficiently.
The document discusses memory management techniques used in operating systems. It covers logical versus physical address spaces and introduces paging as a memory management technique. Paging divides both main memory and disk storage into fixed-sized pages. Each process has a page table containing entries for its pages, with each entry mapping a page to a frame in main memory if present or being invalid if on disk. The CPU address is divided into a page number to index the table and an offset to access within the page.
The document discusses processes in an operating system. It defines a process as a "thread of control" with its own private memory area. It describes process states like running, ready, waiting, and zombie. It outlines the kernel data structures used to manage processes, including the process table, u area, process groups, and sessions. It also covers memory layout, address spaces, context switching, and manipulating processes and shared memory regions.
Parallel Processing & Pipelining in Computer Architecture_Prof.Sumalatha.pptx (Sumalatha A)
Parallel processing involves executing multiple programs or tasks simultaneously. It can be implemented through hardware and software means in both multiprocessor and uniprocessor systems. In uniprocessors, parallelism is achieved through techniques like pipelining instructions, overlapping CPU and I/O operations, using cache memory, and multiprocessing through time-sharing. Pipelining helps execute instructions in an overlapped fashion to improve efficiency compared to non-pipelined processors. Parallel computers break large problems into smaller parallel tasks executed simultaneously on multiple processors. Types of parallel computers include pipeline computers, array processors, and multiprocessor systems.
This chapter discusses various memory management techniques used in computer systems, including segmentation, paging, and swapping. Segmentation divides memory into variable-length logical segments, while paging divides it into fixed-size pages that can be mapped to non-contiguous physical frames. Paging requires a page table to map virtual to physical addresses and allows processes to exceed physical memory by swapping pages to disk. The chapter describes address translation hardware like TLBs, protection mechanisms, and issues like fragmentation.
The memory system, and not processor speed, is often the bottleneck for many applications.
Memory system performance is largely captured by two parameters, latency and bandwidth.
Latency is the time from the issue of a memory request to the time the data is available at the processor.
Bandwidth is the rate at which data can be pumped to the processor by the memory system.
Unit 5.1 Memory Management_Device Driver_IPC.ppt
1. Sanjivani Rural Education Society’s
Sanjivani College of Engineering, Kopargaon-423 603
(An Autonomous Institute, Affiliated to Savitribai Phule Pune University, Pune)
NAAC ‘A’ Grade Accredited, ISO 9001:2015 Certified
Department of Computer Engineering
(NBA Accredited)
Prof. A. V. Brahmane
Assistant Professor
E-mail : brahmaneanilkumarcomp@sanjivani.org.in
Contact No: 91301 91301 Ext :145, 9922827812
Subject- Operating System and Administration (CO2013)
Unit V- Memory Management Policies, I/O Subsystem
and IPC
2. Content
• Swapping
• Demand Paging
• Driver Interface
• Disk Drivers
• Process Tracing
• Network Communication
• Sockets
3. Memory Management Policies
• Historically, UNIX systems transferred entire processes between primary memory and the swap device. Such a memory management policy is called swapping.
• It made sense to implement such a policy on the PDP-11, where the maximum process size was 64K.
• The BSD system (release 4.0) was the first major implementation of a demand paging policy, transferring memory pages instead of processes to and from a secondary device; later releases of UNIX System V also support demand paging.
• With demand paging, the entire process does not have to reside in main memory to execute; the kernel loads pages for a process on demand, when the process references them.
• This permits greater flexibility in mapping the virtual address space of a process into the physical memory of a machine, usually allowing the size of a process to be greater than the amount of available physical memory and allowing more processes to fit in main memory simultaneously. Swapping, by contrast, is easier to implement and has less overhead.
4. Swapping
• Swapping has three parts:
• managing space on the swap device
• swapping processes out of main memory
• swapping processes into main memory
5. Allocation of Swap Space
• The swap device is a block device in a configurable section of a disk.
• The space on swap device is allocated in the form of contiguous blocks (as opposed to
single block allocation for files).
• The allocation of space on the swap device is transitory (as opposed to long lasting file
allocation).
• A process that resides on the swap device will eventually migrate back to main memory.
• Since speed is critical and the system can do I/O faster in one multi-block operation than
in several single block operations, the kernel allocates contiguous space on the swap
device without regard for fragmentation (as opposed to the flexible allocation for files,
which results in minimal fragmentation).
• The allocation scheme for swap space allocation differs from the scheme used for file
systems.
6. • Therefore, the data structures also differ.
• The kernel maintains the free space for the swap device in an in-core table,
called map.
• A map is an array where each entry consists of an address of an allocatable
resource and the number of resource units available there; the kernel interprets
the address and units according to the type of map.
• Initially, a map contains one entry that indicates the address and the total
number of resources.
8. The algorithm to allocate map space, malloc
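As a rough illustration of the first-fit scan the malloc algorithm performs on a map, here is a minimal user-space sketch; the struct layout, function name, and fixed-size array are assumptions for illustration, not the actual kernel code.

```c
/* Sketch of first-fit allocation from a resource map (after the malloc
 * algorithm described above). Entries are sorted by address; a zero
 * units field marks the end of the used part of the array. */
struct map {
    unsigned addr;   /* first free unit of this hole */
    unsigned units;  /* number of free units starting at addr */
};

unsigned map_malloc(struct map *mp, int nmap, unsigned units)
{
    for (int i = 0; i < nmap && mp[i].units != 0; i++) {
        if (mp[i].units >= units) {          /* first fit */
            unsigned addr = mp[i].addr;
            mp[i].addr  += units;
            mp[i].units -= units;
            if (mp[i].units == 0) {          /* hole used up: close the gap */
                for (int j = i; j + 1 < nmap; j++)
                    mp[j] = mp[j + 1];
                mp[nmap - 1].units = 0;
            }
            return addr;
        }
    }
    return 0;   /* no hole large enough */
}
```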
9. Allocation of Swap Space
10. Allocation of Swap Space
• The following figures show the state of the map after allocating 100 units, 50
units, then 100 units again:
• When freeing resources, the kernel finds their proper position in the map by
address. Three cases are possible here:
• The freed resources completely fill a hole in the map. In this case, the kernel
combines the newly freed resources and the existing (two) entries into one entry
in the map.
• The freed resources partially fill a hole. In this case, the kernel adjusts the address
and units fields of an appropriate entry to account for the resources just freed.
• The freed resources partially fill a hole but are not contiguous to any resources in
the map. Then a new entry is created and inserted in the proper position.
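Continuing the sketch above, the three freeing cases might look like this; again an illustrative sketch rather than kernel source, reusing struct map from the previous block.

```c
/* Sketch of returning units to the map (freeing swap space), covering
 * the three cases described above. Entries stay sorted by address. */
void map_free(struct map *mp, int nmap, unsigned addr, unsigned units)
{
    int i = 0;
    while (i < nmap && mp[i].units != 0 && mp[i].addr <= addr)
        i++;   /* first entry above the freed block */

    int joins_below = i > 0 && mp[i - 1].addr + mp[i - 1].units == addr;
    int joins_above = i < nmap && mp[i].units != 0 && addr + units == mp[i].addr;

    if (joins_below && joins_above) {
        /* case 1: completely fills the hole; merge the two entries */
        mp[i - 1].units += units + mp[i].units;
        for (int j = i; j + 1 < nmap; j++)
            mp[j] = mp[j + 1];
        mp[nmap - 1].units = 0;
    } else if (joins_below) {
        /* case 2: borders the entry below; extend it */
        mp[i - 1].units += units;
    } else if (joins_above) {
        /* case 2: borders the entry above; pull its start back */
        mp[i].addr = addr;
        mp[i].units += units;
    } else {
        /* case 3: contiguous to nothing; insert a new entry */
        for (int j = nmap - 1; j > i; j--)
            mp[j] = mp[j - 1];
        mp[i].addr = addr;
        mp[i].units = units;
    }
}
```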
11. • Considering the previous example, if the kernel frees 50 units of swap space
starting at address 101, the swap map will contain a new entry (case 3). If the
kernel then frees 100 units starting at address 1, it adjusts the first entry of the
swap map (case 2). Suppose the kernel now requests 200 units of swap space,
since the first entry cannot satisfy the requirement, the second entry is modified
and space is given from address 251 to 450, both inclusive.
• These actions are shown on the next slide:
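Replaying the example above through the two sketches reproduces the map states described; the initial 10000-unit map starting at address 1 is an assumed figure chosen to match the arithmetic of the example.

```c
/* Replaying the example above; comments show the map after each call. */
struct map swapmap[50] = { { 1, 10000 } };   /* assumed initial map */

void example(void)
{
    map_malloc(swapmap, 50, 100);   /* -> 1;   map: (101, 9900) */
    map_malloc(swapmap, 50, 50);    /* -> 101; map: (151, 9850) */
    map_malloc(swapmap, 50, 100);   /* -> 151; map: (251, 9750) */

    map_free(swapmap, 50, 101, 50); /* case 3: map: (101, 50), (251, 9750) */
    map_free(swapmap, 50, 1, 100);  /* case 2: map: (1, 150), (251, 9750) */

    map_malloc(swapmap, 50, 200);   /* first entry too small;
                                       -> 251; map: (1, 150), (451, 9550) */
}
```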
12. Allocating Swap Space from Secondary Entry in Map
13. Swapping Processes Out of Main Memory
• A process is swapped out if the kernel needs space in memory. It needs space in
memory in the following situations:
• The fork system call must allocate space for a child process.
• The brk system call increases the size of a process.
• A process becomes larger by the natural growth of its stack.
• The kernel wants to free space in memory for processes it had previously
swapped out and should now swap in.
14. • When the kernel decides that a process is eligible for swapping from main memory, it
decrements the reference count of each region in the process and swaps the region out
if its reference count drops to 0.
• The kernel allocates space on a swap device and locks the process in memory (for cases
1-3), preventing the swapper from swapping it out while the current swap operation is in
progress.
• The kernel saves the swap address of the region in the region table entry.
• The buffer cache is bypassed when the kernel swaps data between the swap device and
user address space.
• If memory is organized in pages, the data to be swapped out is likely to be discontiguous
in physical memory.
• The kernel must gather the page addresses of data to be swapped out, and the disk
driver may use the collection of page addresses to set up the I/O. The swapper waits for
each I/O operation to complete before swapping out other data.
15. • It is not necessary that the kernel write the entire virtual address space of a process
to a swap device.
• Instead, it copies the physical memory assigned to a process to the allocated space
on the swap device, ignoring unassigned virtual addresses.
• When the kernel swaps the process back into memory, it knows the virtual address
map of the process, so it can reassign the process to the correct virtual addresses.
• The kernel eliminates an extra copy from a data buffer to physical memory by
reading the data into the physical memory locations that were previously set up to
conform to the virtual address locations.
16. Mapping Process Space onto the Swap Device
17. Swapping Processes Into the Main Memory
• The swapper process (process 0), is the only process that swaps processes into
memory from swap devices.
• After system initialization, the swapper goes into an infinite loop, where its only
task is to do process swapping.
• The swapper sleeps if there is no work for it, and it executes only in kernel mode.
• When the swapper wakes up to swap processes in, it selects the process that has spent the longest time in the state "ready to run but swapped out".
19. • The swapper tries to swap out sleeping processes rather than ready-to-run
processes, because ready-to-run processes might get scheduled earlier than sleeping
processes.
• A ready-to-run process must be core resident for at least 2 seconds before being
swapped out, and a process to be swapped in must have been swapped out for at
least 2 seconds.
• If the swapper cannot find a process to swap out, or if neither the process to be swapped in nor the process to be swapped out has accumulated more than 2 seconds of residence time in its environment, then the swapper sleeps on the event that
it wants to swap a process into memory but cannot find room for it.
• The clock will awaken the swapper once a second in that state. The kernel also
awakens swapper if another process goes to sleep, since it may be more eligible for
swapping out than the processes previously considered by the swapper. In any case,
the swapper wakes up and begins execution from the beginning, attempting to swap
in eligible processes
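Pulling together the rules from the last few slides, a compressed sketch of the swapper's main loop might look as follows. Every helper function here is a hypothetical placeholder for scheduler and memory queries, not a real kernel interface; the actual algorithm carries considerably more bookkeeping.

```c
/* A compressed sketch of the swapper's main loop (process 0). */
struct proc;

extern struct proc *longest_swapped_ready(void); /* ready to run, swapped out the
                                                    longest, out >= 2 seconds */
extern struct proc *choose_victim(void);         /* prefers sleeping processes; a
                                                    ready-to-run victim needs >= 2
                                                    seconds of memory residence */
extern int  room_for(struct proc *p);            /* enough free memory to bring p in */
extern void swap_in(struct proc *p);
extern void swap_out(struct proc *p);
extern void swapper_sleep(void);                 /* the clock rewakes us once a second */

void swapper(void)
{
    for (;;) {
        struct proc *in = longest_swapped_ready();
        if (in == 0) {
            swapper_sleep();        /* nothing to swap in: wait for work */
            continue;
        }
        while (!room_for(in)) {     /* make room by swapping processes out */
            struct proc *out = choose_victim();
            if (out == 0) {         /* no eligible victim found */
                swapper_sleep();
                in = 0;             /* re-select on wakeup: a better candidate
                                       may have appeared in the meantime */
                break;
            }
            swap_out(out);
        }
        if (in != 0)
            swap_in(in);
    }
}
```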
20. • Here is an example of process scheduling in a system where A, B, C, and D are same-sized processes and only 2 processes can stay in main memory.
• This is the scheduling timeline:
21. Sequence of Swapping operations
22. • The swapper chooses processes to swap in based on the amount of time the
processes had been swapped out.
• Another criterion could have been to swap in the highest-priority process that is
ready to run, since such processes deserve a better chance to execute.
• It has been demonstrated that such a policy results in "slightly" better throughput
under heavy system load.
23. • The algorithm for choosing a process to swap out has some serious flaws:
• The swapper swaps out a process based on its priority, memory-residence time,
and nice value.
• Although it swaps out a process only to make room for a process being swapped
in, it may swap out a process that does not provide enough memory for the
incoming process.
• For instance, if the swapper attempts to swap in a process that occupies 1
megabyte of memory and the system contains no free memory, it is futile to swap
out a process that occupies only 2K bytes of memory.
• An alternative strategy would be to swap out groups of processes only if they
provide enough memory for the incoming process.
24. • If the swapper sleeps because it could not find enough memory to swap in a
process, it searches again for a process to swap in although it had previously
chosen one.
• The reason is that other swapped processes may have awakened in the meantime
and they may be more eligible for swapping in than the previously chosen
process.
• But that is small solace to the original process still trying to be swapped in.
• In some implementations, the swapper tries to swap out many smaller processes
to make room for the big process to be swapped in before searching for another
process to swap in; this is the revision in the swapper algorithm shown by the
comments in the algorithm given above.
25. • If the swapper chooses a "ready-to-run" process to swap out, it is possible that
the process had not executed since it was previously swapped in. The following
figure shows such a case, where the kernel swaps in process D at the 2-second
mark, schedules process C, and then swaps out process D at the 3-second mark in
favor of process E (because of the interaction of the nice value) even though
process D had never run. Such thrashing is clearly undesirable.
• If the swapper attempts to swap out a process but cannot find space on the swap
device, a system deadlock could arise if the following four conditions are met:
• All the processes in main memory are asleep
• All the ready-to-run processes are swapped out
• There is no room on the swap device for new processes
• There is no room in main memory for incoming processes
26. Demand Paging
• Machines whose memory architecture is based on pages and whose CPU has
restartable instructions can support a kernel that implements a demand paging
algorithm, swapping pages of memory between main memory and a swap device.
• Demand paging systems free processes from size limitations otherwise imposed
by the amount of physical memory available on a machine.
• The kernel still has a limit on the virtual size of a process, dependent on the
amount of virtual memory the machine can address.
• Demand paging is transparent to the user except for the virtual size permissible
to a process.
27. • The portion of the text section which contains the frequently called subroutines
and program loops is a small subset of the total space of the process. This is
known as the principle of "locality".
• The working set of a process is the set of pages that the process has referenced in
its last n memory references; the number n is called the window of the working
set.
• If the working set stays in the memory, swapping could be reduced.
• When a process addresses a page that is not in its working set, it incurs a page
fault; in handling the fault, the kernel updates the working set, reading in pages
from a secondary device if necessary.
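To make the window parameter n concrete, here is a toy user-space illustration of the working-set definition above; the reference string and window size are made up.

```c
/* Toy illustration: the working set at time t is the set of distinct
 * pages touched by the last WINDOW references. */
#include <stdio.h>

#define WINDOW 4   /* the window n of the working set */

int main(void)
{
    int refs[] = { 1, 2, 3, 1, 1, 2, 5, 1, 2, 4 };  /* page reference string */
    int nrefs = sizeof refs / sizeof refs[0];

    for (int t = WINDOW; t <= nrefs; t++) {
        int seen[16] = { 0 };
        printf("working set after reference %d: {", t);
        for (int k = t - WINDOW; k < t; k++) {
            if (!seen[refs[k]]) {       /* print each page once per window */
                seen[refs[k]] = 1;
                printf(" %d", refs[k]);
            }
        }
        printf(" }\n");
    }
    return 0;
}
```

A larger WINDOW makes the working set more stable, which is the point the next slide's diagram makes: wider windows mean fewer faults.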
28. Working set of a process
29. • As shown in the diagram, working sets with larger window sizes are more
efficient, implying that the process will not fault as often.
• It is expensive to remember the order of page references. Instead, systems
approximate a working set model by setting a reference bit whenever a process
accesses a page and by sampling memory references periodically: If a page was
recently referenced, it is part of a working set; otherwise, it "ages" in memory
until it is eligible for swapping.
• When a process accesses a page that is not part of its working set, it incurs
a validity page fault. The kernel suspends execution of the process until it reads
the page into memory and makes it accessible to the process. When the page is
loaded in memory, the process restarts the instruction it was executing when it
incurred the fault. Thus, the implementation of a paging subsystem has two
parts: swapping rarely used pages to a swapping device and handling page faults.
30. Data Structures for Demand Paging
• The kernel contains 4 major data structures to support low-level memory
management functions and demand paging:
• page table entries
• disk block descriptors
• page frame data table (called pfdata for short)
• swap-use table
• pfdata table is allocated once for the lifetime of the system, but allocations for
the other structures are dynamic.
31. • As seen previously, the region contains page tables to access physical memory.
Each entry of a page table contains the physical address of the page, protection
bits indicating whether processes can read, write or execute from the page, and
the following bit fields to support demand paging:
• Valid
• Reference
• Modify
• Copy on write
• Age
32. • The kernel sets the valid bit to indicate that the contents of a page are legal, but
the page reference is not necessarily illegal if the valid bit is not set.
• The reference bit indicates whether a process recently referenced a page, and
the modify bit indicates whether a process recently modified the contents of a
page.
• The copy on write bit, used in the fork system call, indicates that the kernel must
create a new copy of the page when a process modifies its contents.
• The kernel manipulates the age bits to indicate how long a page has been a
member of the working set of a process.
• Assume the kernel manipulates the valid, copy on write, and age bits, and the
hardware sets the reference and modify bits of the page table entry (hardware that does not have these capabilities is considered later).
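Collecting the fields above, a page table entry might be sketched like this; the field widths and names are illustrative, since real layouts are machine dependent.

```c
/* Sketch of a page table entry with the demand-paging fields above. */
struct pte {
    unsigned pfn   : 20;  /* physical page frame number */
    unsigned prot  : 3;   /* read/write/execute protection bits */
    unsigned valid : 1;   /* contents of the page are legal */
    unsigned ref   : 1;   /* recently referenced (set by hardware) */
    unsigned mod   : 1;   /* recently modified (set by hardware) */
    unsigned cow   : 1;   /* copy the page on the next write (set by fork) */
    unsigned age   : 5;   /* how long the page has been in the working set */
};
```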
33. The structure of region
34. • Each page table entry is associated with a disk block descriptor, which describes
the disk copy of the virtual page.
• Processes that share a region therefore access common page table entries and
disk block descriptors.
• The contents of a virtual page are either in a particular block on a swap device,
in an executable, or not on a swap device. If the page is on a swap device, the
disk block descriptor contains the logical device number and block number
containing the page contents.
• If the page is contained in an executable file, the disk block descriptor contains
the logical block number in the file that contains the page; the kernel can quickly
map this number into its disk address.
• The disk block descriptor also indicates two special conditions set during exec:
that a page is "demand fill" or "demand zero", studied later.
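A possible sketch of a disk block descriptor carrying the information just described; the names and types are assumptions.

```c
/* Sketch of a disk block descriptor: where the disk copy of a virtual
 * page lives, or how to synthesize it on first touch. */
struct dblock_desc {
    enum {
        DB_SWAP,         /* copy lives in a block on a swap device */
        DB_FILE,         /* copy lives in the executable file */
        DB_DEMAND_FILL,  /* set by exec: contents filled on first fault */
        DB_DEMAND_ZERO   /* set by exec: contents cleared on first fault */
    } type;
    int  dev;    /* logical device number */
    long blkno;  /* block number on the device or in the file */
};
```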
35. Pfdata table
• The pfdata table describes each page of physical memory and is indexed by page
number. It contains these fields:
• State of the page, indicating that the page is on a swap device or executable file,
that DMA is currently underway for the page (reading data from a swap device),
or that the page can be reassigned.
• The number of processes that reference the page. The reference count equals the
number of valid page table entries that reference the page. It may differ from the
number of processes that share regions containing the page, as will be described
below when reconsidering the algorithm fork.
• The logical device (swap or file system) and block number that contains a copy of
the page.
• Pointers to other pfdata table entries on a list of free pages and on a hash queue
of pages.
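The four kinds of fields above might be collected into an entry like this; again the names and types are illustrative.

```c
/* Sketch of a pfdata (page frame data) table entry, one per page of
 * physical memory, indexed by page number. */
struct pfdata {
    enum { PF_ONSWAP, PF_INFILE, PF_DMA, PF_FREE } state;
    int  refcount;              /* valid PTEs referencing the frame */
    int  dev;                   /* swap device or file system with a copy */
    long blkno;                 /* block number of that copy */
    struct pfdata *free_next;   /* free-list linkage */
    struct pfdata *hash_next;   /* hash-queue linkage */
};
```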
36. Relationship between page table entries, disk block descriptors, pfdata table
entries, and the swap-use count table.
37. Fork in Paging System
• In swapping systems, the kernel copies the address space of the parent. But in
paging system, for shared regions, kernel just increments the reference count in
the region table entry to share the region. And for private regions such as data
and stack, the kernel allocates a new region table entry and page table. Then it
examines the pages of the parent process and, if they are valid, increments the
reference count in pfdata table entry, sharing the page (as opposed to sharing the
region).
• The kernel sets the "copy-on-write" bit in fork, for the shared pages in the private
regions. If any process writes to such a shared page, it incurs a protection fault. In the fault handler, the kernel makes a copy of the page, and then the process can write to its private copy. So the copying is deferred until a write operation actually occurs.
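A sketch of the protection-fault path for a copy-on-write page, reusing the struct pte sketch from slide 32; the frame helpers are hypothetical placeholders for the kernel's page allocator and pfdata bookkeeping.

```c
extern int  frame_refcount(unsigned pfn);             /* hypothetical helpers */
extern unsigned alloc_frame(void);
extern void copy_frame(unsigned dst, unsigned src);
extern void frame_decref(unsigned pfn);

/* Called when a process writes a page whose copy-on-write bit is set. */
void cow_fault(struct pte *p)
{
    if (frame_refcount(p->pfn) > 1) {
        /* still shared: give the writer a private copy of the frame */
        unsigned pfn = alloc_frame();
        copy_frame(pfn, p->pfn);
        frame_decref(p->pfn);
        p->pfn = pfn;
    }
    p->cow = 0;   /* sole owner: the deferred copy is done, allow the write */
}
```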
38. The diagram below shows the kernel data structures after
the fork system call:
39. Exec in a Paging System
• On a demand paged system, the executable file may be too large to fit in the
available main memory. Therefore, the kernel does not preassign memory to
the executable file but "faults" it in, assigning memory as needed. It first assigns the
page tables and disk block descriptors for the executable file, marking the page
table entries "demand fill" (for non-bss data) or "demand zero" (for bss data).
Following a variant of read algorithm for reading the file into memory, the
process incurs a validity fault as it reads each page. The fault handler notes
whether the page is "demand fill", meaning its contents will immediately be
overwritten with the contents of the executable file so it need not be cleared, or
that it is "demand zero", meaning that its contents should be cleared. If the
process cannot fit into memory, the page-stealer process periodically swaps
pages from memory, making room for the incoming file.
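A sketch of how a validity-fault handler might act on the two exec markings, reusing struct dblock_desc from the slide-34 sketch; the helpers are hypothetical.

```c
extern void zero_frame(unsigned pfn);                        /* hypothetical */
extern void read_exec_block(int dev, long blkno, unsigned pfn);

/* First touch of a page that exec marked "demand fill" or "demand zero". */
void fill_page(struct dblock_desc *d, unsigned pfn)
{
    switch (d->type) {
    case DB_DEMAND_ZERO:
        zero_frame(pfn);             /* bss: contents must be cleared */
        break;
    case DB_DEMAND_FILL:
        /* contents are immediately overwritten from the executable,
         * so the frame need not be cleared first */
        read_exec_block(d->dev, d->blkno, pfn);
        break;
    default:                         /* regular swap- or file-backed page */
        read_exec_block(d->dev, d->blkno, pfn);
        break;
    }
}
```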
40. • There are some inefficiencies in this scheme:
• A process incurs a page fault when reading each page of the executable file, even though it may
never access the page.
• The page stealer may swap pages from memory before the exec is done, resulting in two extra
swap operations per page if the process needs the page early.
• To make exec more efficient, the kernel can demand page directly from the executable file if the
data is properly aligned, as indicated by a special magic number. However, use of standard
algorithms (such as bmap) to access a file would make it expensive to demand page from indirect
blocks because of the multiple buffer cache accesses necessary to read a block.
• Consistency problems could arise because bmap is not reentrant. The kernel sets various I/O
parameters in the u-area during the read system call. If a process incurs a page fault during
a read system call when attempting to copy data to user space, it would overwrite these fields in
the u-area to read the page from the file system. Therefore, the kernel cannot use the regular
algorithms to fault in pages from the file system. The algorithms are, of course, reentrant in
regular cases, because each process has a separate u-area and a process cannot simultaneously
execute multiple system calls.
41. The Page-Stealer Process
• The page stealer is a kernel process that swaps out memory pages that are no
longer part of the working set of a process.
• It is created on system initialization and is invoked throughout the system lifetime
when the system is low on free pages. It examines every active, unlocked region and
increments the age field of all valid pages.
• The kernel locks a region when a process faults on a page in the region, so that
the page stealer cannot steal the page being faulted in.
42. • There are two paging states for a page in memory: the page is aging and is not yet
eligible for swapping, or the page is eligible for swapping and is available for
reassignment to other virtual pages.
• The first state indicates that a process recently accessed the page. Some
machines set a reference bit when they reference a page, but software methods
can be substituted if the hardware does not have this feature.
• The page stealer turns off the reference bit for such pages but remembers how
many examinations have passed since the page was last referenced.
• The first state thus consists of several substates, corresponding to the number of
passes the page stealer makes before the page is eligible for swapping.
• When the number exceeds a threshold value, the kernel puts the page into the
second state, ready to be swapped.
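The two states and the aging substates can be sketched as a single pass of the page stealer; the threshold value and field names are illustrative.

```c
/* Sketch of one page-stealer pass: clear the reference bit, age
 * unreferenced pages, and mark pages past the threshold as eligible
 * for swapping (the second state described above). */
#define AGE_THRESHOLD 3   /* passes before a page becomes eligible */

struct page {
    int referenced;   /* set (by hardware or software) on each access */
    int age;          /* stealer passes since the page was last referenced */
    int eligible;     /* ready to be swapped / reassigned */
};

void page_stealer_pass(struct page *pages, int npages)
{
    for (int i = 0; i < npages; i++) {
        if (pages[i].referenced) {
            pages[i].referenced = 0;   /* remembered: restart its aging */
            pages[i].age = 0;
            pages[i].eligible = 0;
        } else if (++pages[i].age > AGE_THRESHOLD) {
            pages[i].eligible = 1;     /* second state: ready to be swapped */
        }
    }
}
```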
43. State diagram for Page Aging
44. Page Faults
• There are two types of page faults: validity faults and protection faults.
• The fault handlers may have to read a page from disk to memory and sleep
during the I/O operation, therefore, they are an exception to the general rule that
interrupt handlers cannot sleep.
• Validity Fault Handler
• If a process attempts to access a page whose valid bit is not set, it incurs a validity
fault and then the kernel invokes the validity fault handler.
45. Protection Fault Handler
• If a process accesses a valid page but its permission bits do not permit access to
it, the process incurs a protection fault. A process also incurs a protection fault
when it attempts to write a page whose copy on write bit was set during
the fork system call. The kernel must determine whether permission was denied
because the page requires a copy on write or whether something truly illegal
happened.
46. A Hybrid System with Swapping and Demand Paging
• Demand paging systems treat memory more flexibly than swapping systems. But
there could be situations where the page stealer and validity fault handler thrash
(spend more time swapping pages than executing processes) because of
shortage of memory. If the sum of the working sets of all processes is greater
than the physical memory on a machine, the fault handler will usually sleep,
because it cannot allocate pages for a process. The page stealer will not be able
to steal pages fast enough, because all pages are in a working set. System
throughput will be affected badly.
47. • The System V kernel runs both swapping and demand paging to avoid
thrashing. When the kernel is not able to allocate pages for a process,
it wakes up the swapper and puts the calling process in a state
equivalent to "ready to run but swapped". The swapper swaps out
entire processes until available memory exceeds the high-water mark.
For each process swapped out, it makes one "ready-to-run but
swapped" process ready to run. It does not swap those processes in
via the normal swapping algorithm but lets them fault in pages as
needed. This method lowers the system fault rate and reduces
thrashing.