This document discusses virtual memory and demand paging. It begins with background on virtual memory and how it allows programs to be larger than physical memory. It then turns to demand paging, in which pages are brought into memory only when a reference requires them, and describes how page tables track valid and invalid pages and raise page faults when an invalid page is accessed. Finally, it covers page-replacement algorithms, which select a page to remove from memory when a new page is needed but no frame is free.
The objectives of these slides are:
- To describe the benefits of a virtual memory system
- To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames
- To discuss the principle of the working-set model
Virtual Memory
• Copy-on-Write
• Page Replacement
• Allocation of Frames
• Thrashing
• Operating-System Examples
Background
Page Table When Some Pages Are Not in Main Memory
Steps in Handling a Page Fault
A demand-paging system is similar to the paging system discussed earlier, with one difference: it uses swapping.
Processes reside on secondary memory (which is usually a disk).
When we want to execute a process, we swap it into memory.
Rather than swapping the entire process into memory, however, we use a lazy swapper, which swaps a page into memory only when that page is needed.
Since we are now viewing a process as a sequence of pages rather than one large contiguous address space, the use of the term swap is not technically correct.
A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
We shall thus use the term pager, rather than swapper, in connection with demand paging.
In an operating system that uses paging for virtual memory management, page-replacement algorithms decide which memory pages to page out (swap out, or write to disk) when a page of memory needs to be allocated.
Lecture 8- Virtual Memory Final.pptx
1. Operating Systems: CSE 3204
ASTU
Department of CSE
Lecture 8 - Virtual Memory
Chapter Three
Memory Management
Outline
Demand Paging
Page Replacement
Allocation of Frames
Thrashing
2. Background
• First requirement for execution: instructions must be in physical memory.
• One approach: place the entire logical address space in main memory.
• Overlays and dynamic loading may relax this requirement, but the size of the program is still limited to the size of main memory.
• Normally the entire program is not needed in main memory:
• Programs have error-handling code that is rarely executed.
• Arrays, lists, and tables may be declared as 100 by 100 elements but seldom use more than 10 by 10 elements.
• An assembler may have room for 3000 symbols, although the average program contains fewer than 200 symbols.
• Certain portions or features of the program are used rarely.
• Benefits of the ability to execute a program that is only partially in memory:
• Users can write programs for an extremely large virtual address space.
• More programs can run at the same time.
• Less I/O is needed to load or swap each user program into memory.
3. Background (Cont….)
Virtual memory is a technique that allows the execution of processes that may
not be completely in memory.
• Programs are larger than main memory.
• VM abstract main memory into an extremely large, uniform array of
storage.
Separation of user logical memory from physical memory.
• Only part of the program needs to be in memory for execution.
• Logical address space can therefore be much larger than physical address
space.
• Allows address spaces to be shared by several processes.
• Allows for more efficient process creation.
• Frees the programmer from memory constraints.
Virtual memory can be implemented via:
• Demand paging
• Demand segmentation
We only cover demand paging.
For demand segmentation refer research papers. -> IBM OS/2, Burroughs’
computer systems
4. Virtual Memory That is Larger Than Physical Memory
5. Demand Paging
• A paging system with swapping.
• When we execute a process, we swap it into memory (next figure).
• For demand paging, we use a lazy swapper: it never swaps a page into memory unless required, i.e., a page is brought into memory only when it is needed.
• Less I/O needed
• Less memory needed
• Faster response
• More users
• Page is needed ⇒ reference to it:
• invalid reference ⇒ abort
• not-in-memory ⇒ bring to memory
6. Transfer of a Paged Memory to Contiguous Disk Space
7. Valid-Invalid Bit
With each page-table entry a valid–invalid bit is associated (1 = in memory, 0 = not in memory).
Initially the valid–invalid bit is set to 0 on all entries.
Example of a page-table snapshot:
During address translation, if the valid–invalid bit in a page-table entry is 0 ⇒ page fault.
[Figure: page-table snapshot showing frame numbers and valid–invalid bits; pages marked 1 are in memory, pages marked 0 are not]
8. Page table when some pages are not in main memory
9. Page Fault
• The first reference to a page that is not in memory will trap to the OS ⇒ page fault.
• The OS looks at another (internal) table to decide:
• Invalid reference ⇒ abort.
• Just not in memory ⇒ continue:
• Get an empty frame.
• Swap the page into the frame.
• Reset the tables, set the validation bit = 1.
• Restart the instruction.
10. Steps in Handling a Page Fault
1. Check the internal table to determine whether this reference is valid or invalid.
2. If the reference is invalid, then terminate.
3. Find a free frame.
4. Schedule a disk operation.
5. Modify the internal table to indicate that the page is in main memory.
6. Restart the instruction.
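The numbered steps above can be mirrored in a short sketch. This is not OS code; it is a hypothetical Python model with a dict-based page table, a free-frame list, and a dict standing in for the backing store, used only to make the fault-handling sequence concrete.

```python
# Minimal model of the page-fault handling steps (hypothetical structures).
from collections import namedtuple

PTE = namedtuple("PTE", ["frame", "valid"])

backing_store = {0: "page0-data", 1: "page1-data", 2: "page2-data"}  # disk copy of each page
page_table = {p: PTE(frame=None, valid=False) for p in backing_store}  # nothing resident yet
free_frames = [0, 1]          # physical frames currently free
memory = {}                   # frame number -> page contents

def access(page):
    """Reference a page; service a page fault if its valid bit is 0."""
    if page not in page_table:                       # steps 1-2: invalid reference -> terminate
        raise MemoryError(f"invalid reference to page {page}")
    entry = page_table[page]
    if not entry.valid:                              # page fault
        frame = free_frames.pop()                    # step 3: find a free frame (replacement not shown)
        memory[frame] = backing_store[page]          # step 4: "disk read" into the frame
        page_table[page] = PTE(frame=frame, valid=True)  # step 5: update table, set valid bit
        # step 6: the faulting instruction would now be restarted
    return memory[page_table[page].frame]

print(access(1))   # first access faults and loads the page
print(access(1))   # second access hits in memory
```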
11. What happens if there is no free frame?
• Page replacement – find some page in memory that is not really in use and swap it out. We need:
• a replacement algorithm
• good performance – we want an algorithm that results in the minimum number of page faults.
• The same page may be brought into memory several times.
• Theoretically, some programs could access several pages of new memory with each instruction, causing multiple page faults per instruction.
• But analysis of programs shows locality of reference.
• Hardware support for demand paging:
• Page table
• Secondary memory (a high-speed disk)
12. Software support to demand paging
• Additional software support is also required: restarting the instruction after a page fault.
• A page fault can occur at any time during execution.
• Example: add the contents of A and B and place the result in C.
• 1. Fetch and decode the instruction
• Fetch A
• Fetch B
• Add A and B
• Store the sum in C.
• If a page fault occurs when we try to store into C, we have to restart the instruction.
13. Software support to demand paging…
• A difficulty arises if one instruction modifies several different locations.
• The IBM 360/370 MVC (move character) instruction can move up to 256 bytes from one location to another, and the source and destination may overlap.
• If a page fault occurs after a partial move, we cannot simply redo the instruction when the regions overlap.
• Solutions:
• 1. Use microcode to try to access both ends of both blocks first: if a page fault is going to occur, it occurs before anything is moved.
• 2. Use temporary registers to hold the values of the overwritten locations: if a page fault occurs, the old values are written back to memory, restoring the memory state to what it was before the instruction started.
14. Hardware support and software support for demand paging…
• A similar difficulty occurs in machines that use special addressing modes that use a register as a pointer:
• auto-increment: increment the register after using it
• auto-decrement: decrement the register before using it
• MOV (R2)+, -(R3)
• If a page fault occurs while storing into the location addressed by R3, we have to restart the instruction by restoring the values of R2 and R3 – another case where an instruction modifies several different locations.
• Solution: use a special status register to record the register number and amount modified, so the OS can undo the effect of a partially executed instruction that causes a page fault.
• Everything should be transparent to the user.
15. Performance of Demand Paging
• Let p be the probability of a page fault (page-fault rate), 0 ≤ p ≤ 1.0.
• If p = 0, there are no page faults: effective access time = memory access time.
• If p = 1, every reference is a fault.
• Effective Access Time (EAT):
EAT = (1 – p) × memory access time + p × page-fault service time
Major operations during a page fault: trap to the OS; save user registers and process state; issue a disk read; wait for the interrupt from the disk; wait for the CPU; restore the process state.
Demand Paging Example
Memory access time = 100 nanoseconds, page-fault service time = 25 milliseconds (25,000,000 ns)
EAT = (1 – p) × 100 + p × 25,000,000 = 100 + 24,999,900 × p (nanoseconds)
• EAT grows linearly with the page-fault rate, so it is important to keep the page-fault rate low.
• Otherwise EAT increases and slows process execution dramatically.
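A quick sketch of the EAT arithmetic above (times in nanoseconds; 25 ms = 25,000,000 ns), showing how fast EAT grows even for tiny fault rates:

```python
# Effective access time for the example numbers on this slide.
MEM_ACCESS_NS = 100
FAULT_SERVICE_NS = 25_000_000

def eat(p):
    """Effective access time for page-fault probability p."""
    return (1 - p) * MEM_ACCESS_NS + p * FAULT_SERVICE_NS

for p in (0.0, 0.001, 0.01):
    print(f"p = {p}: EAT = {eat(p):,.1f} ns")
# Even p = 0.001 gives an EAT of about 25,100 ns, roughly a 250x slowdown,
# which is why the page-fault rate must be kept very low.
```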
16. Advantages of VM: Process Creation
• Virtual memory provides other benefits during process creation:
a. Copy-on-Write (COW): allows both parent and child processes to initially share the same pages in memory. Only if either process modifies a shared page is the page copied.
• COW allows more efficient process creation, since only modified pages are copied.
• Free pages for the copies are allocated from a pool of zeroed-out pages.
b. Memory-Mapped Files: allow file I/O to be treated as routine memory access by mapping a disk block to a page in memory.
• A file is initially read using demand paging: a page-sized portion of the file is read from the file system into a physical page. Subsequent reads and writes to the file are treated as ordinary memory accesses.
• This simplifies file access by treating file I/O as memory access rather than read() and write() system calls.
• It also allows several processes to map the same file, so the pages in memory can be shared.
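As a rough illustration of the memory-mapped file idea, the sketch below uses Python's standard mmap module; the file name example.dat and its contents are only examples.

```python
# Map a file into memory and access it as ordinary memory.
import mmap

with open("example.dat", "w+b") as f:
    f.write(b"hello, demand paging!")      # make sure the file has some content
    f.flush()
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file into memory
        print(mm[:5])                      # reads go through ordinary memory access
        mm[:5] = b"HELLO"                  # writes modify the mapped pages
```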
18. Page Replacement
• Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.
Basic Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
- If there is a free frame, use it.
- If there is no free frame, use a page-replacement algorithm to select a victim frame.
3. Read the desired page into the (newly) free frame. Update the page table.
4. Restart the process.
20. Reducing overhead: modify bit
Use a modify (dirty) bit to reduce the overhead of page transfers – only modified pages are written back to disk.
The modify bit is maintained with hardware support; it is set when the page is modified.
While replacing a page:
• If the modify bit is set, we must write that page back to disk.
• If it is not set, the page has not been changed since it was read in, so we can avoid writing it back.
Page Replacement Algorithms
• Page replacement completes the separation between logical memory and physical memory – an enormous virtual memory can be provided on a smaller physical memory.
• Two major problems must be solved to implement demand paging:
• Frame-allocation algorithm
• If multiple processes exist in memory, we have to decide the number of frames for each process.
• Page-replacement algorithm
• We have to select the frame that is to be replaced.
21. Page Replacement Algorithms
We want the lowest page-fault rate.
We evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.
In all our examples, the reference string is: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
The page number is the address reference divided by the page size: if the address reference is 0432 and the page size is 100, then the page number is 0432 / 100 = 4.
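A small sketch of how a raw address trace could be reduced to a page reference string, assuming the page size of 100 from the example above; the address list itself is hypothetical.

```python
# Turn raw address references into a page reference string.
PAGE_SIZE = 100
addresses = [100, 432, 101, 612, 102, 103, 104, 611, 102]

reference_string = []
for addr in addresses:
    page = addr // PAGE_SIZE           # e.g. 0432 // 100 = 4
    if not reference_string or reference_string[-1] != page:
        reference_string.append(page)  # consecutive references to the same page count once
print(reference_string)                # [1, 4, 1, 6, 1, 6, 1]
```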
Graph of Page Faults Versus the Number of Frames
22. First-In-First-Out (FIFO) Algorithm
The oldest page is replaced: keep a FIFO queue of pages and replace the page at the head of the queue.
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 3 frames (3 pages can be in memory at a time per process): 9 page faults.
With 4 frames: 10 page faults.
More frames can mean more page faults.
[Figure: FIFO frame contents for the reference string with 3 frames and with 4 frames]
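The FIFO behaviour above (9 faults with 3 frames, 10 with 4) can be checked with a short simulation sketch:

```python
# Simulate FIFO replacement and count page faults.
from collections import deque

def fifo_faults(refs, num_frames):
    frames = deque()            # head of the queue is the oldest page
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 page faults
print(fifo_faults(refs, 4))   # 10 page faults -> more frames, more faults
```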
23. Belady's Anomaly
This most unexpected result is known as Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.
24. Optimal Algorithm
• Replace the page that will not be used for the longest period of time.
• 4-frames example: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 ⇒ 6 page faults.
• How do we know which page will not be used for the longest time? In general we cannot, so the optimal algorithm is mainly used as a benchmark for measuring how well other algorithms perform.
[Figure: optimal replacement frame contents for the reference string with 4 frames]
Optimal Page Replacement
25. Least Recently Used (LRU) Algorithm
• The page that has not been used for the longest period of time is replaced.
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 4 frames: 8 page faults.
[Figure: LRU Page Replacement – frame contents for the reference string with 4 frames]
26. LRU Algorithm Implementation
Counter implementation
• Every page-table entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
• When a page needs to be replaced, look at the counters to determine which page to replace (the one with the smallest counter value).
• Issues: searching the page table to find the LRU page, overflow of the clock, etc.
Stack implementation – keep a stack of page numbers in a doubly linked form:
• When a page is referenced, move it to the top; this requires up to 6 pointers to be changed.
• Each update is expensive, but there is no search for replacement.
• The top is the most recently used page and the bottom is the LRU page.
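A minimal sketch of LRU replacement, using an OrderedDict to play the role of the stack described above (most recently used at the back, least recently used at the front):

```python
# Simulate LRU replacement and count page faults.
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames = OrderedDict()     # page -> None, ordered from LRU (front) to MRU (back)
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)            # referenced page becomes most recent
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)      # evict the least recently used page
            frames[page] = None
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))    # 8 page faults with 4 frames
```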
27. Use of a Stack to Record the Most Recent Page References
Performance issue: stacks and counters
The updating of the stack or the counters must be done on every memory reference.
If we used an interrupt for every reference, to allow software to update these data structures, it would slow every memory reference by a factor of about 10, and few systems could tolerate such degradation in performance.
Solution: most systems use an LRU approximation implemented with hardware support.
28. LRU Approximation Algorithms
• Reference bit
• With each page associate a bit, initially 0.
• When the page is referenced, the bit is set to 1.
• Replace a page whose bit is 0 (if one exists). We do not know the order of use, however.
• Additional ordering:
• Maintain an 8-bit byte for each page.
• Shifting can be used to record the history (the OS is interrupted, e.g., every 100 msec, and shifts the reference bit into the history byte).
• Second chance
• Needs one reference bit.
• Clock replacement.
• If the page to be replaced (in clock order) has reference bit = 1, then:
• set the reference bit to 0 and set its arrival time to the current time;
• leave the page in memory;
• replace the next page (in clock order), subject to the same rules.
• Degenerates to FIFO if all bits are set.
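A sketch of the second-chance (clock) scheme described above; the choice to set the reference bit when a page is first loaded is an assumption of this model.

```python
# Second-chance (clock) replacement: pages sit in a circular buffer with a
# reference bit; a page whose bit is 1 gets a second chance.
def clock_faults(refs, num_frames):
    frames = [None] * num_frames      # page stored in each frame
    ref_bit = [0] * num_frames
    hand = 0                          # clock hand
    faults = 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1     # hardware sets this on every reference
            continue
        faults += 1
        while ref_bit[hand] == 1:               # give referenced pages a second chance
            ref_bit[hand] = 0
            hand = (hand + 1) % num_frames
        frames[hand] = page                     # victim found: replace it
        ref_bit[hand] = 1                       # assumption: bit set when page is loaded
        hand = (hand + 1) % num_frames
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(clock_faults(refs, 3))
```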
30. Enhanced Second-Chance Algorithm
• Use the reference bit and modify bit as an ordered pair; each page is in one of four classes:
• (0,0) neither recently used nor modified – best page to replace.
• (0,1) not recently used but modified – not quite as good; the page must be written out before replacement.
• (1,0) recently used but clean – probably will be used again soon.
• (1,1) recently used and modified – probably will be used again soon, and must be written out before replacement.
Other algorithms: Counting Algorithms
• Keep a counter of the number of references that have been made to each page.
• LFU (least frequently used) algorithm: replaces the page with the smallest count.
• MFU (most frequently used) algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
• LFU and MFU are not commonly used; their implementation is expensive.
31. Allocation of Frames
• Each process needs a minimum number of frames allocated to it.
• If there is a single process, the entire available memory can be allocated to it.
• Multiprogramming puts two or more processes in memory at the same time, so we must allocate a minimum number of frames to each process.
• Two major allocation schemes:
• fixed allocation
• priority allocation
32. 1. Fixed Allocation
• Equal allocation – e.g., if 100 frames and 5 processes, give each 20 pages.
• Proportional allocation – Allocate according to the size of process.
s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) × m

Example: m = 64, s_1 = 10, s_2 = 127, S = 137
a_1 = (10 / 137) × 64 ≈ 5 frames
a_2 = (127 / 137) × 64 ≈ 59 frames
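A one-function sketch that reproduces the proportional-allocation example above:

```python
# Proportional frame allocation: a_i = (s_i / S) * m.
def proportional_allocation(sizes, m):
    S = sum(sizes)
    return [round(s / S * m) for s in sizes]

print(proportional_allocation([10, 127], 64))  # [5, 59]
```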
2. Priority Allocation: use a proportional allocation scheme based on priorities rather than size.
• If process P_i generates a page fault,
– select for replacement one of its own frames, or
– select for replacement a frame from a process with a lower priority number.
33. Global vs. Local Allocation
• Global replacement – a process selects a replacement frame from the set of all frames; one process can take a frame from another.
• Local replacement – each process selects only from its own set of allocated frames.
• With local replacement, the number of frames allocated to a process does not change.
• Performance depends only on the paging behavior of that process.
• Free frames held by other processes may go unused.
• With global replacement, a process can take a frame from another process.
• Performance depends not only on the paging behavior of that process but also on the paging behavior of other processes.
• In practice, global replacement is used.
34. Thrashing
• If a process does not have “enough” pages, its page-fault rate is very high. This leads to:
• low CPU utilization;
• the operating system thinking it needs to increase the degree of multiprogramming;
• another process being added to the system.
• Thrashing is this high paging activity: a thrashing process spends more time swapping pages in and out than executing.
• If a process does not have as many frames as it has active pages, it will very quickly page fault; and since all its pages are in active use, it will immediately page fault again.
35. Thrashing
• Why does paging work?
Locality model
• Process migrates from one locality to another.
• Localities may overlap.
• Why does thrashing occur?
Σ (size of localities) > total memory size
36. Causes of thrashing
The OS monitors CPU utilization; if it is low, it increases the degree of multiprogramming (MPL).
Consider a process that enters a new execution phase and starts faulting.
It takes frames from other processes; since those processes need the pages in them, they also fault, taking frames from still other processes.
The queue for the paging device grows while the ready queue empties, so CPU utilization decreases (and the OS adds yet more processes).
Solution: give each process as many frames as it needs.
But how do we know how many frames it needs?
The locality model provides hope.
37. Locality model
A locality is a set of pages that are actively used together.
A program is composed of several different localities, which may overlap.
Example: when a subroutine is called, it defines a new locality.
The locality model states that all programs exhibit this memory-reference structure; it is the main reason caching and virtual memory work.
If we allocate enough frames to a process to accommodate its current locality, it will fault until all the pages of that locality are in main memory; then it will not fault again until it changes localities.
If we allocate fewer frames than the current locality requires, the process will thrash.
38. Working-Set Model
Based on locality -> define a parameter Δ, the working-set window: a fixed number of page references.
Example: Δ = 10,000 references.
The most recent Δ references are examined.
WSS_i (working-set size of process P_i) = total number of distinct pages referenced in the most recent Δ (this varies over time).
If Δ is too small, it will not encompass the entire locality.
If Δ is too large, it will encompass several localities.
If Δ = ∞, it will encompass the entire program.
D = Σ WSS_i = total demand for frames.
If D > m (the total number of frames) ⇒ thrashing.
Policy: if D > m, then suspend one of the processes.
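A small sketch of the working-set idea: WSS is computed as the number of distinct pages in the most recent Δ references; the reference string used here is hypothetical.

```python
# Working-set size at a given point in a reference string.
def working_set_size(refs, t, delta):
    """WSS at time t (after the t-th reference), with window size delta."""
    window = refs[max(0, t - delta + 1): t + 1]   # the most recent delta references
    return len(set(window))                       # distinct pages in the window

refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1, 1, 2, 3, 4]
for t in (4, 9, 13):
    print(t, working_set_size(refs, t, delta=5))
```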
39. Working Set
The OS monitors the working set of each process and allocates to it enough frames to cover its working-set size.
If there are enough extra frames, another process can be initiated.
If D > m, the OS suspends a process, and its frames are allocated to other processes.
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible.
However, we have to keep track of the working set.
Keeping track of the working set
Approximate it with an interval timer plus a reference bit. Example: Δ = 10,000 references.
The timer interrupts after every 5000 time units.
Keep 2 history bits in memory for each page.
Whenever the timer interrupts, copy each reference bit into a history bit and set all reference bits to 0.
If one of the bits in memory = 1 ⇒ the page is in the working set.
Why is this not completely accurate? We cannot tell exactly when, within the interval, the reference occurred.
Accuracy can be increased by increasing the frequency of the interrupts, but this also increases the cost.
40. Questions, Comments, and Discussions?