Virtual memory management techniques allow processes to access memory in a virtual address space that can be larger than the actual physical memory. There are three main techniques:
1. Demand paging loads pages into memory only when they are needed, reducing I/O and memory usage but increasing access time when page faults occur.
2. Copy-on-write lets processes share pages until one of them modifies a page, at which point the page is copied so the write does not affect the other process.
3. Page replacement algorithms select victim pages to evict from memory (writing them to disk if modified) when frames are needed for new pages. Least recently used (LRU) is commonly used but not optimal.
Virtual Memory
• Copy-on-Write
• Page Replacement
• Allocation of Frames
• Thrashing
• Operating-System Examples
Background
Page Table When Some Pages Are Not in Main Memory
Steps in Handling a Page Fault
A demand-paging system is similar to a paging system, discussed earlier, with one difference: it uses swapping.
Processes reside on secondary memory (usually a disk).
When we want to execute a process, we swap it into memory.
Rather than swapping the entire process into memory, however, we use a lazy swapper, which brings a page into memory only when that page is needed.
Since we now view a process as a sequence of pages rather than one large contiguous address space, the term swap is not technically correct.
A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
We shall thus use the term pager, rather than swapper, in connection with demand paging.
In computer operating systems, demand paging is a method of virtual memory management. In a system that uses demand paging, the operating system copies a disk page into physical memory only if an attempt is made to access it and that page is not already in memory.
The objectives of these slides are:
- To describe the benefits of a virtual memory system
- To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames
- To discuss the principle of the working-set model
Goals
• Make allocation and swapping easier by making all chunks of memory the same size; call each chunk a “PAGE”.
• Example page sizes are 512 bytes, 1K, 4K, 8K, etc.; pages have been getting bigger over time.
• We’ll discuss reasons why pages should be of a certain size as the week progresses.
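As a small sketch of why power-of-two page sizes are convenient: a virtual address splits into a page number and an offset with nothing but shifts and masks. (The helper below is illustrative, not from the slides.)

```python
# Sketch: splitting a virtual address into (page number, offset)
# for a given page size. Page sizes are assumed to be powers of two.

def split_address(vaddr: int, page_size: int) -> tuple[int, int]:
    """Return (page number, offset within the page)."""
    assert page_size & (page_size - 1) == 0, "page size must be a power of two"
    offset_bits = page_size.bit_length() - 1
    return vaddr >> offset_bits, vaddr & (page_size - 1)

# With 4K pages, address 0x12345 lands on page 18 at offset 837.
print(split_address(0x12345, 4096))  # (18, 837)
```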
1. Background
• Virtual memory separates user logical memory from physical memory.
• Only part of the program needs to be in memory for execution.
• The logical address space can therefore be much larger than the physical address space.
[Figure: Virtual Memory That Is Larger Than Physical Memory]
Loganathan R, CSE, HKBKCE
1. Background Cntd…
• The virtual address space of a process refers to the logical (or virtual) view of how a process is stored in memory.
• The heap grows upward in memory, as it is used for dynamic memory allocation.
• The stack grows downward in memory through successive function calls.
• The large blank space (or hole) between the heap and the stack is part of the virtual address space.
• Virtual address spaces that include holes are known as sparse address spaces.
[Figure: Virtual-address Space]
1. Background Cntd…
• Virtual memory allows files and memory to be shared by two or more processes through page sharing.
• Benefits:
– System libraries can be shared by several processes.
– Processes can share memory.
– Pages can be shared during process creation with the fork() system call.
[Figure: Shared Library Using Virtual Memory]
2. Demand Paging
• Bring a page into memory only when it is needed:
– Less I/O needed
– Less memory needed
– Faster response
– More users
• When a page is needed, a reference is made to it:
– invalid reference ⇒ abort
– not-in-memory ⇒ bring to memory
• Lazy swapper – never swaps a page into memory unless the page will be needed.
– A swapper that deals with pages is a pager.
• H/W support required:
– Page table with a valid-invalid bit or a special value of the protection bits
– Secondary memory to hold those pages that are not present in memory
[Figure: Transfer of a Paged Memory to Contiguous Disk Space]
2. Demand Paging Cntd…
Valid-Invalid Bit
• With each page table entry a valid–invalid bit is associated:
– v ⇒ in-memory
– i ⇒ not-in-memory
• Initially the valid–invalid bit is set to i on all entries.
• During address translation, if the valid–invalid bit in the page table entry is i ⇒ page fault trap to the OS.
[Figure: Page Table When Some Pages Are Not in Main Memory]
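A minimal sketch of the valid-invalid bit check during translation (the table layout and names here are hypothetical, chosen only to illustrate the slide's v/i behavior):

```python
# Toy page table: page number -> ('v', frame) or ('i', None).
# An 'i' entry (or a missing entry) models the page-fault trap to the OS.

class PageFault(Exception):
    pass

def translate(page_table, page_num):
    entry = page_table.get(page_num)
    if entry is None or entry[0] == 'i':
        raise PageFault(page_num)   # i => page fault trap to the OS
    _, frame = entry
    return frame                    # v => page is in this memory frame

table = {0: ('v', 4), 1: ('i', None), 2: ('v', 6)}
print(translate(table, 0))  # 4: page 0 is in frame 4
try:
    translate(table, 1)
except PageFault as e:
    print("page fault on page", e.args[0])  # page fault on page 1
```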
2. Demand Paging Cntd…
Procedure for handling the page fault:
1. The OS looks at another table to decide:
– Invalid reference ⇒ abort
– Just not in memory ⇒ continue
2. Get an empty frame.
3. Swap the page into the frame.
4. Reset the tables.
5. Set the validation bit = v.
6. Restart the instruction that caused the page fault.
[Figure: Steps in Handling a Page Fault]
2. Demand Paging Cntd…
Performance of Demand Paging
• Let p be the probability of a page fault (0 ≤ p ≤ 1.0):
if p = 0, there are no page faults; if p = 1, every reference is a fault.
• Effective Access Time: EAT = (1 – p) × ma (memory access) + p × page fault time
• A page fault causes:
1. Trap to the operating system.
2. Save the user registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on the disk.
5. Issue a read from the disk to a free frame:
– Wait in a queue for this device until the read request is serviced.
– Wait for the device seek and/or latency time.
– Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user (CPU scheduling, optional).
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process state for the other user (if step 6 is executed).
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction.
2. Demand Paging Cntd…
Demand Paging Example
• 3 major components of the page-fault time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
• Memory access time = 200 nanoseconds
• Average page-fault service time = 8 milliseconds
• EAT = (1 – p) × 200 + p × (8 milliseconds)
= (1 – p) × 200 + p × 8,000,000
= 200 + p × 7,999,800
• If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds.
This is a slowdown by a factor of 40!
• If we want performance degradation to be less than 10 percent, we need:
220 > 200 + 7,999,800 × p
20 > 7,999,800 × p
p < 0.0000025
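The arithmetic above can be checked with a few lines (all times in nanoseconds, using the slide's 200 ns memory access and 8 ms fault-service figures):

```python
# Effective access time: EAT = (1 - p) * ma + p * page_fault_time.

def eat_ns(p, ma_ns=200, fault_ns=8_000_000):
    return (1 - p) * ma_ns + p * fault_ns

print(eat_ns(0))         # 200: no page faults at all
print(eat_ns(1 / 1000))  # ~8199.8 ns, i.e. about 8.2 microseconds
# For degradation under 10% (EAT < 220 ns) we need p < 20 / 7,999,800:
print(20 / 7_999_800)    # ~2.5e-06, the slide's p < 0.0000025
```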
3. Copy-on-Write
• Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory.
• If either process modifies a shared page, only then is the page copied.
• COW allows more efficient process creation, as only modified pages are copied.
• Free pages are allocated from a pool of zeroed-out pages.
• Example:
[Figure: Before Process 1 Modifies Page C]
3. Copy-on-Write Cntd…
[Figure: After Process 1 Modifies Page C]
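A toy model (not from the slides) of the page-C example: after a fork, parent and child reference the very same page objects, and a write duplicates only the touched page.

```python
# Copy-on-write sketch: pages are shared objects; a write copies the
# page only if the other process still shares it.

class Page:
    def __init__(self, data):
        self.data = data

def cow_fork(address_space):
    # The child's table initially points at the same page objects.
    return dict(address_space)

def write(space, page_id, data, other_space):
    if space[page_id] is other_space.get(page_id):
        space[page_id] = Page(space[page_id].data)  # copy on first write
    space[page_id].data = data

parent = {'A': Page('a'), 'B': Page('b'), 'C': Page('c')}
child = cow_fork(parent)
print(parent['C'] is child['C'])   # True: shared before any write
write(parent, 'C', 'c2', child)
print(parent['C'] is child['C'])   # False: only page C was copied
print(child['C'].data)             # 'c': the child's view is unchanged
```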
4. Page Replacement
4.1 Basic Page Replacement
1. Find the location of the desired page on disk.
2. If there is a free frame, use it; otherwise, use a page replacement algorithm to select a victim frame, write it to the disk, and change the page and frame tables.
3. Bring the desired page into the (newly) free frame; update the page and frame tables.
4. Restart the process.
• If no frames are free, two page transfers (one out and one in) are required.
• This overhead can be reduced by using a modify bit or dirty bit.
• The modify bit for a page is set by the hardware whenever any word or byte is written; only pages with the bit set must be written back to the disk.
4. Page Replacement Cntd…
• As the number of frames available increases, the number of page faults decreases.
[Figure: Graph of Page Faults Versus the Number of Frames]
4. Page Replacement Cntd…
4.2 First-In-First-Out (FIFO) Algorithm
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• 3 frames (3 pages can be in memory at a time per process):
1 1 4 5
2 2 1 3 → 9 page faults
3 3 2 4
• 4 frames:
1 1 5 4
2 2 1 5 → 10 page faults
3 3 2
4 4 3
• Belady’s Anomaly: more frames ⇒ more page faults in some algorithms.
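A short FIFO simulation reproduces both counts above, making Belady's anomaly concrete:

```python
from collections import deque

# FIFO page replacement: evict the page that entered memory earliest.

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue            # hit: nothing changes under FIFO
        faults += 1
        if len(frames) == nframes:
            frames.remove(queue.popleft())  # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```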
4. Page Replacement Cntd…
FIFO Illustrating Belady’s Anomaly
[Figure: Page-fault curve for FIFO replacement on a reference string]
4. Page Replacement Cntd…
4.3 Optimal Algorithm (OPT or MIN)
• Replace the page that will not be used for the longest period of time.
• Never suffers from Belady's anomaly.
• Difficult to implement, because it requires future knowledge of the reference string.
• 4-frames example: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
1 4
2 → 6 page faults
3
4 5
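With the full reference string in hand, OPT can be simulated directly; on the slide's string with 4 frames it gives the stated 6 faults:

```python
# Optimal (OPT/MIN) replacement: evict the resident page whose next
# use lies farthest in the future (or that is never used again).

def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # Pages never referenced again get next-use "infinity".
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 4))  # 6 page faults, matching the slide
```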
4. Page Replacement Cntd…
4.4 Least Recently Used (LRU) Page Replacement Algorithm
• Replace the page that has not been used for the longest period of time.
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (4 frames)
1 1 1 1 5
2 2 2 2 2 → 8 page faults in total
3 5 5 4 4
4 4 3 3 3
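An LRU simulation (an ordered dict whose front entry is always the least recently used page) counts 8 faults for this string with 4 frames, matching the table:

```python
from collections import OrderedDict

# LRU replacement: on every reference, move the page to the "most
# recent" end; evict from the "least recent" end when frames are full.

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)   # hit: refresh its recency
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)  # evict least recently used
        frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8 page faults
```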
4. Page Replacement Cntd…
LRU page-replacement implementations:
1. Using a counter:
– Every page-table entry has a time-of-use field, and the CPU has a logical counter/clock; every time the page is referenced through this entry, the clock is copied into the time-of-use field.
– When a page needs to be replaced, look for the smallest time value.
2. Using a stack:
– Keep a stack of page numbers in doubly linked form.
– When a page is referenced, move it to the top (requires 6 pointers to be changed).
– No search is needed for replacement.
• Both require H/W support; a S/W implementation through interrupts would slow every memory reference by a factor of at least ten.
4. Page Replacement Cntd…
4.5 LRU Approximation Page Replacement Algorithms
• Each entry in the page table is associated with a reference bit, set by the hardware whenever that page is referenced.
4.5.1 Additional-Reference-Bits Algorithm
– Replace a page whose reference bit is 0 (if one exists).
– To know the order of use, 8-bit shift registers contain the history of page use for the last eight time periods (a page whose register holds 11000100 was used more recently than one with a lower value).
4.5.2 Second-Chance Algorithm
– If the page to be replaced has reference bit = 1:
• set the reference bit to 0 and leave the page in memory;
• consider the next page (in clock order), subject to the same rules.
4.5.3 Enhanced Second-Chance Algorithm
• Enhance the second-chance algorithm by considering the reference bit and the modify bit as an ordered pair:
1. (0, 0): neither recently used nor modified – best page to replace
2. (0, 1): not recently used but modified – not quite as good, because the page will need to be written out before replacement
3. (1, 0): recently used but clean – probably will be used again soon
4. (1, 1): recently used and modified – probably will be used again soon, and the page will need to be written out to disk before it can be replaced
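A minimal sketch of the second-chance (clock) victim search, with pages on a circular list and a clock hand (the data layout is illustrative, not from the slides):

```python
# Second-chance (clock) algorithm: a set reference bit buys the page
# one more sweep of the hand; the first page with bit 0 is evicted.

def second_chance_victim(pages, ref_bits, hand):
    """Advance the clock hand to a page with reference bit 0.
    Returns (victim index, new hand position)."""
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0               # give it a second chance
            hand = (hand + 1) % len(pages)
        else:
            return hand, (hand + 1) % len(pages)

pages = ['A', 'B', 'C', 'D']
ref_bits = [1, 0, 1, 1]
victim, hand = second_chance_victim(pages, ref_bits, 0)
print(pages[victim])  # 'B': A's bit was cleared in passing, B had bit 0
print(ref_bits)       # [0, 0, 1, 1]
```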
4. Page Replacement Cntd…
4.6 Counting-Based Page Replacement
• Keep a counter of the number of references that have been made to each page.
• Least Frequently Used (LFU) algorithm: replaces the page with the smallest count, since an actively used page should have a large reference count.
• Most Frequently Used (MFU) algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
4.7 Page-Buffering Algorithms
• The desired page is read into a free frame from the pool before the victim is written out, so the process can restart as soon as possible; when the victim is later written out, its frame is added to the free-frame pool.
• Maintain a list of modified pages; whenever the paging device is idle, a modified page is written to the disk.
• If the frame contents have not been modified, the page can be reused directly from the free-frame pool if it is needed before that frame is reused.
5. Allocation of Frames
• Each process needs a minimum number of frames.
• Example: IBM 370 – 6 pages to handle the MVC (memory-to-memory) instruction:
– the instruction is 6 bytes and might span 2 pages.
• Allocation Algorithms:
– Equal allocation – for example, if there are 100 frames and 5 processes, give each process 20 frames.
– Proportional allocation – allocate according to the size of the process:
si = size of process pi
S = Σ si
m = total number of frames
ai = allocation for pi = (si / S) × m
Example: m = 64, s1 = 10, s2 = 127, so S = 137
a1 = (10 / 137) × 64 ≈ 5
a2 = (127 / 137) × 64 ≈ 59
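The proportional-allocation arithmetic above in a few lines (rounding to the nearest frame, as the slide's 4.67 → 5 suggests):

```python
# Proportional allocation: a_i = (s_i / S) * m, rounded to whole frames.

def proportional_allocation(sizes, m):
    S = sum(sizes)
    return [round(s / S * m) for s in sizes]

# The slide's example: m = 64 frames, process sizes 10 and 127.
print(proportional_allocation([10, 127], 64))  # [5, 59]
```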
5. Allocation of Frames Cntd…
5.3 Global versus Local Allocation
• Global replacement allows a process to select a replacement frame from the set of all frames, i.e. one process can take a frame from another.
– A process can select a replacement from its own frames or from the frames of any lower-priority process.
– This allows a high-priority process to increase its frame allocation.
• Local replacement requires that each process select from only its own set of allocated frames.
– The number of frames allocated to a process does not change.
6. Thrashing
• Thrashing: high paging activity, i.e. a process is spending more time paging than executing.
6.1 Cause of Thrashing
• If a process does not have enough pages, the page-fault rate is very high. This leads to:
– low CPU utilization, so the OS thinks it needs to increase the degree of multiprogramming, which leads to even more page faults.
6. Thrashing Cntd…
Thrashing Prevention
• Why does thrashing occur?
– A locality is a set of pages that are actively used together.
– Thrashing occurs when: size of locality > total memory size.
– A process migrates from one locality to another.
– Localities may overlap.
Working-Set Model
• Δ ≡ working-set window ≡ a fixed number of page references.
Example: 10,000 instructions
• WSSi (working set size of process Pi) = total number of pages referenced in the most recent Δ (varies in time):
– if Δ is too small, it will not include the entire locality;
– if Δ is too large, it will include several localities;
– if Δ = ∞, it will include the entire program.
6. Thrashing Cntd…
• D = Σ WSSi ≡ total demand for frames
• If D > m ⇒ thrashing
• Policy: if D > m, then suspend one of the processes.
• Approximate the working set with an interval timer + a reference bit.
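The working-set bookkeeping above can be sketched in a few lines (the reference strings below are hypothetical, chosen only to trigger the D > m policy):

```python
# Working-set model: WSS_i = number of distinct pages in the last
# delta references of process i; if D = sum(WSS_i) > m, suspend one.

def wss(refs, delta):
    return len(set(refs[-delta:]))

processes = {
    'P1': [1, 2, 1, 3, 1, 2],   # hypothetical reference strings
    'P2': [7, 7, 8, 9, 7, 8],
}
delta, m = 4, 5
D = sum(wss(r, delta) for r in processes.values())
print(D)       # 6: P1's last 4 refs touch 3 pages, P2's touch 3
print(D > m)   # True: total demand exceeds the frames, thrashing risk
```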
6. Thrashing Cntd…
Page-Fault Frequency Scheme
• Establish an “acceptable” page-fault rate:
– If the actual rate is too low, the process loses a frame.
– If the actual rate is too high, the process gains a frame.