Virtual Memory
Introduction
 Virtual Memory
 Allows the execution of processes that are not completely
in memory
 Abstracts main memory into an extremely large, uniform
array of storage
 Allows processes to share files easily and to implement
shared memory
 Benefits:
 A program would no longer be constrained by the amount
of physical memory that is available
 More programs could be run at the same time
 Less I/O would be needed to load or swap each user
program into memory
Virtual Memory
 Involves the separation of logical memory
as perceived by users from physical
memory
 Virtual address space
 the logical (or virtual) view of how a process is
stored in memory
 The hole between the stack and the heap is what makes this a
sparse address space
 Allows files and memory to be shared by two
or more processes through page sharing
 System libraries can be shared by several
processes
 enables processes to share memory
 allow pages to be shared during process creation
[Figure: virtual address space — data at address 0, heap growing upward, stack growing downward from max, with a hole in between]
Demand Paging
 What is the problem with loading the entire program into
memory?
 Demand paging
 Pages are only loaded when they are demanded during
program execution
 A paging system with swapping
 System uses a lazy swapper
 Never swaps a page into memory unless that page will be
needed
 Use term pager, rather than swapper
 Avoids reading into memory pages that will not be used
anyway
 Decreases the swap time and the amount of physical
memory needed
Concept of Demand Paging
 Requires a mechanism to distinguish between the pages
that are in memory and the pages that are on the disk
 The valid-invalid bit scheme is used
 Valid - the associated page is both legal and in memory
 Invalid - the page either is not valid or is valid but is currently on
the disk
[Figure: page table when some pages are not in main memory — logical memory holds pages A–F; pages A, C, and F are in physical-memory frames 4, 6, and 9 with valid bit v, while B, D, and E are marked i and reside only on the disk]
Concept of Demand Paging
 What if pages are marked invalid but the
process never accesses them? It has no
effect on execution
 If a process accesses memory-resident
pages, execution proceeds normally
 Page-fault trap
 The process tries to
access a page that was
not brought into memory
(Access to a page
marked invalid)
How to handle a page fault?
1. Check internal table to determine
whether reference is valid or
invalid memory reference
2. If invalid, terminate the process.
If valid, page it in.
1. Find a free frame
2. Read page into newly allocated
frame
3. Modify the internal table
4. Restart the instruction
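The numbered steps above can be read as pseudocode. The toy sketch below walks through the same steps with stand-in data structures (the dictionaries and names here are illustrative, not a real kernel interface):

```python
# A toy walk-through of the page-fault steps above; the dictionaries and
# names are illustrative stand-ins, not a real kernel interface.
memory = {}                                   # frame number -> page contents

def handle_page_fault(page, page_table, free_frames, backing_store):
    """page_table maps page -> (frame, valid); free_frames is a list of free frame numbers."""
    if page not in page_table:                # invalid reference: terminate the process
        raise MemoryError("invalid memory reference")
    frame = free_frames.pop()                 # 1. find a free frame
    memory[frame] = backing_store[page]       # 2. read the desired page into that frame
    page_table[page] = (frame, True)          # 3. mark the page-table entry valid
    return frame                              # 4. the faulting instruction is then restarted

page_table = {"A": (None, False)}
print(handle_page_fault("A", page_table, [3], {"A": "contents of A"}))   # -> 3
```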
Pure Demand Paging
 Never bring a page into memory until it is required
 Executing a process with no pages in memory
 Process faults for first page. Page is brought into memory
 Process continues to execute
 Faults as necessary until all pages are in memory. Process now
executes with no more page faults
 Hardware
 Page table
 Secondary memory: a high speed disk
 Holds those pages that are not present in main memory
 Known as a swap device
 The section of disk used: a swap space
 Important requirement is the need to be able to restart
any instruction after a page fault
Performance of Demand Paging
 effective access time for a demand-paged memory
 If no page fault,
effective access time = memory access time
 If page fault occurs
effective access time = (1 - p) x ma + p x page fault time
 p = Probability of a page fault
 ma = memory access time
 page fault time = time required to service a page fault
 It is important to keep the page-fault rate low in a
demand-paging system
Page Service routine
 Consider a demand-paging system with a paging
disk that has an average access and transfer time of
20 ms. Addresses are translated through a page
table in main memory, with an access time of 1 us
per memory access. Thus, each memory reference
through the page table takes two accesses. To
improve this time, we have added an associative
memory that reduces access time to one memory
reference, if the page-table entry is in the associative
memory.
80% of accesses hit the associative memory; of the remaining
20%, 10% (i.e., 2% of the total) cause a page fault.
What is the effective memory access time?
= (0.80 × 1 us) + (0.18 × 2 us) + (0.02 × 20,002 us)
= 0.8 us + 0.36 us + 400.04 us
≈ 401.2 us
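As a quick check of the arithmetic (the fractions and latencies are taken from the example above; variable names are illustrative):

```python
# Effective memory-access time with an associative memory (TLB) and page faults.
# Values are taken from the example above; all times are in microseconds.
tlb_hit = 0.80              # fraction resolved by the associative memory: 1 us
table_walk = 0.18           # fraction resolved through the in-memory page table: 2 us
fault = 0.02                # fraction causing a page fault
fault_time_us = 20_000 + 2  # 20 ms disk service plus the two 1-us memory accesses

emat = tlb_hit * 1 + table_walk * 2 + fault * fault_time_us
print(f"Effective access time = {emat:.2f} us")   # -> 401.20 us
```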
 In a demand-paging system, it takes 100 ms to service a
page fault and 300 ms to replace a dirty page.
Memory-access time is 1 ms. The probability of a page
fault is P. In case of a page fault, the probability of the
page being dirty is also P. It is observed that the average
access time is 3 ms.
 Calculate the value of P.
Solution: EMAT = P × (P × 300 + (1 − P) × 100) + (1 − P) × 1 = 3,
which simplifies to 200P² + 99P − 2 = 0, giving P ≈ 0.0194
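A short sketch of the algebra behind that answer, assuming the equation above with all times in ms:

```python
# Solve P*(P*300 + (1-P)*100) + (1-P)*1 = 3 for P (all times in ms).
# Expanding gives the quadratic 200*P**2 + 99*P - 2 = 0.
import math

a, b, c = 200.0, 99.0, -2.0
p = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root
print(f"P = {p:.4f}")                               # -> P = 0.0194
```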
 Consider a page-fault service time of 10 ms in a
computer whose average memory-access time is 20 ns.
If one page fault is generated for every 10⁶ memory
accesses, calculate the effective memory access time
(EMAT).
Solution: EMAT = (1 − 10⁻⁶) × 20 ns + 10⁻⁶ × 10 ms
≈ 30 ns
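The same computation in a few lines (times converted to nanoseconds):

```python
# EMAT with a page-fault rate of 1 per 10**6 accesses (times in ns).
p = 1e-6                   # page-fault probability
ma_ns = 20                 # memory-access time
fault_ns = 10e-3 / 1e-9    # 10 ms page-fault service time expressed in ns

emat = (1 - p) * ma_ns + p * fault_ns
print(f"EMAT = {emat:.0f} ns")   # -> 30 ns
```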
Need for Page Replacement
 Over-allocation of memory
 To increase degree of multiprogramming
 On page fault, system finds no free frame to page in the
desired page
 Possible solutions:
 Process Termination
 Swapping out a process and freeing all its frames
 Page Replacement
Page Replacement
“If no frame is free, find one that is not currently being
used and free it”
 Page-fault service routine with page replacement
 If no frames are free, two page transfers are needed (one page out, one page in)
 Doubles the page-fault service time and increases the
effective access time
 Use of a modify bit (or dirty bit) associated with each
page
 When a page is selected for replacement
 If set- indicating that the page has been modified since it was
read into memory, must write that page to the disk
 If not set - the page has not been modified since it was read into
memory
Page & Frame Replacement Algorithms
 Frame allocation algorithm
 Decides how many frames to allocate to each process
 Page replacement algorithm
 select the frames that are to be replaced
 There are several such algorithms
 Each is evaluated by running it on a particular reference
string and computing the number of page faults
 We consider only page numbers rather than entire
address
 If we have a reference to a page p, then any immediately
following references to page p will never cause a page
fault
Page & Frame Replacement Algorithms
 E.g. for a process we have the following sequence of addresses
0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101,
0610, 0102, 0103, 0104, 0101, 0609, 0102, 0105
at 100 bytes per page. We reduce this sequence to the reference string
1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1
 To determine the number of page faults for a particular reference string and
page replacement algorithm
 We must also know the number of page frames available
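A minimal sketch of this reduction, using the page size and addresses from the example above (the helper name is illustrative; leading zeros in the addresses are dropped):

```python
# Reduce a raw address trace to a page-reference string (100 bytes per page),
# collapsing immediately repeated references to the same page, as described above.
addresses = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 104,
             101, 610, 102, 103, 104, 101, 609, 102, 105]

def to_reference_string(addrs, page_size=100):
    pages = []
    for a in addrs:
        p = a // page_size
        if not pages or pages[-1] != p:   # consecutive references to one page count once
            pages.append(p)
    return pages

print(to_reference_string(addresses))   # -> [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```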
FIFO Page Replacement Algorithm
 associates with each page the time when that page
was brought into memory
“The oldest page is chosen for replacement”
 We have
 A reference string
7, 0,1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2,1, 2, 0, 1, 7, 0,1
 Three frames initially empty
Page frames at each fault:
Faulting reference:  7   0   1   2   3   0   4   2   3   0   1   2   7   0   1
Frame 1:             7   7   7   2   2   2   4   4   4   0   0   0   7   7   7
Frame 2:                 0   0   0   3   3   3   2   2   2   1   1   1   0   0
Frame 3:                     1   1   1   0   0   0   3   3   3   2   2   2   1
Total page faults = 15
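A compact FIFO simulator (a sketch, not part of the original slides) reproduces the 15 faults for this reference string with three frames:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement; the queue records arrival order."""
    frames = set()
    queue = deque()          # pages in arrival (FIFO) order
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:      # no free frame: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # -> 15
```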
Belady’s Anomaly
 FIFO page-replacement algorithm is easy to understand
 Its performance is not always good
 Possible problem with FIFO page replacement
 As we increase the number of page frames, the number of
page faults can increase unexpectedly: Belady’s Anomaly
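To see the anomaly concretely, the sketch below (repeating the FIFO counter so it runs on its own) counts faults for a classic reference string with 3 and then 4 frames:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

# Classic reference string that exhibits Belady's anomaly under FIFO:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # -> 9 faults
print(fifo_faults(refs, 4))   # -> 10 faults (more frames, yet more faults)
```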
Optimal Page Replacement
 Has the lowest page-fault rate of all algorithms
 Never suffers from Belady's anomaly
 Such an algorithm does exist and has been called OPT or
MIN
“Replace the page that will not be used for the longest
period of time.”
 Three page frames (initially empty) and a reference string
7, 0,1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2,1, 2, 0, 1, 7, 0,1
Page frames at each fault:
Faulting reference:  7   0   1   2   3   4   0   1   7
Frame 1:             7   7   7   2   2   2   2   2   7
Frame 2:                 0   0   0   0   4   0   0   0
Frame 3:                     1   1   3   3   3   1   1
Total page faults = 9
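A sketch of an OPT simulator: it looks ahead in the reference string and evicts the resident page whose next use is farthest away (or that is never used again), reproducing the 9 faults above:

```python
def opt_faults(refs, num_frames):
    """Optimal (OPT/MIN): evict the resident page whose next use lies farthest in the future."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = refs[i + 1:]
        # Distance to each resident page's next use; pages never used again rank highest.
        def next_use(p):
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # -> 9
```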
Optimal Page Replacement
 Guarantees the lowest possible page fault rate
for a fixed number of frames
 Difficult to implement, since it requires future knowledge of
the reference string
 Used mainly for comparison studies.
LRU Page Replacement
FIFO algorithm uses the time when a page was brought into memory
OPT algorithm uses the time when a page is to be used.
 Use the recent past as an approximation of the near
future
 Associates with each page the time of that page's last
use
“Replace the page that has not been used for the longest
period of time”
 The optimal page-replacement algorithm looking backward
in time
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 (three frames, initially empty)
Page frames at each fault:
Faulting reference:  7   0   1   2   3   4   2   3   0   1   0   7
Frame 1:             7   7   7   2   2   4   4   4   0   1   1   1
Frame 2:                 0   0   0   0   0   0   3   3   3   0   0
Frame 3:                     1   1   3   3   2   2   2   2   2   7
Total page faults = 12
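A corresponding LRU sketch, tracking the index of each page's last use and evicting the least recently used resident page; it reproduces the 12 faults above:

```python
def lru_faults(refs, num_frames):
    """LRU replacement: evict the resident page whose last use is oldest."""
    last_used = {}     # page -> index of its most recent reference
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                victim = min(frames, key=lambda p: last_used[p])
                frames.discard(victim)
            frames.add(page)
        last_used[page] = i
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # -> 12
```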
LRU Page Replacement
 The major problem with LRU is how to implement it:
 How do we determine an order for the frames, defined by the
time of last use?
 Solution:
 Counters
 An association of a time-of-use field in page table entry
 Addition of logical clock or counter
 We replace the page with the smallest time value
 Stack
 Keeping a stack of page numbers
 Whenever a page is referenced, it is removed from the stack
and put on the top
 Does not suffer from Belady's anomaly
LRU Page Replacement
 OPT and LRU belong to the class of stack algorithms
 Stack algorithms can never exhibit Belady's anomaly
 Defining property:
 The set of pages in memory for n frames is always a
subset of the set of pages that would be in memory with n
+ 1 frames
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
1.FIFO : 16
2.Optimal : 11
3.LRU : 12
LRU Approximation Page Replacement
 To implement LRU page replacement, sufficient
hardware support is required
 For systems that do not provide such hardware support
 Either other page-replacement algorithms must be used
 Or the hardware provides help in the form of a reference bit
 The reference bit is set by the hardware whenever the page is referenced
 Reference bits are associated with each entry in the page
table.
 We can determine which pages have been used and
which have not been used by examining the
reference bits
 We do not know the order of use
Additional Reference Bits Algorithm
 Additional ordering information can be gained by
recording the reference bits at regular intervals
 After time interval OS:
 Shifts the reference bit for each page into the high-order
bit of its 8-bit byte
 Shifts the other bits right by 1 bit
 Discards the low-order bit
 Contains the history of page use for the last eight time
periods
 The numbers are not guaranteed to be unique
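A sketch of one such shift step under stated assumptions (8-bit history bytes, with dictionaries standing in for per-page table fields):

```python
# Sketch of the additional-reference-bits (aging) idea: each page keeps an
# 8-bit history byte; at every timer interval the hardware reference bit is
# shifted in from the left and the oldest (low-order) bit falls off the right.
def age_pages(history, reference_bits):
    """history and reference_bits are dicts keyed by page number."""
    for page in history:
        history[page] = ((reference_bits[page] << 7) | (history[page] >> 1)) & 0xFF
        reference_bits[page] = 0          # clear the hardware bit for the next interval
    return history

history = {0: 0b00000000, 1: 0b01100000}
ref_bits = {0: 1, 1: 0}
age_pages(history, ref_bits)
print({p: format(h, "08b") for p, h in history.items()})
# page 0 -> 10000000 (just referenced), page 1 -> 00110000 (referenced longer ago)
```

The page with the smallest history value, interpreted as an unsigned integer, is the least recently used approximation.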
Second Chance Page Replacement
 Derived from FIFO page replacement algorithm
 Second chance algorithm
 When a page is selected for replacement, check the
reference bit
 If 0 → replace this page
 If 1 → give the page a second chance and move on to select
the next FIFO page
 When a page gets a second chance,
 Its reference bit is cleared and its arrival time is reset to the
current time
 Clock algorithm
 Implemented using a circular queue
[Figure: second-chance (clock) replacement — resident pages sit in a circular queue with their reference bits; the next-victim pointer sweeps the queue, clearing reference bits that are 1 and giving those pages a second chance, until it reaches a page whose reference bit is 0, which is selected for replacement]
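A sketch of second-chance victim selection over a circular queue, matching the figure above (the data structures here are illustrative): pages with reference bit 1 have the bit cleared and are skipped; the first page found with bit 0 becomes the victim.

```python
def clock_select_victim(pages, ref_bits, hand):
    """Second-chance (clock) victim selection over a circular list of resident pages.

    pages    : list of resident page numbers (the circular queue)
    ref_bits : dict page -> reference bit (0 or 1)
    hand     : current position of the clock hand
    Returns (victim_page, new_hand_position).
    """
    while True:
        page = pages[hand]
        if ref_bits[page] == 0:               # not recently used: replace this page
            return page, (hand + 1) % len(pages)
        ref_bits[page] = 0                    # give a second chance and move on
        hand = (hand + 1) % len(pages)

pages = [3, 8, 5, 1]
ref_bits = {3: 1, 8: 0, 5: 1, 1: 1}
victim, hand = clock_select_victim(pages, ref_bits, hand=0)
print(victim)   # -> 8 (page 3 gets a second chance; page 8 has reference bit 0)
```

If every bit is 1, the first sweep clears them all, so the hand returns to its starting page and selects it; the loop always terminates.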
Enhanced Second Chance Page
Replacement
 Considers the reference bit and the modify bit as an
ordered pair
 (0, 0) neither recently used nor modified-best!
 (0, 1) not recently used but modified - not quite as good
 (1, 0) recently used but clean - probably will be used again
soon
 (1,1) recently used and modified - probably will be used again
soon, and the page will need to be written out to disk before
it can be replaced
 Requires three loops
1. Cycle through buffer for (0, 0). If one found, use that page.
2. Cycle through the buffer for (0, 1). Set the reference bit to zero for every
page bypassed
3. If the above steps fail, all reference bits are now zero, and repeating
steps 1 and 2 is guaranteed to find a page for replacement
Counting Based Page Replacement
 Keep a counter of the number of references that
have been made to each page
 Least Frequently Used (LFU) Page Replacement
 Requires that the page with the smallest count be replaced
 Most Frequently Used (MFU) Page Replacement
 Requires that the page with the largest count be replaced
Allocation of Frames
 How do we allocate the fixed amount of free memory
among the various processes?
 allocate at least a minimum number of frames
 The number of frames allocated affects the page-fault rate (fewer frames → more page faults)
 Allocation Algorithms
 Equal Allocation
 Proportional Allocation
 Size of the virtual memory of process Pi = si, and S = Σ si
 Number of frames allocated to Pi: ai = (si / S) × m, where m is the
total number of available frames
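A small sketch of proportional allocation; the process sizes and frame count below are illustrative, not taken from the slides:

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames to processes in proportion to their virtual-memory sizes.

    sizes: dict process -> size si; ai = (si / S) * m, truncated to an integer.
    (A real allocator would also enforce a per-process minimum and hand out
    the frames left over by truncation.)"""
    S = sum(sizes.values())
    return {p: (si * total_frames) // S for p, si in sizes.items()}

# Illustrative numbers: processes of 10 and 127 pages sharing 62 free frames
# receive roughly 4 and 57 frames respectively.
print(proportional_allocation({"P1": 10, "P2": 127}, 62))   # -> {'P1': 4, 'P2': 57}
```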
Global and Local Allocation
 Page replacement
 Local replacement
 Requires that each process select from only its own set of
allocated frames
 The number of frames allocated to a process does not change
 Might hinder a process
 Global replacement
 Allows a process to select a replacement frame from the
set of all frames
 Increases the number of frames allocated to it
 A process cannot control its own page-fault rate
 Results in greater system throughput
Thrashing
 What is thrashing?
 If a process does not have "enough" frames?
 High paging activity – thrashing
 A process is thrashing if it is spending more time
paging than executing
 Why does thrashing occur?
 OS increases degree of multiprogramming to increase CPU
utilization
 global page-replacement algorithm is used
 Processes start faulting for pages
 faulting processes must use the paging device to swap pages in and
out
 ready queue empties
 CPU utilization decreases
 Thrashing has occurred and system throughput plunges
 Page fault rate increases tremendously
Thrashing
 How to limit effect of thrashing?
 By using a local replacement algorithm (or priority
replacement algorithm)
 It cannot steal frames from another process
 But average service time for a page fault will increase
 The effective access time will increase even for a process that is
not thrashing
 To prevent thrashing
 We must provide a process with as many frames as it
needs
 How many frames a process needs?
 Working Set Strategy: defines the locality model of a process
execution
Working Set Model
 Based on the assumption of locality
 Examines the most recent ∆ page references
 ∆ = working set window
 Set of pages in the most recent ∆ page references:
working set
 Approximation of the program's locality
 Total demand for frames: D = Σ WSSi
 If D > m (the total number of available frames), thrashing will occur
 The OS monitors the working set of each process
 What if the sum of the working-set sizes exceeds the number of available frames?
 The OS selects a process to suspend
 Pages are written out (swapped),
 Its frames are reallocated to other processes
Page reference string:
. . . 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 1 3 2 3 4 4 4 3 4 4 4 . . .
WS(t1) = {1, 2, 5, 6, 7}        WS(t2) = {3, 4}
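A sketch that computes a working set directly from a reference string; with a window of ∆ = 10 references (an assumption that matches the sets shown above) it reproduces the two working sets, where t is an index into the listed portion of the string:

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the most recent `delta` references ending at index t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4, 4, 4,
        3, 4, 3, 4, 4, 4, 1, 3, 2, 3, 4, 4, 4, 3, 4, 4, 4]
print(working_set(refs, t=9, delta=10))    # -> {1, 2, 5, 6, 7}  (the WS(t1) shown above)
print(working_set(refs, t=25, delta=10))   # -> {3, 4}           (the WS(t2) shown above)
```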
Working Set Model
 Optimizes CPU utilization
 By preventing thrashing while keeping the degree of
multiprogramming as high as possible
 Difficulty in keeping track of the working set
 working-set window is a moving window
 A page is in the working set if it is referenced anywhere in
the working-set window
Keeping Track of the Working Set
 Approximate with interval timer + a reference bit
 E.g. ∆ = 10000
 Timer interrupts after every 5000 time units
 Keep 2 bits in memory for each page
 Whenever the timer interrupts, copy each page's reference bit into
its in-memory history bits and then reset all reference bits to 0
 If one of the in-memory bits = 1 → the page is in the working set
Page Fault Frequency
 More direct approach than WSS
 Establish “acceptable” page-fault frequency (PFF)
rate
 Uses a local replacement policy
 If actual rate too low → process loses frames
 If actual rate too high → process gains frames
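A sketch of that control rule under assumed thresholds (the bounds and step size below are illustrative, not from the slides):

```python
def adjust_allocation(frames_allocated, fault_rate, lower=0.02, upper=0.10, step=1):
    """Page-fault-frequency control with illustrative thresholds.

    If the measured fault rate of a process is above the upper bound, give it
    more frames; if it is below the lower bound, take frames away."""
    if fault_rate > upper:
        return frames_allocated + step
    if fault_rate < lower:
        return max(1, frames_allocated - step)
    return frames_allocated

print(adjust_allocation(8, fault_rate=0.15))   # -> 9  (too many faults: gain a frame)
print(adjust_allocation(8, fault_rate=0.01))   # -> 7  (few faults: lose a frame)
```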