Amrita School of Engineering, Bangalore
Ms. Harika Pudugosula
Lecturer
Department of Electronics & Communication Engineering
• Background
• Swapping
• Contiguous Memory Allocation
• Segmentation
• Paging
• Structure of the Page Table
2
Segmentation
• Segmentation is a memory-management scheme that supports the programmer's view of memory
• A logical address space is a collection of segments
• Each segment has a name and a length
• An address specifies both the segment name and the offset within the segment (the offset is the second part of a logical address and locates a byte inside the segment)
• The programmer therefore specifies each address by two quantities: a segment name and an offset
• For implementation, segments are numbered and referred to by a segment number rather than by a segment name
• A logical address thus consists of a two-tuple:
<segment-number, offset>
3
Segmentation
• When a program is compiled, the compiler automatically constructs segments
reflecting the input program
• A C compiler might create separate segments for the following:
1. The code
2. Global variables
3. The heap, from which memory is allocated
4. The stacks used by each thread
5. The standard C library
• Libraries that are linked in at compile time might be assigned separate segments
• The loader takes all these segments and assigns them segment numbers
4
Segmentation Hardware
• To map these two-dimensional user-defined addresses onto one-dimensional physical addresses, the hardware uses a segment table
• Each entry in the segment table has a segment base and a segment limit
• The segment base contains the starting physical address where the segment resides
in memory, and the segment limit specifies the length of the segment
• A logical address consists of two parts:
• a segment number, s, and an offset into that segment, d
• The segment number is used as an index into the segment table
• The offset d of the logical address must be between 0 and the segment limit
• If it is not, we trap to the operating system (addressing attempt beyond the end of the segment)
• When an offset is legal, it is added to the segment base to produce the physical-memory address of the desired byte
• The segment table is thus essentially an array of base–limit register pairs (see the sketch below)
5
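• The check-and-translate step can be sketched in C as below; the segment_entry layout and the trap function are illustrative assumptions, not the actual hardware interface

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One base-limit pair per segment (illustrative layout). */
struct segment_entry {
    uint32_t base;   /* starting physical address of the segment */
    uint32_t limit;  /* length of the segment in bytes           */
};

/* Stand-in for the trap to the operating system on an illegal offset. */
static void trap_addressing_error(uint32_t s, uint32_t d) {
    fprintf(stderr, "trap: offset %u beyond end of segment %u\n",
            (unsigned)d, (unsigned)s);
    exit(EXIT_FAILURE);
}

/* Translate <segment-number s, offset d> to a physical address. */
uint32_t translate(const struct segment_entry *table, uint32_t s, uint32_t d) {
    if (d >= table[s].limit)          /* offset must lie within the segment */
        trap_addressing_error(s, d);
    return table[s].base + d;         /* legal offset: add it to the base   */
}

• With the segment table of the example on the next slide (segment 2: base 4300, limit 400), translate(table, 2, 53) yields 4353, while translate(table, 0, 1222) traps because segment 0 is only 1,000 bytes long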
Segmentation Hardware
6
Figure. Segmentation hardware
Segmentation Hardware
• Consider the situation shown in the figure
• We have five segments numbered from 0
through 4
• The segment table has a separate entry
for each segment, giving the beginning
address of the segment in physical
memory (or base) and the length of that
segment (or limit)
• For example, segment 2 is 400 bytes long
and begins at location 4300
• A reference to byte 53 of segment 2 is
mapped onto location 4300 + 53 = 4353
• A reference to segment 3, byte 852, is
mapped to 3200 (the base of segment 3) +
852 = 4052
• A reference to byte 1222 of segment 0
would result in a trap to the operating
system, as this segment is only 1,000
bytes long
7
Fig. Example of segmentation
Paging
• The basic method for implementing paging involves
• breaking physical memory into fixed-sized blocks called frames and
• breaking logical memory into blocks of the same size called pages
• When a process is to be executed, its pages are loaded into any available memory
frames from their source
• The backing store is divided into fixed-sized blocks that are the same size as the
memory frames or clusters of multiple frames
• For example, the logical address space is now totally separate from the physical address
space, so a process can have a logical 64-bit address space even though the system
has less than 2^64 bytes of physical memory
• In the hardware support for paging, every address generated by the
CPU is divided into two parts: a page number (p) and a page offset (d)
• The page number is used as an index into a page table
• The page table contains the base address of each page in physical memory
8
• This base address is combined with the page offset to define the physical memory
address that is sent to the memory unit
9
Fig. Paging Hardware
• The paging model of memory is shown in Figure below
10
Fig. Paging model of logical and physical memory
• Page size -
• The page size (like the frame size) is defined by the hardware
• The size of a page is a power of 2, varying between 512 bytes and 1 GB per
page, depending on the computer architecture
• The selection of a power of 2 as a page size makes the translation of a logical
address into a page number and page offset particularly easy
• If the size of the logical address space is 2^m and the page size is 2^n bytes, then
the high-order m − n bits of a logical address designate the page number, and
the n low-order bits designate the page offset
• The logical address layout is as follows, where p is an index into the page table and d is the displacement within the page -
11
page number p (high-order m − n bits) | page offset d (low-order n bits)
• Paging Example -
12
• Here, in the logical address, n = 2 and m = 4
2^2 = 4 bytes is the page (frame) size
2^4 = 16 bytes is the size of the logical address space
m − n = 4 − 2 = 2 bits represent the page number
• Using a page size of 4 bytes and a physical memory
of 32 bytes (8 frames)
• Logical address 0 is page 0, offset 0.
Indexing into the page table, we find that page 0 is
in frame 5. Thus, logical address 0 maps to physical
address 20 [= (5 × 4) + 0]
• Logical address 3 (page 0, offset 3) maps to physical
address 23 [= (5 × 4) + 3]
• Logical address 4 is page 1, offset 0; according to the
page table, page 1 is mapped to frame 6
• Thus, logical address 4 maps to physical address 24
[= (6 × 4) + 0]
Fig. Paging example for a 32-byte memory with 4-byte pages
• Paging -
13
• Paging itself is a form of dynamic relocation
• Every logical address is bound by the paging hardware to some physical address
• Using paging is similar to using a table of base (relocation) registers, one for each frame of
memory
• When paging is used there is no external fragmentation, but internal fragmentation remains
• Calculating internal fragmentation (see the sketch after this list):
• Page size = 2,048 bytes
• Process size = 72,766 bytes
• So we require 35 full pages plus 1,086 bytes, i.e. 36 frames
• Internal fragmentation = 2,048 − 1,086 = 962 bytes
• Worst-case fragmentation = 1 frame − 1 byte (e.g., page size = 4 bytes, process
size = 17 bytes: 4 pages + 1 byte, so 5 frames are needed and the internal fragmentation is 4 − 1 = 3 bytes)
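• The same arithmetic can be written out directly; a small illustrative calculation in C, not part of any OS interface:

#include <stdio.h>

int main(void) {
    unsigned page_size    = 2048;    /* bytes per page/frame        */
    unsigned process_size = 72766;   /* bytes needed by the process */

    unsigned full_pages = process_size / page_size;              /* 35    */
    unsigned remainder  = process_size % page_size;              /* 1,086 */
    unsigned frames     = full_pages + (remainder ? 1 : 0);      /* 36    */
    unsigned internal   = remainder ? page_size - remainder : 0; /* 962   */

    printf("frames needed: %u, internal fragmentation: %u bytes\n",
           frames, internal);
    return 0;
}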
• Paging -
14
• Page sizes have grown over time (pages are typically between 4 KB and 8 KB in size)
• Example: Solaris supports two page sizes - 8 KB and 4 MB
• A 32-bit page-table entry can point to one of 2^32 physical page frames
• If the frame size is 4 KB (2^12 bytes), then a system with 4-byte page-table entries can
address 2^32 × 2^12 = 2^44 bytes (16 TB) of physical memory
• Free frames -
15
• When a process arrives for execution,
its size, expressed in pages, is examined
• Each page of the process needs one
frame
• If the process requires n pages, at least
n frames must be available in memory
• If n frames are available, they are
allocated to this arriving process
• The first page of the process is loaded
into one of the allocated frames, and
the frame number is put in the page
table for this process
• The next page is loaded into another
frame, its frame number is put into the
page table, and so on
Figure. Free frames (a) before allocation and
(b) after allocation
• Paging -
16
• How should the frame (page) size be chosen to reduce internal fragmentation?
• Smaller page sizes reduce internal fragmentation but increase overhead, since more
page-table entries are needed
• Frame Table (a minimal sketch follows) -
• The OS must know which frames are allocated, which frames are free, and how
many frames there are in total. This information is maintained in a table called the
frame table
• The frame table has one entry for each physical page frame, indicating whether
the frame is free or allocated and, if it is allocated, to which page of which
process or processes
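• A minimal sketch of a frame-table entry in C (the field names and table size are illustrative, not any particular kernel's layout):

#include <stdbool.h>

#define NUM_FRAMES 256            /* illustrative total number of physical frames */

/* One entry per physical page frame, maintained by the OS. */
struct frame_entry {
    bool     allocated;           /* free or currently in use            */
    int      owner_pid;           /* process the frame is allocated to   */
    unsigned page_number;         /* which page of that process it holds */
};

/* frame_table[i] describes physical frame i. */
static struct frame_entry frame_table[NUM_FRAMES];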
Hardware Support
17
• The OS maintains a page table for each process; this page table is used to translate logical
addresses to physical addresses
• A pointer to the page table is stored in the PCB of the process
• The page table can be implemented in hardware in several ways -
• as a set of dedicated registers (feasible only if the number of page-table entries is small)
• in main memory (when the number of page-table entries is large)
A page-table base register (PTBR) points to the page table
Changing page tables then requires changing only this one register, which reduces
context-switch time
In this scheme, however, every data/instruction access requires two memory accesses - one for the
page-table entry and one for the data/instruction itself
If the main-memory access time is X units, each reference costs X + X = 2X units
• The two-memory-access problem can be solved by the use of a special fast-lookup hardware
cache called associative memory, or translation look-aside buffer (TLB). The TLB lookup is
very fast and adds little or no performance penalty to a memory reference
Hardware Support
18
• The TLB is a cache that stores recent
translations of logical page numbers to
physical frame numbers for faster retrieval
• If the page number is not in the TLB (known as
a TLB miss), a memory reference to the page
table must be made
• When the frame number is obtained, we use it
to access memory and also add the page
number and frame number to the TLB, so that
they will be found quickly on the next
reference
• If the TLB is full, a replacement policy must be
used
• Typical replacement policies are LRU and random
• Wired down - some TLB entries cannot be
replaced (TLB entries for key kernel code are
typically wired down)
Figure. Paging hardware with TLB
Hardware Support
19
• Some TLBs store an address-space identifier (ASID) in each TLB entry
• An ASID uniquely identifies each process and is used to provide address-space
protection for that process
• When the TLB attempts to resolve a page number, it ensures that the ASID of the
currently running process matches the ASID associated with the entry
• If the ASIDs do not match, the attempt is treated as a TLB miss
• If the TLB does not support separate ASIDs, then every time a new page table is
selected (for instance, with each context switch), the TLB must be flushed (or erased)
to ensure that the next executing process does not use the wrong translation
information
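• The hit/miss and ASID logic can be sketched in C as below; the TLB size, the entry layout, and page_table_walk() are illustrative assumptions, not a real MMU interface:

#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64                     /* illustrative TLB size */

struct tlb_entry {
    bool     valid;
    bool     wired;                        /* wired-down entries are never replaced */
    uint16_t asid;                         /* address-space identifier              */
    uint32_t page;                         /* page number (the tag)                 */
    uint32_t frame;                        /* cached translation                    */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Placeholder for the slow path: walk the in-memory page table. */
extern uint32_t page_table_walk(uint16_t asid, uint32_t page);

uint32_t lookup(uint16_t asid, uint32_t page) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        /* Hit only if the entry is valid AND belongs to this address space. */
        if (tlb[i].valid && tlb[i].asid == asid && tlb[i].page == page)
            return tlb[i].frame;
    }
    /* TLB miss: consult the page table, then install the translation.
       A real replacement policy (e.g. LRU or random) would pick a victim
       when no free slot exists, skipping wired-down entries. */
    uint32_t frame = page_table_walk(asid, page);
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (!tlb[i].valid && !tlb[i].wired) {
            tlb[i] = (struct tlb_entry){ true, false, asid, page, frame };
            break;
        }
    }
    return frame;
}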
Hardware Support
20
• Hit ratio - The percentage of times that the page number of interest is found in the
TLB
• Example - An 80-percent hit ratio means that we find the desired page number in
the TLB 80 percent of the time
• If it takes 100 nanoseconds to access memory, then a mapped-memory access takes
100 nanoseconds when the page number is in the TLB
• If we fail to find the page number in the TLB then we must first access memory for
the page table and frame number (100 nanoseconds) and then access the desired
byte in memory (100 nanoseconds), for a total of 200 nanoseconds
• To find the effective memory-access time, we weight the case by its probability -
• effective access time = 0.80 × 100 + 0.20 × 200 = 120 nanoseconds
In this example, we suffer a 20-percent slowdown in average memory-access
time (from 100 to 120 nanoseconds).
• For a 99-percent hit ratio, which is much more realistic, we have
effective access time = 0.99 × 100 + 0.01 × 200 = 101 nanoseconds
This increased hit rate produces only a 1 percent slowdown in access time
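• The weighting can be written out directly; a small illustrative calculation in C, assuming the TLB lookup time itself is negligible:

#include <stdio.h>

/* Effective access time: a hit costs one memory access, a miss costs a
   page-table access plus the memory access. */
static double effective_access_time(double hit_ratio, double mem_ns) {
    return hit_ratio * mem_ns + (1.0 - hit_ratio) * (2.0 * mem_ns);
}

int main(void) {
    printf("80%% hit ratio: %.0f ns\n", effective_access_time(0.80, 100.0)); /* 120 ns */
    printf("99%% hit ratio: %.0f ns\n", effective_access_time(0.99, 100.0)); /* 101 ns */
    return 0;
}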
Protection
21
• Memory protection in contiguous memory allocation was done using a limit register
and a relocation (base) register
• The memory assigned to process P1 should not be accessible to process P2
• So protection bits are provided
• In a paged environment, memory protection is implemented by associating a protection bit with each frame
to indicate whether read-only or read-write access is allowed
• More bits can be added to indicate execute-only access, and so on
• Valid-invalid bit attached to each entry in the page table -
• “valid” indicates that the associated page is in the process’ logical address
space, and is thus a legal page
• “invalid” indicates that the page is not in the process’ logical address space
• Some systems provide hardware, in the form of a page-table length register
(PTLR), to indicate the size of the page table
• This value is checked against every logical address to verify that the address is in
the valid range for the process
• Failure of this test causes an error trap to the operating system
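• A C sketch of how these bits might sit in a page-table entry and be checked on every reference (the field layout and trap function are illustrative assumptions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct pte {
    uint32_t frame;      /* frame number                                   */
    bool     valid;      /* page is in the process's logical address space */
    bool     writable;   /* read-write if set, read-only otherwise         */
};

/* Stand-in for the error trap to the operating system. */
static void trap(const char *why) {
    fprintf(stderr, "trap to OS: %s\n", why);
    exit(EXIT_FAILURE);
}

/* Check one reference to page p; is_write says whether it is a store.
   ptlr holds the page-table length register value (number of entries). */
void check_access(const struct pte *page_table, uint32_t ptlr,
                  uint32_t p, bool is_write) {
    if (p >= ptlr)
        trap("page number beyond page-table length (PTLR)");
    if (!page_table[p].valid)
        trap("invalid page reference");
    if (is_write && !page_table[p].writable)
        trap("write to a read-only page");
}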
Protection
22
Figure. Valid (v) or invalid (i) bit in a
page table
• In a system with a 14-bit address space (0 to
16383), we have a program that should use only
addresses 0 to 10468
• Given a page size of 2 KB, we have the situation
shown in Figure
• Addresses in pages 0, 1, 2, 3, 4, and 5 are mapped
normally through the page table
• Any attempt to generate an address in pages 6 or 7,
however, will find that the valid–invalid bit is set to
invalid, and the computer will trap to the operating
system (invalid page reference).
• Notice that this scheme has created a problem.
Because the program extends only to address
10468, any reference beyond that address is illegal
• However, references to page 5 are classified as
valid, so accesses to addresses up to 12287 are
also treated as valid
• Only the addresses from 12288 to 16383 are
invalid
• This problem is a result of the 2 KB page size and reflects the internal fragmentation of paging
Shared Pages
23
• An advantage of paging is the possibility of sharing common code
• This consideration is particularly important in a time-sharing environment
• Shared code:
• One copy of read-only (reentrant) code is shared among processes (e.g., text editors,
compilers, window systems)
• This is similar to multiple threads sharing the same process address space
• Consider a system that supports 40 users, each of whom executes a text editor
• If the text editor consists of 150 KB of code and 50 KB of data space,
• we would need 40 × 200 KB = 8,000 KB to support the 40 users
• If the code is reentrant, it can be shared (reentrant code is non-self-modifying
code; it never changes during execution, so two or more processes can execute the
same code at the same time)
• Here the editor occupies three pages, each 50 KB in size, shared among the
processes (three are shown in the figure)
• Each process also has its own data page
• With sharing, only one copy of the editor (150 KB) plus 40 data pages (40 × 50 KB) are needed - 2,150 KB instead of 8,000 KB
Shared Pages
24
Figure. Sharing of code in a paging environment
25
References
1. Silberschatz and Galvin, “Operating System Concepts,” Ninth Edition,
John Wiley and Sons, 2012.
26
Thank you