The document discusses memory-management techniques used in operating systems. It covers logical versus physical address spaces and introduces paging as a memory-management technique. Paging divides logical memory into fixed-size pages and main memory into frames of the same size. Each process has a page table with one entry per page: an entry maps the page to a frame in main memory if the page is present, or is marked invalid if the page is on disk. The CPU-generated address is split into a page number, used to index the table, and an offset used to access a word within the page.
The Objectives of these slides are:
- To provide a detailed description of various ways of organizing memory hardware
- To discuss various memory-management techniques, including paging and segmentation
- To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging
Description:
This PowerPoint presentation delves into the critical realm of memory management, exploring strategies to optimize system performance and resource utilization. Beginning with an overview of memory management fundamentals, the presentation progresses to examine various memory management techniques employed in modern computing environments. Topics covered include memory allocation algorithms, memory fragmentation mitigation strategies, virtual memory concepts, and the role of caching mechanisms. Through illustrative diagrams, case studies, and real-world examples, the presentation offers insights into best practices for memory management across different computing platforms. Additionally, emerging trends and advancements in memory management technologies are explored, providing attendees with a comprehensive understanding of how to leverage memory management to enhance system efficiency, scalability, and reliability. Whether you're a seasoned IT professional, a software developer, or a student eager to expand your knowledge of memory management, this presentation offers valuable insights into the intricacies of memory optimization in contemporary computing systems.
This presentation is for Memory Management in an Operating System (OS). It describes the basic need for memory management in an OS and its various techniques, such as swapping, fragmentation, paging, and segmentation.
Operating System
Topic Memory Management
for Btech/Bsc (C.S)/BCA...
Memory management is the functionality of an operating system that handles or manages primary memory. Memory management keeps track of each memory location, whether it is allocated to some process or free. It decides how much memory to allocate to a process and which process gets memory at what time, and it updates the status whenever memory is freed or unallocated.
School of CSA/MCA - Ist Semester
Vijaya Kumar H
Operating System with Linux
M20CA1030
UNIT II
SYLLABUS
Memory Management: Logical and physical address space, swapping, contiguous allocation, paging, segmentation, segmentation with paging, virtual memory, demand paging and its performance, page-replacement algorithms, allocation of frames, thrashing.
MEMORY MANAGEMENT
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: if the memory location is known a priori, the compiler can generate absolute code.
Load time: if the memory location is not known at compile time, the compiler must generate relocatable code; the final binding is done when the loader brings the program files into main memory.
Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).
LOGICAL- VERSUS PHYSICAL-ADDRESS SPACE
An address generated by the CPU is commonly referred to as a logical address or a virtual address, whereas an address seen by the main-memory unit is referred to as a physical address.
The set of all logical addresses generated by a program is its logical-address space, whereas the set of all physical addresses corresponding to these logical addresses is its physical-address space.
MEMORY MANAGEMENT
• Memory is central to the operation of a computer system. It consists of a large array of words or bytes, each with its own address.
• Interaction is achieved through a sequence of reads and writes to specific memory addresses.
• The CPU fetches the program from the hard disk and stores it in memory. If a program is to be executed, it must be mapped to absolute addresses and loaded into memory.
The operating system is responsible for the following activities in connection with memory management:
• Keeping track of which parts of memory are currently being used and by whom.
• Deciding which processes are to be loaded into memory when memory space becomes available.
• Allocating and deallocating memory space as needed.
• In a multiprogramming environment, the operating system dynamically allocates memory to multiple processes. Memory thus plays a significant role in important aspects of the computer system such as performance, software support, reliability, and stability.
• Memory can be broadly classified into two categories: primary memory (such as cache and RAM) and secondary memory (such as magnetic tape and disk).
• In a multiprogramming system, available memory is shared among a number of processes, so allocation speed and efficient memory utilization (minimal overhead, reuse/relocation of released memory blocks) are of prime concern.
• Protection is difficult to achieve with a relocation requirement, because the location of a process, and hence its absolute addresses in memory, is unpredictable before run time; protection must therefore be enforced at run time.
MEMORY MANAGEMENT TECHNIQUES
• Uniprogramming: RAM is divided into two parts, one for the resident operating system and the other for the single user process.
• A fence register (also called a border or boundary register) holds the last address of the operating-system area.
• The operating system compares every user-generated address with the fence register; the access is allowed only if it lies outside the OS area, preventing the user from entering the operating-system region.
• CPU utilization is very poor under uniprogramming, which is why multiprogramming is used.
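The fence-register check above can be sketched in a few lines. This is a toy model, not real hardware: the fence value and addresses are made up for illustration.

```python
# Sketch of fence-register protection under uniprogramming (assumed
# model): the OS occupies low memory up to FENCE; user accesses at or
# below the fence are rejected.
FENCE = 0x4000  # hypothetical last address of the OS area

def check_access(addr: int) -> bool:
    """Allow a user access only above the fence; otherwise it would trap."""
    return addr > FENCE

assert not check_access(0x1000)  # inside the OS area: rejected
assert check_access(0x8000)      # user area: allowed
```

In real hardware this comparison is done by the memory-management unit on every access, not in software.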
MEMORY MANAGEMENT TECHNIQUES
• Multiprogramming: multiple processes share memory simultaneously. There is more than one process in main memory, and if the running process has to wait for an event such as I/O, then instead of sitting idle the CPU makes a context switch and picks another process.
The basic approaches to allocation are of two types:
a. Contiguous memory allocation: each program's data and instructions are allocated a single contiguous region of memory.
b. Non-contiguous memory allocation: each program's data and instructions are allocated memory space that need not be contiguous.
This unit focuses first on the contiguous memory-allocation scheme.
CONTIGUOUS MEMORY ALLOCATION
• When a process requests memory, a single contiguous section of memory blocks is assigned to the process according to its requirement.
• If no single free area is large enough, the kernel can perform compaction to create one free memory area and initiate a new process (process E in the figure) in this area. This involves moving processes (C and D in the figure) in memory during their execution.
CONTIGUOUS MEMORY ALLOCATION
Fixed-size partitioning:
The system divides memory into partitions of fixed size (which may or may not all be the same size). An entire partition is allocated to a process; if some space inside the partition goes to waste, this waste is called internal fragmentation.
Variable-size partitioning:
• Memory is treated as one unit, and the space allocated to a process is exactly what it requires; the leftover space can be reused.
• Hole: a block of available memory. Holes of various sizes are scattered throughout memory.
• When a process arrives, it is allocated memory from a hole large enough to accommodate it.
CONTIGUOUS MEMORY ALLOCATION
The operating system maintains information about: a) allocated partitions, b) free partitions (holes).
• Advantage: there is no internal fragmentation.
• Disadvantage: management becomes difficult, as memory grows heavily fragmented over time.
CONTIGUOUS MEMORY ALLOCATION
How to satisfy a request of size n from a list of free holes?
• First-fit: allocate the first hole that is big enough.
• Next-fit: like first-fit, but start searching from the last hole allocated.
• Best-fit: allocate the smallest hole that is big enough; the entire list must be searched unless it is ordered by size. Produces the smallest leftover hole.
• Worst-fit: allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.
FRAGMENTATION
• External fragmentation: enough total memory exists to satisfy a request, but it is not contiguous.
• Internal fragmentation: allocated memory may be larger than the requested memory; the difference is memory internal to a partition that is not being used.
• External fragmentation can be reduced by compaction: shuffle memory contents to place all free memory together in one large block.
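Compaction can be sketched with a simple list model. This is an illustrative toy, assuming memory is a sequence of (name, size) blocks where None marks a free hole; real compaction must also relocate addresses inside the moved processes.

```python
# Minimal compaction sketch: slide all allocated blocks to the front
# and merge every hole into one large free block at the end.
def compact(blocks):
    allocated = [(name, size) for name, size in blocks if name is not None]
    free = sum(size for name, size in blocks if name is None)
    return allocated + ([(None, free)] if free else [])

mem = [("A", 10), (None, 5), ("B", 20), (None, 15), ("C", 8)]
assert compact(mem) == [("A", 10), ("B", 20), ("C", 8), (None, 20)]
```

Note the cost this hides: in a real system, moving B and C means copying their memory and fixing up their bound addresses, which is why compaction requires execution-time binding.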
NON-CONTIGUOUS MEMORY ALLOCATION
• Non-contiguous memory allocation allows a process to acquire several memory blocks at different locations in memory, according to its requirement.
• It reduces the memory wastage caused by internal and external fragmentation, because it utilizes the free memory holes created by other processes.
• Since the available free memory is scattered here and there rather than in one place, finding and managing it is more time-consuming.
• A process still acquires all the memory space it needs, but in pieces at different locations rather than in one place.
SWAPPING
• Swapping is a technique of temporarily removing inactive programs from the memory of the system.
• The kernel swaps out a process that is not in the running state by writing its code and data space to a swapping area on disk. The swapped-out process is brought back into memory before it is due for another burst of CPU time.
• A basic issue in swapping is whether a swapped-in process should be loaded back into the same memory area it occupied before being swapped out. Normally it is, unless physical addresses are computed at run time.
• Example: in round-robin CPU scheduling, whenever the time quantum expires, the process that has just finished its quantum is swapped out and a new process is swapped into memory for execution.
• A variation of swapping is used with priority-based scheduling: when a low-priority process is executing and a high-priority process arrives, the low-priority process is swapped out and the high-priority process is allowed to execute (roll out, roll in).
• Swapping requires a backing store, and it should be large enough to accommodate copies of all memory images.
Swapping is constrained by other factors:
• To be swapped, a process should be completely idle.
• A process may be waiting for an I/O operation; if the I/O is asynchronously accessing the user memory for I/O buffers, the process cannot be swapped.
MEMORY MANAGEMENT TECHNIQUE
Fixed partitioning (static partitioning):
• The number of partitions is fixed.
• The sizes of the partitions may or may not be the same.
• Allocation is contiguous.
• Example (see figure): P1 = 2 MB, P2 = 7 MB, P3 = 7 MB, P4 = 14 MB.
Drawbacks:
• Internal fragmentation.
• Limit on process size (in the example, a 32 MB process cannot be accommodated).
• Limit on the degree of multiprogramming (a fifth process P5 cannot be brought into RAM once all partitions are occupied).
• External fragmentation also occurs (5 MB in the example).
VARIABLE PARTITIONING
• Initially RAM is empty; partitions are made at run time according to each process's need, instead of at system-configuration time.
• The size of each partition equals the size of the incoming process, so internal fragmentation is avoided and RAM is utilized efficiently.
• The number of partitions in RAM is not fixed; it depends on the number of incoming processes and the size of main memory.
PARTITION ALGORITHMS - FIXED-SIZE PARTITIONING
Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in that order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?
First fit:
• P1 needs at least 212 K, so P1 occupies the 500 K partition.
• P2 needs at least 417 K, so P2 occupies the 600 K partition.
• P3 needs at least 112 K, so P3 occupies the 200 K partition.
• P4 needs at least 426 K, but the 500 K and 600 K partitions are both occupied, so P4 has to wait.
(More than 426 K of memory is still free, but it cannot be used because allocation must be contiguous; this is external fragmentation.)
Best fit:
• P1 occupies the 300 K partition, P2 the 500 K partition, P3 the 200 K partition, and P4 the 600 K partition. All four processes are placed.
Worst fit:
• P1 occupies the 600 K partition, P2 the 500 K partition, and P3 the 300 K partition. P4 needs at least 426 K, but the 500 K and 600 K partitions are both occupied, so P4 has to wait (external fragmentation).
In this example, best fit makes the most efficient use of memory.
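The placements above can be checked with a small simulation. This is a sketch of the fixed-partition model used in the example (one process per partition, partitions do not split); hole-splitting variable partitioning would need a slightly different model.

```python
# Simulate first-, best-, and worst-fit placement of processes into
# fixed partitions, one process per partition.
def place(partitions, procs, strategy):
    free = list(partitions)              # partitions still available, in order
    result = {}
    for p in procs:
        fits = [s for s in free if s >= p]
        if not fits:
            result[p] = None             # no partition big enough: wait
            continue
        if strategy == "first":
            choice = fits[0]             # first adequate partition in order
        elif strategy == "best":
            choice = min(fits)           # smallest adequate partition
        else:                            # "worst"
            choice = max(fits)           # largest partition
        free.remove(choice)
        result[p] = choice
    return result

parts, procs = [100, 500, 200, 300, 600], [212, 417, 112, 426]
assert place(parts, procs, "first") == {212: 500, 417: 600, 112: 200, 426: None}
assert place(parts, procs, "best") == {212: 300, 417: 500, 112: 200, 426: 600}
assert place(parts, procs, "worst") == {212: 600, 417: 500, 112: 300, 426: None}
```

Only best fit places all four processes here, matching the conclusion above.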
PARTITION ALGORITHMS
• First fit: allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended, and can stop as soon as a free hole that is large enough is found.
Advantage: the fastest algorithm, because it searches as little as possible.
Disadvantage: the unused memory areas left after allocation become waste if they are too small, so a later request for a larger amount of memory may not be satisfiable.
• Best fit: allocate the smallest hole that is big enough. The entire list must be searched, unless it is ordered by size. This strategy produces the smallest leftover hole.
Advantage: memory utilization is much better than first fit, since the smallest adequate free partition is chosen.
Disadvantage: it is slower, and it tends to fill memory with tiny, useless holes.
• Worst fit: allocate the largest hole. Again, the entire list must be searched unless it is sorted by size. This strategy produces the largest leftover hole.
Advantage: reduces the rate at which small gaps are produced.
Disadvantage: if a process requiring a large amount of memory arrives later, it cannot be accommodated, because the largest hole has already been split and occupied.
• First fit and best fit are better than worst fit in terms of both speed and storage utilization.
PARTITION ALGORITHMS
1) Given six memory partitions of 200 KB, 400 KB, 600 KB, 500 KB, 300 KB, and 250 KB (in that order), how would the first-fit, best-fit, and worst-fit algorithms place processes of P1 = 357 KB, P2 = 210 KB, P3 = 468 KB, and P4 = 491 KB? Which algorithm makes the most efficient use of memory?
2) Let the free-space memory blocks be 50 KB, 100 KB, 90 KB, 200 KB, and 50 KB (in that order). How would the first-fit, best-fit, and worst-fit algorithms place processes of P1 = 90 KB, P2 = 20 KB, P3 = 50 KB, and P4 = 200 KB? Which algorithm makes the most efficient use of memory?
PAGING
• Paging is a method of moving data between secondary storage (drive) and primary storage (RAM). When a computer runs out of RAM, the OS moves pages of memory to the hard disk to free up RAM for other processes.
• The logical memory of a process is divided into equal-size fixed blocks called pages.
• Main memory is divided into small fixed-size blocks of physical memory called frames. The frame size is kept the same as the page size, for optimal utilization of main memory and to avoid external fragmentation.
• Every process has its own page table, with one entry per page of the process. Each entry either holds the frame number where the page resides in main memory, or is marked invalid if the page is not in main memory.
PAGING
• When the frame number is combined with the offset d, we get the corresponding physical address.
• A page table is generally too large to be stored inside the PCB; instead, the PCB contains a page-table base register (PTBR) value, which points to the page table.
• The address generated by the CPU is divided into:
• Page number (p): used as an index into the page table, which contains the base address of each page in physical memory. It identifies the page of the process that the CPU wants to access.
• Page offset (d): combined with the base address to define the physical-memory address sent to the memory unit. It identifies the exact word on that page that the CPU wants to read.
• Physical address: likewise consists of two parts, a frame number and the page offset.
• Frame number: identifies the exact frame where the page is stored in physical memory.
• Page offset: requires no translation, because the page size equals the frame size, so the position of the word the CPU wants to access does not change within the page.
ADDRESS TRANSLATION
• A page address is called a logical address and is represented by a page number and an offset: logical address = (page number, page offset).
• A frame address is called a physical address and is represented by a frame number and an offset: physical address = (frame number, page offset).
• A data structure called the page map table is used to keep track of the mapping from the pages of a process to the frames in physical memory.
PAGING
• When the system allocates a frame to a page, it translates the logical address into a physical address and creates an entry in the page table, to be used throughout the execution of the program.
• When a process is to be executed, its pages are loaded into any available memory frames.
• Suppose a program is 8 KB but memory can accommodate only 5 KB at a given point in time; this is where the paging concept comes into the picture.
• When the computer runs out of RAM, the operating system moves idle or unwanted pages to secondary memory to free up RAM for other processes, and brings them back when the program needs them. This continues throughout the whole execution of the program.
PAGING - ADVANTAGES
• By dividing memory into fixed-size blocks, paging eliminates the issue of external fragmentation.
• It also supports multiprogramming, and the overheads of compaction during relocation are eliminated.
• Swapping is easy, since everything is the same size, usually the same size as the disk blocks to and from which pages are swapped.
PAGING - DISADVANTAGES
• Paging increases the cost of computer hardware, since page addresses must be mapped by hardware.
• Memory must store extra structures such as page tables, and some memory space stays unused when the available blocks are not sufficient for a job's address space.
• Because physical memory is split into equal-size frames, internal fragmentation can occur (in the last page of a process).
PAGING - EXAMPLE
Suppose physical memory is 16 bytes and pages are 2 bytes each. The total number of pages that can be held in physical memory is: physical-memory size / page size = 16 / 2 = 8.
The formula for the physical address corresponding to a logical address is:
physical address = (frame number x page size) + page offset
For example, consider a page size of 4 bytes and a physical memory of 32 bytes (8 frames). If the page table shows that page 0 is in frame 5, then logical address 0 maps to physical address 20 (= (5 x 4) + 0).
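The translation formula can be written out directly. This sketch uses the example above (page size 4, page 0 in frame 5); the remaining page-table entries are made up for illustration.

```python
# Paging address translation: split the logical address into page
# number and offset, look up the frame, and rebuild the physical address.
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page number -> frame number

def translate(logical: int) -> int:
    p, d = divmod(logical, PAGE_SIZE)     # page number, page offset
    return page_table[p] * PAGE_SIZE + d  # (frame x page size) + offset

assert translate(0) == 20   # page 0, offset 0 -> (5 x 4) + 0
assert translate(3) == 23   # page 0, offset 3 -> (5 x 4) + 3
assert translate(4) == 24   # page 1, offset 0 -> (6 x 4) + 0
```

Since the page size is a power of two, real hardware extracts p and d with a shift and a mask rather than a division.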
SEGMENTATION
• Segmentation is a memory-management technique in which each job is divided into several segments of different sizes, one for each module that contains pieces performing related functions. Each segment is actually a different logical address space of the program.
• When a process is to be executed, its segments are loaded into non-contiguous memory, though every individual segment is loaded into a contiguous block of available memory.
• Segmentation is a programmer's view of memory: instead of dividing a process into equal-size partitions, we divide it according to the program's structure into partitions called segments.
• Translation is similar to paging, but unlike paging, segmentation is free of internal fragmentation while suffering from external fragmentation. The reason for external fragmentation is that a program can be divided into segments, but each segment must be contiguous in memory.
• Segments are of variable length, whereas in paging the pages are of fixed size.
• A segment is a logical unit such as: a main function, utility functions, data structures, a procedure, method, or object, local variables, global variables, a common block, the stack, a symbol table, arrays, etc.
• A logical-address space is a collection of segments. Each segment has a name (or number) and a length; the user specifies each address by two quantities: a segment name/number and an offset.
SEGMENTATION
• The operating system maintains a segment map table for every process, and a list of free memory blocks along with the segment numbers, their sizes, and their corresponding locations in main memory.
• For each segment, the table stores the starting address and the length of the segment. A reference to a memory location includes a value that identifies a segment and an offset.
• The logical address consists of a two-tuple: <segment-number, offset>.
• The segment table maps this two-dimensional logical address to a one-dimensional physical address; each entry in the table has:
• base: the starting physical address where the segment resides in memory.
• limit: the length of the segment.
SEGMENTATION
• Consider five segments numbered 0 through 4, stored in physical memory as shown. The segment table has a separate entry for each segment, giving its start address in physical memory (the base) and the length of that segment (the limit).
• For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
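Segment-table translation, including the limit check, can be sketched as follows. Segment 2 (base 4300, limit 400) comes from the example above; the other table entries are made up for illustration.

```python
# Segment-table translation: physical address = base + offset, with a
# bounds check against the segment's limit.
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}  # seg -> (base, limit)

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:
        # real hardware would trap to the OS with an addressing error
        raise MemoryError("offset exceeds segment limit")
    return base + offset

assert translate(2, 53) == 4353   # byte 53 of segment 2 -> 4300 + 53
```

The limit check is what gives segmentation its protection property: a process cannot address past the end of one of its own segments.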
SEGMENTATION WITH PAGING
• In this technique, a segment is viewed as a collection of pages. The logical address generated by the CPU is divided into three parts: the segment, the page, and the offset, as shown in the figure.
• The segment number is used as an index into the segment table; the entry in the segment table contains the base address of that segment's page table.
• The page number is used as an index into the page table and selects an entry within it; the page table stores the frame number of each page in physical memory.
• This frame number is the base address of the page. The frame number plus the offset part of the logical address forms the physical address: the actual address in physical memory corresponding to the logical address generated by the CPU.
SEGMENTATION WITH PAGING - ADVANTAGES
• It reduces memory usage, and the page-table size is limited by the segment size.
• The segment table has only one entry corresponding to each actual segment.
• There is no external fragmentation.
• It simplifies memory allocation.
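The two-level lookup described above can be sketched directly. All table contents and the page size here are hypothetical, chosen only to illustrate the mechanism.

```python
# Segmentation with paging: the segment selects a page table, the page
# selects a frame, and the offset completes the physical address.
PAGE_SIZE = 256

# segment number -> that segment's page table (page number -> frame number)
segment_table = {0: {0: 3, 1: 7}, 1: {0: 1}}

def translate(seg: int, page: int, offset: int) -> int:
    frame = segment_table[seg][page]   # level 1: segment table; level 2: page table
    return frame * PAGE_SIZE + offset  # frame base + offset

assert translate(0, 1, 5) == 7 * 256 + 5   # = 1797
```

Because each segment carries its own page table, a page table only needs entries for pages that actually exist in that segment, which is the memory saving the advantages list refers to.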
VIRTUAL MEMORY
• Virtual memory is an OS concept that virtually increases the apparent size of main memory and gives the user (programmer) the freedom to write programs without worrying about the size of physical memory.
• The user uses addresses in the virtual-memory space, which are then translated (mapped) into the corresponding main-memory addresses.
• The addresses generated and referenced by a user program are called virtual addresses, and the collection of virtual addresses forms the virtual address space. Similarly, the addresses of main memory are called physical addresses, and their collection is the physical address space.
98.
VIRTUAL MEMORY
The entire program is not required to be loaded fully into main memory:
• Parts of the program, such as error-handling routines, are used only when an
error occurs.
• Some functions and procedures of a program may be used seldom.
• Many data structures such as arrays, structures and tables are assigned a fixed
amount of memory in user programs, but often only a small portion of that memory
is actually used.
99.
VIRTUAL MEMORY
• Benefits:
• The user (programmer) is no longer bounded by the amount
of main memory that is available and can write programs
without worrying about the size of memory.
• Since each user program is loaded only in part, taking less
memory space, more programs can reside in main memory
and execute.
• This leads to efficient CPU utilization, higher throughput and
overall good system performance.
100.
DEMAND PAGING
• Demand paging is like a paging system but with the additional feature of swapping.
• The user process resides in secondary memory and is considered as a set of
pages.
• When we want to execute a process, instead of bringing (swapping in) the entire
process from secondary memory into main memory, we use a lazy swapper, called a
pager, which swaps into memory only those pages that are currently needed.
101.
DEMAND PAGING
• Pages that are not needed for process execution are not brought into main memory.
• This considerably reduces the time required for swapping and the amount of
physical memory needed by a process.
• A lazy swapper never swaps a page into memory unless that page will be needed.
• A swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process.
103.
PAGE TRANSFER METHOD
• When a process is to be swapped in, the pager guesses which pages will be used
before the process is swapped out again.
• Instead of swapping in a whole process, the pager brings only the necessary
pages into memory.
• Thus, it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.
105.
PAGE TABLE
• The valid-invalid bit scheme of the page table can be used to indicate which
pages are currently in memory.
• When this bit is set to "valid", the associated page is both legal and in
memory.
• If the bit is set to "invalid", the page either is not valid or is valid but is
currently on the disk.
106.
PAGE TABLE
• The page-table entry for a page that is brought into memory is set as usual, but
the page-table entry for a page that is not currently in memory is simply marked
invalid or contains the address of the page on disk.
• When a process references an invalid page, a page fault occurs. It means that
the page is not in main memory.
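A minimal sketch of the valid-invalid bit check during translation, with a fault raised on an invalid entry. The table contents are made-up illustrations:

```python
# Sketch of the valid-invalid bit scheme (entries are illustrative assumptions).
# Each page-table entry holds a frame number and a valid bit; an invalid
# entry means the page is either illegal or currently on disk.

page_table = {
    0: {"frame": 4, "valid": True},      # page 0 is in memory, in frame 4
    1: {"frame": None, "valid": False},  # page 1 is on disk (or illegal)
}

def lookup(page):
    """Return the frame for a page, or trap with a page fault if invalid."""
    entry = page_table.get(page)
    if entry is None or not entry["valid"]:
        raise RuntimeError(f"page fault on page {page}")  # trap to the OS
    return entry["frame"]

print(lookup(0))  # 4
```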
109.
PROCEDURE FOR HANDLING PAGE FAULT
• We check an internal table for this process to determine whether the reference
was a valid or an invalid memory access.
• If the reference was invalid, we terminate the process. If it was valid but we
have not yet brought that page into memory, we page it in.
• We find a free frame (by taking one from the free-frame list).
• We schedule a disk operation to read the desired page into the newly allocated
frame.
110.
PROCEDURE FOR HANDLING PAGE FAULT
• When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory.
• We restart the instruction that was interrupted by the illegal address trap. The
process can now access the page as though it had always been in memory.
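The handling steps above can be sketched as a toy handler. All data structures (free-frame list, disk contents, internal table) are simplified illustrations, not a real OS implementation:

```python
# Sketch of the page-fault handling steps (all structures are assumptions).

free_frames = [7, 8]                  # free-frame list
disk = {2: "page-2-contents"}         # pages resident on disk
page_table = {2: {"frame": None, "valid": False}}
valid_pages = {2}                     # internal table of legal pages
memory = {}                           # physical memory: frame -> contents

def handle_page_fault(page):
    if page not in valid_pages:       # invalid reference: terminate the process
        raise RuntimeError("invalid reference: process terminated")
    frame = free_frames.pop(0)        # take a frame from the free-frame list
    memory[frame] = disk[page]        # disk read into the newly allocated frame
    page_table[page] = {"frame": frame, "valid": True}  # mark page present
    # the instruction that trapped is now restarted as if the page
    # had always been in memory

handle_page_fault(2)
print(page_table[2])  # {'frame': 7, 'valid': True}
```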
111.
PERFORMANCE OF DEMAND PAGING
• Let p be the probability of a page fault (0 ≤ p ≤ 1).
• Then the effective access time is
• EAT = (1 − p) × memory access time + p × page-fault time
• In any case, we are faced with three major components of the page-fault service
time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
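A worked instance of the EAT formula; the timing values and fault probability below are assumed for illustration only:

```python
# Worked example of EAT = (1 - p) * memory access time + p * page-fault time.
# All timing values below are illustrative assumptions.

memory_access_time = 200       # nanoseconds (assumed)
page_fault_time = 8_000_000    # 8 ms expressed in nanoseconds (assumed)
p = 0.001                      # assumed page-fault probability

eat = (1 - p) * memory_access_time + p * page_fault_time
print(eat)  # 8199.8 ns: even a 0.1% fault rate dominates the plain access time
```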
113.
FIRST-IN-FIRST-OUT (FIFO) ALGORITHM:
• A FIFO replacement algorithm associates with each page the time when that
page was brought into memory.
• When a page must be replaced, the oldest page is chosen to be swapped out.
• We can create a FIFO queue to hold all pages in memory.
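The FIFO queue idea can be sketched as a small fault counter; the reference string and frame count are illustrative assumptions:

```python
from collections import deque

# Minimal FIFO page-replacement sketch. The reference string and
# number of frames are illustrative assumptions.

def fifo_faults(references, num_frames):
    frames = deque()          # FIFO queue of pages currently in memory
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 7
```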
115.
OPTIMAL PAGE REPLACEMENT ALGORITHM:
• It is simply “Replace the page that will not be used for the longest period of time”.
• Use of this page-replacement algorithm guarantees the lowest possible page fault
rate for a fixed number of frames.
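The "replace the page not used for the longest time" rule can be sketched by scanning the future of the reference string; the example string is an assumption:

```python
# Sketch of the optimal (MIN) algorithm: evict the page whose next use
# lies farthest in the future. Reference string is an illustrative assumption.

def optimal_faults(references, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            def next_use(p):
                # Distance to the next reference; never used again = infinity.
                future = references[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))  # evict farthest next use
            frames.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 6
```

This needs the full future reference string, which is why the optimal algorithm is unrealizable in practice and serves mainly as a benchmark.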
116.
LRU PAGE REPLACEMENT ALGORITHM
• If we use the recent past as an approximation of the near future, then we will replace
the page that has not been used for the longest period of time.
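LRU can be sketched with an ordered dictionary: each reference moves the page to the most-recently-used end, and eviction takes the least-recently-used end. The reference string and frame count are illustrative assumptions:

```python
from collections import OrderedDict

# Minimal LRU page-replacement sketch. Reference string and
# number of frames are illustrative assumptions.

def lru_faults(references, num_frames):
    frames = OrderedDict()    # keys ordered from least to most recently used
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 6
```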