Operating System
MCA II Semester
SWETA BARNWAL
Module: 4
MEMORY MANAGEMENT
Memory Management Strategies:
Memory consists of a large array of words or bytes, each with its own address. The CPU fetches
instructions from memory according to the value of the program counter. These instructions may
cause additional loading from and storing to specific memory addresses.
A typical instruction-execution cycle, for example, first fetches an instruction from memory. The
instruction is then decoded and may cause operands to be fetched from memory. After the
instruction has been executed on the operands, results may be stored back in memory. The
memory unit sees only a stream of memory addresses; it does not know how they are generated
(by the instruction counter, indexing, indirection, literal addresses, and so on) or what they are
for (instructions or data). Accordingly, we can ignore how a program generates a memory
address. We are interested only in the sequence of memory addresses generated by the running
program.
We begin our discussion by covering several issues that are pertinent to the various techniques
for managing memory. This coverage includes an overview of basic hardware issues, the binding
of symbolic memory addresses to actual physical addresses, and the distinction between logical
and physical addresses. We conclude the section with a discussion of dynamically loading and
linking code and shared libraries.
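To make the distinction between logical and physical addresses concrete, the following minimal C sketch models dynamic relocation with a base (relocation) register and a limit register. The register values and the translate() helper are illustrative assumptions for this note, not part of any particular machine or operating system.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative values only: a relocation (base) register and a limit
       register, as a simple MMU might use them to map logical addresses
       generated by the CPU onto physical memory addresses.              */
    #define BASE_REGISTER  14000u   /* start of the process's partition    */
    #define LIMIT_REGISTER  3000u   /* size of the process's address space */

    /* Translate a logical (CPU-generated) address into a physical address.
       Addresses at or beyond the limit are trapped as addressing errors.  */
    unsigned translate(unsigned logical)
    {
        if (logical >= LIMIT_REGISTER) {
            fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
            exit(EXIT_FAILURE);
        }
        return BASE_REGISTER + logical;   /* dynamic relocation at access time */
    }

    int main(void)
    {
        printf("logical 346  -> physical %u\n", translate(346));   /* 14346 */
        printf("logical 2999 -> physical %u\n", translate(2999));  /* 16999 */
        return 0;
    }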
Swapping:
A process must be in memory to be executed. A process, however, can be swapped temporarily
out of memory to a backing store and then brought back into memory for continued execution.
For example, assume a multiprogramming environment with a round-robin CPU-scheduling
algorithm. When a quantum expires, the memory manager will start to swap out the process that
just finished and to swap another process into the memory space that has been freed (Figure 8.5).
In the meantime, the CPU scheduler will allocate a time slice to some other process in memory.
When each process finishes its quantum, it will be swapped with another process. Ideally, the
memory manager can swap processes fast enough that some processes will be in memory, ready
to execute, when the CPU scheduler wants to reschedule the CPU. In addition, the quantum must
be large enough to allow reasonable amounts of computing to be done between swaps.
A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-
priority process arrives and wants service, the memory manager can swap out the lower-priority
process and then load and execute the higher-priority process. When the higher-priority process
finishes, the lower-priority process can be swapped back in and continued. This variant of
swapping is sometimes called roll out, roll in.
Normally, a process that is swapped out will be swapped back into the same memory space it
occupied previously. This restriction is dictated by the method of address binding. If binding is
done at assembly or load time, then the process cannot be easily moved to a different location. If
execution-time binding is being used, however, then a process can be swapped into a different
memory space, because the physical addresses are computed during execution time.
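As a toy illustration of roll out, roll in (not an actual kernel interface), the C sketch below marks one process as swapped out to the backing store and another as swapped into the freed space when a quantum expires. The process structure and helper names are invented for this example.

    #include <stdio.h>

    /* Toy model: when a quantum expires, the memory manager rolls the
       finished process out to the backing store and rolls the next
       ready process into the memory space that has been freed.        */
    struct process {
        char name[8];
        int  in_memory;   /* 1 = resident in main memory, 0 = on backing store */
    };

    static void swap_out(struct process *p) {
        p->in_memory = 0;
        printf("roll out: %s -> backing store\n", p->name);
    }

    static void swap_in(struct process *p) {
        p->in_memory = 1;
        printf("roll in : %s -> main memory\n", p->name);
    }

    int main(void)
    {
        struct process a = { "P1", 1 };   /* just finished its quantum      */
        struct process b = { "P2", 0 };   /* waiting on the backing store   */

        /* Quantum of P1 expires: swap it out, bring P2 in, and let the CPU
           scheduler run some other resident process in the meantime.       */
        swap_out(&a);
        swap_in(&b);
        return 0;
    }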
Memory Allocation
The main memory must accommodate both the operating system and the various user processes.
We therefore need to allocate main memory in the most efficient way possible. This section
explains one common method, contiguous memory allocation.
The memory is usually divided into two partitions: one for the resident operating system and one
for the user processes. We can place the operating system in either low memory or high memory.
The major factor affecting this decision is the location of the interrupt vector. Since the interrupt
vector is often in low memory, programmers usually place the operating system in low memory
as well. Thus, in this text, we discuss only the situation in which the operating system resides in
low memory. The development of the other situation is similar.
We usually want several user processes to reside in memory at the same time. We therefore need
to consider how to allocate available memory to the processes that are in the input queue waiting
to be brought into memory.
In contiguous memory allocation, each process is contained in a single contiguous section of
memory.
Now we are ready to turn to memory allocation. One of the simplest methods for allocating
memory is to divide memory into several fixed-sized partitions. Each partition may contain
exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions.
In this multiple-partition method, when a partition is free, a process is selected from the input
queue and is loaded into the free partition. When the process terminates, the partition becomes
available for another process. This method was originally used by the IBM OS/360 operating
system (called MFT); it is no longer in use. The method described next is a generalization of the
fixed-partition scheme (called MVT); it is used primarily in batch environments. Many of the
ideas presented here are also applicable to a time-sharing environment.
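The fixed-partition (MFT-style) scheme can be sketched in a few lines of C. The partition sizes, process names, and the load_process()/terminate_process() helpers below are illustrative assumptions; the point is only that each partition holds at most one process, so the degree of multiprogramming is bounded by the number of partitions.

    #include <stdio.h>

    #define NPART 4   /* number of fixed partitions, chosen at system generation */

    struct partition {
        int size_kb;        /* fixed size of this partition                 */
        const char *owner;  /* NULL while the partition is free             */
    };

    static struct partition part[NPART] = {
        { 100, NULL }, { 200, NULL }, { 200, NULL }, { 500, NULL }
    };

    /* Load a process from the input queue into the first free partition
       that is large enough; return the partition index, or -1 if the
       process must keep waiting.                                         */
    int load_process(const char *name, int size_kb)
    {
        for (int i = 0; i < NPART; i++) {
            if (part[i].owner == NULL && part[i].size_kb >= size_kb) {
                part[i].owner = name;
                return i;
            }
        }
        return -1;
    }

    void terminate_process(int i)
    {
        part[i].owner = NULL;   /* the partition becomes available again */
    }

    int main(void)
    {
        printf("P1 -> partition %d\n", load_process("P1", 150));
        printf("P2 -> partition %d\n", load_process("P2", 80));
        return 0;
    }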
Segmentation:
An important aspect of memory management that became unavoidable with paging is the
separation of the user's view of memory from the actual physical memory. As we have already
seen, the user's view of memory is not the same as the actual physical memory. The user's view
is mapped onto physical memory. This mapping allows differentiation between logical memory
and physical memory.
Segmentation is a memory-management scheme that supports this user view of memory. A logical address
space is a collection of segments. Each segment has a name and a length. The addresses specify
both the segment name and the offset within the segment. The user therefore specifies each
address by two quantities: a segment name and an offset. (Contrast this scheme with the paging
scheme, in which the user specifies only a single address, which is partitioned by the hardware
into a page number and an offset, all invisible to the programmer.)
Normally, the user program is compiled, and the compiler automatically constructs segments
reflecting the input program.
A C (language) compiler might create separate segments for the following:
• The code
• Global variables
• The heap, from which memory is allocated
• The stacks used by each thread
• The standard C library
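The two-part <segment-number, offset> address described above is translated through a segment table that records a base and a limit for each segment. The following minimal C sketch shows that translation; the table contents and segment assignments are made-up example values.

    #include <stdio.h>
    #include <stdlib.h>

    /* A logical address is a <segment-number, offset> pair; each segment-
       table entry holds the segment's base and limit in physical memory. */
    struct segment { unsigned base, limit; };

    static struct segment seg_table[] = {
        { 1400, 1000 },   /* segment 0: code             */
        { 6300,  400 },   /* segment 1: global variables */
        { 4300, 1100 },   /* segment 2: heap             */
        { 3200,  400 },   /* segment 3: stack            */
    };

    unsigned translate(unsigned s, unsigned offset)
    {
        if (offset >= seg_table[s].limit) {
            fprintf(stderr, "trap: offset %u beyond segment %u\n", offset, s);
            exit(EXIT_FAILURE);
        }
        return seg_table[s].base + offset;
    }

    int main(void)
    {
        printf("<2, 53>  -> %u\n", translate(2, 53));   /* 4353 */
        printf("<3, 852> -> traps (limit is 400)\n");
        translate(3, 852);
        return 0;
    }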
Virtual Memory Management
Virtual memory is a technique that allows the execution of processes that are not completely in
memory. One major advantage of this scheme is that programs can be larger than physical
memory. Further, virtual memory abstracts main memory into an extremely large, uniform array
of storage, separating logical memory as viewed by the user from physical memory. This
technique frees programmers from the concerns of memory-storage limitations. Virtual memory
also allows processes to share files easily and to implement shared memory. In addition, it
provides an efficient mechanism for process creation. Virtual memory is not easy to implement,
however, and may substantially decrease performance if it is used carelessly. In this chapter, we
discuss virtual memory in the form of demand paging and examine its complexity and cost.
The requirement that instructions must be in physical memory to be executed seems both
necessary and reasonable; but it is also unfortunate, since it limits the size of a program to the
size of physical memory. In fact, an examination of real programs shows us that, in many cases,
the entire program is not needed. For instance, consider the following:
• Programs often have code to handle unusual error conditions. Since these errors seldom,
if ever, occur in practice, this code is almost never executed.
• Arrays, lists, and tables are often allocated more memory than they actually need. An array
may be declared 100 by 100 elements, even though it is seldom larger than 10 by 10
elements. An assembler symbol table may have room for 3,000 symbols, although the
average program has fewer than 200 symbols.
• Certain options and features of a program may be used rarely. For instance, the routines
on U.S. government computers that balance the budget have not been used in many years.
Even in those cases where the entire program is needed, it may not all be needed at the same
time.
The ability to execute a program that is only partially in memory would confer many benefits:
• A program would no longer be constrained by the amount of physical memory that is
available. Users would be able to write programs for an extremely large virtual address
space, simplifying the programming task.
• Because each user program could take less physical memory, more programs could be
run at the same time, with a corresponding increase in CPU utilization and throughput
but with no increase in response time or turnaround time.
• Less I/O would be needed to load or swap user programs into memory, so each user
program would run faster.
Virtual memory involves the separation of logical memory as perceived by users from physical
memory. This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available (Figure 9.1). Virtual memory
makes the task of programming much easier, because the programmer no longer needs to worry
about the amount of physical memory available; she can concentrate instead on the problem to
be programmed.
Demand paging
Consider how an executable program might be loaded from disk into memory. One option is to
load the entire program in physical memory at program execution time. However, a problem
with this approach is that we may not initially need the entire program in memory. Suppose a
program starts with a list of available options from which the user is to select. Loading the entire
program into memory results in loading the executable code for all options, regardless of
whether an option is ultimately selected by the user or not. An alternative strategy is to load
pages only as they are needed. This technique is known as demand paging and is commonly used
in virtual memory systems. With demand-paged virtual memory, pages are only loaded when
they are demanded during program execution; pages that are never accessed are thus never
loaded into physical memory.
A demand-paging system is similar to a paging system with swapping (Figure 9.4) where
processes reside in secondary memory (usually a disk). When we want to execute a process, we
swap it into memory. Rather than swapping the entire process into memory, however, we use a
lazy swapper, which never swaps a page into memory unless that page will be needed. Since we
are now viewing a process as a sequence of pages, rather than as one large contiguous address
space, use of the term swapper is technically incorrect.
A swapper manipulates entire processes, whereas a pager is concerned with the individual pages
of a process. We thus use pager, rather than swapper, in connection with demand paging.
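As a hedged sketch of how a pager might behave, the C program below gives each page-table entry a valid bit: a reference to an invalid page traps as a page fault, and only the faulting page is brought in from the backing store. The data structures and helper names are illustrative, not an actual operating-system interface.

    #include <stdio.h>

    #define NPAGES 8   /* a tiny 8-page process, for illustration */

    struct pte {
        int frame;   /* frame number once the page is resident            */
        int valid;   /* 1 = in memory, 0 = only on the backing store      */
    };

    static struct pte page_table[NPAGES];   /* all entries invalid at start */
    static int next_free_frame = 0;

    /* Stand-in for the pager: read the faulting page from the backing store. */
    static void load_from_disk(int page)
    {
        page_table[page].frame = next_free_frame++;
        page_table[page].valid = 1;
        printf("page fault on page %d -> loaded into frame %d\n",
               page, page_table[page].frame);
    }

    void access(int page)
    {
        if (!page_table[page].valid)   /* page-fault trap to the OS */
            load_from_disk(page);
        printf("access page %d (frame %d)\n", page, page_table[page].frame);
    }

    int main(void)
    {
        access(0);   /* faults, then succeeds                         */
        access(0);   /* already resident: no fault                    */
        access(3);   /* faults; pages 1, 2, 4-7 are never loaded      */
        return 0;
    }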
Page Replacement
Demand paging keeps in memory only the pages a process actually uses. If a process of ten pages
actually uses only half of them, then demand paging saves the I/O necessary to load the five
pages that are never used. We could also increase our degree of multiprogramming by running
twice as many processes. Thus, if we had forty frames, we could run eight processes, rather than
the four that could run if each required ten frames (five of which were never used).
If we increase our degree of multiprogramming in this way, we are over-allocating memory. If
we run six processes, each of which is ten pages in size but uses only five pages, we have higher
CPU utilization and throughput, with ten frames to spare. It is possible, however, that each of
these processes, for a particular data set, may suddenly try to use all ten of its pages, resulting in
a need for sixty frames when only forty are available.
Figure 9.9 Need for page replacement
Further, consider that system memory is not used only for holding program pages. Buffers for
I/O also consume a considerable amount of memory. This use can increase the strain on
memory-placement algorithms. Deciding how much memory to allocate to I/O and how much to
program pages is a significant challenge. Some systems allocate a fixed percentage of memory
for I/O buffers, whereas others allow both user processes and the I/O subsystem to compete for
all system memory.
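The text above only motivates the need for page replacement without fixing an algorithm. As one concrete illustration, the C sketch below applies FIFO replacement to a made-up reference string with three frames: whenever no frame is free, the page that has been resident longest is evicted.

    #include <stdio.h>

    #define NFRAMES 3   /* illustrative frame count */

    int main(void)
    {
        int frames[NFRAMES];
        int next_victim = 0, used = 0, faults = 0;
        int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4 };   /* example reference string */
        int nrefs = sizeof refs / sizeof refs[0];

        for (int i = 0; i < nrefs; i++) {
            int page = refs[i], hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == page) { hit = 1; break; }
            if (hit)
                continue;                          /* page already resident   */
            faults++;
            if (used < NFRAMES) {
                frames[used++] = page;             /* a frame is still free   */
            } else {
                printf("evict page %d for page %d\n", frames[next_victim], page);
                frames[next_victim] = page;        /* replace the oldest page */
                next_victim = (next_victim + 1) % NFRAMES;
            }
        }
        printf("%d references, %d page faults\n", nrefs, faults);
        return 0;
    }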