MCA 203 OPERATING SYSTEMS
UNIT - III
STORAGE MANAGEMENT
Memory Management:
• Swapping
• Contiguous memory allocation
• Paging,
• Segmentation
Virtual memory:
• Demand paging
• Page replacement,
• Allocation of frames,
• Thrashing.
File System Interface & Implementation:
• File concept
• Access methods
• Directory structure
• File System Mounting
• File sharing
• Protection
• File system structure and implementation,
• Directory implementation
• Allocation methods
• Free space management
• Efficiency and performance, Recovery.
MEMORY MANAGEMENT
• Swapping
• Contiguous memory allocation
• Paging
• Segmentation
INTRODUCTION:
• Main Memory refers to the physical memory that is internal to the
computer. The word "main" is used to distinguish it from external mass
storage devices such as disk drives. Main memory is also known as RAM.
The computer can operate directly only on data that is in main memory;
therefore, every program we execute and every file we access must be
copied from a storage device into main memory.
• All programs are loaded into main memory for execution. Sometimes
the complete program is loaded into memory, but sometimes a certain
part or routine of the program is loaded into main memory only when it
is called by the program. This mechanism is called Dynamic Loading,
and it enhances performance.
• Also, at times one program depends on some other program. In such a
case, rather than loading all the dependent programs, the system links
the dependent programs to the main executing program only when they
are required. This mechanism is known as Dynamic Linking.
Base and Limit Registers
• A pair of base and limit registers define the
logical address space
• CPU must check every memory access
generated in user mode to be sure it is
between base and limit for that user
Hardware Address Protection with Base and
Limit Registers
Logical vs. Physical Address Space
• The concept of a logical address space that is bound
to a separate physical address space is central to
proper memory management
– Logical address – generated by the CPU; also referred to
as virtual address
– Physical address – address seen by the memory unit
• Logical and physical addresses are the same in
compile-time and load-time address-binding
schemes; logical (virtual) and physical addresses
differ in execution-time address-binding scheme
• Logical address space is the set of all logical
addresses generated by a program
• Physical address space is the set of all physical
addresses generated by a program
Dynamic relocation using a relocation register
 Routine is not loaded until it is
called
 Better memory-space utilization;
unused routine is never loaded
 All routines kept on disk in
relocatable load format
 Useful when large amounts of
code are needed to handle
infrequently occurring cases
 No special support from the
operating system is required
 Implemented through
program design
 OS can help by providing
libraries to implement
dynamic loading
Swapping
• A process needs to be in memory for execution.
But sometimes there is not enough main memory
to hold all the currently active processes in a
timesharing system, so excess processes are kept
on disk and brought in to run dynamically.
• Swapping is the process of bringing each
process into main memory, running it for a while,
and then putting it back on the disk.
Schematic View of Swapping
Contiguous Memory Allocation
• In contiguous memory allocation each process is
contained in a single contiguous block of
memory. Memory is divided into several fixed
size partitions. Each partition contains exactly
one process. When a partition is free, a process
is selected from the input queue and loaded into
it. The free blocks of memory are known
as holes. The set of holes is searched to
determine which hole is best to allocate.
Contiguous Allocation
• Main memory must support both OS and user processes
• Limited resource, must allocate efficiently
• Contiguous allocation is one early method
• Main memory is usually divided into two partitions:
– Resident operating system, usually held in low memory with interrupt
vector
– User processes then held in high memory
– Each process contained in single contiguous section of memory
• Relocation registers used to protect user processes from each
other, and from changing operating-system code and data
– Base register contains value of smallest physical address
– Limit register contains range of logical addresses – each logical
address must be less than the limit register
– MMU maps logical address dynamically
– Can then allow actions such as kernel code being transient and
kernel changing size
Hardware Support for Relocation
and Limit Registers
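A rough illustration of the hardware just described: the following Python
sketch (not from the slides; the register values are made up) shows how a
relocation (base) register and a limit register map a logical address and
trap out-of-range accesses.

# Illustrative sketch only -- register values are hypothetical
RELOCATION_REGISTER = 14000   # smallest physical address of the process
LIMIT_REGISTER = 3000         # size of the process's logical address space

def translate(logical_address: int) -> int:
    """Return the physical address, or trap if the address is out of range."""
    if logical_address < 0 or logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap: addressing error beyond limit register")
    return RELOCATION_REGISTER + logical_address

print(translate(100))    # 14100
print(translate(2999))   # 16999
# translate(3500) would raise the trap above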
Memory Protection
• Memory protection is a mechanism by which
we control memory access rights on a computer.
Its main aim is to prevent a process from
accessing memory that has not been allocated to
it. It thus prevents a bug within one process from
affecting other processes, or the operating
system itself; instead, a segmentation fault or
storage-violation exception is sent to the
offending process, generally terminating it.
Memory Allocation:
How to satisfy a request of size n from a list of free holes?
Memory allocation is a process by which computer programs
are assigned memory or space.
There are three common strategies (see the sketch after this list):
• First Fit: The first hole that is big enough is allocated to the
program.
• Best Fit: The smallest hole that is big enough is allocated to the
program.
• Worst Fit: The largest hole that is big enough is allocated to the
program.
 First-fit and best-fit are better than worst-fit in terms of
speed and storage utilization
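A minimal sketch of the three placement strategies over a list of free
holes; the hole sizes and the 212 KB request are illustrative, not from the
slides.

# Illustrative first-fit / best-fit / worst-fit over a free-hole list (KB)
def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i          # first hole that is big enough
    return None               # no hole large enough

def best_fit(holes, request):
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None   # smallest adequate hole

def worst_fit(holes, request):
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None   # largest adequate hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 1 -> the 500 KB hole
print(best_fit(holes, 212))    # 3 -> the 300 KB hole
print(worst_fit(holes, 212))   # 4 -> the 600 KB hole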
Fragmentation:
• Fragmentation occurs in a dynamic memory allocation system
when most of the free blocks are too small to satisfy any
request. It is generally described as the inability to use the
available memory.
• In such a situation processes are loaded into and removed from
memory. As a result, enough free memory may exist to satisfy a
request, but it is non-contiguous, i.e. the memory is fragmented
into a large number of small holes. This phenomenon is known as
External Fragmentation.
• Also, at times the physical memory is broken into fixed-size
blocks and memory is allocated in units of the block size. The
memory allocated to a process may be slightly larger than the
requested memory. "The difference between allocated and
required memory is known as Internal Fragmentation", i.e. the
memory that is internal to a partition but is of no use.
Fragmentation (Cont.,)
• External Fragmentation – total memory space exists to satisfy a
request, but it is not contiguous
• Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is memory
internal to a partition, but not being used
• Statistical analysis of first fit reveals that, given N allocated blocks,
another 0.5 N blocks are lost to fragmentation; that is, about one third
of memory may be unusable. This is known as the 50-percent rule.
• Reduce external fragmentation by compaction
– Shuffle memory contents to place all free memory together in
one large block
– Compaction is possible only if relocation is dynamic, and is
done at execution time
– I/O problem
• Latch job in memory while it is involved in I/O
• Do I/O only into OS buffers
• Now consider that backing store has same fragmentation problems
Paging
A solution to fragmentation problem is Paging. Paging is a
memory management mechanism that allows the physical
address space of a process to be non-contagious. Here
physical memory is divided into blocks of equal size
called Pages. The pages belonging to a certain process are
loaded into available memory frames.
Page Table
• A Page Table is the data structure used by a virtual memory
system in a computer operating system to store the mapping
between virtual addresses and physical addresses.
• A virtual address is also known as a logical address and is
generated by the CPU, while a physical address is the address
that actually exists in memory.
• Every address generated by the CPU is divided
into two parts: Page number (p) and Page offset
(d)
Paging Hardware
 The page number is used as an index into a
Page Table
 The page size is defined by the hardware
 The size of a page is typically a power of 2,
varying between 512 bytes and 16MB per
page
 Reason: if the size of the logical address space is 2^m
and the page size is 2^n, then the high-order
m − n bits of a logical address designate the
page number and the low-order n bits designate the page offset
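A small sketch of this page-number/offset split, assuming a hypothetical
4 KB page size and a made-up logical address:

# Illustrative sketch: splitting a logical address when page size = 2^n
PAGE_SIZE = 4096          # 2^12 bytes, so n = 12
OFFSET_BITS = 12

logical_address = 0x3A7F1
page_number = logical_address >> OFFSET_BITS       # high-order m - n bits
page_offset = logical_address & (PAGE_SIZE - 1)    # low-order n bits

print(page_number, page_offset)   # 58 2033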
Page Tables
• The OS now needs to maintain (in main memory) a
page table for each process
• Each entry of a page table consists of the frame
number where the corresponding page is physically
located
• The page table is indexed by the page number to
obtain the frame number
• A free frame list, available for pages, is maintained
Logical-to-Physical Address Translation
in Paging
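A minimal sketch of the translation pictured here, using a hypothetical
1 KB page size and made-up page-table contents:

# Illustrative logical-to-physical translation with a per-process page table
PAGE_SIZE = 1024                       # bytes per page and per frame
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page number -> frame number

def to_physical(logical_address: int) -> int:
    p, d = divmod(logical_address, PAGE_SIZE)   # page number, page offset
    frame = page_table[p]                       # index the page table by p
    return frame * PAGE_SIZE + d                # frame base + offset

print(to_physical(3 * 1024 + 20))   # page 3 -> frame 2, i.e. 2068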
Paging Example
Implementing the Page Table:
• To implement paging, the simplest
method is to implement the page table
as a set of dedicated registers
• However, registers are feasible only for
small page tables, and page tables are
usually large
• Therefore, the page table is kept in
main memory
ADVANTAGES
 No external Fragmentation
 Simple memory management algorithm
 Swapping is easy (Equal sized Pages and Page
Frames)
 Share common code especially in a time-sharing
environment
DISADVANTAGES
 Internal fragmentation
 Page tables may consume more memory.
 Multi level paging leads to memory
reference overhead.
Segmentation
Segmentation is another memory management scheme that supports the
user-view of memory. Segmentation allows breaking of the virtual
address space of a single process into segments that may be placed in
non-contiguous areas of physical memory.
Segmentation with Paging
Both paging and segmentation have their advantages and
disadvantages, so it is better to combine the two schemes to improve
on each. The combined scheme is commonly known as paged
segmentation. Each segment in this scheme is divided into pages, and
a separate page table is maintained for each segment.
So the logical address is divided into the following 3 parts (see the sketch after this list):
• Segment numbers(S)
• Page number (P)
• The displacement or offset number (D)
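A minimal sketch of decoding such an address; the bit widths chosen here
for the segment number, page number and offset are purely illustrative.

# Illustrative decode of a segmented-and-paged logical address
SEG_BITS, PAGE_BITS, OFFSET_BITS = 4, 8, 12   # hypothetical 24-bit address

def split(logical_address: int):
    d = logical_address & ((1 << OFFSET_BITS) - 1)                 # offset
    p = (logical_address >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)  # page no.
    s = logical_address >> (OFFSET_BITS + PAGE_BITS)               # segment
    return s, p, d

print(split(0x2A3045))   # (2, 163, 69): segment 2, page 0xA3, offset 0x045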
Logical addressing in Segmentation
A logical address consists of two parts: a segment number and an offset
within that segment.
The mapping of the logical address to the physical address is done
with the help of the segment table. Each segment-table entry contains:
• Segment limit – the length of the segment
• Segment base – the starting address of the corresponding segment in
main memory
• Other bits, including a bit (P) that tells whether the segment is
already in main memory, and a bit (M) that tells whether the segment
has been modified since it was loaded into main memory
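A minimal sketch of translation through such a segment table; the table
contents below are made-up example values.

# Illustrative segment-table translation: check the limit, then add the base
segment_table = {0: (1000, 1400), 1: (400, 6300), 2: (400, 4300)}
#                 segment -> (limit, base)

def translate(segment: int, offset: int) -> int:
    limit, base = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(0, 999))   # 1400 + 999 = 2399
# translate(1, 450) would trap: 450 exceeds the 400-byte limit of segment 1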
Segmentation Hardware
EXAMPLE OF SEGMENTATION
ADVANTAGES OF SEGMENTATION
• No internal fragmentation
• Segment tables consume less memory than page
tables ( only one entry per actual segment as opposed
to one entry per page in Paging method)
• Because of the small segment table, memory
reference is easy.
• Lends itself to sharing data among processes.
• Lends itself to protection.
• Because the individual lines of a page do not form one
logical unit, it is not possible to assign a particular
access right to a page.
• Note that each segment can be set up with its own
access rights.
DISADVANTAGES
• External fragmentation.
• Costly memory management algorithm
• Unequal size of segments is not good in the case of swapping.
So, why can’t we combine the ease of sharing and
protection we get from segments with efficient
memory utilization we get from pages ????
Paging versus Segmentation
Paging:
 Each process is assigned its own page table.
 Page table size is proportional to the allocated memory.
 Often large page tables and/or multi-level paging.
 Internal fragmentation.
 Free memory is quickly allocated to a process.
Segmentation:
 Each process is assigned a segment table.
 Segment table size is proportional to the number of segments.
 Usually small segment tables.
 External fragmentation.
 Lengthy search times when allocating memory to a process.
VIRTUAL MEMORY
• Demand paging
• Page replacement,
• Allocation of frames,
• Thrashing.
What is Virtual Memory?
Virtual Memory is a scheme in which large programs can store
themselves in the form of pages during their execution, and
only the required pages or portions of processes are loaded
into the main memory. This technique is useful because a large
virtual memory can be provided for user programs even when the
physical memory is very small.
In real scenarios, most processes never need all their pages at
once, for the following reasons:
• Error handling code is not needed unless that specific
error occurs, some of which are quite rare.
• Arrays are often over-sized for worst-case scenarios, and
only a small fraction of the arrays are actually used in
practice.
• Certain features of certain programs are rarely used.
Benefits of having Virtual Memory
• Large programs can be written, as virtual space
available is huge compared to physical
memory.
• Less I/O is required, which leads to faster and easier
swapping of processes.
• More physical memory is available, because programs
are stored in virtual memory and therefore occupy
very little space in actual physical memory.
What is Demand Paging?
• The basic idea behind demand paging is that when a process is swapped
in, its pages are not swapped in all at once. Rather, they are swapped in
only when the process needs them (on demand). The swapper that does this
is termed a lazy swapper, although pager is a more accurate term.
• Initially, only those pages are loaded that will be required by the
process immediately.
The pages that are not moved into memory are marked as invalid in the page table.
For an invalid entry the rest of the entry is empty. Pages that are loaded in
the memory are marked as valid, along with the information about where to
find them.
When the process references a page that is not loaded into memory, a page
fault trap is triggered and the following steps are followed:
1. The memory address which is requested by the process is first checked, to verify
the request made by the process.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly from a
free-frame list, where the required page will be moved.
4. A disk operation is scheduled to move the necessary page from disk to the
specified memory location. (This will usually block the process on an I/O wait,
allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the
new frame number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the
beginning.
There are cases when no pages are loaded into the memory initially, pages are only
loaded when demanded by the process by generating page faults. This is called Pure
Demand Paging.
The main cost of demand paging is that, after the missing page is loaded, the
instruction that caused the page fault must be restarted. This is not a big issue
for programs that fault rarely, but for programs that fault frequently it affects
performance drastically.
PURE DEMAND PAGING
We start executing a process with no pages in memory.
When the operating system sets the instruction pointer to
the first instruction of the process, which is on a non-
memory-resident page, the process immediately faults for that
page. After this page is brought into memory, the
process continues to execute, faulting as necessary until every
page that it needs is in memory.
At this point, it can execute with no more faults. This is Pure
Demand Paging.
HARDWARE SUPPORT
The hardware to support demand paging is the same as the
hardware for paging and swapping:
 PAGE TABLE : This table has the ability to mark an entry
invalid through a valid invalid bit or a special value of
protection bits.
 SECONDARY MEMORY : The memory holds those pages
that are not present in the main memory. The secondary
memory is usually a high speed disk.
It is known as Swap Device and the section of the disk used
for this purpose is known as the Swap Space.
Performance of Demand Paging
• Page fault rate p, where 0 ≤ p ≤ 1.0
- if p = 0, there are no page faults
- if p = 1, every reference is a fault
• Effective Access Time (EAT)
EAT = (1 - p) x memory access
+ p (page fault overhead
+ swap page out
+ swap page in
+ restart overhead)
Demand Paging Example
• Memory access time = 200 nanoseconds
• Average page-fault service time = 8 milliseconds
• EAT = (1 - p) x 200 + p x 8,000,000
= 200 + p x 7,999,800 nanoseconds
EAT therefore grows linearly with the page fault rate: even a small p makes
memory access appear many times slower.
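A quick sketch of this calculation for a couple of fault rates, using the
numbers from the example above (200 ns memory access, 8 ms fault service time):

# Effective access time for the example above
MEMORY_ACCESS_NS = 200
FAULT_SERVICE_NS = 8_000_000          # 8 milliseconds in nanoseconds

def eat(p: float) -> float:
    """Effective access time in nanoseconds for page-fault rate p."""
    return (1 - p) * MEMORY_ACCESS_NS + p * FAULT_SERVICE_NS

print(eat(0))        # 200.0 ns (no faults)
print(eat(0.001))    # 8199.8 ns: one fault per 1000 accesses slows memory ~40x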
What happens if there is no free frame?
• Page replacement - find some page in memory,
but not really in use, swap it out
- Algorithm
- Performance - want an algorithm which will result in
minimum number of page faults
• Same page may be brought into memory several
times
Page Replacement
• As studied in Demand Paging, only certain pages of a process
are loaded initially into memory. This allows us to get more
processes into memory at the same time. But
what happens when a process requests more pages and no
free memory is available to bring them in?
• Following steps can be taken to deal with this problem :
1. Put the process in the wait queue, until any other process
finishes its execution thereby freeing frames.
2. Or, remove some other process completely from the
memory to free frames.
3. Or, find some pages that are not being used right now,
and move them to the disk to get free frames. This technique is
called Page Replacement and is most commonly used. Several
algorithms exist to carry out page replacement
efficiently.
Basic Page Replacement
• Find the location of the page requested by ongoing process on
the disk.
• Find a free frame. If there is a free frame, use it. If there is no
free frame, use a page-replacement algorithm to select an
existing frame to be replaced; such a frame is known as the victim
frame.
• Write the victim frame to disk. Change all related page tables to
indicate that this page is no longer in memory.
• Move the required page and store it in the frame. Adjust all
related page and frame tables to indicate the change.
• Restart the process that was waiting for this page.
STEPS IN PAGE REPLACEMENT :
 Find the location of the desired page on the disk.
 Find a free frame :
 If there is a free frame, use it.
 If there is no free frame, use a page replacement algorithm
to select a victim frame.
 Write the victim frame to the disk, change the page and
frame tables accordingly.
 Read the desired page into the newly freed frame, change
the page and frame tables.
 Restart the user process.
Page Replacement
Use modify (dirty) bit to reduce overhead of page transfers -
only modified pages are written to disk
Page Replacement Algorithms
• Want lowest page-fault rate
• Evaluate algorithm by running it on a particular
string of memory references (reference string)
and computing the number of page faults on
that string
• In all our examples, the reference string is
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
FIFO Page Replacement
• A very simple way of Page replacement is FIFO (First in
First Out)
• As new pages are requested and are swapped in, they are
added to tail of a queue and the page which is at the
head becomes the victim.
• It is not an effective way of page replacement, but it can be
used for small systems.
First-In-First-Out (FIFO)
Example
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process)
4 frames
FIFO Illustrating Belady’s
Anomaly
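A minimal FIFO simulation over this reference string, counting faults with
3 frames and then 4 frames to show Belady's anomaly (adding a frame
increases the number of faults):

# Illustrative FIFO page-replacement simulation
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:       # memory full: evict the oldest
                frames.remove(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))   # 9 page faults
print(fifo_faults(ref, 4))   # 10 page faults -- Belady's anomaly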
Optimal Page Replacement
Replace page that will not be used for longest period of time
Unfortunately, the optimal page-replacement is difficult to implement,
because it requires future knowledge of the reference string
Least Recently Used (LRU) Algorithm
• LRU replacement associates with each page
the time of that page’s last use
• When a page must be replaced, LRU chooses
the page that has not been used for the
longest period of time
Least Recently Used (LRU) Algorithm
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• With 4 frames, the frame contents at successive page faults are:
1 1 1 1 5
2 2 2 2 2
3 5 5 4 4
4 4 3 3 3
LRU Implementation
• The major problem is how to implement LRU
replacement:
- Counter: whenever a reference to a page is made,
the contents of the clock register are copied to the
time-of-use field in the page table entry for that
page. We replace the page with the smallest time
value
- Stack: Whenever a page is referenced, it is
removed from the stack and put on the top. In this
way, the most recently used page is always at the
top of the stack
Example: LRU Page Replacement
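A minimal LRU simulation using an ordered structure (the "stack" idea
above); the reference string is the one used earlier.

# Illustrative LRU page-replacement simulation
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    frames, faults = OrderedDict(), 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)            # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)      # evict the least recently used
            frames[page] = True
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(ref, 3))   # 10 page faults
print(lru_faults(ref, 4))   # 8 page faults -- no anomaly with LRU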
Allocation of Frames
• Each process needs a minimum number of frames
• Two major allocation schemes:
- fixed allocation
- priority allocation
• The minimum number of frames per process is
defined by the architecture; the maximum is
defined by the amount of physical memory.
Fixed Allocation
• Equal allocation - For example, if there are 100 frames
and 5 processes, give each process 20 frames.
• Proportional allocation - Allocate according to the size
of the process:
s_i = size of process p_i
S = sum of the sizes of all processes
m = total number of free frames
a_i = allocation for p_i = (s_i / S) * m
Example: m = 62 frames, s_1 = 10, s_2 = 127, so S = 137
a_1 = (10/137) * 62 ≈ 4
a_2 = (127/137) * 62 ≈ 57
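A tiny sketch of this proportional split, using the same example numbers
(62 free frames, processes of 10 and 127 pages):

# Illustrative proportional frame allocation
def proportional_allocation(sizes, total_frames):
    S = sum(sizes)
    return [s * total_frames // S for s in sizes]   # a_i = (s_i / S) * m

print(proportional_allocation([10, 127], 62))   # [4, 57]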
Priority Allocation
• Use a proportional allocation scheme using
priorities rather than size
• If process Pi generates a page fault,
- select for replacement one of its frames
- select for replacement a frame from a process with
lower priority number
Global vs. Local Allocation
• Global replacement - process selects a
replacement frame from the set of all frames;
one process can take a frame from another
• Local replacement - each process selects from
only its own set of allocated frames
Thrashing
• A process that is spending more time paging than executing is said to be
thrashing. In other words it means, that the process doesn't have enough
frames to hold all the pages for its execution, so it is swapping pages in and
out very frequently to keep executing. Sometimes, the pages which will be
required in the near future have to be swapped out.
• Initially, when CPU utilization is low, the process scheduling
mechanism loads multiple processes into memory at the same time in order
to increase the level of multiprogramming, allocating a limited number of
frames to each process. As memory fills up, each process starts to spend a
lot of time waiting for its required pages to be swapped in, again leading
to low CPU utilization because most of the processes are waiting for pages.
The scheduler then loads even more processes to increase CPU utilization;
as this continues, at some point the complete system comes to a halt.
To prevent thrashing we must provide processes with as many frames as they really
need "right now".
FILE SYSTEM
• File concept
• Access methods
• Directory structure
• File System Mounting
• File sharing
• Protection
• File system structure and implementation,
• Directory implementation
• Allocation methods
• Free space management
• Efficiency and performance, Recovery
Introduction to File System
A file can be a free-form, indexed, or structured collection of
related bytes having meaning only to the one who created it.
In other words, an entry in a directory is a file. A file
may have attributes like name, creator, date, type,
permissions etc.
File Structure:
A file has various kinds of structure. Some of them can be :
• Simple Record Structure with lines of fixed or variable
lengths.
• Complex structures like a formatted document or
relocatable load file.
• No Definite Structure like sequence of words and bytes
etc.
Attributes of a File
Following are some of the attributes of a file :
• Name . It is the only information which is in human-
readable form.
• Identifier. The file is identified by a unique tag(number)
within file system.
• Type. It is needed for systems that support different types
of files.
• Location. Pointer to file location on device.
• Size. The current size of the file.
• Protection. This controls who can read, write, or
execute the file.
• Time, date, and user identification. This is the data for
protection, security, and usage monitoring.
File Access Methods
The way that files are accessed and read into memory is determined by
Access methods. Usually a single access method is supported by
systems while there are OS's that support multiple access methods.
1. Sequential Access
• Data is accessed one record right after another, in order.
• A read command causes the pointer to be moved ahead by one record.
• A write command allocates space for the record and moves the pointer to
the new end of file.
• Such a method is reasonable for tape.
2. Direct Access
• This method is useful for disks.
• The file is viewed as a numbered sequence of blocks or records.
• There are no restrictions on which blocks are read or written; it can be
done in any order (see the sketch at the end of this section).
• User now says "read n" rather than "read next".
• "n" is a number relative to the beginning of file, not relative to an
absolute physical disk location.
3. Indexed Sequential Access
• It is built on top of Sequential access.
• It uses an Index to control the pointer while accessing files.
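A small sketch contrasting sequential access ("read next") with direct
access ("read n") on a plain file of fixed-size blocks; the file name and
block size are hypothetical.

# Illustrative sequential vs. direct access over fixed-size blocks
BLOCK_SIZE = 512

def read_block(path, n):
    """Direct access: jump straight to block n and read it."""
    with open(path, "rb") as f:
        f.seek(n * BLOCK_SIZE)            # offset relative to start of file
        return f.read(BLOCK_SIZE)

def read_sequential(path):
    """Sequential access: yield blocks one after another, in order."""
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            yield block

# Usage (assuming a file named 'data.bin' exists):
# read_block("data.bin", 7) returns the 8th block without touching earlier ones.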
What is a Directory?
Information about files is maintained by directories. A directory can contain
multiple files, and it can even have other directories inside it. In Windows
these directories are also called folders.
Following is the information maintained in a directory :
• Name : The name visible to user.
• Type : Type of the directory.
• Location : Device and location on the device where the file header is
located.
• Size : Number of bytes/words/blocks in the file.
• Position : Current next-read/next-write pointers.
• Protection : Access control on read/write/execute/delete.
• Usage : Time of creation, access, modification etc.
• Mounting : When the root of one file system is "grafted" into the existing
tree of another file system, it is called Mounting.

More Related Content

What's hot

Distributed and clustered systems
Distributed and clustered systemsDistributed and clustered systems
Distributed and clustered systems
V.V.Vanniaperumal College for Women
 
Lecture 1 introduction to parallel and distributed computing
Lecture 1   introduction to parallel and distributed computingLecture 1   introduction to parallel and distributed computing
Lecture 1 introduction to parallel and distributed computing
Vajira Thambawita
 
Clock synchronization in distributed system
Clock synchronization in distributed systemClock synchronization in distributed system
Clock synchronization in distributed system
Sunita Sahu
 
Multi Processors And Multi Computers
 Multi Processors And Multi Computers Multi Processors And Multi Computers
Multi Processors And Multi Computers
Nemwos
 
services and system calls of operating system
services and system calls of operating system services and system calls of operating system
services and system calls of operating system
Saurabh Soni
 
file system in operating system
file system in operating systemfile system in operating system
file system in operating system
tittuajay
 
Physical organization of parallel platforms
Physical organization of parallel platformsPhysical organization of parallel platforms
Physical organization of parallel platforms
Syed Zaid Irshad
 
Introduction to Distributed System
Introduction to Distributed SystemIntroduction to Distributed System
Introduction to Distributed System
Sunita Sahu
 
Segmentation in Operating Systems.
Segmentation in Operating Systems.Segmentation in Operating Systems.
Segmentation in Operating Systems.
Muhammad SiRaj Munir
 
distributed shared memory
 distributed shared memory distributed shared memory
distributed shared memory
Ashish Kumar
 
Distributed Operating System_4
Distributed Operating System_4Distributed Operating System_4
Distributed Operating System_4
Dr Sandeep Kumar Poonia
 
Demand paging
Demand pagingDemand paging
Demand paging
Trinity Dwarka
 
Operating system 25 classical problems of synchronization
Operating system 25 classical problems of synchronizationOperating system 25 classical problems of synchronization
Operating system 25 classical problems of synchronization
Vaibhav Khanna
 
Operating system services 9
Operating system services 9Operating system services 9
Operating system services 9
myrajendra
 
Operating system overview concepts ppt
Operating system overview concepts pptOperating system overview concepts ppt
Operating system overview concepts ppt
RajendraPrasad Alladi
 
contiguous memory allocation.pptx
contiguous memory allocation.pptxcontiguous memory allocation.pptx
contiguous memory allocation.pptx
Rajapriya82
 
Communication model of parallel platforms
Communication model of parallel platformsCommunication model of parallel platforms
Communication model of parallel platforms
Syed Zaid Irshad
 
Multiprocessor Architecture (Advanced computer architecture)
Multiprocessor Architecture  (Advanced computer architecture)Multiprocessor Architecture  (Advanced computer architecture)
Multiprocessor Architecture (Advanced computer architecture)
vani261
 
Communication costs in parallel machines
Communication costs in parallel machinesCommunication costs in parallel machines
Communication costs in parallel machines
Syed Zaid Irshad
 
Multi processor scheduling
Multi  processor schedulingMulti  processor scheduling
Multi processor scheduling
Shashank Kapoor
 

What's hot (20)

Distributed and clustered systems
Distributed and clustered systemsDistributed and clustered systems
Distributed and clustered systems
 
Lecture 1 introduction to parallel and distributed computing
Lecture 1   introduction to parallel and distributed computingLecture 1   introduction to parallel and distributed computing
Lecture 1 introduction to parallel and distributed computing
 
Clock synchronization in distributed system
Clock synchronization in distributed systemClock synchronization in distributed system
Clock synchronization in distributed system
 
Multi Processors And Multi Computers
 Multi Processors And Multi Computers Multi Processors And Multi Computers
Multi Processors And Multi Computers
 
services and system calls of operating system
services and system calls of operating system services and system calls of operating system
services and system calls of operating system
 
file system in operating system
file system in operating systemfile system in operating system
file system in operating system
 
Physical organization of parallel platforms
Physical organization of parallel platformsPhysical organization of parallel platforms
Physical organization of parallel platforms
 
Introduction to Distributed System
Introduction to Distributed SystemIntroduction to Distributed System
Introduction to Distributed System
 
Segmentation in Operating Systems.
Segmentation in Operating Systems.Segmentation in Operating Systems.
Segmentation in Operating Systems.
 
distributed shared memory
 distributed shared memory distributed shared memory
distributed shared memory
 
Distributed Operating System_4
Distributed Operating System_4Distributed Operating System_4
Distributed Operating System_4
 
Demand paging
Demand pagingDemand paging
Demand paging
 
Operating system 25 classical problems of synchronization
Operating system 25 classical problems of synchronizationOperating system 25 classical problems of synchronization
Operating system 25 classical problems of synchronization
 
Operating system services 9
Operating system services 9Operating system services 9
Operating system services 9
 
Operating system overview concepts ppt
Operating system overview concepts pptOperating system overview concepts ppt
Operating system overview concepts ppt
 
contiguous memory allocation.pptx
contiguous memory allocation.pptxcontiguous memory allocation.pptx
contiguous memory allocation.pptx
 
Communication model of parallel platforms
Communication model of parallel platformsCommunication model of parallel platforms
Communication model of parallel platforms
 
Multiprocessor Architecture (Advanced computer architecture)
Multiprocessor Architecture  (Advanced computer architecture)Multiprocessor Architecture  (Advanced computer architecture)
Multiprocessor Architecture (Advanced computer architecture)
 
Communication costs in parallel machines
Communication costs in parallel machinesCommunication costs in parallel machines
Communication costs in parallel machines
 
Multi processor scheduling
Multi  processor schedulingMulti  processor scheduling
Multi processor scheduling
 

Similar to Os unit 3

Operating system memory management
Operating system memory managementOperating system memory management
Operating system memory management
rprajat007
 
Memory Management in Operating Systems for all
Memory Management in Operating Systems for allMemory Management in Operating Systems for all
Memory Management in Operating Systems for all
VSKAMCSPSGCT
 
UNIT-2 OS.pptx
UNIT-2 OS.pptxUNIT-2 OS.pptx
UNIT-2 OS.pptx
ssusera387fd1
 
Lecture-7 Main Memroy.pptx
Lecture-7 Main Memroy.pptxLecture-7 Main Memroy.pptx
Lecture-7 Main Memroy.pptx
Amanuelmergia
 
OS UNIT4.pptx
OS UNIT4.pptxOS UNIT4.pptx
OS UNIT4.pptx
DHANABALSUBRAMANIAN
 
Memory management
Memory managementMemory management
Memory management
PATELARCH
 
Unit-4 swapping.pptx
Unit-4 swapping.pptxUnit-4 swapping.pptx
Unit-4 swapping.pptx
ItechAnand1
 
Chapter 9 OS
Chapter 9 OSChapter 9 OS
Chapter 9 OS
C.U
 
07-MemoryManagement.ppt
07-MemoryManagement.ppt07-MemoryManagement.ppt
07-MemoryManagement.ppt
hello509579
 
Memory Management
Memory ManagementMemory Management
Memory Management
lavanya marichamy
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OS
Kumar Pritam
 
Introduction to memory management
Introduction to memory managementIntroduction to memory management
Introduction to memory management
Sweety Singhal
 
Main Memory
Main MemoryMain Memory
Main Memory
Usama ahmad
 
UNIT IV.pptx
UNIT IV.pptxUNIT IV.pptx
UNIT IV.pptx
YogapriyaJ1
 
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
LeahRachael
 
memory managment on computer science.ppt
memory managment on computer science.pptmemory managment on computer science.ppt
memory managment on computer science.ppt
footydigarse
 
Chapter07_ds.ppt
Chapter07_ds.pptChapter07_ds.ppt
Chapter07_ds.ppt
AvadhRakholiya3
 
Operating systems- Main Memory Management
Operating systems- Main Memory ManagementOperating systems- Main Memory Management
Operating systems- Main Memory Management
Chandrakant Divate
 
Memory Management.pdf
Memory Management.pdfMemory Management.pdf
Memory Management.pdf
SujanTimalsina5
 
CSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptxCSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptx
MadhuraK13
 

Similar to Os unit 3 (20)

Operating system memory management
Operating system memory managementOperating system memory management
Operating system memory management
 
Memory Management in Operating Systems for all
Memory Management in Operating Systems for allMemory Management in Operating Systems for all
Memory Management in Operating Systems for all
 
UNIT-2 OS.pptx
UNIT-2 OS.pptxUNIT-2 OS.pptx
UNIT-2 OS.pptx
 
Lecture-7 Main Memroy.pptx
Lecture-7 Main Memroy.pptxLecture-7 Main Memroy.pptx
Lecture-7 Main Memroy.pptx
 
OS UNIT4.pptx
OS UNIT4.pptxOS UNIT4.pptx
OS UNIT4.pptx
 
Memory management
Memory managementMemory management
Memory management
 
Unit-4 swapping.pptx
Unit-4 swapping.pptxUnit-4 swapping.pptx
Unit-4 swapping.pptx
 
Chapter 9 OS
Chapter 9 OSChapter 9 OS
Chapter 9 OS
 
07-MemoryManagement.ppt
07-MemoryManagement.ppt07-MemoryManagement.ppt
07-MemoryManagement.ppt
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OS
 
Introduction to memory management
Introduction to memory managementIntroduction to memory management
Introduction to memory management
 
Main Memory
Main MemoryMain Memory
Main Memory
 
UNIT IV.pptx
UNIT IV.pptxUNIT IV.pptx
UNIT IV.pptx
 
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND AND PHYSICAL DATA FLOW DI...
 
memory managment on computer science.ppt
memory managment on computer science.pptmemory managment on computer science.ppt
memory managment on computer science.ppt
 
Chapter07_ds.ppt
Chapter07_ds.pptChapter07_ds.ppt
Chapter07_ds.ppt
 
Operating systems- Main Memory Management
Operating systems- Main Memory ManagementOperating systems- Main Memory Management
Operating systems- Main Memory Management
 
Memory Management.pdf
Memory Management.pdfMemory Management.pdf
Memory Management.pdf
 
CSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptxCSE2010- Module 4 V1.pptx
CSE2010- Module 4 V1.pptx
 

More from SandhyaTatekalva

Uml diagrams usecase
Uml diagrams usecaseUml diagrams usecase
Uml diagrams usecase
SandhyaTatekalva
 
Staffing
StaffingStaffing
San se unit
San se unitSan se unit
San se unit
SandhyaTatekalva
 
Os unit i
Os unit iOs unit i
Os unit i
SandhyaTatekalva
 
Marketing
MarketingMarketing
Marketing
SandhyaTatekalva
 
E r diagram
E r diagramE r diagram
E r diagram
SandhyaTatekalva
 
Em unit v
Em unit vEm unit v
Em unit v
SandhyaTatekalva
 
communication
communicationcommunication
communication
SandhyaTatekalva
 
software engineering
software engineeringsoftware engineering
software engineering
SandhyaTatekalva
 

More from SandhyaTatekalva (9)

Uml diagrams usecase
Uml diagrams usecaseUml diagrams usecase
Uml diagrams usecase
 
Staffing
StaffingStaffing
Staffing
 
San se unit
San se unitSan se unit
San se unit
 
Os unit i
Os unit iOs unit i
Os unit i
 
Marketing
MarketingMarketing
Marketing
 
E r diagram
E r diagramE r diagram
E r diagram
 
Em unit v
Em unit vEm unit v
Em unit v
 
communication
communicationcommunication
communication
 
software engineering
software engineeringsoftware engineering
software engineering
 

Recently uploaded

Digital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments UnitDigital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments Unit
chanes7
 
Pride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School DistrictPride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School District
David Douglas School District
 
A Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptxA Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptx
thanhdowork
 
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptxChapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
Mohd Adib Abd Muin, Senior Lecturer at Universiti Utara Malaysia
 
How to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRMHow to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRM
Celine George
 
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
IreneSebastianRueco1
 
World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024
ak6969907
 
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
PECB
 
Your Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective UpskillingYour Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective Upskilling
Excellence Foundation for South Sudan
 
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
National Information Standards Organization (NISO)
 
S1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptxS1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptx
tarandeep35
 
South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)
Academy of Science of South Africa
 
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdfবাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
eBook.com.bd (প্রয়োজনীয় বাংলা বই)
 
Life upper-Intermediate B2 Workbook for student
Life upper-Intermediate B2 Workbook for studentLife upper-Intermediate B2 Workbook for student
Life upper-Intermediate B2 Workbook for student
NgcHiNguyn25
 
Azure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHatAzure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHat
Scholarhat
 
How to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold MethodHow to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold Method
Celine George
 
How to Fix the Import Error in the Odoo 17
How to Fix the Import Error in the Odoo 17How to Fix the Import Error in the Odoo 17
How to Fix the Import Error in the Odoo 17
Celine George
 
Digital Artefact 1 - Tiny Home Environmental Design
Digital Artefact 1 - Tiny Home Environmental DesignDigital Artefact 1 - Tiny Home Environmental Design
Digital Artefact 1 - Tiny Home Environmental Design
amberjdewit93
 
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat  Leveraging AI for Diversity, Equity, and InclusionExecutive Directors Chat  Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
TechSoup
 
The simplified electron and muon model, Oscillating Spacetime: The Foundation...
The simplified electron and muon model, Oscillating Spacetime: The Foundation...The simplified electron and muon model, Oscillating Spacetime: The Foundation...
The simplified electron and muon model, Oscillating Spacetime: The Foundation...
RitikBhardwaj56
 

Recently uploaded (20)

Digital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments UnitDigital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments Unit
 
Pride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School DistrictPride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School District
 
A Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptxA Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptx
 
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptxChapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
 
How to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRMHow to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRM
 
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
 
World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024
 
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
 
Your Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective UpskillingYour Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective Upskilling
 
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
Pollock and Snow "DEIA in the Scholarly Landscape, Session One: Setting Expec...
 
S1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptxS1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptx
 
South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)
 
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdfবাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
 
Life upper-Intermediate B2 Workbook for student
Life upper-Intermediate B2 Workbook for studentLife upper-Intermediate B2 Workbook for student
Life upper-Intermediate B2 Workbook for student
 
Azure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHatAzure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHat
 
How to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold MethodHow to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold Method
 
How to Fix the Import Error in the Odoo 17
How to Fix the Import Error in the Odoo 17How to Fix the Import Error in the Odoo 17
How to Fix the Import Error in the Odoo 17
 
Digital Artefact 1 - Tiny Home Environmental Design
Digital Artefact 1 - Tiny Home Environmental DesignDigital Artefact 1 - Tiny Home Environmental Design
Digital Artefact 1 - Tiny Home Environmental Design
 
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat  Leveraging AI for Diversity, Equity, and InclusionExecutive Directors Chat  Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
 
The simplified electron and muon model, Oscillating Spacetime: The Foundation...
The simplified electron and muon model, Oscillating Spacetime: The Foundation...The simplified electron and muon model, Oscillating Spacetime: The Foundation...
The simplified electron and muon model, Oscillating Spacetime: The Foundation...
 

Os unit 3

  • 1. MCA 203 OPERATING SYSTEMS UNIT - III
  • 2. Unit – III STORAGE MANAGEMENT Memory Management: • Swapping • Contiguous memory allocation • Paging, • Segmentation Virtual memory: • Demand paging • Page replacement, • Allocation of frames, • Thrashing. File System Interface & Implementation: • File concept • Access methods • Directory structure • File System Mounting • File sharing • Protection • File system structure and implementation, • Directory implementation • Allocation methods • Free space management • Efficiency and performance, Recovery.
  • 3. MEMORY MANAGEMENT • Swapping • Contiguous memory allocation • Paging • Segmentation
  • 4. INTRODUCTION: • Main Memory refers to a physical memory that is the internal memory to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Main memory is also known as RAM. The computer is able to change only data that is in main memory. Therefore, every program we execute and every file we access must be copied from a storage device into main memory. • All the programs are loaded in the main memory for execution. Sometimes complete program is loaded into the memory, but some times a certain part or routine of the program is loaded into the main memory only when it is called by the program, this mechanism is called Dynamic Loading, this enhance the performance. • Also, at times one program is dependent on some other program. In such a case, rather than loading all the dependent programs, CPU links the dependent programs to the main executing program when its required. This mechanism is known as Dynamic Linking.
  • 5. Base and Limit Registers • A pair of base and limit registers define the logical address space • CPU must check every memory access generated in user mode to be sure it is between base and limit for that user
  • 6. Hardware Address Protection with Base and Limit Registers
  • 7. Logical vs. Physical Address Space • The concept of a logical address space that is bound to a separate physical address space is central to proper memory management – Logical address – generated by the CPU; also referred to as virtual address – Physical address – address seen by the memory unit • Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme • Logical address space is the set of all logical addresses generated by a program • Physical address space is the set of all physical addresses generated by a program
  • 8. Dynamic relocation using a relocation register  Routine is not loaded until it is called  Better memory-space utilization; unused routine is never loaded  All routines kept on disk in relocatable load format  Useful when large amounts of code are needed to handle infrequently occurring cases  No special support from the operating system is required  Implemented through program design  OS can help by providing libraries to implement dynamic loading
  • 9. Swapping • A process needs to be in memory for execution. But sometimes there is not enough main memory to hold all the currently active processes in a timesharing system. So, excess process are kept on disk and brought in to run dynamically. • Swapping is the process of bringing in each process in main memory, running it for a while and then putting it back to the disk.
  • 10. Schematic View of Swapping
  • 11. Contiguous Memory Allocation • In contiguous memory allocation each process is contained in a single contiguous block of memory. Memory is divided into several fixed size partitions. Each partition contains exactly one process. When a partition is free, a process is selected from the input queue and loaded into it. The free blocks of memory are known as holes. The set of holes is searched to determine which hole is best to allocate.
  • 12. Contiguous Allocation • Main memory must support both OS and user processes • Limited resource, must allocate efficiently • Contiguous allocation is one early method • Main memory usually into two partitions: – Resident operating system, usually held in low memory with interrupt vector – User processes then held in high memory – Each process contained in single contiguous section of memory • Relocation registers used to protect user processes from each other, and from changing operating-system code and data – Base register contains value of smallest physical address – Limit register contains range of logical addresses – each logical address must be less than the limit register – MMU maps logical address dynamically – Can then allow actions such as kernel code being transient and kernel changing size
  • 13. Hardware Support for Relocation and Limit Registers
  • 14. Memory Protection • Memory protection is a phenomenon by which we control memory access rights on a computer. The main aim of it is to prevent a process from accessing memory that has not been allocated to it. Hence prevents a bug within a process from affecting other processes, or the operating system itself, and instead results in a segmentation fault or storage violation exception being sent to the disturbing process, generally killing of process.
  • 15. Memory Allocation: How to satisfy a request of size n from a list of free holes? Memory allocation is a process by which computer programs are assigned memory or space. It is of three types : • First Fit: The first hole that is big enough is allocated to program. • Best Fit: The smallest hole that is big enough is allocated to program. • Worst Fit: The largest hole that is big enough is allocated to program.  First-fit and best-fit better than worst-fit in terms of speed and storage utilization
  • 16. Fragmentation: • Fragmentation occurs in a dynamic memory allocation system when most of the free blocks are too small to satisfy any request. It is generally termed as inability to use the available memory. • In such situation processes are loaded and removed from the memory. As a result of this, free holes exists to satisfy a request but is non contiguous i.e. the memory is fragmented into large no. Of small holes. This phenomenon is known as External Fragmentation. • Also, at times the physical memory is broken into fixed size blocks and memory is allocated in unit of block sizes. The memory allocated to a space may be slightly larger than the requested memory. “The difference between allocated and required memory is known as Internal fragmentation” i.e. the memory that is internal to a partition but is of no use. •
  • 17. Fragmentation (Cont.,) • External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous • Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used • First fit analysis reveals that given N blocks allocated, 0.5 N blocks lost to fragmentation 1/3 may be unusable -> 50-percent rule • Reduce external fragmentation by compaction – Shuffle memory contents to place all free memory together in one large block – Compaction is possible only if relocation is dynamic, and is done at execution time – I/O problem • Latch job in memory while it is involved in I/O • Do I/O only into OS buffers • Now consider that backing store has same fragmentation problems
  • 18. Paging A solution to fragmentation problem is Paging. Paging is a memory management mechanism that allows the physical address space of a process to be non-contagious. Here physical memory is divided into blocks of equal size called Pages. The pages belonging to a certain process are loaded into available memory frames. Page Table • A Page Table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual address and physical addresses. • Virtual address is also known as Logical address and is generated by the CPU. While Physical address is the address that actually exists on memory.
  • 19.
  • 20. • Every address generated by the CPU is divided into two parts: Page number (p) and Page offset (d) Paging Hardware
  • 21. Paging Hardware  The page number is used as an index into a Page Table  The page size is defined by the hardware  The size of a page is typically a power of 2, varying between 512 bytes and 16MB per page  Reason: If the size of logical address is 2^m and page size is 2^n, then the high-order m-n bits of a logical address designate the page number
  • 23. Page Tables • The OS now needs to maintain (in main memory) a page table for each process • Each entry of a page table consists of the frame number where the corresponding page is physically located • The page table is indexed by the page number to obtain the frame number • A free-frame list, listing the frames available for pages, is maintained
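A minimal C sketch of this translation, assuming 4 KB pages and a toy in-memory page table whose frame numbers are made up: the page number (the high-order bits of the logical address) indexes the table, and the frame number is recombined with the offset to form the physical address.

#include <stdio.h>
#include <stdint.h>

/* Assumed parameters: 4 KB pages, so the low 12 bits are the offset. */
#define PAGE_SIZE   4096u
#define OFFSET_BITS 12u

/* Toy page table for one process: page number -> frame number
 * (no bounds check in this small example). */
static uint32_t page_table[] = {5, 9, 6, 7};   /* pages 0..3 */

uint32_t translate(uint32_t logical) {
    uint32_t page   = logical >> OFFSET_BITS;     /* high-order bits  */
    uint32_t offset = logical & (PAGE_SIZE - 1);  /* low-order n bits */
    uint32_t frame  = page_table[page];           /* index the table  */
    return frame * PAGE_SIZE + offset;            /* physical address */
}

int main(void) {
    uint32_t logical = 2 * PAGE_SIZE + 100;   /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}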
  • 27. Implementing the Page Table: • The simplest way to implement paging is to keep the page table in a set of dedicated registers • However, the number of registers is limited and the page table is usually large • Therefore, the page table is kept in main memory
  • 28. ADVANTAGES  No external fragmentation  Simple memory management algorithm  Swapping is easy (equal-sized pages and page frames)  Common code can be shared, especially in a time-sharing environment DISADVANTAGES  Internal fragmentation  Page tables may consume additional memory  Multi-level paging leads to memory-reference overhead
  • 29. Segmentation Segmentation is another memory management scheme that supports the user view of memory. Segmentation allows breaking the virtual address space of a single process into segments that may be placed in non-contiguous areas of physical memory. Segmentation with Paging Paging and segmentation each have their own advantages and disadvantages, so it is better to combine the two schemes to improve on each. The combined scheme is commonly known as paged segmentation. Each segment in this scheme is divided into pages and a separate page table is maintained for each segment, so the logical address is divided into the following 3 parts: • Segment number (S) • Page number (P) • The displacement or offset (D)
  • 30. Logical addressing in Segmentation A logical address consists of two parts: a segment number and an offset within that segment. The mapping of the logical address to the physical address is done with the help of the segment table. Each segment-table entry holds: the segment base (the starting address of the corresponding segment in main memory), the segment limit (the length of the segment), and other control bits: a present bit (P) indicating whether the segment is already in main memory, and a modified bit (M) indicating whether the segment has been changed since it was loaded into main memory.
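A small C sketch of this lookup, assuming an illustrative segment table that keeps only the base and limit fields (the present and modified bits are omitted): an offset beyond the segment limit traps, otherwise the physical address is base + offset.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Segment-table entry: base (start in main memory) and limit (length). */
typedef struct {
    uint32_t base;
    uint32_t limit;
} segment_entry;

/* Toy segment table for one process (values are illustrative). */
static segment_entry seg_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300, 1100},   /* segment 2 */
};

uint32_t translate(uint32_t segment, uint32_t offset) {
    if (offset >= seg_table[segment].limit) {    /* protection check  */
        fprintf(stderr, "trap: offset beyond segment limit\n");
        exit(EXIT_FAILURE);
    }
    return seg_table[segment].base + offset;     /* base + offset     */
}

int main(void) {
    printf("(2, 53)  -> %u\n", translate(2, 53));       /* 4300 + 53 = 4353 */
    printf("(1, 852) -> traps, since 852 >= limit 400\n");
    translate(1, 852);                                   /* exits with a trap */
    return 0;
}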
  • 33. ADVANTAGES OF SEGMENTATION • No internal fragmentation • Segment tables consume less memory than page tables (only one entry per actual segment as opposed to one entry per page in paging) • Because the segment table is small, the lookup overhead on a memory reference is low • Lends itself to sharing data among processes • Lends itself to protection: since the individual lines of a page do not form one logical unit, it is not possible to attach a meaningful access right to a page, whereas each segment can be given its own access rights
  • 34. DISADVANTAGES • External fragmentation • Costly memory management algorithm • Unequal segment sizes make swapping awkward So, why can't we combine the ease of sharing and protection we get from segments with the efficient memory utilization we get from pages? That is exactly what paged segmentation does.
  • 35. Paging versus Segmentation
  Paging:  Each process is assigned its own page table  Page table size is proportional to the allocated memory  Often large page tables and/or multi-level paging  Internal fragmentation  Free memory is quickly allocated to a process
  Segmentation:  Each process is assigned a segment table  Segment table size is proportional to the number of segments  Usually small segment tables  External fragmentation  Lengthy search times when allocating memory to a process
  • 36. VIRTUAL MEMORY • Demand paging • Page replacement, • Allocation of frames, • Thrashing.
  • 37. What is Virtual Memory? Virtual memory is a technique that lets large programs store themselves in the form of pages during their execution, while only the required pages or portions of the process are loaded into main memory. The technique is useful because a large virtual address space can be provided for user programs even when the physical memory is very small. In real scenarios, most processes never need all their pages at once, for the following reasons: • Error-handling code is not needed unless that specific error occurs, and some errors are quite rare. • Arrays are often over-sized for worst-case scenarios, and only a small fraction of an array is actually used in practice. • Certain features of certain programs are rarely used.
  • 38. Benefits of having Virtual Memory • Large programs can be written, as the virtual address space available is huge compared to physical memory. • Less I/O is required, which leads to faster and easier swapping of processes. • More physical memory is effectively available, since programs are stored in virtual memory and occupy only a small part of actual physical memory at any time.
  • 39. What is Demand Paging? • The basic idea behind demand paging is that when a process is swapped in, its pages are not swapped in all at once. Rather, they are swapped in only when the process needs them (on demand). This is termed a lazy swapper, although a pager is a more accurate term. • Initially, only those pages are loaded which the process requires immediately.
  • 40. The pages that are not brought into memory are marked as invalid in the page table. For an invalid entry the rest of the entry is empty. Pages that are loaded in memory are marked as valid, along with the information about where to find them. When the process requires a page that is not loaded into memory, a page fault trap is triggered and the following steps are followed: 1. The memory address requested by the process is first checked, to verify that the request made by the process is valid. 2. If it is found to be invalid, the process is terminated. 3. If the request is valid, a free frame is located, possibly from a free-frame list, into which the required page will be moved. 4. A disk operation is scheduled to move the necessary page from disk to the specified memory location. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.) 5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to valid. 6. The instruction that caused the page fault is restarted. There are cases when no pages are loaded into memory initially; pages are loaded only when demanded by the process by generating page faults. This is called Pure Demand Paging. The main cost of demand paging is the overhead of servicing page faults; it is not a big issue when faults are rare, but a high fault rate affects performance drastically.
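The steps above can be sketched as a tiny C simulation, assuming a toy page table with a valid bit per entry; the names access_page and read_page_from_swap are invented for the illustration, and page replacement (covered later) is not handled.

#include <stdio.h>
#include <stdbool.h>

#define NUM_PAGES  8     /* pages of the process      */
#define NUM_FRAMES 4     /* physical frames available */

/* Page-table entry: valid bit plus the frame number when resident. */
typedef struct { bool valid; int frame; } pte;

static pte page_table[NUM_PAGES];   /* all invalid at start (pure demand paging) */
static int next_free_frame = 0;
static int page_faults = 0;

/* Stand-in for the disk read that the real OS would schedule. */
static void read_page_from_swap(int page, int frame) {
    printf("  I/O: load page %d from swap space into frame %d\n", page, frame);
}

/* Access a page; on an invalid entry, service the page fault. */
int access_page(int page) {
    if (!page_table[page].valid) {
        page_faults++;
        printf("page fault on page %d\n", page);
        int frame = next_free_frame++;      /* take the next free frame        */
        read_page_from_swap(page, frame);   /* schedule the disk operation     */
        page_table[page].frame = frame;     /* update the page table ...       */
        page_table[page].valid = true;      /* ... and flip the bit to valid   */
        /* the faulting instruction would now be restarted */
    }
    return page_table[page].frame;
}

int main(void) {
    int refs[] = {0, 1, 0, 2, 1, 3};
    for (int i = 0; i < 6; i++)
        printf("page %d is in frame %d\n", refs[i], access_page(refs[i]));
    printf("total page faults: %d\n", page_faults);
    return 0;
}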
  • 41. PURE DEMAND PAGING We start executing a process with no pages in memory. When the operating system sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, the process immediately faults for that page. After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory. At that point, it can execute with no more faults. This is pure demand paging.
  • 42. HARDWARE SUPPORT The hardware to support demand paging is the same as the hardware for paging and swapping:  PAGE TABLE: this table has the ability to mark an entry invalid, through a valid-invalid bit or a special value of the protection bits.  SECONDARY MEMORY: this memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk, known as the swap device, and the section of the disk used for this purpose is known as the swap space.
  • 43. Performance of Demand Paging • Page fault rate p, where 0 <= p <= 1.0 - if p = 0, there are no page faults - if p = 1, every reference is a fault • Effective Access Time (EAT) EAT = (1 - p) x memory access time + p x (page fault overhead + swap page out + swap page in + restart overhead)
  • 44. Demand Paging Example • Memory access time = 200 nanoseconds • Average page-fault service time = 8 milliseconds • EAT = (1 - p) x 200 + p x 8,000,000 ns = 200 + p x 7,999,800 ns EAT grows linearly with the page fault rate.
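Plugging a few page-fault rates into this formula shows how sensitive the EAT is; the sketch below simply evaluates EAT = (1 - p) x 200 ns + p x 8,000,000 ns for some illustrative values of p.

#include <stdio.h>

int main(void) {
    /* Figures from the slide: 200 ns memory access, 8 ms fault service time. */
    double memory_access_ns = 200.0;
    double fault_service_ns = 8000000.0;   /* 8 milliseconds in nanoseconds */

    /* EAT = (1 - p) * memory access + p * page-fault service time */
    double rates[] = {0.0, 0.001, 0.0000025};
    for (int i = 0; i < 3; i++) {
        double p   = rates[i];
        double eat = (1.0 - p) * memory_access_ns + p * fault_service_ns;
        printf("p = %.7f  ->  EAT = %.1f ns\n", p, eat);
    }
    return 0;
}

Even a fault rate of one in a thousand pushes the EAT from 200 ns to roughly 8,200 ns, which is why the fault rate must be kept very low.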
  • 45. What happens if there is no free frame? • Page replacement - find some page in memory, but not really in use, swap it out - Algorithm - Performance - want an algorithm which will result in minimum number of page faults • Same page may be brought into memory several times
  • 46. Page Replacement • As studied in demand paging, only certain pages of a process are loaded into memory initially. This allows us to fit more processes into memory at the same time. But what happens when a process requests more pages and no free memory is available to bring them in? • The following steps can be taken to deal with this problem: 1. Put the process in a wait queue until some other process finishes its execution and frees frames. 2. Or, remove some other process completely from memory to free its frames. 3. Or, find some pages that are not being used right now and move them to the disk to get free frames. This technique is called page replacement and is the most commonly used; there are several good algorithms to carry out page replacement efficiently.
  • 47. Basic Page Replacement • Find the location of the page requested by the ongoing process on the disk. • Find a free frame. If there is a free frame, use it. If there is no free frame, use a page-replacement algorithm to select an existing frame to be replaced; such a frame is known as the victim frame. • Write the victim frame to disk. Change all related page tables to indicate that this page is no longer in memory. • Read the required page and store it in the frame. Adjust all related page and frame tables to indicate the change. • Restart the process that was waiting for this page.
  • 48. STEPS IN PAGE REPLACEMENT :  Find the location of the desired page on the disk.  Find a free frame :  If there is a free frame, use it.  If there is no free frame, use a page replacement algorithm to select a victim frame.  Write the victim frame to the disk, change the page and frame tables accordingly.  Read the desired page into the newly freed frame, change the page and frame tables.  Restart the user process.
  • 49. Page Replacement Use modify (dirty) bit to reduce overhead of page transfers - only modified pages are written to disk
  • 50. Page Replacement Algorithms • Want lowest page-fault rate • Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string • In all our examples, the reference string is 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
  • 51. FIFO Page Replacement • A very simple page replacement policy is FIFO (First In First Out). • As new pages are requested and swapped in, they are added to the tail of a queue, and the page at the head of the queue becomes the victim. • It is not a very effective page replacement policy, but it can be used for small systems.
  • 52. First-In-First-Out (FIFO) Example Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 Compare the number of page faults with 3 frames (3 pages can be in memory at a time per process) and with 4 frames.
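A small C simulation of FIFO replacement on this reference string, counting the page faults for both 3 and 4 frames; the helper name fifo_faults is just for illustration.

#include <stdio.h>
#include <stdbool.h>

/* Reference string from the slide. */
static int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
#define NREFS (int)(sizeof(refs) / sizeof(refs[0]))
#define MAX_FRAMES 8

/* Simulate FIFO replacement and return the number of page faults. */
int fifo_faults(int nframes) {
    int frames[MAX_FRAMES];
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < NREFS; i++) {
        bool hit = false;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (hit) continue;

        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];     /* free frame available */
        } else {
            frames[next] = refs[i];       /* replace oldest page  */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    printf("3 frames: %d page faults\n", fifo_faults(3));
    printf("4 frames: %d page faults\n", fifo_faults(4));
    return 0;
}

Running the simulation shows that, for this particular reference string, FIFO actually suffers more faults with 4 frames than with 3, which is one reason FIFO is considered a weak policy.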
  • 54. Optimal Page Replacement Replace the page that will not be used for the longest period of time. Unfortunately, optimal page replacement is difficult to implement, because it requires future knowledge of the reference string.
  • 55. Least Recently Used (LRU) Algorithm • LRU replacement associates with each page the time of that page’s last use • When a page must be replaced, LRU chooses the page that has not been used for the longest period of time
  • 56. Least Recently Used (LRU) Algorithm • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 • Frame contents after each page fault (4 frames): [1 2 3 4] -> [1 2 5 4] -> [1 2 5 3] -> [1 2 4 3] -> [5 2 4 3]
  • 57. LRU Implementation • The major problem is how to implement LRU replacement: - Counter: whenever a reference to a page is made, the contents of the clock register are copied to the time-of-use field in the page-table entry for that page. We replace the page with the smallest time value. - Stack: whenever a page is referenced, it is removed from the stack and put on top. In this way, the most recently used page is always at the top of the stack and the least recently used page is at the bottom.
  • 58. Example: LRU Page Replacement
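A minimal C simulation of the stack-based LRU approach described on the previous slide, applied to the same reference string with 4 frames; the function name lru_faults is illustrative.

#include <stdio.h>

static int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
#define NREFS (int)(sizeof(refs) / sizeof(refs[0]))
#define MAX_FRAMES 8

/* Stack-based LRU: stack[0] is the least recently used page,
 * stack[top-1] is the most recently used page. */
int lru_faults(int nframes) {
    int stack[MAX_FRAMES];
    int top = 0, faults = 0;

    for (int i = 0; i < NREFS; i++) {
        int page = refs[i], pos = -1;
        for (int j = 0; j < top; j++)
            if (stack[j] == page) { pos = j; break; }

        if (pos >= 0) {
            /* Hit: pull the page out and put it back on top. */
            for (int j = pos; j < top - 1; j++) stack[j] = stack[j + 1];
            stack[top - 1] = page;
        } else {
            faults++;
            if (top == nframes) {
                /* Victim is the bottom of the stack (least recently used). */
                for (int j = 0; j < top - 1; j++) stack[j] = stack[j + 1];
                top--;
            }
            stack[top++] = page;
        }
    }
    return faults;
}

int main(void) {
    printf("4 frames, LRU: %d page faults\n", lru_faults(4));
    return 0;
}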
  • 59. Allocation of Frames • Each process needs a minimum number of frames • Two major allocation schemes: - fixed allocation - priority allocation • The minimum number of frames per process is defined by the architecture; the maximum number is defined by the amount of physical memory.
  • 60. Fixed Allocation • Equal allocation - For example, if there are 100 frames and 5 processes, give each process 20 frames. • Proportional allocation - Allocate according to the size of the process: s_i = size of process p_i, S = sum of all s_i, m = total number of free frames, a_i = allocation for p_i = (s_i / S) x m - Example: m = 62, s_1 = 10, s_2 = 127, so S = 137, a_1 = (10/137) x 62 ≈ 4 and a_2 = (127/137) x 62 ≈ 57
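A small C sketch of the proportional-allocation computation above, using the same figures (62 free frames, process sizes 10 and 127); integer division truncates, which is why the allocations come out as roughly 4 and 57.

#include <stdio.h>

int main(void) {
    int m = 62;                    /* total number of free frames   */
    int sizes[] = {10, 127};       /* s_i, the size of each process */
    int n = 2;

    int S = 0;                     /* S = sum of all process sizes  */
    for (int i = 0; i < n; i++) S += sizes[i];

    for (int i = 0; i < n; i++) {
        int alloc = sizes[i] * m / S;   /* a_i = (s_i / S) * m, truncated */
        printf("process %d (size %d) gets about %d frames\n",
               i + 1, sizes[i], alloc);
    }
    return 0;
}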
  • 61. Priority Allocation • Use a proportional allocation scheme using priorities rather than size • If process Pi generates a page fault, - select for replacement one of its frames - select for replacement a frame from a process with lower priority number
  • 62. Global vs. Local Allocation • Global replacement - process selects a replacement frame from the set of all frames; one process can take a frame from another • Local replacement - each process selects from only its own set of allocated frames
  • 63. Thrashing • A process that is spending more time paging than executing is said to be thrashing. In other words, the process does not have enough frames to hold all the pages it needs for its execution, so it is swapping pages in and out very frequently to keep executing. Sometimes, pages that will be required in the near future have to be swapped out. • Initially, when CPU utilization is low, the process-scheduling mechanism loads multiple processes into memory at the same time to increase the level of multiprogramming, allocating a limited number of frames to each process. As memory fills up, processes start to spend a lot of time waiting for their required pages to be swapped in, again leading to low CPU utilization because most of the processes are waiting for pages. The scheduler then loads even more processes to try to increase CPU utilization; as this continues, at some point the complete system comes to a halt.
  • 64. To prevent thrashing we must provide processes with as many frames as they really need "right now".
  • 65. FILE SYSTEM • File concept • Access methods • Directory structure • File System Mounting • File sharing • Protection • File system structure and implementation, • Directory implementation • Allocation methods • Free space management • Efficiency and performance, Recovery
  • 66. Introduction to File System A file can be a free-form, indexed or structured collection of related bytes, having meaning only to the one who created it. In other words, an entry in a directory is the file. The file may have attributes like name, creator, date, type, permissions etc. File Structure: A file can have various kinds of structure. Some of them are: • Simple record structure, with lines of fixed or variable length. • Complex structures, like a formatted document or a relocatable load file. • No definite structure, like a stream of words or bytes.
  • 67. Attributes of a File Following are some of the attributes of a file: • Name. It is the only information kept in human-readable form. • Identifier. The file is identified by a unique tag (number) within the file system. • Type. It is needed for systems that support different types of files. • Location. Pointer to the file location on the device. • Size. The current size of the file. • Protection. This controls and assigns the power of reading, writing and executing. • Time, date, and user identification. This is the data needed for protection, security, and usage monitoring.
  • 68. File Access Methods The way that files are accessed and read into memory is determined by the access method. Usually a single access method is supported by a system, although some operating systems support multiple access methods. 1. Sequential Access • Data is accessed one record right after another, in order. • A read command causes a pointer to be moved ahead by one record. • A write command allocates space for the record and moves the pointer to the new end of file. • Such a method is reasonable for tape. 2. Direct Access • This method is useful for disks. • The file is viewed as a numbered sequence of blocks or records. • There are no restrictions on which blocks are read or written; it can be done in any order. • The user now says "read n" rather than "read next". • "n" is a number relative to the beginning of the file, not an absolute physical disk location. 3. Indexed Sequential Access • It is built on top of sequential access. • It uses an index to control the pointer while accessing files.
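A brief C sketch contrasting the first two access methods, assuming a hypothetical file records.dat that holds fixed-size records: the sequential loop reads record after record, while direct access jumps straight to record n by seeking to an offset relative to the beginning of the file.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative fixed-size record; records.dat is assumed to contain
 * a sequence of such records. */
typedef struct {
    int  id;
    char name[28];
} record;

int main(void) {
    FILE *fp = fopen("records.dat", "rb");
    if (!fp) { perror("fopen"); return EXIT_FAILURE; }

    record r;

    /* Sequential access: each read advances the file pointer by one record. */
    while (fread(&r, sizeof r, 1, fp) == 1)
        printf("sequential: id=%d name=%s\n", r.id, r.name);

    /* Direct access: "read n" jumps straight to record n (here n = 5). */
    long n = 5;
    if (fseek(fp, n * (long)sizeof r, SEEK_SET) == 0 &&
        fread(&r, sizeof r, 1, fp) == 1)
        printf("direct: record %ld -> id=%d name=%s\n", n, r.id, r.name);

    fclose(fp);
    return 0;
}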
  • 69. What is a Directory? Information about files is maintained by directories. A directory can contain multiple files, and it can even have other directories inside it. In Windows these directories are also called folders. The following information is maintained in a directory: • Name: the name visible to the user. • Type: the type of the directory. • Location: the device and the location on the device where the file header is located. • Size: the number of bytes/words/blocks in the file. • Position: the current next-read/next-write pointers. • Protection: access control on read/write/execute/delete. • Usage: time of creation, access, modification etc. • Mounting: when the root of one file system is "grafted" into the existing tree of another file system, it is called mounting.