Chanderprabhu Jain College of Higher Studies & School of Law
Plot No. OCF, Sector A-8, Narela, New Delhi – 110040
(Affiliated to Guru Gobind Singh Indraprastha University and Approved by Govt of NCT of Delhi & Bar Council of India)
Semester: Fifth Semester
Name of the Subject: Operating System
UNIT- 1
WHAT IS AN OPERATING SYSTEM?
• An interface between users and hardware - an environment "architecture"
• Allows convenient usage; hides the tedious stuff
• Allows efficient usage; parallel activity, avoids wasted cycles
• Provides information protection
• Gives each user a slice of the resources
• Acts as a control program.
The Layers of a System

From top to bottom:
• Humans
• Program interface
• User programs
• O.S. interface
• O.S.
• Hardware interface / privileged instructions
• Disk/Tape/Memory
Components

• A mechanism for scheduling jobs or processes. Scheduling can be as simple as running the next process, or it can use relatively complex rules to pick a running process.
• A method for simultaneous CPU execution and I/O handling: processing is going on even as I/O is occurring in preparation for future CPU work.
• Off-line processing: not only are I/O and CPU happening concurrently, but some off-board processing is occurring with the I/O.
The CPU is wasted if a job waits for I/O. This leads to:
– Multiprogramming ( dynamic switching ). While one job waits for a resource, the
CPU can find another job to run. It means that several jobs are ready to run and
only need the CPU in order to continue.
CPU scheduling is the subject of Chapter 6.
All of this leads to:
– memory management
– resource scheduling
– deadlock protection
which are the subject of the rest of this course.
Characteristics

Other characteristics include:
• Time Sharing - multiprogramming environment that's also interactive.
• Multiprocessing - Tightly coupled systems that communicate via shared memory. Used for
scientific applications. Used for speed improvement by putting together a number of off-the-shelf
processors.
• Distributed Systems - Loosely coupled systems that communicate via message passing. Advantages
include resource sharing, speed up, reliability, communication.
• Real Time Systems - Rapid response time is main characteristic. Used in control of applications
where rapid response to a stimulus is essential.
Interrupts:
• An interrupt generally transfers control to the interrupt service routine through
the interrupt vector, which contains the addresses of all the service routines.
• Interrupt architecture must save the address of the interrupted instruction.
• Incoming interrupts are disabled while another interrupt is being processed to
prevent a lost interrupt.
• A trap is a software-generated interrupt caused either by an error or a user
request.
• An operating system is interrupt driven.
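The dispatch path described above can be sketched as a small simulation. The handler names and the use of Python functions in the vector are illustrative assumptions; a real vector holds machine addresses of service routines.

```python
# Sketch of interrupt dispatch through an interrupt vector (hypothetical names).
def timer_isr():
    return "timer handled"

def disk_isr():
    return "disk I/O handled"

# The interrupt vector maps interrupt numbers to service routines.
interrupt_vector = [timer_isr, disk_isr]

def interrupt(number, interrupted_pc):
    """Save the interrupted address, run the service routine, then resume."""
    saved_pc = interrupted_pc              # the architecture saves this address
    result = interrupt_vector[number]()    # transfer control via the vector
    return result, saved_pc                # execution resumes at saved_pc
```

The saved program counter is what lets the interrupted instruction stream continue as if nothing had happened.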
Hardware Support

These are the devices that make up a typical system. Any of these devices can cause an electrical interrupt that grabs the attention of the CPU.
8
Chanderprabhu Jain College of Higher
Studies & School of Law
Operating System
• Is a program that controls the execution of
application programs
– OS must relinquish control to user programs
and regain it safely and efficiently
– Tells the CPU when to execute other programs
• Is an interface between the user and
hardware
• Masks the details of the hardware to
application programs
– Hence OS must deal with hardware details
Services Provided by the OS
• Facilities for Program creation
– editors, compilers, linkers, and debuggers
• Program execution
– loading in memory, I/O and file initialization
• Access to I/O and files
– deals with the specifics of I/O and file formats
• System access
– Protection in access to resources and data
– Resolves conflicts for resource contention
Services Provided by the OS
• Accounting
– collect statistics on resource usage
– monitor performance (e.g., response time)
– used for system parameter tuning to improve
performance
– useful for anticipating future enhancements
– used for billing users (on multiuser systems)
Evolution of an Operating System
• Must adapt to hardware upgrades and new
types of hardware. Examples:
– Character vs graphic terminals
– Introduction of paging hardware
• Must offer new services, e.g., Internet support
• The need to change the OS on a regular basis places requirements on its design
– modular construction with clean interfaces
– object oriented methodology
The Monitor
• Monitor reads jobs one at a time
from the input device
• Monitor places a job in the user
program area
• A monitor instruction branches to
the start of the user program
• Execution of the user program continues until:
– end-of-program occurs
– an error occurs
• Either event causes the CPU to fetch its next instruction from the Monitor
Job Control Language (JCL)
• Is the language to provide
instructions to the monitor
– what compiler to use
– what data to use
• Example of job format:

  $JOB
  $FTN
  ... FORTRAN program ...
  $LOAD
  $RUN
  ... Data ...
  $END

• $FTN loads the compiler and transfers control to it
• $LOAD loads the object code (in place of the compiler)
• $RUN transfers control to the user program
Job Control Language (JCL)
• Each read instruction (in the user program) causes one line of input to be read
• It causes the (OS) input routine to be invoked, which
– checks that it is not reading a JCL line
– skips to the next JCL line at completion of the user program
Timesharing
• Multiprogramming allowed several jobs to be
active at one time
– Initially used for batch systems
– Cheaper hardware terminals -> interactive use
• Computer use got much cheaper and easier
– No more “priesthood”
– Quick turnaround meant quick fixes for problems
Types of modern operating
systems
• Mainframe operating systems: MVS
• Server operating systems: FreeBSD, Solaris
• Multiprocessor operating systems: Cellular IRIX
• Personal computer operating systems: Windows, Unix
• Real-time operating systems: VxWorks
• Embedded operating systems
• Smart card operating systems
• Some operating systems can fit into more than one category
Memory

• Single base/limit pair: set for each process; the registers bound the region holding the user program and data
• Two base/limit registers: one pair for the program, one for the data, so each region is bounded separately

(Figure: memory maps with the operating system and user program/data regions at example addresses between 0 and 0x2ffff, with base and limit registers marking each user region.)
Anatomy of a device request

• Sequence as seen by hardware:
– Request sent to the disk controller, then to the disk
– Disk responds, signals the disk controller, which tells the interrupt controller
– Interrupt controller notifies the CPU
• Interrupt handling (software point of view): the CPU finishes instruction n, jumps to the interrupt handler, processes the interrupt, and returns to instruction n+1
Processes
• Process: program in execution
– Address space (memory) the
program can use
– State (registers, including
program counter & stack
pointer)
• OS keeps track of all processes in
a process table
• Processes can create other
processes
– Process tree tracks these
relationships
– A is the root of the tree
– A created three child processes:
B, C, and D
– C created two child processes: E
and F
– D created one child process: G
Inside a (Unix) process
• Processes have three
segments
– Text: program code
– Data: program data
• Statically declared variables
• Areas allocated by malloc()
or new
– Stack
• Automatic variables
• Procedure call information
• Address space growth
– Text: doesn’t grow
– Data: grows “up”
– Stack: grows “down”
(Figure: Text at address 0, Data above it, and the Stack at the top of the address space near 0x7fffffff.)
Services Provided by the OS
• Error Detection
– internal and external
hardware errors
• memory error
• device failure
– software errors
• arithmetic overflow
• access forbidden memory
locations
– Inability of the OS to grant a request of the application
• Error Response
– simply report error to the
application
– Retry the operation
– Abort the application
Batch OS
• Alternates execution between user program
and the monitor program
• Relies on available hardware to effectively
alternate execution from various parts of
memory
Desirable Hardware Features
• Memory protection
– do not allow the memory area containing the
monitor to be altered by user programs
• Timer
– prevents a job from monopolizing the system
– an interrupt occurs when time expires
Memory hierarchy
• What is the memory hierarchy?
– Different levels of memory
– Some are small & fast
– Others are large & slow
• What levels are usually included?
– Cache: small amount of fast, expensive memory
• L1 (level 1) cache: usually on the CPU chip
• L2 & L3 cache: off-chip, made of SRAM
– Main memory: medium-speed, medium price memory (DRAM)
– Disk: many gigabytes of slow, cheap, non-volatile storage
• Memory manager handles the memory
hierarchy
Basic memory management
• Components include
– Operating system (perhaps with device drivers)
– Single process
• Goal: lay these out in memory
– Memory protection may not be an issue (only one program)
– Flexibility may still be useful (allow OS changes, etc.)
• No swapping or paging
(Figure: three possible layouts: (1) OS in RAM at the bottom with the user program above; (2) user program in RAM with the OS in ROM at the top; (3) OS in RAM at the bottom, user program in the middle, device drivers in ROM at the top.)
Fixed partitions: multiple
programs
• Fixed memory partitions
– Divide memory into fixed spaces
– Assign a process to a space when it’s free
• Mechanisms
– Separate input queues for each partition
– Single input queue: better ability to optimize CPU usage
(Figure: memory divided at 0, 100K, 500K, 600K, 700K, and 900K into the OS region and partitions 1–4; shown once with a separate input queue per partition and once with a single input queue.)
Swapping
• Memory allocation changes as
– Processes come into memory
– Processes leave memory
• Swapped to disk
• Complete execution
• Gray regions are unused memory
(Figure: seven snapshots of memory over time as processes A, B, C, and D are loaded above the OS, swapped out, and swapped back in; gray regions are unused memory.)
Tracking memory usage: linked
lists
• Keep track of free / allocated memory regions with a linked list
– Each entry in the list corresponds to a contiguous region of memory
– Entry can indicate either allocated or free (and, optionally, owning process)
– May have separate lists for free and allocated areas
• Efficient if chunks are large
– Fixed-size representation for each region
– More regions => more space needed for free lists
(Figure: a memory map of regions A–D with holes, and the corresponding list of (state, start, length) entries: (A,0,6), (free,6,4), (B,10,3), (free,13,4), (C,17,9), (D,26,3), (free,29,3).)
Allocating memory
• Search through region list to find a large enough space
• Suppose there are several choices: which one to use?
– First fit: the first suitable hole on the list
– Next fit: the first suitable after the previously allocated hole
– Best fit: the smallest hole that is larger than the desired region (wastes least
space?)
– Worst fit: the largest available hole (leaves largest fragment)
• Option: maintain separate queues for different-size holes
• Example: free list of holes as (start, length) pairs: (6,5), (19,14), (52,25), (102,30), (135,16), (202,10), (302,20), (350,30), (411,19), (510,3)
– Allocate 20 blocks first fit: takes the 25-block hole at 52, leaving 5
– Allocate 12 blocks next fit: takes the 30-block hole at 102, leaving 18
– Allocate 13 blocks best fit: takes the 14-block hole at 19, leaving 1
– Allocate 15 blocks worst fit: takes the 30-block hole at 350, leaving 15
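The four strategies can be sketched as one small allocator. This is a simplified model, not a real memory manager: holes are (start, size) pairs, and next fit is assumed to resume scanning from the hole used by the previous allocation.

```python
class HoleList:
    """Toy contiguous-memory allocator over a sorted list of (start, size) holes."""

    def __init__(self, holes):
        self.holes = sorted(holes)
        self.last = 0                      # hole index where next fit resumes

    def _take(self, i, size):
        """Carve `size` blocks from hole i; return (start, leftover)."""
        start, sz = self.holes[i]
        leftover = sz - size
        if leftover:
            self.holes[i] = (start + size, leftover)
        else:
            del self.holes[i]
        self.last = min(i, len(self.holes) - 1)
        return start, leftover

    def first_fit(self, size):
        for i, (_, sz) in enumerate(self.holes):
            if sz >= size:
                return self._take(i, size)

    def next_fit(self, size):
        n = len(self.holes)
        for k in range(n):
            i = (self.last + k) % n        # resume after the last placement
            if self.holes[i][1] >= size:
                return self._take(i, size)

    def best_fit(self, size):
        fits = [i for i, (_, sz) in enumerate(self.holes) if sz >= size]
        if fits:
            return self._take(min(fits, key=lambda i: self.holes[i][1]), size)

    def worst_fit(self, size):
        fits = [i for i, (_, sz) in enumerate(self.holes) if sz >= size]
        if fits:
            return self._take(max(fits, key=lambda i: self.holes[i][1]), size)
```

Run against the free list from the slide's example, the sequence first fit 20, next fit 12, best fit 13, worst fit 15 leaves fragments of 5, 18, 1, and 15 blocks respectively.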
Freeing memory
• Allocation structures must be updated when memory is freed
• Easy with bitmaps: just set the appropriate bits in the bitmap
• Linked lists: modify adjacent elements as needed
– Merge adjacent free regions into a single region
– May involve merging two regions with the just-freed area
(Figure: the four neighbor configurations when freeing region X: allocated on both sides (A X B), free on the right only (A X), free on the left only (X B), free on both sides; in each case adjacent free regions merge into a single hole.)
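With a sorted hole list, freeing reduces to inserting the region and merging adjacent entries. This is a sketch under the same (start, size) hole model as above; a real allocator would also update its allocated-region list.

```python
def free_region(start, size, holes):
    """Insert the freed (start, size) region and coalesce adjacent holes."""
    holes = sorted(holes + [(start, size)])
    merged = [holes[0]]
    for s, sz in holes[1:]:
        prev_start, prev_size = merged[-1]
        if prev_start + prev_size == s:        # touches previous hole: merge
            merged[-1] = (prev_start, prev_size + sz)
        else:
            merged.append((s, sz))
    return merged
```

Freeing a region that touches holes on both sides collapses all three into one entry, exactly the worst of the four cases in the figure.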
Limitations of swapping
• Problems with swapping
– Process must fit into physical memory (impossible to run larger
processes)
– Memory becomes fragmented
• External fragmentation: lots of small free areas
• Compaction needed to reassemble larger free areas
– Processes are either in memory or on disk: half and half doesn’t do
any good
• Overlays solved the first problem
– Bring in pieces of the process over time (typically data)
– Still doesn’t solve the problem of fragmentation or partially resident
processes
Virtual memory
• Basic idea: allow the OS to hand out more memory than exists
on the system
• Keep recently used stuff in physical memory
• Move less recently used stuff to disk
• Keep all of this hidden from processes
– Processes still see an address space from 0 – max address
– Movement of information to and from disk handled by the
OS without process help
• Virtual memory (VM) especially helpful in multiprogrammed
system
– CPU schedules process B while process A waits for its
memory to be retrieved from disk
Virtual and physical addresses
• Program uses virtual
addresses
– Addresses local to the
process
– Hardware translates virtual
address to physical address
• Translation done by the
Memory Management Unit
– Usually on the same chip as
the CPU
– Only physical addresses leave
the CPU/MMU chip
• Physical memory indexed
by physical addresses
(Figure: the MMU sits on the same chip as the CPU; virtual addresses pass from the CPU to the MMU, and only physical addresses appear on the bus to memory and the disk controller.)
Paging and page tables

• Virtual addresses are mapped to physical addresses
– The unit of mapping is called a page
– All addresses in the same virtual page are in the same physical page
– A page table entry (PTE) contains the translation for a single page
• The table translates virtual page number to physical page number
– Not all virtual memory has a physical page
– Not every physical page need be used
• Example: 64 KB virtual memory, 32 KB physical memory

(Figure: the 4 KB virtual pages 0–4K through 60–64K, some mapped to physical frames, e.g. 0–4K -> frame 7, 4–8K -> frame 4, 16–20K -> frame 0, 28–32K -> frame 3, 40–44K -> frame 1, 44–48K -> frame 5, 48–52K -> frame 6; unmapped pages are marked "-".)
What’s in a page table entry?
• Each entry in the page table contains
– Valid bit: set if this logical page number has a corresponding physical frame in
memory
• If not valid, remainder of PTE is irrelevant
– Page frame number: page in physical memory
– Referenced bit: set if data on the page has been accessed
– Dirty (modified) bit: set if data on the page has been modified
– Protection information

(Figure: PTE layout: | page frame number | V | R | D | protection |, where V = valid bit, R = referenced bit, D = dirty bit.)
Mapping logical => physical address

• Split the address from the CPU into two pieces
– Page number (p)
– Page offset (d)
• Page number
– Index into the page table
– The page table contains the base address of the page in physical memory
• Page offset
– Added to the base address to get the actual physical memory address
• Page size = 2^d bytes
• Example: with 4 KB (= 4096-byte) pages and 32-bit logical addresses, 2^d = 4096 gives d = 12, so the offset is 12 bits and the page number is the remaining 32 - 12 = 20 bits
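For the 4 KB page / 32-bit address example above, the split and the lookup can be sketched with shifts and masks. The page table here is just a Python list of frame numbers, an assumption for illustration only.

```python
PAGE_SIZE = 4096          # 4 KB pages
OFFSET_BITS = 12          # 2**12 = 4096

def split(logical):
    """Return (page number p, offset d) for a logical address."""
    return logical >> OFFSET_BITS, logical & (PAGE_SIZE - 1)

def translate(logical, page_table):
    """Physical address = frame base from the table, plus the page offset."""
    p, d = split(logical)
    frame = page_table[p]
    if frame is None:                       # no physical page: page fault
        raise ValueError("page fault: page %d not resident" % p)
    return (frame << OFFSET_BITS) | d
```

Note that the offset passes through translation unchanged; only the page number is replaced by a frame number.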
Address translation architecture

(Figure: the CPU issues an address split into page number p and page offset d; p indexes the page table to obtain frame number f; f concatenated with d forms the physical address into physical memory.)
Memory & paging structures

(Figure: processes P0, with pages 0–4, and P1, with pages 0–1, each have their own page table mapping their logical pages to frames of a shared 10-frame physical memory; the remaining frames are free.)
Two-level page tables

• Problem: page tables can be too large
– 2^32 bytes in 4 KB pages need 1 million PTEs
• Solution: use multi-level page tables
– the "page size" in the first-level page table is large (megabytes)
– a PTE marked invalid in the first-level page table needs no 2nd-level page table
• The 1st-level page table has pointers to 2nd-level page tables
• The 2nd-level page tables have the actual physical page numbers in them

(Figure: 1st-level page table entries point to 2nd-level page tables, whose entries in turn point to frames in main memory.)
More on two-level page tables
• Tradeoffs between 1st and 2nd level page table sizes
– Total number of bits indexing 1st and 2nd level table is constant for a
given page size and logical address length
– Tradeoff between number of bits indexing 1st and number indexing
2nd level tables
• More bits in 1st level: fine granularity at 2nd level
• Fewer bits in 1st level: maybe less wasted space?
• All addresses in table are physical addresses
• Protection bits kept in 2nd level table
Two-level paging: example

• System characteristics
– 8 KB pages
– 32-bit logical address divided into a 13-bit page offset and a 19-bit page number
• The page number is divided into:
– a 10-bit 1st-level index (p1)
– a 9-bit 2nd-level index (p2)
• The logical address therefore looks like: | p1 = 10 bits | p2 = 9 bits | offset = 13 bits |
– p1 is an index into the 1st-level page table
– p2 is an index into the 2nd-level page table pointed to by p1
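The 10/9/13 split above can be sketched with shifts and masks:

```python
P1_BITS, P2_BITS, OFFSET_BITS = 10, 9, 13   # 8 KB pages, 32-bit addresses

def split_two_level(addr):
    """Return (p1, p2, offset) for a 32-bit logical address."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (OFFSET_BITS + P2_BITS)
    return p1, p2, offset
```

Since 10 + 9 + 13 = 32, the three fields exactly cover the logical address.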
2-level address translation example

(Figure: the page table base register locates the 1st-level page table; p1 selects an entry pointing to a 2nd-level page table; p2 selects a 19-bit frame number; the frame number concatenated with the 13-bit offset forms the physical address into main memory.)
Implementing page tables in hardware
• Page table resides in main (physical) memory
• CPU uses special registers for paging
– Page table base register (PTBR) points to the page table
– Page table length register (PTLR) contains length of page table:
restricts maximum legal logical address
• Translating an address requires two memory accesses
– First access reads page table entry (PTE)
– Second access reads the data / instruction from memory
• Reduce number of memory accesses
– Can’t avoid second access (we need the value from memory)
– Eliminate first access by keeping a hardware cache (called a translation
lookaside buffer or TLB) of recently used page table entries
Translation Lookaside Buffer (TLB)

• Search the TLB for the desired logical page number
– Search entries in parallel
– Use standard cache techniques
• If the desired logical page number is found, get the frame number from the TLB
• If the desired logical page number isn't found
– Get the frame number from the page table in memory
– Replace an entry in the TLB with the logical & physical page numbers from this reference

(Figure: an example TLB whose entries pair a logical page number with a physical frame number, e.g. 8 -> 3, 2 -> 0, 3 -> 12, 12 -> 6, 29 -> 11, 22 -> 4, with some entries unused.)
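The hit/miss/replace behavior can be modeled in a few lines. FIFO replacement is an assumption here for simplicity; real TLBs search all entries in parallel in hardware and use their own replacement policies.

```python
def tlb_lookup(vpn, tlb, page_table, capacity=8):
    """Return (frame, hit). On a miss, fill the TLB, evicting FIFO if full."""
    if vpn in tlb:
        return tlb[vpn], True              # TLB hit
    frame = page_table[vpn]                # miss: walk the page table
    if len(tlb) >= capacity:
        tlb.pop(next(iter(tlb)))           # evict oldest (dicts keep order)
    tlb[vpn] = frame                       # install the new translation
    return frame, False
```

The second lookup of the same page is then served from the TLB without touching the page table.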
Handling TLB misses
• If PTE isn’t found in TLB, OS needs to do the lookup in the
page table
• Lookup can be done in hardware or software
• Hardware TLB replacement
– CPU hardware does page table lookup
– Can be faster than software
– Less flexible than software, and more complex hardware
• Software TLB replacement
– OS gets TLB exception
– Exception handler does page table lookup & places the result into the
TLB
– Program continues after return from exception
– A larger TLB (lower miss rate) can make this feasible
How long do memory accesses take?
• Assume the following times:
– TLB lookup time = a (often zero - overlapped in CPU)
– Memory access time = m
• Hit ratio (h) is percentage of time that a logical page number
is found in the TLB
– Larger TLB usually means higher h
– TLB structure can affect h as well
• Effective access time (an average) is calculated as:
– EAT = (m + a)h + (2m + a)(1 - h)
– EAT = a + (2 - h)m
• Interpretation
– A reference always requires a TLB lookup and one memory access
– A TLB miss also requires an additional memory reference
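The formula can be checked numerically (times in arbitrary units):

```python
def effective_access_time(m, a, h):
    """EAT = (m + a)h + (2m + a)(1 - h), which simplifies to a + (2 - h)m."""
    return a + (2 - h) * m
```

With m = 100, a = 0, and a 90% hit ratio, the average access costs 110 units: the 10% of misses each add a full extra memory access.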
Inverted page table
• Reduce page table size further: keep one entry for each frame
in memory
• PTE contains
– Virtual address pointing to this frame
– Information about the process that owns this page
• Search page table by
– Hashing the virtual page number and process ID
– Starting at the entry corresponding to the hash result
– Search until either the entry is found or a limit is reached
• Page frame number is index of PTE
• Improve performance by using more advanced hashing
algorithms
Inverted page table architecture

(Figure: a logical address split as process ID, page number p = 19 bits, and page offset = 13 bits; the pair (pid, p) is hashed to search the inverted page table, the index k of the matching entry is the page frame number, and k concatenated with the offset forms the physical address into main memory.)
Memory Management
Requirements
• Relocation
– programmer cannot know where the program will
be placed in memory when it is executed
– a process may be (often) relocated in main
memory due to swapping
– swapping enables the OS to have a larger pool of
ready-to-execute processes
– memory references in code (for both instructions and data) must be translated to actual physical memory addresses
Memory Management
Requirements
• Protection
– processes should not be able to reference
memory locations in another process without
permission
– impossible to check addresses at compile time in
programs since the program could be relocated
– address references must be checked at run time
by hardware
Memory Management
Requirements
• Sharing
– must allow several processes to access a common
portion of main memory without compromising
protection
• cooperating processes may need to share access to the
same data structure
• better to allow each process to access the same copy of
the program rather than have their own separate copy
Memory Management
Requirements
• Logical Organization
– users write programs in modules with different
characteristics
• instruction modules are execute-only
• data modules are either read-only or read/write
• some modules are private, others are public
– To effectively deal with user programs, the OS and
hardware should support a basic form of module
to provide the required protection and sharing
Memory Management
Requirements
• Physical Organization
– secondary memory is the long term store for
programs and data while main memory holds
program and data currently in use
– moving information between these two levels of
memory is a major concern of memory
management (OS)
• it is highly inefficient to leave this responsibility to the
application programmer
Simple Memory Management
• In this chapter we study the simpler case where there is no
virtual memory
• An executing process must be loaded entirely in main
memory (if overlays are not used)
• Although the following simple memory management
techniques are not used in modern OS, they lay the ground
for a proper discussion of virtual memory (next chapter)
– fixed partitioning
– dynamic partitioning
– simple paging
– simple segmentation
Fixed
Partitioning
• Partition main memory into a set of non-overlapping regions called partitions
• Partitions can be of equal
or unequal sizes
Fixed Partitioning
• any process whose size is less than or equal to a partition
size can be loaded into the partition
• if all partitions are occupied, the operating system can
swap a process out of a partition
• a program may be too large to fit in a partition. The
programmer must then design the program with overlays
– when the module needed is not present the user
program must load that module into the program’s
partition, overlaying whatever program or data are
there
Fixed Partitioning
• Main memory use is inefficient. Any program,
no matter how small, occupies an entire
partition. This is called internal
fragmentation.
• Unequal-size partitions lessen these problems, but they still remain...
• Equal-size partitions were used in IBM's early OS/MFT (Multiprogramming with a Fixed number of Tasks)
Placement Algorithm with
Partitions
• Equal-size partitions
– If there is an available partition, a process can be
loaded into that partition
• because all partitions are of equal size, it does not
matter which partition is used
– If all partitions are occupied by blocked processes,
choose one process to swap out to make room for
the new process
Placement Algorithm with
Partitions
• Unequal-size partitions:
use of multiple queues
– assign each process to
the smallest partition
within which it will fit
– A queue for each
partition size
– tries to minimize internal
fragmentation
– Problem: some queues will be empty if no processes within a size range are present
Placement Algorithm with
Partitions
• Unequal-size partitions:
use of a single queue
– When it's time to load a process into main memory, the smallest available partition that will hold the process is selected
– increases the level of
multiprogramming at the
expense of internal
fragmentation
Dynamic Partitioning
• Partitions are of variable length and number
• Each process is allocated exactly as much memory
as it requires
• Eventually holes are formed in main memory. This
is called external fragmentation
• Must use compaction to shift processes so they are
contiguous and all free memory is in one block
• Used in IBM’s OS/MVT (Multiprogramming with a
Variable number of Tasks)
Dynamic Partitioning: an example
• A hole of 64K is left after loading 3 processes: not enough
room for another process
• Eventually each process is blocked. The OS swaps out
process 2 to bring in process 4
Dynamic Partitioning: an example
• another hole of 96K is created
• Eventually each process is blocked. The OS swaps out
process 1 to bring in again process 2 and another hole of
96K is created...
• Compaction would produce a single hole of 256K
Placement
Algorithm
• Used to decide which
free block to allocate to
a process
• Goal: to reduce usage of
compaction (time
consuming)
• Possible algorithms:
– Best-fit: choose
smallest hole
– First-fit: choose first
hole from beginning
– Next-fit: choose first
hole from last
placement
Placement Algorithm: comments
• Next-fit often leads to allocation of the largest block at the end of memory
• First-fit favors allocation near the beginning: it tends to create less fragmentation than Next-fit
• Best-fit searches for the smallest block: the fragment left behind is as small as possible
– main memory quickly forms holes too small to hold any process: compaction generally needs to be done more often
Replacement Algorithm
• When all processes in main memory are
blocked, the OS must choose which process to
replace
– A process must be swapped out (to a Blocked-
Suspend state) and be replaced by a new process
or a process from the Ready-Suspend queue
– We will discuss later such algorithms for memory
management schemes using virtual memory
Buddy System
• A reasonable compromise to overcome the disadvantages of both fixed and variable partitioning schemes
• A modified form is used in Unix SVR4 for kernel memory allocation
• Memory blocks are available in sizes 2^K, where L <= K <= U and
– 2^L = smallest size of block allocatable
– 2^U = largest size of block allocatable (generally, the entire memory available)
Buddy System
• We start with the entire block of size 2^{U}
• When a request of size S is made:
– If 2^{U-1} < S <= 2^{U} then allocate the entire block of
size 2^{U}
– Else, split this block into two buddies, each of size
2^{U-1}
– If 2^{U-2} < S <= 2^{U-1} then allocate one of the 2
buddies
– Otherwise one of the 2 buddies is split again
• This process is repeated until the smallest block greater than or equal to S is generated
• Two buddies are coalesced whenever both of them
become unallocated
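The split-until-it-fits procedure above can be sketched over the i-lists. The representation (a dict mapping exponent i to a list of start addresses of free 2^i blocks) is an assumption for illustration; coalescing on free is omitted here.

```python
import math

def buddy_allocate(size, free_lists):
    """Allocate a block of at least `size` from buddy free lists.

    free_lists maps exponent i -> list of start addresses of free 2**i
    blocks. A larger block is split repeatedly; each spare buddy goes
    back on its i-list. Returns (start, block_size) or None.
    """
    need = max(0, math.ceil(math.log2(size)))   # smallest i with 2**i >= size
    for i in sorted(free_lists):                # smallest usable block first
        if i >= need and free_lists[i]:
            start = free_lists[i].pop()
            while i > need:                     # split down to the needed size
                i -= 1
                free_lists.setdefault(i, []).append(start + (1 << i))
            return start, 1 << need
    return None
```

Requesting 100 units from a single 1024-unit block yields a 128-unit allocation and leaves free buddies of 512, 256, and 128 units on the lists.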
Buddy System
• The OS maintains several lists of holes
– the i-list is the list of holes of size 2^{i}
– whenever a pair of buddies in the i-list occur,
they are removed from that list and coalesced
into a single hole in the (i+1)-list
• Presented with a request for an allocation of
size k such that 2^{i-1} < k <= 2^{i}:
– the i-list is first examined
– if the i-list is empty, the (i+1)-list is then examined...
Example of Buddy System

(Figure: an initial block is repeatedly split into buddies as requests arrive and coalesced again as blocks are released.)
Buddy Systems: remarks
• On average, internal fragmentation is 25%
– each memory block is at least 50% occupied
• Programs are not moved in memory
– simplifies memory management
• Mostly efficient when the size M of memory used by the
Buddy System is a power of 2
– M = 2^{U} “bytes” where U is an integer
– then the size of each block is a power of 2
– the smallest block is of size 1
– Ex: if M = 10, then the smallest block would be of size 5
Relocation
• Because of swapping and compaction, a
process may occupy different main
memory locations during its lifetime
• Hence physical memory references by a
process cannot be fixed
• This problem is solved by distinguishing
between logical address and physical
address
Address Types
• A physical address (absolute address) is a physical location
in main memory
• A logical address is a reference to a memory location
independent of the physical structure/organization of
memory
• Compilers produce code in which all memory references
are logical addresses
• A relative address is an example of logical address in which
the address is expressed as a location relative to some
known point in the program (ex: the beginning)
Address Translation
• A relative address is the most frequent type of logical
address used in program modules (ie: executable files)
• Such modules are loaded in main memory with all memory
references in relative form
• Physical addresses are calculated “on the fly” as the
instructions are executed
• For adequate performance, the translation from relative to
physical address must be done by hardware
Simple example of hardware
translation of addresses
• When a process is assigned to the running state, a base
register (in CPU) gets loaded with the starting physical
address of the process
• A bound register gets loaded with the process’s ending
physical address
• When a relative address is encountered, it is added to
the content of the base register to obtain the physical
address, which is compared with the content of the bound
register
• This provides hardware protection: each process can only
access memory within its process image
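The base/bound check can be sketched in a few lines (a toy model, not real MMU code; register contents are passed as plain integers):

```python
def translate(relative_addr, base, bound):
    """Base/bound address translation, as done in hardware.

    The relative address is added to the base register; the result must
    not exceed the bound register (the process's ending physical
    address), otherwise the access is outside the process image.
    """
    physical = base + relative_addr
    if physical > bound:
        raise MemoryError("address outside the process image")
    return physical

# Process image at 5000-6000: relative address 100 maps to 5100,
# while relative address 2000 would trap.
```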
Example Hardware for Address
Translation
Simple Paging
• Main memory is partitioned into equal fixed-sized
chunks (of relatively small size)
• Trick: each process is also divided into chunks
of the same size called pages
• The process pages can thus be assigned to the
available chunks in main memory called
frames (or page frames)
• Consequence: a process does not need to
occupy a contiguous portion of memory
Example of process loading
• Now suppose that process B is swapped out
Example of process loading
(cont.)
• When process A and C are
blocked, the pager loads a
new process D consisting
of 5 pages
• Process D does not
occupy a contiguous
portion of memory
• There is no external
fragmentation
• Internal fragmentation
consists only of (part of)
the last page of each process
Page Tables
• The OS now needs to maintain (in main memory) a page
table for each process
• Each entry of a page table consists of the frame number
where the corresponding page is physically located
• The page table is indexed by the page number to obtain
the frame number
• A free frame list, available for pages, is maintained
Logical address used in paging
• Within each program, each logical address must
consist of a page number and an offset within the
page
• A CPU register always holds the starting physical
address of the page table of the currently running
process
• Presented with the logical address (page number,
offset) the processor accesses the page table to
obtain the physical address (frame number, offset)
Logical address in
paging
• The logical address becomes a
relative address when the page size
is a power of 2
• Ex: if 16 bits addresses are used and
page size = 1K, we need 10 bits for
offset and have 6 bits available for
page number
• Then the 16-bit address, with
the 10 least significant bits as
offset and the 6 most significant
bits as page number, is a location
relative to the beginning of the process
Logical address in paging
• By using a page size of a power of 2, the pages
are invisible to the programmer,
compiler/assembler, and the linker
• Address translation at run-time is then easy to
implement in hardware
– logical address (n,m) gets translated to physical
address (k,m) by indexing the page table and
appending the same offset m to the frame
number k
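The translation (n, m) -> (k, m) above is just a shift, a table lookup, and a mask. A sketch using the 16-bit, 1K-page example from the previous slide (the page table here is a plain dict, purely for illustration):

```python
PAGE_BITS = 10            # 1 KB pages, as in the 16-bit example above
PAGE_SIZE = 1 << PAGE_BITS

def translate(logical_addr, page_table):
    """Translate a logical address via a per-process page table.

    The page number n indexes the table to get a frame number k;
    the offset m is appended unchanged: (n, m) -> (k, m).
    """
    n = logical_addr >> PAGE_BITS        # 6 most significant bits
    m = logical_addr & (PAGE_SIZE - 1)   # 10 least significant bits
    k = page_table[n]
    return (k << PAGE_BITS) | m

# If page 2 is loaded in frame 5, logical address 2*1024 + 7
# translates to physical address 5*1024 + 7.
```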
Logical-to-Physical Address
Translation in Paging
Logical-to-Physical Address
Translation in segmentation
Virtual memory
• Consider a typical, large application:
– There are many components that are mutually exclusive.
Example: A unique function selected dependent on user
choice.
– Error routines and exception handlers are very rarely used.
– Most programs exhibit a slowly changing locality of
reference. There are two types of locality: spatial and
temporal.
Characteristics of Paging and
Segmentation
• Memory references are dynamically translated into
physical addresses at run time
– a process may be swapped in and out of main memory
such that it occupies different regions
• A process may be broken up into pieces (pages or
segments) that do not need to be located
contiguously in main memory
• Hence: all pieces of a process do not need to be
loaded in main memory during execution
– computation may proceed for some time if the next
instruction to be fetched (or the next data to be accessed) is
in a piece located in main memory
Process Execution
• The OS brings into main memory only a few
pieces of the program (including its starting
point)
• Each page/segment table entry has a present bit
that is set only if the corresponding piece is in
main memory
• The resident set is the portion of the process
that is in main memory
• An interrupt (memory fault) is generated when
the memory reference is on a piece not present
in main memory
Process Execution (cont.)
• OS places the process in a Blocked state
• OS issues a disk I/O Read request to bring into main
memory the piece referenced to
• another process is dispatched to run while the disk
I/O takes place
• an interrupt is issued when the disk I/O completes
– this causes the OS to place the affected process in the
Ready state
Advantages of Partial Loading
• More processes can be maintained in main
memory
– only load in some of the pieces of each process
– With more processes in main memory, it is
more likely that a process will be in the Ready
state at any given time
• A process can now execute even if it is
larger than the main memory size
– it is even possible to use more bits for logical
addresses than the bits needed for addressing
the physical memory
Virtual Memory: large as you wish!
– Ex: 16 bits are needed to address a physical memory of
64KB
– lets use a page size of 1KB so that 10 bits are needed
for offsets within a page
– For the page number part of a logical address we may
use a number of bits larger than 6, say 22 (a modest
value!!)
• The memory referenced by a logical address is
called virtual memory
– is maintained on secondary memory (ex: disk)
– pieces are brought into main memory only when needed
Virtual Memory (cont.)
– For better performance, the file system is often
bypassed and virtual memory is stored in a special
area of the disk called the swap space
• larger blocks are used and file lookups and indirect
allocation methods are not used
• By contrast, physical memory is the memory
referenced by a physical address
– is located on DRAM
• The translation from logical address to physical
address is done by indexing the appropriate
page/segment table with the help of memory
management hardware
Possibility of thrashing
• To accommodate as many processes as possible,
only a few pieces of each process are maintained in
main memory
• But main memory may be full: when the OS brings
one piece in, it must swap one piece out
• The OS must not swap out a piece of a process just
before that piece is needed
• If it does this too often, this leads to thrashing:
– The processor spends most of its time swapping
pieces rather than executing user instructions
Locality
• Temporal locality: Addresses that are referenced at
some time Ts will be accessed in the near future (Ts +
delta_time) with high probability. Example :
Execution in a loop.
• Spatial locality: Items whose addresses are near one
another tend to be referenced close together in time.
Example: Accessing array elements.
• How can we exploit these characteristics of programs?
Keep only the current locality in the main memory.
Need not keep the entire program in the main
memory.
Locality and Virtual Memory
• Principle of locality of references: memory
references within a process tend to cluster
• Hence: only a few pieces of a process will be needed
over a short period of time
• Possible to make intelligent guesses about which
pieces will be needed in the future
• This suggests that virtual memory may work
efficiently (ie: thrashing should not occur too often)
Space and Time
[Figure: the memory hierarchy - CPU cache, main memory, secondary storage]
Demand paging
• Main memory (physical address space) as well as user
address space (virtual address space) are logically
partitioned into equal chunks known as pages. Main
memory pages (sometimes known as frames) and
virtual memory pages are of the same size.
• Virtual address (VA) is viewed as a pair (virtual page
number, offset within the page). Example: Consider a
virtual space of 16K, with 2K page size and an address
3045. What are the virtual page number and offset
corresponding to this VA?
Virtual Page Number and Offset
3045 / 2048 = 1
3045 % 2048 = 3045 - 2048 = 997
VP# = 1
Offset within page = 997
Page Size is always a power of 2? Why?
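The division and remainder above are why the page size is a power of 2: the split then becomes a bit shift and a bit mask, which hardware can do for free. Checking the worked numbers:

```python
PAGE_SIZE = 2048  # 2 KB pages, as in the example

va = 3045
vpn, offset = divmod(va, PAGE_SIZE)   # 3045 / 2048 and 3045 % 2048
assert (vpn, offset) == (1, 997)

# Because 2048 = 2**11, the same split needs no division at all:
assert va >> 11 == 1          # virtual page number = top bits
assert va & 0x7FF == 997      # offset = low 11 bits
```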
Page Size Criteria
Consider the binary value of address 3045 :
1011 1110 0101
for 16K address space the address will be 14 bits.
Rewrite:
00 1011 1110 0101
A 2K page gives an offset range of 0 - 2047 (11 bits):
Page# = 001, offset within page = 011 1110 0101
Demand paging (contd.)
• There is only one physical address space but as many
virtual address spaces as the number of processes in the
system. At any time physical memory may contain pages
from many process address spaces.
• Pages are brought into the main memory when needed
and “rolled out” depending on a page replacement policy.
• Consider an 8K main (physical) memory and three virtual
address spaces of 2K, 3K and 4K, with a page size of 1K. The
status of the memory mapping at some time is as shown.
Demand Paging (contd.)
[Figure: main memory frames 0-7 (Physical Address Space - PAS) holding pages
from LAS 0, LAS 1 and LAS 2; LAS - Logical Address Space; one region holds
executable code]
Issues in demand paging
• How to keep track of which logical page goes where
in the main memory? More specifically, what are the
data structures needed?
– Page table, one per logical address space.
• How to translate logical address into physical
address and when?
– Address translation algorithm applied every time a
memory reference is needed.
• How to avoid repeated translations?
– After all most programs exhibit good locality. “cache
recent translations”
Issues in demand paging (contd.)
• What if main memory is full and your process
demands a new page? What is the policy for page
replacement? LRU, MRU, FIFO, random?
• Do we need to roll out every page that goes into
main memory? No, only the ones that are
modified. How to keep track of this info and such
other memory management information? In the
page table as special bits.
Support Needed for
Virtual Memory
• Memory management hardware must support
paging and/or segmentation
• OS must be able to manage the movement of pages
and/or segments between secondary memory and
main memory
• We will first discuss the hardware aspects; then the
algorithms used by the OS
Paging
• Each page table entry contains a present bit to indicate
whether the page is in main memory or not.
– If it is in main memory, the entry contains the frame
number of the corresponding page in main memory
– If it is not in main memory, the entry may contain the
address of that page on disk or the page number may
be used to index another table (often in the PCB) to
obtain the address of that page on disk
Typically, each process has its own page table
Paging
• A modified bit indicates if the page has been
altered since it was last loaded into main
memory
– If no change has been made, the page does not
have to be written to the disk when it needs to
be swapped out
• Other control bits may be present if protection is
managed at the page level
– a read-only/read-write bit
– protection level bit: kernel page or user page
(more bits are used when the processor supports
more than 2 protection levels)
Page Table Structure
• Page tables are variable in length (depends
on process size)
– so they must be kept in main memory instead of
registers
• A single register holds the starting physical
address of the page table of the currently
running process
Address Translation in a Paging System
Sharing Pages
• If we share the same code among different users,
it is sufficient to keep only one copy in main
memory
• Shared code must be reentrant (ie: non self-
modifying) so that 2 or more processes can
execute the same code
• If we use paging, each sharing process will have a
page table whose entries point to the same frames:
only one copy is in main memory
• But each user needs to have its own private data
pages
Sharing Pages: a text editor
Translation Lookaside Buffer
• Because the page table is in main memory, each
virtual memory reference causes at least two
physical memory accesses
– one to fetch the page table entry
– one to fetch the data
• To overcome this problem a special cache is set up
for page table entries
– called the TLB - Translation Lookaside Buffer
• Contains page table entries that have been most recently used
• Works similar to main memory cache
Translation Lookaside Buffer
• Given a logical address, the processor examines the
TLB
• If page table entry is present (a hit), the frame
number is retrieved and the real (physical) address
is formed
• If page table entry is not found in the TLB (a miss),
the page number is used to index the process page
table
– if present bit is set then the corresponding frame is
accessed
– if not, a page fault is issued to bring in the referenced
page in main memory
• The TLB is updated to include the new page entry
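The hit/miss/fault sequence above can be sketched with toy dicts standing in for the TLB and the page table (names and structures are illustrative only):

```python
class PageFault(Exception):
    """Raised when the referenced page is not in main memory."""

def lookup(page, offset, tlb, page_table):
    """TLB-then-page-table lookup.

    On a TLB hit the frame comes straight from the TLB; on a miss the
    page table is consulted and the TLB updated; if the present bit is
    clear, a page fault is raised so the OS can bring the page in.
    """
    if page in tlb:                       # TLB hit
        frame = tlb[page]
    else:                                 # TLB miss
        present, frame = page_table[page]
        if not present:
            raise PageFault(page)
        tlb[page] = frame                 # cache the new translation
    return (frame, offset)
```

After the first reference to a page, subsequent references to it skip the page-table access entirely, which is the point of the TLB.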
Use of a Translation Lookaside Buffer
TLB: further comments
• The TLB uses associative mapping hardware to
interrogate all TLB entries simultaneously to find
a match on the page number
• The TLB must be flushed each time a new process
enters the Running state
• The CPU uses two levels of cache on each virtual
memory reference
– first the TLB: to convert the logical address to the
physical address
– once the physical address is formed, the CPU then looks
in the cache for the referenced word
Page Tables and Virtual Memory
• Most computer systems support a very large virtual
address space
– 32 to 64 bits are used for logical addresses
– If (only) 32 bits are used with 4KB pages, a page table may
have 2^{20} entries
• The entire page table may take up too much main memory.
Hence, page tables are often also stored in virtual memory
and subjected to paging
– When a process is running, part of its page table must be in
main memory (including the page table entry of the currently
executing page)
Inverted Page Table
• Another solution (PowerPC, IBM RS/6000) to the problem
of maintaining large page tables is to use an Inverted Page
Table (IPT)
• We generally have only one IPT for the whole system
• There is only one IPT entry per physical frame (rather than
one per virtual page)
– this greatly reduces the amount of memory needed for
page tables
• The 1st entry of the IPT is for frame #1 ... the nth entry of
the IPT is for frame #n and each of these entries contains
the virtual page number
• Thus this table is inverted
Inverted Page Table
• The process ID with the virtual
page number could be used to
search the IPT to obtain the
frame #
• For better performance,
hashing is used to obtain a
hash table entry which points
to an IPT entry
– A page fault occurs if no
match is found
– chaining is used to
manage hashing
overflow (in the figure, d = offset within page)
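The hashed lookup with overflow chains can be sketched as follows (toy structures: the hash function, bucket count, and entry layout are all illustrative assumptions):

```python
NBUCKETS = 8

def ipt_lookup(pid, vpn, hash_heads, ipt):
    """Hashed inverted-page-table lookup.

    hash_heads maps a bucket number to the head of a chain of IPT
    indices; each IPT entry is (pid, vpn, next_index). The index of the
    matching entry IS the frame number; None means a page fault.
    """
    i = hash_heads.get((pid * 31 + vpn) % NBUCKETS)
    while i is not None:
        entry_pid, entry_vpn, nxt = ipt[i]
        if (entry_pid, entry_vpn) == (pid, vpn):
            return i                    # frame number
        i = nxt                         # follow the overflow chain
    return None                         # no match: page fault
```

Note that the table size is proportional to the number of physical frames, not to the (much larger) virtual address space.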
The Page Size Issue
• Page size is defined by hardware; always a power
of 2 for more efficient logical to physical address
translation. But exactly which size to use is a
difficult question:
– Large page size is good since for a small page size,
more pages are required per process
• More pages per process means larger page tables. Hence, a
large portion of page tables in virtual memory
– Small page size is good to minimize internal
fragmentation
– Large page size is good since disks are designed to
efficiently transfer large blocks of data
– Larger page sizes mean fewer pages in main memory;
this increases the TLB hit ratio
The Page Size Issue
• With a very small page
size, each page matches
the code that is actually
used: faults are low
• Increased page size causes
each page to contain more
code that is not used.
Page faults rise.
• Page faults decrease if we
can approach point P where
the size of a page is equal
to the size of the entire
process
The Page Size Issue
• Page fault rate is also
determined by the
number of frames
allocated per process
• Page faults drop to a
reasonable value when W
frames are allocated
• Drops to 0 when the
number (N) of frames is
such that a process is
entirely in memory
The Page Size Issue
• Page sizes from 1KB to 4KB are most
commonly used
• But the issue is non-trivial. Hence some
processors are now supporting multiple
page sizes. Ex:
– Pentium supports 2 sizes: 4KB or 4MB
– R4000 supports 7 sizes: 4KB to 16MB
Operating System Software
• Memory management software depends on
whether the hardware supports paging or
segmentation or both
• Pure segmentation systems are rare. Segments
are usually paged -- memory management issues
are then those of paging
• We shall thus concentrate on issues associated
with paging
• To achieve good performance we need a low page
fault rate
The LRU Policy
• Replaces the page that has not been referenced for
the longest time
– By the principle of locality, this should be the page least
likely to be referenced in the near future
– performs nearly as well as the optimal policy
• Example: A process of 5 pages with an OS that fixes
the resident set size to 3
Implementation of the LRU Policy
• Each page could be tagged (in the page table
entry) with the time at each memory
reference.
• The LRU page is the one with the smallest time
value (needs to be searched at each page fault)
• This would require expensive hardware and a
great deal of overhead.
• Consequently very few computer systems
provide sufficient hardware support for true
LRU replacement policy
• Other algorithms are used instead
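For simulation purposes, LRU is easy to approximate in software; a sketch using an ordered dictionary in place of the per-reference timestamps described above (the function name and reference string are illustrative):

```python
from collections import OrderedDict

def lru_faults(pages, frames):
    """Count page faults under LRU with a fixed resident-set size.

    The OrderedDict keeps pages from least to most recently used, so
    the eviction victim is simply the first entry.
    """
    resident = OrderedDict()
    faults = 0
    for p in pages:
        if p in resident:
            resident.move_to_end(p)           # p is now most recently used
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict the LRU page
            resident[p] = True
    return faults
```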
The FIFO Policy
• Treats page frames allocated to a process as
a circular buffer
– When the buffer is full, the oldest page is
replaced. Hence: first-in, first-out
• This is not necessarily the same as the LRU page
• A frequently used page is often the oldest, so it will
be repeatedly paged out by FIFO
– Simple to implement
• requires only a pointer that circles through the page
frames of the process
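The circular-buffer behaviour can be sketched with a single pointer ("hand") circling through the frame slots (names are illustrative):

```python
def fifo_faults(pages, frames):
    """Count page faults when the frames form a circular buffer.

    On a fault, the page under the hand (the oldest) is replaced and
    the hand advances; on a hit nothing moves, which is why a hot page
    can still be the next victim under FIFO.
    """
    slots = [None] * frames
    hand = 0
    faults = 0
    for p in pages:
        if p not in slots:
            faults += 1
            slots[hand] = p              # overwrite the oldest page
            hand = (hand + 1) % frames
    return faults
```

Comparing the reference string 1, 2, 3, 1, 4, 1 with 3 frames: FIFO evicts page 1 (the oldest) to admit 4 and then faults again on 1, while LRU would have evicted page 2 and kept the hot page resident.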
UNIT- 2
Process Concept
• Process is a program in execution; forms the basis of all
computation; process execution must progress in sequential
fashion.
• Program is a passive entity stored on disk (executable file),
Process is an active entity; A program becomes a process when
executable file is loaded into memory.
• Execution of program is started via CLI entry of its name,
GUI mouse clicks, etc.
• A process is an instance of a running program; it can be
assigned to, and executed on, a processor.
• Related terms for Process: Job, Step, Load Module, Task,
Thread.
Process Parts
• A process includes three segments/sections:
1. Program: code/text.
2. Data: global variables and heap
• Heap contains memory dynamically allocated during run time.
3. Stack: temporary data
• Procedure/Function parameters, return addresses,
local variables.
• Current activity of a program includes its Context:
program counter, state, processor registers, etc.
• One program can be several processes:
– Multiple users executing the same Sequential program.
– A concurrent program running as several processes.
Process in Memory (1)
Processes in Memory (2)
Process Attributes
• Process ID
• Parent process ID
• User ID
• Process state/priority
• Program counter
• CPU registers
• Memory management information
• I/O status information
• Access Control
• Accounting information
Process Control Block (PCB)
Process States
• Let us start with three states:
1) Running state –
• the process that gets executed (single CPU);
its instructions are being executed.
2) Ready state –
• any process that is ready to be executed; the process
is waiting to be assigned to a processor.
3) Waiting/Blocked state –
• when a process cannot execute until its I/O
completes or some other event occurs.
A Three-state Process Model
[Diagram: Dispatch moves a process from Ready to Running; Time-out returns it
from Running to Ready; Event Wait moves it from Running to Waiting; Event
Occurs moves it from Waiting back to Ready]
Five-state Process Model
PROCESSES PROCESS STATE
 New The process is just being put together.
 Running Instructions being executed. This running process holds the
CPU.
 Waiting For an event (hardware, human, or another process.)
 Ready The process has all needed resources - waiting for CPU only.
 Suspended Another process has explicitly told this process to
sleep. It will be awakened when a process explicitly awakens it.
 Terminated The process is being torn apart.
PROCESS CONTROL BLOCK:
CONTAINS INFORMATION ASSOCIATED WITH
EACH PROCESS:
It's a data structure holding:
 PC, CPU registers,
 memory management information,
 accounting ( time used, ID, ... )
 I/O status ( such as file resources ),
 scheduling data ( relative priority, etc. )
 Process State (so running, suspended, etc. is
simply a field in the PCB ).
PROCESSES Process State
The act of Scheduling a process means changing the active PCB pointed to
by the CPU. Also called a context switch.
A context switch is essentially the same as a process switch - it means that the
memory, as seen by one process is changed to the memory seen by another
process.
See Figure on Next Page (4.3)
SCHEDULING QUEUES:
(Process is driven by events that are triggered by needs and availability )
Ready queue = contains those processes that are ready to run.
I/O queue (waiting state ) = holds those processes waiting for I/O service.
What do the queues look like? They can be implemented as singly or doubly
linked lists. See Figure Several Pages from Now (4.4)
PROCESSES
Scheduling
Components
[Figure: the CPU switching from one process to another]
[Figure: the ready queue and the I/O queues]
LONG TERM SCHEDULER
 Run seldom ( when job comes into memory )
 Controls degree of multiprogramming
 Tries to balance arrival and departure rate through an appropriate
job mix.
SHORT TERM SCHEDULER
Contains three functions:
 Code to remove a process from the processor at the end of its run.
a)Process may go to ready queue or to a wait state.
 Code to put a process on the ready queue –
a)Process must be ready to run.
b)Process placed on queue based on priority.
SHORT TERM SCHEDULER (cont.)
 Code to take a process off the ready queue and run that process (also called
dispatcher).
a) Always takes the first process on the queue (no intelligence required)
b) Places the process on the processor.
This code runs frequently and so should be as short as possible.
MEDIUM TERM SCHEDULER
• Mixture of CPU and memory resource management.
• Swap out/in jobs to improve mix and to get memory.
• Controls change of priority.
INTERRUPT HANDLER
• In addition to doing device work, it also readies processes, moving them,
for instance, from waiting to ready.
How do all these scheduling components fit together?
[Figure: the interrupt handler and the long-term, medium-term and short-term
schedulers around the ready and wait queues]
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 ,
P3
The Gantt Chart for the schedule is:
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
| P1: 0-24 | P2: 24-27 | P3: 27-30 |
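Under FCFS with all processes arriving at time 0, each process simply waits for the sum of the bursts queued ahead of it; a minimal sketch reproducing both orderings from these slides:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0.

    Each process waits for the total burst time of the processes
    queued ahead of it.
    """
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits [0, 24, 27], average 17.
# Order P2, P3, P1 (bursts 3, 3, 24): waits [0, 3, 6], average 3.
```

The large swing between the two averages is the "convoy effect": one long burst at the head of the queue delays everything behind it.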
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
• The Gantt chart for the schedule is:
• Waiting time for P1 = 6;P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
| P2: 0-3 | P3: 3-6 | P1: 6-30 |
Shortest-Job-First (SJF)
Scheduling
• Associate with each process the length of its
next CPU burst. Use these lengths to schedule
the process with the shortest time.
• SJF is optimal – gives minimum average waiting
time for a given set of processes
– The difficulty is knowing the length of the
next CPU request.
Example of SJF
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
• SJF scheduling chart
• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
| P4: 0-3 | P1: 3-9 | P3: 9-16 | P2: 16-24 |
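Note that the chart and the answer of 7 treat all four jobs as available from time 0; under that assumption, non-preemptive SJF just runs jobs in order of increasing burst length. A sketch:

```python
def sjf_waiting_times(bursts):
    """Waiting times under non-preemptive SJF.

    Assumes every job is available at time 0 (as the chart above
    schedules them): jobs run in order of increasing burst length.
    """
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits = [0] * len(bursts)
    elapsed = 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# P1..P4 with bursts 6, 8, 7, 3: waits [3, 16, 9, 0], average 7.
```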
Determining Length of Next CPU
Burst
• Can only estimate the length
• Can be done by using the length of previous CPU
bursts, using exponential averaging
1. t_n = actual length of the nth CPU burst
2. tau_{n+1} = predicted value for the next CPU burst
3. alpha, 0 <= alpha <= 1
4. Define:
tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
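Folding the recurrence over a history of observed bursts gives the running prediction; a short sketch (the initial guess tau0 is a parameter the scheduler must pick):

```python
def predict_next(alpha, tau0, bursts):
    """Exponentially averaged prediction of the next CPU burst.

    Applies tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n for each
    observed burst t_n, starting from the initial guess tau0.
    """
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 1/2 and tau0 = 10, observed bursts 6 then 4 give
# predictions 8 and then 6.
```

With alpha = 0 history is ignored (tau stays at tau0); with alpha = 1 only the most recent burst counts.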
Multilevel Queue
• Ready queue is partitioned into separate queues:
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues:
– Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which
it can schedule amongst its processes; i.e., 80% to foreground in
RR
– 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue
• A process can move between the various queues;
aging can be implemented this way.
• Multilevel-feedback-queue scheduler defined by the
following parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a
process
– method used to determine when to demote a
process
– method used to determine which queue a process
will enter when that process needs service
Deadlock
• Deadlock – two or more processes are waiting
indefinitely for an event that can be caused by
only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
• Priority Inversion - scheduling problem when a lower-priority
process holds a lock needed by a higher-priority process
• Starvation - indefinite blocking:
a process may never be removed from the semaphore queue in which it is
suspended
 Order-of-arrival retention:
 Weak semaphores:
 The thread that will access the critical region next is selected randomly
 Starvation is possible
 Strong semaphores:
 The thread that will access the critical region next is selected based on
its arrival time, e.g. FIFO
 Starvation is not possible
Starvation
Classical Problems of
Synchronization
• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• N buffers, each can hold one item
• Semaphore mutex initialized to the
value 1
• Semaphore full initialized to the value 0
• Semaphore empty initialized to the
value N.
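A runnable sketch of the three semaphores in action, using Python's `threading.Semaphore` to play the roles of mutex, empty and full (N, the item count, and the thread setup are illustrative):

```python
import threading
from collections import deque

N = 4
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(N)   # counts free slots, initially N
full = threading.Semaphore(0)    # counts filled slots, initially 0

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()           # announce one more filled slot

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # wait for a filled slot
        with mutex:
            out.append(buffer.popleft())
        empty.release()          # announce one more free slot

out = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start(); p.join(); c.join()
assert out == list(range(10))    # all items arrive, in order
```

The producer blocks on `empty` when the buffer holds N items, and the consumer blocks on `full` when it is empty, so neither busy-waits.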
Readers-Writers Problem
• A data set is shared among a number of
concurrent processes
– Readers – only read the data set; they do not
perform any updates
– Writers – can both read and write
• Problem – allow multiple readers to read at
the same time.
– Only one single writer can access the shared data
at the same time
Dining-Philosophers Problem
• Shared data
– Bowl of rice (data set)
– Semaphore chopstick [5] initialized to 1
Problems with Semaphores
• Incorrect uses of semaphore operations:
– signal (mutex) …. wait (mutex)
– wait (mutex) … wait (mutex)
– omitting wait (mutex) or signal
(mutex) (or both)
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
• Only one process may be active within the monitor at a time
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }
…
procedure Pn (…) {……}
Initialization code ( ….) { … }
}
Condition Variables
• condition x, y;
• Two operations on a condition variable:
– x.wait () – a process that invokes the operation
is suspended.
– x.signal () – resumes one of processes (if any)
that invoked x.wait ()
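Python's `threading.Condition` gives the same wait/signal pair (`wait`/`notify`), with its internal lock standing in for the monitor's mutual exclusion; a minimal sketch (thread setup and the `ready` flag are illustrative):

```python
import threading

cond = threading.Condition()     # plays the role of "condition x"
ready = False

def waiter(log):
    with cond:                   # one thread "active in the monitor"
        while not ready:         # re-check the condition after waking
            cond.wait()          # x.wait(): suspend, releasing the lock
        log.append("woken")

log = []
t = threading.Thread(target=waiter, args=(log,))
t.start()
with cond:
    ready = True
    cond.notify()                # x.signal(): resume one waiting thread
t.join()
assert log == ["woken"]
```

The `while` loop (rather than `if`) around `cond.wait()` is the standard guard against waking up before the condition actually holds.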
Types of Storage Media
• Volatile storage – information stored here
does not survive system crashes
– Example: main memory, cache
• Nonvolatile storage – Information usually
survives crashes
– Example: disk and tape
• Stable storage – Information never lost
– Not achievable in practice
• approximated via replication or RAID on devices with
independent failure modes
The goal is to assure transaction atomicity even when
failures cause loss of information on volatile storage
Log-Based Recovery
• Record to stable storage information about all
modifications by a transaction
• Most common is write-ahead logging
– Log on stable storage, each log record describes
single transaction write operation, including
• Transaction name
• Data item name
• Old value
• New value
– <Ti starts> written to log when transaction Ti starts
Log-Based Recovery Algorithm
• Using the log, system can handle any volatile
memory errors
– Undo(Ti) restores value of all data updated by Ti
– Redo(Ti) sets values of all data in transaction Ti to
new values
• Undo(Ti) and redo(Ti) must be idempotent
– Multiple executions must have the same result as
one execution
• If system fails, restore state of all updated data
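The undo/redo pass can be sketched as a toy simulation (illustrative only; the in-memory log of `(txn, item, old, new)` records is our own encoding, not a standard format):

```python
# Each write record carries: transaction name, data item, old value, new value.
log = [
    ("start", "T0"),
    ("write", "T0", "A", 100, 50),
    ("write", "T0", "B", 200, 250),
    ("commit", "T0"),
    ("start", "T1"),
    ("write", "T1", "A", 50, 0),
    # crash occurs before <T1 commits>
]

def recover(log, data):
    committed = {r[1] for r in log if r[0] == "commit"}
    started = {r[1] for r in log if r[0] == "start"}
    # redo(Ti) for committed transactions, in log order: set new values
    for r in log:
        if r[0] == "write" and r[1] in committed:
            data[r[2]] = r[4]
    # undo(Ti) for uncommitted transactions, in reverse order: restore old values
    for r in reversed(log):
        if r[0] == "write" and r[1] in started - committed:
            data[r[2]] = r[3]
    return data

# disk state after the crash: T1's A=0 hit disk, T0's B=250 did not
state = recover(log, {"A": 0, "B": 200})
```

Both passes only assign values from the log, so running them twice gives the same result as running them once: that is the idempotence the slide requires.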
Checkpoints
 Log could become long, and recovery could
take long
 Checkpoints shorten log and recovery time.
 Checkpoint scheme:
1. Output all log records currently in volatile storage
to stable storage
2. Output all modified data from volatile to stable
storage
3. Output a log record <checkpoint> to the log on
stable storage
Concurrent Transactions
• Must be equivalent to serial execution
– serializability
• Could perform all transactions in critical
section
– Inefficient, too restrictive
• Concurrency-control algorithms provide
serializability
Serializability
• Consider two data items A and B
• Consider Transactions T0 and T1
• Execute T0, T1 atomically
• Execution sequence called schedule
• Atomically executed transaction order called
serial schedule
• For N transactions, there are N! valid serial
schedules
Nonserial Schedule
• Nonserial schedule allows overlapped execution
– Resulting execution not necessarily incorrect
• Consider schedule S, operations Oi, Oj
– Conflict if access same data item, with at least one
write
• If Oi, Oj are consecutive operations of different
transactions and Oi and Oj don't conflict, they
can be swapped to yield an equivalent schedule
Schedule 2: Concurrent Serializable Schedule
Locking Protocol
• Ensure serializability by associating lock with
each data item
– Follow locking protocol for access control
• Locks
– Shared
• Ti has shared-mode lock (S) on item Q,
• Ti can read Q but not write Q
– Exclusive
• Ti has exclusive-mode lock (X) on Q,
• Ti can read and write Q
Two-phase Locking Protocol
• Generally ensures conflict serializability
• Each transaction issues lock and unlock
requests in two phases
– Growing – obtaining locks
– Shrinking – releasing locks
• Does not prevent deadlock
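Whether a sequence of lock/unlock operations obeys the two-phase rule can be checked mechanically (an illustrative sketch; the operation-tuple format is our own):

```python
def is_two_phase(ops):
    """ops: list of (txn, action, item) with action 'lock' or 'unlock'.
    A transaction is two-phase if it never acquires a lock after its
    first release (no growing after shrinking has begun)."""
    shrinking = set()                      # transactions that released a lock
    for txn, action, _item in ops:
        if action == "lock" and txn in shrinking:
            return False                   # lock request in shrinking phase
        if action == "unlock":
            shrinking.add(txn)
    return True

ok = is_two_phase([("T1", "lock", "A"), ("T1", "lock", "B"),
                   ("T1", "unlock", "A"), ("T1", "unlock", "B")])
bad = is_two_phase([("T1", "lock", "A"), ("T1", "unlock", "A"),
                    ("T1", "lock", "B")])
```

The second sequence violates the protocol: T1 locks B after it has already entered its shrinking phase.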
Timestamp-based Protocols
• Select order among transactions in advance
– timestamp-ordering
• Transaction Ti associated with timestamp
TS(Ti) before Ti starts
– TS(Ti) < TS(Tj) if Ti entered system before Tj
– TS can be generated from system clock or as
logical counter incremented at each entry of
transaction
Timestamp-based Protocol Implementation
• Data item Q gets two timestamps
– W-timestamp(Q) – largest timestamp of any
transaction that executed write(Q) successfully
– R-timestamp(Q) – largest timestamp of successful
read(Q)
– Updated whenever read(Q) or write(Q) executed
• Timestamp-ordering protocol assures any
conflicting read and write executed in
timestamp order
• Suppose Ti executes read(Q)
Timestamp-ordering Protocol
• Suppose Ti executes write(Q)
– If TS(Ti) < R-timestamp(Q), value Q produced by Ti
was needed previously and Ti assumed it would
never be produced
• Write operation rejected, Ti rolled back
– If TS(Ti) < W-timestamp(Q), Ti attempting to write
obsolete value of Q
• Write operation rejected and Ti rolled back
– Otherwise, write executed
• Any rolled back transaction T is assigned a new
timestamp and restarted
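The read and write rules above can be captured in a small simulation (illustrative only; the dictionaries `W` and `R` stand for the per-item W-timestamp and R-timestamp tables):

```python
def ts_write(ts, item, W, R):
    """Timestamp-ordering write rule: returns True if the write by a
    transaction with timestamp ts is allowed, False if it must roll back."""
    if ts < R.get(item, 0):          # a younger txn already read the old value
        return False
    if ts < W.get(item, 0):          # ts would write an obsolete value
        return False
    W[item] = max(W.get(item, 0), ts)
    return True

def ts_read(ts, item, W, R):
    """Timestamp-ordering read rule."""
    if ts < W.get(item, 0):          # item was overwritten by a younger txn
        return False
    R[item] = max(R.get(item, 0), ts)
    return True

W, R = {}, {}
ok1 = ts_write(2, "Q", W, R)   # T2 writes Q: allowed, W-timestamp(Q)=2
ok2 = ts_read(1, "Q", W, R)    # older T1 reads after T2's write: rejected
ok3 = ts_read(3, "Q", W, R)    # T3 reads Q: allowed, R-timestamp(Q)=3
ok4 = ts_write(2, "Q", W, R)   # T2 writes again: TS < R-timestamp, rejected
```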
Chanderprabhu Jain College of Higher Studies & School of Law
Plot No. OCF, Sector A-8, Narela, New Delhi – 110040
(Affiliated to Guru Gobind Singh Indraprastha University and Approved by Govt of NCT of Delhi & Bar Council of India)
Semester: Fifth Semester
Name of the Subject: Operating System
UNIT- 3
Deadlock
• Examples: Traffic jam
• Dining philosophers
• Device allocation
– process 1 requests tape drive 1 & gets it
– process 2 requests tape drive 2 & gets it
– process 1 requests tape drive 2 but is blocked
– process 2 requests tape drive 1 but is blocked
• Semaphores:
– process 1: P(s) then P(t)
– process 2: P(t) then P(s)
• I/O spooling disc
– disc full of spooled input
– no room for subsequent output
• Over-allocation of pages in a virtual memory OS
– each process has an allocation of notional pages it must work within
– process acquires pages one by one
– normally does not use its full allocation
– kernel over-allocates total number of notional pages
• more efficient uses of memory
• like airlines overbooking seats
– deadlock may arise
• all processes by mischance approach use of their full allocation
• kernel cannot provide last pages it promised
• partial deadlock also - some processes blocked
– recovery ?
• Resource:
– used by a single process at a single point in time
– any one of the same type can be allocated
• Pre-emptible:
– can be taken away from a process without ill effect
– no deadlocks with pre-emptible resources
• Non-Pre-emptible:
– cannot be taken away without problems
– most resources are like this
Definition : A set of processes is deadlocked if
each process in the set is waiting for an event
that only another process in the set can cause
Necessary conditions for deadlock :
• Mutual Exclusion : each resource is either
currently assigned to one process or is available
to be assigned
• Hold and wait : processes currently holding
resources granted earlier can request new resources
Resource Allocation Modelling using Graphs
• Nodes: processes and resources
• Arcs: resource requested (process → resource) and
resource allocated (resource → process)
• For multiple resources of the same type, a cycle
in the graph is necessary but not sufficient to
imply a deadlock
Possible Strategies :
• Ignore - the Ostrich or Head-in-the-Sand
algorithm
– try to reduce chance of deadlock as far as
reasonable
– accept that deadlocks will occur occasionally
• example: kernel table sizes - max number of pages,
open files etc.
• Deadlock Prevention
– negate one of the necessary conditions
• negating Mutual Exclusion :
– example: shared use of a printer
• give exclusive use of the printer to each user in turn wanting to print?
• deadlock possibility if exclusive access to another resource also allowed
• better to have a spooler process to which print jobs are sent
– complete output file must be generated first
– example: file system actions
• give a process exclusive access rights to a file directory
• example: moving a file from one directory to another
– possible deadlock if allowed exclusive access to two directories
simultaneously
– should write code so as only to need to access one directory at a time
– solution?
• make resources concurrently sharable wherever possible e.g. read-only access
• most resources inherently not sharable!
Resource Trajectory Graph
• negating Hold and Wait
– process could request all the resources it will ever
need at once
• negating Circular Wait
– require that a process can only acquire one
resource at a time
• example: moving a file from one directory to another
– require processes to acquire resources in a certain
order
– example:
• 1: tape drive
• 2: disc drive
• 3: printer
• 4: plotter
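Ordered acquisition can be sketched as follows (a toy example; the resource names and ranks follow the numbering above, but the helper itself is ours):

```python
import threading

# Global resource ranking: every process must acquire resources in
# ascending rank, so a circular wait cannot form.
RANK = {"tape": 1, "disc": 2, "printer": 3, "plotter": 4}
locks = {name: threading.Lock() for name in RANK}

def acquire_in_order(names):
    """Acquire the named locks in global rank order; return that order."""
    ordered = sorted(names, key=RANK.get)
    for name in ordered:
        locks[name].acquire()
    return ordered

# Even if the caller asks for printer before tape, tape (rank 1) is
# locked first, so two such callers can never hold-and-wait in a cycle.
order = acquire_in_order(["printer", "tape"])
```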
• Deadlock Avoidance
– deadlock possible but avoided by careful allocation of resources
– avoid entering unsafe states
– a state is safe if it is not deadlocked and there is a way to satisfy all
requests currently pending by running the processes in some order
– need to know all future requests of processes
• Example: can processes run to completion in
some order?
– with 10 units of resource to allocate :
– if A runs first and acquires a further unit :
• avoidance using resource allocation graphs -
for one instance resources
– add an extra type of arc - the claim arc to indicate
future requests
– when the future request is actually made, convert
this to an allocation arc
Banker’s Algorithm (Dijkstra)
• Single resource
– at each request, consider whether granting will
lead to an unsafe state - if so, deny
– is state after the notional grant still safe?
• Multiple resources
– m types of resource, n processes
• look for a row in R that is ≤ A, i.e. a process whose
requests can be met
• if no such row exists, the state is unsafe
• otherwise assume that process runs to completion and
returns its resources: add its row of C into A, mark it
finished, and repeat
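The safety check reads naturally as code (a sketch of the multiple-resource banker's check; the matrix names C, R and vector A follow the slide's notation):

```python
def is_safe(C, R, A):
    """Banker's safety check. C: current allocation matrix,
    R: remaining-request matrix, A: available vector.
    True if some order lets every process run to completion."""
    A = list(A)
    finished = [False] * len(C)
    progress = True
    while progress:
        progress = False
        for i, row in enumerate(R):
            # a process whose remaining requests fit in what is available
            if not finished[i] and all(r <= a for r, a in zip(row, A)):
                # let it finish: it returns its current allocation
                A = [a + c for a, c in zip(A, C[i])]
                finished[i] = True
                progress = True
    return all(finished)

safe = is_safe(C=[[0, 1], [2, 0]], R=[[1, 0], [0, 1]], A=[1, 1])
unsafe = is_safe(C=[[1, 0], [0, 1]], R=[[0, 2], [2, 0]], A=[0, 0])
```

In the second state neither process's remaining requests fit in the available vector, so no completion order exists and the state is unsafe.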
Drawbacks of Banker’s Algorithm
– processes rarely know in advance how many resources they
will need
– the number of processes changes as time progresses
– resources once available can disappear
– the algorithm assumes processes will return their resources
within a reasonable time
– processes may only get their resources after an arbitrarily
long delay
• Detection and Recovery
– let deadlock occur, then detect and recover
somehow
• Methods of Detection - single resources
– search for loops in resource allocation graph
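Loop detection in a single-instance resource-allocation graph is an ordinary graph-cycle search (an illustrative depth-first version; the adjacency-list encoding is ours):

```python
def find_cycle(graph):
    """Detect a cycle in a resource-allocation graph given as adjacency
    lists {node: [nodes it points to]}. For single-instance resources a
    cycle means deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2       # unvisited / on current path / done
    color = {n: WHITE for n in graph}
    def dfs(n):
        color[n] = GREY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GREY:
                return True            # back edge: cycle found
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: deadlock
deadlocked = find_cycle({"P1": ["R2"], "R2": ["P2"],
                         "P2": ["R1"], "R1": ["P1"]})
no_deadlock = find_cycle({"P1": ["R1"], "R1": []})
```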
Chapter Seven : Device
Management
• System Devices
• Sequential Access Storage
Media
• Direct Access Storage
Devices
• Components of I/O
Subsystem
• Communication Among
Devices
• Management of I/O
Requests
Paper Storage Media
Magnetic Tape Storage
Magnetic Disk Storage
Optical Disc Storage
Device Management Functions
• Track status of each device (such as tape drives, disk drives,
printers, plotters, and terminals).
• Use preset policies to determine which process will get a
device and for how long.
• Allocate the devices.
• Deallocate the devices at 2 levels:
– At process level when I/O command has been executed &
device is temporarily released
– At job level when job is finished & device is permanently
released.
System Devices
• Differences among system’s peripheral devices
are a function of characteristics of devices,
and how well they’re managed by the Device
Manager.
• Most important differences among devices
– Speeds
– Degree of sharability.
• By minimizing variances among devices, a
Dedicated Devices
• Assigned to only one job at a time and serve
that job for entire time it’s active.
– E.g., tape drives, printers, and plotters, demand this kind
of allocation scheme, because it would be awkward to
share.
• Disadvantage -- must be allocated to a single
user for duration of a job’s execution.
– Can be quite inefficient, especially when device isn’t used
100 % of time.
Shared Devices
• Assigned to several processes.
– E.g., disk pack (or other direct access storage device) can be shared by
several processes at same time by interleaving their requests.
• Interleaving must be carefully controlled by Device Manager.
• All conflicts must be resolved based on predetermined policies to decide
which request will be handled first.
Virtual Devices
• Combination of dedicated devices that have
been transformed into shared devices.
– E.g, printers are converted into sharable devices through a
spooling program that reroutes all print requests to a disk.
– Output sent to printer for printing only when all of a job’s
output is complete and printer is ready to print out entire
document.
– Because disks are sharable devices, this technique can
convert one printer into several “virtual” printers, thus
improving both its performance and use.
Sequential Access Storage Media
• Magnetic tape used for secondary storage on
early computer systems; now used for routine
archiving & storing back-up data.
• Records on magnetic tapes are stored serially,
one after other.
• Each record can be of any length.
– Length is usually determined by the application
program.
• Each record can be identified by its position
on the tape.
Magnetic Tapes
• Data is recorded on 8 parallel
tracks that run length of tape.
• Ninth track holds parity bit used
for routine error checking.
• Number of characters that can
be recorded per inch is
determined by density of tape
(e.g., 1600 or 6250 bpi).
(figure: nine-track tape cross-section, one parity track plus eight character tracks)
First Come First Served (FCFS)
Device Scheduling Algorithm
• Simplest device-scheduling algorithm:
• Easy to program and essentially fair to users.
• On average, it doesn’t meet any of the three goals of a seek
strategy.
• Remember, seek time is most time-consuming of functions
performed here, so any algorithm that can minimize it is
preferable to FCFS.
Shortest Seek Time First (SSTF)
Device Scheduling Algorithm
• Uses same underlying philosophy as shortest
job next where shortest jobs are processed
first & longer jobs wait.
• Request with track closest to one being served
(that is, one with shortest distance to travel) is
next to be satisfied.
• Minimizes overall seek time.
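The difference between FCFS and SSTF is easy to quantify on a worked request queue (illustrative code; the start track 53 and the queue below are the classic textbook example, and the function names are ours):

```python
def fcfs(start, requests):
    """Total head movement when requests are serviced in arrival order."""
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

def sstf(start, requests):
    """Total head movement when the closest pending track is always next."""
    pending, total, pos = list(requests), 0, start
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))
        pending.remove(track)
        total += abs(track - pos)
        pos = track
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
a = fcfs(53, queue)   # long zig-zag sweeps across the disk
b = sstf(53, queue)   # far less total seek distance
```

On this queue SSTF cuts the total head movement to roughly a third of FCFS, which is exactly the "localization" win (and, under heavy load, the starvation risk) noted below.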
SCAN Device Scheduling Algorithm
• SCAN uses a directional bit to indicate whether the arm is
moving toward the center of the disk or away from it.
• Algorithm moves arm methodically from outer to inner track
servicing every request in its path.
• When it reaches innermost track it reverses direction and
moves toward outer tracks, again servicing every request in its
path.
LOOK (Elevator Algorithm) : A
Variation of SCAN
• Arm doesn’t necessarily go all the way to either edge unless
there are requests there.
• “Looks” ahead for a request before going to service it.
• Eliminates possibility of indefinite postponement of requests
in out-of-the-way places—at either edge of disk.
• As requests arrive each is incorporated in its proper place in
queue and serviced when the arm reaches that track.
Other Variations of SCAN
• N-Step SCAN -- holds all requests until arm starts on way back. New
requests are grouped together for next sweep.
• C-SCAN (Circular SCAN) -- arm picks up requests on its path during inward
sweep.
– When innermost track has been reached returns to outermost track
and starts servicing requests that arrived during last inward sweep.
– Provides a more uniform wait time.
• C-LOOK (optimization of C-SCAN) --sweep inward stops at last high-
numbered track request, so arm doesn’t move all the way to last track
unless it’s required to do so.
– Arm doesn’t necessarily return to the lowest-numbered track; it
returns only to the lowest-numbered track that’s requested.
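The service orders produced by LOOK and C-LOOK can be computed directly (a sketch; the direction convention and function names are ours):

```python
def look(start, requests, direction="up"):
    """LOOK: service requests in the current direction, reverse at the
    last requested track, never travel past it."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down if direction == "up" else down + up

def c_look(start, requests):
    """C-LOOK: sweep upward to the highest requested track, then jump
    to the lowest requested track and continue upward."""
    up = sorted(t for t in requests if t >= start)
    low = sorted(t for t in requests if t < start)
    return up + low

queue = [98, 183, 37, 122, 14, 124, 65, 67]
look_order = look(53, queue)      # up to 183, then back down to 14
c_look_order = c_look(53, queue)  # up to 183, then jump to 14, 37
```

Comparing the two orders shows why C-LOOK gives a more uniform wait: tracks just below the start point are picked up early in the next sweep instead of last.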
Which Device Scheduling
Algorithm?
• FCFS works well with light loads, but as soon
as load grows, service time becomes
unacceptably long.
• SSTF is quite popular and intuitively appealing.
It works well with moderate loads but has
problem of localization under heavy loads.
• SCAN works well with light to moderate loads
and eliminates problem of indefinite
postponement. SCAN is similar to SSTF in
throughput and mean service times.
Search Strategies: Rotational
Ordering
• Rotational ordering -- optimizes search times
by ordering requests once read/write heads
have been positioned.
– Nothing can be done to improve time spent moving
read/write head because it’s dependent on hardware.
• Amount of time wasted due to rotational
delay can be reduced.
– If requests are ordered within each track so that first
sector requested on second track is next number higher
than one just served, rotational delay is minimized.
Redundant Array of Inexpensive
Disks (RAID)
• RAID is a set of physical disk drives that is viewed as a single
logical unit by OS.
• RAID assumes several smaller-capacity disk drives preferable
to few large-capacity disk drives because, by distributing data
among several smaller disks, system can simultaneously
access requested data from multiple drives.
• System shows improved I/O performance and improved data
recovery in event of disk failure.
RAID (continued)
• RAID introduces much-needed concept of
redundancy to help systems recover from
hardware failure.
• Also requires more disk drives which increase
hardware costs.
UNIT- 4
FILE MANAGEMENT
Operating Systems:
Internals and Design Principles
If there is one singular characteristic that makes squirrels unique
among small mammals it is their natural instinct to hoard food.
Squirrels have developed sophisticated capabilities in their hoarding.
Different types of food are stored in different ways to maintain quality.
Mushrooms, for instance, are usually dried before storing. This is done
by impaling them on branches or leaving them in the forks of trees for
later retrieval. Pine cones, on the other hand, are often harvested while
green and cached in damp conditions that keep seeds from ripening.
Gray squirrels usually strip outer husks from walnuts before storing.
— SQUIRRELS: A WILDLIFE HANDBOOK,
Kim Long
Files
• Data collections created by users
• The File System is one of the most important
parts of the OS to a user
• Desirable properties of files:
Long-term existence
• files are stored on disk or other secondary storage and do not disappear when a user logs off
Sharable between processes
• files have names and can have associated access permissions that permit controlled sharing
Structure
• files can be organized into hierarchical or more complex structure to reflect the relationships among
files
File Systems
• Provide a means to store data organized as
files as well as a collection of functions that
can be performed on files
• Maintain a set of attributes associated with
the file
• Typical operations include:
– Create
– Delete
– Open
File Structure
Four terms are commonly used when discussing
files: field, record, file, and database.
File Structure
• Files can be structured as a collection of records or as a
sequence of bytes
• UNIX, Linux, Windows, Mac OS’s consider files as a
sequence of bytes
• Other OS’s, notably many IBM mainframes, adopt the
collection-of-records approach; useful for DB
• COBOL supports the collection-of-records file and can
implement it even on systems that don’t provide such files
natively.
Structure Terms
Field
– basic element of data
– contains a single value
– fixed or variable length
Record
– collection of related fields that can be treated
as a unit by some application program
– one field may be the key that uniquely identifies
the record
File
– collection of similar records
– treated as a single entity
– may be referenced by name
– access control restrictions usually apply at the
file level
Database
– collection of related data
– relationships among elements of data are explicit
– designed for use by a number of different
applications
– consists of one or more types of files
File System Architecture
• Notice that the top layer consists of a number of different
file formats: pile, sequential, indexed sequential, …
• These file formats are consistent with the collection-of-
records approach to files and determine how file data is
accessed
• Even in a byte-stream oriented file system it’s possible to
build files with record-based structures but it’s up to the
application to design the files and build in access methods,
indexes, etc.
• Operating systems that include a variety of file formats
provide access methods and other support automatically.
Layered File System Architecture
• File Formats – Access methods provide
the interface to users
• Logical I/O
• Basic I/O
• Basic file system
• Device drivers
Device Drivers
• Lowest level
• Communicates directly with peripheral devices
• Responsible for starting I/O operations on a device
• Processes the completion of an I/O request
• Considered to be part of the operating system
Basic File System
• Also referred to as the physical I/O level
• Primary interface with the environment outside the
computer system
• Deals with blocks of data that are exchanged with disk or
other mass storage devices.
– placement of blocks on the secondary storage device
– buffering blocks in main memory
• Considered part of the operating system
Basic I/O Supervisor
• Responsible for all file I/O initiation and
termination
• Control structures that deal with device I/O,
scheduling, and file status are maintained
• Selects the device on which I/O is to be
performed
• Concerned with scheduling disk and tape
accesses to optimize performance
Logical I/O
• Enables users and applications to access records
• Provides general-purpose record I/O capability
• Maintains basic data about files
Logical I/O
This level is the interface between the
logical commands issued by a program and
the physical details required by the disk.
Logical units of data versus physical blocks
of data to match disk requirements.
Access Method
 Level of the file system closest to the user
 Provides a standard interface between applications and
the file systems and devices that hold the data
 Different access methods reflect different file structures
and different ways of accessing and
processing the data
Elements of File Management
File Organization and Access
• File organization is the logical structuring of
the records as determined by the way in
which they are accessed
• In choosing a file organization, several
criteria are important:
– short access time
– ease of update
– economy of storage
File Organization Types
Five of the common file organizations are:
• the pile
• the sequential file
• the indexed sequential file
• the indexed file
• the direct, or hashed, file
The Pile
• Least complicated form of file organization
• Data are collected in the order they arrive
• Each record consists of one burst of data
The Sequential File
• Most common form of file
structure
• A fixed format is used for
records
• Key field uniquely identifies
the record & determines
storage order
• Typically used in batch
applications
• Only organization that is
easily stored on tape as well
as disk
Indexed Sequential File
• Adds an index to the file to support random
access
• Adds an overflow file
• Greatly reduces the time required to access a
single record
Indexed File
• Records are accessed only
through their indexes
• Variable-length records can
be employed
• Exhaustive index contains one
entry for every record in the
main file
• Partial index contains entries
to records where the field of
interest exists
• Used mostly in applications
where timeliness of
information is critical
• Examples would be airline
reservation systems and
inventory control systems
Direct or Hashed File
• Access directly any block of
a known address
• Makes use of hashing on
the key value
• Often used where:
– very rapid access is required
– fixed-length records are used
– records are always accessed one
at a time
Examples are:
• directories
• pricing tables
• schedules
• name lists
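Direct (hashed) access can be mimicked with a toy hash file (illustrative only; the hash function and record layout are invented for this sketch, real systems use far better hash functions):

```python
N_BUCKETS = 8

def bucket_of(key):
    """Map a record key directly to a bucket 'address': no index is
    consulted, the location is computed from the key itself."""
    return sum(key.encode()) % N_BUCKETS   # toy hash for illustration

# The direct file as N_BUCKETS slots of records
buckets = [[] for _ in range(N_BUCKETS)]

def insert(key, record):
    buckets[bucket_of(key)].append((key, record))

def lookup(key):
    # only one bucket is searched, regardless of total file size
    for k, rec in buckets[bucket_of(key)]:
        if k == key:
            return rec
    return None

insert("SKU-1001", {"price": 9.99})
price = lookup("SKU-1001")
```

This is why hashed files suit fixed-length records accessed one at a time: the key alone determines the block address, with no sequential scan or index traversal.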
B-Trees
• A balanced tree structure with all branches
of equal length
• Standard method of organizing indexes for
databases
• Commonly used in OS file systems
• Provides for efficient searching, adding, and
deleting of items
EMBEDDED OS
AJAL A J
 
Chapter02-rev.pptx
Chapter02-rev.pptxChapter02-rev.pptx
Chapter02-rev.pptx
AbhishekThummar4
 
Unit 4
Unit  4Unit  4
Unit 4
pm_ghate
 
AN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI Learning
AN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI LearningAN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI Learning
AN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI Learning
PHI Learning Pvt. Ltd.
 
Mba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsMba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systems
Rai University
 

Similar to Operating System (20)

os unit 1.pdf
os unit 1.pdfos unit 1.pdf
os unit 1.pdf
 
Operating System-adi.pdf
Operating System-adi.pdfOperating System-adi.pdf
Operating System-adi.pdf
 
CS403: Operating System : Lec 4 OS services.pptx
CS403: Operating System : Lec 4 OS services.pptxCS403: Operating System : Lec 4 OS services.pptx
CS403: Operating System : Lec 4 OS services.pptx
 
Kernel security Concepts
Kernel security ConceptsKernel security Concepts
Kernel security Concepts
 
opearating system notes mumbai university.pptx
opearating system notes mumbai university.pptxopearating system notes mumbai university.pptx
opearating system notes mumbai university.pptx
 
Os unit i
Os unit iOs unit i
Os unit i
 
4 Module - Operating Systems Configuration and Use by Mark John Lado
4 Module - Operating Systems Configuration and Use by Mark John Lado4 Module - Operating Systems Configuration and Use by Mark John Lado
4 Module - Operating Systems Configuration and Use by Mark John Lado
 
Operating systems introduction
Operating systems   introductionOperating systems   introduction
Operating systems introduction
 
Operating System
Operating SystemOperating System
Operating System
 
Services of Operating System
Services of Operating SystemServices of Operating System
Services of Operating System
 
Unit 2.pptx
Unit 2.pptxUnit 2.pptx
Unit 2.pptx
 
Unit 2.pptx
Unit 2.pptxUnit 2.pptx
Unit 2.pptx
 
OPERATING SYSTEM
OPERATING SYSTEMOPERATING SYSTEM
OPERATING SYSTEM
 
OPERATING SYSTEMS - INTRODUCTION
OPERATING SYSTEMS - INTRODUCTIONOPERATING SYSTEMS - INTRODUCTION
OPERATING SYSTEMS - INTRODUCTION
 
Engg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdfEngg-0505-IT-Operating-Systems-2nd-year.pdf
Engg-0505-IT-Operating-Systems-2nd-year.pdf
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OS
 
Chapter02-rev.pptx
Chapter02-rev.pptxChapter02-rev.pptx
Chapter02-rev.pptx
 
Unit 4
Unit  4Unit  4
Unit 4
 
AN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI Learning
AN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI LearningAN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI Learning
AN INTRODUCTION TO OPERATING SYSTEMS : CONCEPTS AND PRACTICE - PHI Learning
 
Mba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsMba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systems
 

More from cpjcollege

Tax Law (LLB-403)
Tax Law (LLB-403)Tax Law (LLB-403)
Tax Law (LLB-403)
cpjcollege
 
Law and Emerging Technology (LLB -405)
 Law and Emerging Technology (LLB -405) Law and Emerging Technology (LLB -405)
Law and Emerging Technology (LLB -405)
cpjcollege
 
Law of Crimes-I ( LLB -205)
 Law of Crimes-I  ( LLB -205)  Law of Crimes-I  ( LLB -205)
Law of Crimes-I ( LLB -205)
cpjcollege
 
Socio-Legal Dimensions of Gender (LLB-507 & 509 )
Socio-Legal Dimensions of Gender (LLB-507 & 509 )Socio-Legal Dimensions of Gender (LLB-507 & 509 )
Socio-Legal Dimensions of Gender (LLB-507 & 509 )
cpjcollege
 
Family Law-I ( LLB -201)
Family Law-I  ( LLB -201) Family Law-I  ( LLB -201)
Family Law-I ( LLB -201)
cpjcollege
 
Alternative Dispute Resolution (ADR) [LLB -309]
Alternative Dispute Resolution (ADR) [LLB -309] Alternative Dispute Resolution (ADR) [LLB -309]
Alternative Dispute Resolution (ADR) [LLB -309]
cpjcollege
 
Law of Evidence (LLB-303)
Law of Evidence  (LLB-303) Law of Evidence  (LLB-303)
Law of Evidence (LLB-303)
cpjcollege
 
Environmental Studies and Environmental Laws (: LLB -301)
Environmental Studies and Environmental Laws (: LLB -301)Environmental Studies and Environmental Laws (: LLB -301)
Environmental Studies and Environmental Laws (: LLB -301)
cpjcollege
 
Code of Civil Procedure (LLB -307)
 Code of Civil Procedure (LLB -307) Code of Civil Procedure (LLB -307)
Code of Civil Procedure (LLB -307)
cpjcollege
 
Constitutional Law-I (LLB -203)
 Constitutional Law-I (LLB -203) Constitutional Law-I (LLB -203)
Constitutional Law-I (LLB -203)
cpjcollege
 
Women and Law [LLB 409 (c)]
Women and Law [LLB 409 (c)]Women and Law [LLB 409 (c)]
Women and Law [LLB 409 (c)]
cpjcollege
 
Corporate Law ( LLB- 305)
Corporate Law ( LLB- 305)Corporate Law ( LLB- 305)
Corporate Law ( LLB- 305)
cpjcollege
 
Human Rights Law ( LLB -407)
 Human Rights Law ( LLB -407) Human Rights Law ( LLB -407)
Human Rights Law ( LLB -407)
cpjcollege
 
Labour Law-I (LLB 401)
 Labour Law-I (LLB 401) Labour Law-I (LLB 401)
Labour Law-I (LLB 401)
cpjcollege
 
Legal Ethics and Court Craft (LLB 501)
 Legal Ethics and Court Craft (LLB 501) Legal Ethics and Court Craft (LLB 501)
Legal Ethics and Court Craft (LLB 501)
cpjcollege
 
Political Science-II (BALLB- 209)
Political Science-II (BALLB- 209)Political Science-II (BALLB- 209)
Political Science-II (BALLB- 209)
cpjcollege
 
Health Care Law ( LLB 507 & LLB 509 )
Health Care Law ( LLB 507 & LLB 509 )Health Care Law ( LLB 507 & LLB 509 )
Health Care Law ( LLB 507 & LLB 509 )
cpjcollege
 
Land and Real Estate Laws (LLB-505)
Land and Real Estate Laws (LLB-505)Land and Real Estate Laws (LLB-505)
Land and Real Estate Laws (LLB-505)
cpjcollege
 
Business Environment and Ethical Practices (BBA LLB 213 )
Business Environment and Ethical Practices (BBA LLB 213 )Business Environment and Ethical Practices (BBA LLB 213 )
Business Environment and Ethical Practices (BBA LLB 213 )
cpjcollege
 
HUMAN RESOURCE MANAGEMENT (BBA LLB215 )
HUMAN RESOURCE MANAGEMENT (BBA LLB215 )HUMAN RESOURCE MANAGEMENT (BBA LLB215 )
HUMAN RESOURCE MANAGEMENT (BBA LLB215 )
cpjcollege
 

More from cpjcollege (20)

Tax Law (LLB-403)
Tax Law (LLB-403)Tax Law (LLB-403)
Tax Law (LLB-403)
 
Law and Emerging Technology (LLB -405)
 Law and Emerging Technology (LLB -405) Law and Emerging Technology (LLB -405)
Law and Emerging Technology (LLB -405)
 
Law of Crimes-I ( LLB -205)
 Law of Crimes-I  ( LLB -205)  Law of Crimes-I  ( LLB -205)
Law of Crimes-I ( LLB -205)
 
Socio-Legal Dimensions of Gender (LLB-507 & 509 )
Socio-Legal Dimensions of Gender (LLB-507 & 509 )Socio-Legal Dimensions of Gender (LLB-507 & 509 )
Socio-Legal Dimensions of Gender (LLB-507 & 509 )
 
Family Law-I ( LLB -201)
Family Law-I  ( LLB -201) Family Law-I  ( LLB -201)
Family Law-I ( LLB -201)
 
Alternative Dispute Resolution (ADR) [LLB -309]
Alternative Dispute Resolution (ADR) [LLB -309] Alternative Dispute Resolution (ADR) [LLB -309]
Alternative Dispute Resolution (ADR) [LLB -309]
 
Law of Evidence (LLB-303)
Law of Evidence  (LLB-303) Law of Evidence  (LLB-303)
Law of Evidence (LLB-303)
 
Environmental Studies and Environmental Laws (: LLB -301)
Environmental Studies and Environmental Laws (: LLB -301)Environmental Studies and Environmental Laws (: LLB -301)
Environmental Studies and Environmental Laws (: LLB -301)
 
Code of Civil Procedure (LLB -307)
 Code of Civil Procedure (LLB -307) Code of Civil Procedure (LLB -307)
Code of Civil Procedure (LLB -307)
 
Constitutional Law-I (LLB -203)
 Constitutional Law-I (LLB -203) Constitutional Law-I (LLB -203)
Constitutional Law-I (LLB -203)
 
Women and Law [LLB 409 (c)]
Women and Law [LLB 409 (c)]Women and Law [LLB 409 (c)]
Women and Law [LLB 409 (c)]
 
Corporate Law ( LLB- 305)
Corporate Law ( LLB- 305)Corporate Law ( LLB- 305)
Corporate Law ( LLB- 305)
 
Human Rights Law ( LLB -407)
 Human Rights Law ( LLB -407) Human Rights Law ( LLB -407)
Human Rights Law ( LLB -407)
 
Labour Law-I (LLB 401)
 Labour Law-I (LLB 401) Labour Law-I (LLB 401)
Labour Law-I (LLB 401)
 
Legal Ethics and Court Craft (LLB 501)
 Legal Ethics and Court Craft (LLB 501) Legal Ethics and Court Craft (LLB 501)
Legal Ethics and Court Craft (LLB 501)
 
Political Science-II (BALLB- 209)
Political Science-II (BALLB- 209)Political Science-II (BALLB- 209)
Political Science-II (BALLB- 209)
 
Health Care Law ( LLB 507 & LLB 509 )
Health Care Law ( LLB 507 & LLB 509 )Health Care Law ( LLB 507 & LLB 509 )
Health Care Law ( LLB 507 & LLB 509 )
 
Land and Real Estate Laws (LLB-505)
Land and Real Estate Laws (LLB-505)Land and Real Estate Laws (LLB-505)
Land and Real Estate Laws (LLB-505)
 
Business Environment and Ethical Practices (BBA LLB 213 )
Business Environment and Ethical Practices (BBA LLB 213 )Business Environment and Ethical Practices (BBA LLB 213 )
Business Environment and Ethical Practices (BBA LLB 213 )
 
HUMAN RESOURCE MANAGEMENT (BBA LLB215 )
HUMAN RESOURCE MANAGEMENT (BBA LLB215 )HUMAN RESOURCE MANAGEMENT (BBA LLB215 )
HUMAN RESOURCE MANAGEMENT (BBA LLB215 )
 

Recently uploaded

Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
EduSkills OECD
 
The Accursed House by Émile Gaboriau.pptx
The Accursed House by Émile Gaboriau.pptxThe Accursed House by Émile Gaboriau.pptx
The Accursed House by Émile Gaboriau.pptx
DhatriParmar
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
Pavel ( NSTU)
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
TechSoup
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Atul Kumar Singh
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
Vivekanand Anglo Vedic Academy
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
JosvitaDsouza2
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
heathfieldcps1
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
Ashokrao Mane college of Pharmacy Peth-Vadgaon
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
Group Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana BuscigliopptxGroup Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana Buscigliopptx
ArianaBusciglio
 
Multithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race conditionMultithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race condition
Mohammed Sikander
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
TechSoup
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
MysoreMuleSoftMeetup
 
Digital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion DesignsDigital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion Designs
chanes7
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
Peter Windle
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
Mohd Adib Abd Muin, Senior Lecturer at Universiti Utara Malaysia
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
Jisc
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
Special education needs
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 

Recently uploaded (20)

Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
 
The Accursed House by Émile Gaboriau.pptx
The Accursed House by Émile Gaboriau.pptxThe Accursed House by Émile Gaboriau.pptx
The Accursed House by Émile Gaboriau.pptx
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
Group Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana BuscigliopptxGroup Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana Buscigliopptx
 
Multithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race conditionMultithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race condition
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 
Digital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion DesignsDigital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion Designs
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 

Operating System

  • 6. OPERATING SYSTEM OVERVIEW: Characteristics
Other characteristics include:
• Time Sharing - a multiprogramming environment that is also interactive.
• Multiprocessing - tightly coupled systems that communicate via shared memory. Used for scientific applications, and for speed improvement by combining a number of off-the-shelf processors.
• Distributed Systems - loosely coupled systems that communicate via message passing. Advantages include resource sharing, speed-up, reliability, and communication.
• Real-Time Systems - rapid response time is the main characteristic. Used to control applications where rapid response to a stimulus is essential.
  • 7. OPERATING SYSTEM OVERVIEW: Characteristics
Interrupts:
• An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.
• The interrupt architecture must save the address of the interrupted instruction.
• Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt.
• A trap is a software-generated interrupt caused either by an error or a user request.
• An operating system is interrupt driven.
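The vector-dispatch idea above can be sketched as a small simulation. This is illustrative only, not real kernel code: the names (TIMER_IRQ, handle_timer, and so on) are invented for the example, and a real vector holds routine addresses in a hardware-defined table.

```python
# Invented interrupt numbers for this sketch.
TIMER_IRQ, DISK_IRQ = 0, 1

def handle_timer(saved_pc):
    # A real handler would update the clock and perhaps preempt the process.
    return f"timer handled, will resume at {saved_pc}"

def handle_disk(saved_pc):
    return f"disk handled, will resume at {saved_pc}"

# The interrupt vector: index = interrupt number, entry = service routine.
interrupt_vector = {TIMER_IRQ: handle_timer, DISK_IRQ: handle_disk}

interrupts_enabled = True

def dispatch(irq, current_pc):
    """Save the interrupted instruction's address, disable further
    interrupts while the handler runs, then re-enable on return."""
    global interrupts_enabled
    assert interrupts_enabled, "a nested interrupt would be lost"
    interrupts_enabled = False        # mask interrupts during service
    saved_pc = current_pc             # the architecture saves the interrupted PC
    result = interrupt_vector[irq](saved_pc)
    interrupts_enabled = True         # unmask on return from the handler
    return result

print(dispatch(DISK_IRQ, 0x1000))
```

Disabling interrupts during service is the slide's "prevent a lost interrupt" rule; real hardware instead queues or prioritizes pending interrupts.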
  • 8. OPERATING SYSTEM OVERVIEW: Hardware Support
These are the devices that make up a typical system. Any of these devices can cause an electrical interrupt that grabs the attention of the CPU.
  • 9. Operating System
• Is a program that controls the execution of application programs
– The OS must relinquish control to user programs and regain it safely and efficiently
– Tells the CPU when to execute other programs
• Is an interface between the user and the hardware
• Masks the details of the hardware from application programs
– Hence the OS must deal with hardware details
  • 10. Services Provided by the OS
• Facilities for program creation
– editors, compilers, linkers, and debuggers
• Program execution
– loading into memory, I/O and file initialization
• Access to I/O and files
– deals with the specifics of I/O and file formats
• System access
– protection in access to resources and data
– resolves conflicts over resource contention
  • 11. Services Provided by the OS
• Accounting
– collects statistics on resource usage
– monitors performance (e.g., response time)
– used for tuning system parameters to improve performance
– useful for anticipating future enhancements
– used for billing users (on multiuser systems)
  • 12. Evolution of an Operating System
• Must adapt to hardware upgrades and new types of hardware. Examples:
– character vs. graphic terminals
– introduction of paging hardware
• Must offer new services, e.g., internet support
• The need to change the OS on a regular basis places requirements on its design:
– modular construction with clean interfaces
– object-oriented methodology
  • 13. The Monitor
• The monitor reads jobs one at a time from the input device
• The monitor places a job in the user program area
• A monitor instruction branches to the start of the user program
• Execution of the user program continues until:
– end-of-program occurs
– an error occurs
• Either event causes the CPU to fetch its next instruction from the monitor
  • 14. Job Control Language (JCL)
• Is the language used to provide instructions to the monitor:
– what compiler to use
– what data to use
• Example of job format:
    $JOB
    $FTN
    ... FORTRAN program ...
    $LOAD
    $RUN
    ... Data ...
    $END
• $FTN loads the compiler and transfers control to it
• $LOAD loads the object code (in place of the compiler)
• $RUN transfers control to the user program
  • 15. Job Control Language (JCL)
• Each read instruction (in the user program) causes one line of input to be read
• This causes the (OS) input routine to be invoked, which:
– checks that it is not reading a JCL line
– skips to the next JCL line at completion of the user program
  • 16. Timesharing
• Multiprogramming allowed several jobs to be active at one time
– Initially used for batch systems
– Cheaper hardware terminals made interactive use practical
• Computer use got much cheaper and easier
– No more “priesthood”
– Quick turnaround meant quick fixes for problems
  • 17. Types of modern operating systems
• Mainframe operating systems: MVS
• Server operating systems: FreeBSD, Solaris
• Multiprocessor operating systems: Cellular IRIX
• Personal computer operating systems: Windows, Unix
• Real-time operating systems: VxWorks
• Embedded operating systems
• Smart card operating systems
Some operating systems can fit into more than one category.
  • 18. Memory
• Single base/limit pair: one pair is set for each process; every address the process issues is checked against the base and limit registers
• Two base/limit register pairs: one pair for the program, one for the data
(The slide's figures show the operating system at the bottom of memory, with the user program and data regions bounded by the base and limit registers.)
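The single base/limit check can be sketched as follows. This is a minimal sketch of the scheme, not any particular hardware; the base and limit values are invented round numbers for illustration.

```python
def translate(logical_addr, base, limit):
    """Relocate a logical address with the base register, after checking
    it against the limit register; out-of-range accesses trap to the OS."""
    if not (0 <= logical_addr < limit):
        # On real hardware this raises a protection fault handled by the OS.
        raise MemoryError("addressing error: trap to the OS")
    return base + logical_addr

# A process loaded at base 0x23000 with a 0x5000-byte region (invented values):
print(hex(translate(0x10, base=0x23000, limit=0x5000)))   # → 0x23010
```

The point of the check is that the process can never name an address outside its own region, which is how one buggy program is kept from corrupting another or the OS.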
  • 19. Anatomy of a device request
• Left figure: the sequence as seen by hardware
– The request is sent to the disk controller, then to the disk
– The disk responds and signals the disk controller, which tells the interrupt controller
– The interrupt controller notifies the CPU
• Right figure: interrupt handling from the software point of view
– 1: the interrupt arrives between instructions; 2: the operating system's interrupt handler processes it; 3: control returns to the interrupted program
  • 20. Processes
• Process: a program in execution
– Address space (memory) the program can use
– State (registers, including the program counter and stack pointer)
• The OS keeps track of all processes in a process table
• Processes can create other processes
– A process tree tracks these relationships
– In the slide's example, A is the root of the tree; A created three child processes: B, C, and D; C created two child processes: E and F; D created one child process: G
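The slide's tree can be reproduced with the parent links an OS would keep in its process table. A sketch, using the example's process names:

```python
# Parent of each process, exactly as in the slide's example tree.
parent = {"B": "A", "C": "A", "D": "A", "E": "C", "F": "C", "G": "D"}

def children(p):
    """Direct children of p, derived from the parent links."""
    return sorted(c for c, par in parent.items() if par == p)

def descendants(p):
    """Everything below p in the tree, depth-first (what terminating a
    whole subtree of processes would affect)."""
    out = []
    for c in children(p):
        out.append(c)
        out.extend(descendants(c))
    return out

print(descendants("A"))   # → ['B', 'C', 'E', 'F', 'D', 'G']
```

Storing only the parent pointer is enough: the children and the whole subtree can always be recomputed from it, which is roughly how Unix answers questions like "which processes belong to this session".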
  • 21. Inside a (Unix) process
• Processes have three segments:
– Text: program code
– Data: program data (statically declared variables, and areas allocated by malloc() or new)
– Stack: automatic variables and procedure call information
• Address space growth:
– Text: doesn't grow
– Data: grows “up”
– Stack: grows “down” (from the top of the address space, 0x7fffffff in the slide's figure, toward the data segment)
  • 22. Services Provided by the OS
• Error detection
– internal and external hardware errors: memory errors, device failures
– software errors: arithmetic overflow, access to forbidden memory locations
– inability of the OS to grant the request of an application
• Error response
– simply report the error to the application
– retry the operation
– abort the application
  • 23. Batch OS
• Alternates execution between the user program and the monitor program
• Relies on available hardware to effectively alternate execution among various parts of memory
  • 24. Desirable Hardware Features
• Memory protection
– do not allow the memory area containing the monitor to be altered by user programs
• Timer
– prevents a job from monopolizing the system
– an interrupt occurs when the time expires
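The timer feature above is what makes preemptive sharing possible: the hardware timer interrupts after a fixed quantum and returns control to the OS. A sketch of the resulting behaviour, with invented job names and quantum (real schedulers are considerably more involved):

```python
from collections import deque

def run_with_timer(jobs, quantum):
    """jobs: {name: remaining_time}. Simulate timer-driven round robin and
    return the sequence of (job, time_ran) slices."""
    queue = deque(jobs.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)      # the timer interrupt fires here
        trace.append((name, ran))
        if remaining > ran:                # unfinished: back of the queue
            queue.append((name, remaining - ran))
    return trace

# No job can hold the CPU longer than one quantum at a time:
print(run_with_timer({"J1": 5, "J2": 2}, quantum=3))
```

Without the timer, J1 would run for its full 5 units before J2 started; with it, J2 gets the CPU after at most one quantum.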
  • 25. Memory hierarchy
• What is the memory hierarchy?
– Different levels of memory; some are small and fast, others are large and slow
• What levels are usually included?
– Cache: a small amount of fast, expensive memory
• L1 (level 1) cache: usually on the CPU chip
• L2 & L3 cache: off-chip, made of SRAM
– Main memory: medium-speed, medium-price memory (DRAM)
– Disk: many gigabytes of slow, cheap, non-volatile storage
• The memory manager handles the memory hierarchy
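A back-of-the-envelope calculation shows why the hierarchy works: if most accesses hit the fast level, the average access time stays close to the fast level's latency. The hit rates and latencies below are assumed round numbers for illustration, not measurements of any real machine.

```python
def avg_access_time(levels):
    """levels: [(hit_rate, latency), ...] ordered fastest to slowest;
    the last level is assumed to always hit. An access pays the latency
    of every level it reaches, weighted by the probability of reaching it."""
    total, p_reach = 0.0, 1.0
    for hit_rate, latency in levels:
        total += p_reach * latency
        p_reach *= (1 - hit_rate)      # probability of missing and going deeper
    return total

# Assumed numbers: 1 ns cache with a 95% hit rate over 100 ns main memory.
print(avg_access_time([(0.95, 1), (1.0, 100)]))   # → 6.0 ns on average
```

Even a modest miss rate is expensive: dropping the hit rate from 95% to 90% here would nearly double the average access time, which is why the memory manager works hard to keep the active working set in the fast levels.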
  • 26. Basic memory management
• Components include:
– the operating system (perhaps with device drivers)
– a single process
• Goal: lay these out in memory
– Memory protection may not be an issue (only one program)
– Flexibility may still be useful (allow OS changes, etc.)
• No swapping or paging
(The slide's figures show three layouts: the OS in RAM below the user program, the OS in ROM above it, and the OS in RAM with device drivers in ROM.)
  • 27. Fixed partitions: multiple programs
• Fixed memory partitions
– Divide memory into fixed spaces
– Assign a process to a space when it's free
• Mechanisms
– Separate input queues for each partition
– Single input queue: better ability to optimize CPU usage
(The slide's figures show the OS in 0–100K and four partitions with boundaries at 100K, 500K, 600K, 700K, and 900K.)
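The single-input-queue mechanism can be sketched as below: each arriving job takes the smallest free partition that fits it, so large partitions stay available for large jobs. Partition names, sizes, and jobs are invented for the example (the sizes loosely follow the slide's 100K/500K/600K/700K/900K boundaries).

```python
def assign(jobs, partitions):
    """jobs: [(name, size_needed)], partitions: {name: size}.
    Single queue: each job takes the smallest free partition that fits.
    Returns {partition: job or None}; jobs that fit nowhere simply wait."""
    free = dict(partitions)
    placement = {name: None for name in partitions}
    for job, need in jobs:
        fitting = [p for p in free if partitions[p] >= need]
        if not fitting:
            continue                      # job waits; queueing kept simple here
        best = min(fitting, key=lambda p: partitions[p])
        placement[best] = job
        del free[best]
    return placement

parts = {"P1": 400, "P2": 100, "P3": 100, "P4": 200}   # sizes in KB (invented)
print(assign([("jobA", 90), ("jobB", 300), ("jobC", 150)], parts))
```

With separate per-partition queues, jobA could have been forced to wait for one specific small partition even while another stood empty; the single queue avoids exactly that waste.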
  • 28. Swapping
• Memory allocation changes as
– processes come into memory
– processes leave memory (swapped to disk, or after completing execution)
• In the slide's figures, gray regions are the unused memory left behind as processes A, B, C, and D come and go
• 29. Tracking memory usage: linked lists • Keep track of free / allocated memory regions with a linked list – Each entry in the list corresponds to a contiguous region of memory – Entry can indicate either allocated or free (and, optionally, owning process) – May have separate lists for free and allocated areas • Efficient if chunks are large – Fixed-size representation for each region – More regions => more space needed for free lists (Figure: memory regions A–D and the matching list of (status, start, length) entries) 29 Chanderprabhu Jain College of Higher Studies & School of Law
• 30. Allocating memory • Search through region list to find a large enough space • Suppose there are several choices: which one to use? – First fit: the first suitable hole on the list – Next fit: the first suitable after the previously allocated hole – Best fit: the smallest hole that is larger than the desired region (wastes least space?) – Worst fit: the largest available hole (leaves largest fragment) • Option: maintain separate queues for different-size holes (Figure: an example free-hole list with sample allocations made by first fit, next fit, best fit, and worst fit) 30 Chanderprabhu Jain College of Higher Studies & School of Law
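A minimal sketch of three of these strategies, written as functions over a list of free-hole sizes (the hole sizes and the request below are invented for illustration; next-fit is omitted since it only adds a remembered cursor to first-fit):

```python
def first_fit(holes, size):
    """Index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Smallest hole that still fits the request."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Largest available hole (leaves the largest fragment)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [5, 14, 25, 30, 16]     # hypothetical free-hole sizes
print(first_fit(holes, 12))     # 1: hole of size 14 is the first that fits
print(best_fit(holes, 12))      # 1: 14 is also the tightest fit
print(worst_fit(holes, 12))     # 3: hole of size 30 is the largest
```

Note that best-fit and worst-fit must scan the whole list, while first-fit can stop at the first match.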
• 31. Freeing memory • Allocation structures must be updated when memory is freed • Easy with bitmaps: just set the appropriate bits in the bitmap • Linked lists: modify adjacent elements as needed – Merge adjacent free regions into a single region – May involve merging two regions with the just-freed area (Figure: the four cases of freeing region X between A and B: merging with a free neighbor on neither side, the right, the left, or both) 31 Chanderprabhu Jain College of Higher Studies & School of Law
  • 32. Limitations of swapping • Problems with swapping – Process must fit into physical memory (impossible to run larger processes) – Memory becomes fragmented • External fragmentation: lots of small free areas • Compaction needed to reassemble larger free areas – Processes are either in memory or on disk: half and half doesn’t do any good • Overlays solved the first problem – Bring in pieces of the process over time (typically data) – Still doesn’t solve the problem of fragmentation or partially resident processes 32 Chanderprabhu Jain College of Higher Studies & School of Law
  • 33. Virtual memory • Basic idea: allow the OS to hand out more memory than exists on the system • Keep recently used stuff in physical memory • Move less recently used stuff to disk • Keep all of this hidden from processes – Processes still see an address space from 0 – max address – Movement of information to and from disk handled by the OS without process help • Virtual memory (VM) especially helpful in multiprogrammed system – CPU schedules process B while process A waits for its memory to be retrieved from disk 33 Chanderprabhu Jain College of Higher Studies & School of Law
• 34. Virtual and physical addresses • Program uses virtual addresses – Addresses local to the process – Hardware translates virtual address to physical address • Translation done by the Memory Management Unit – Usually on the same chip as the CPU – Only physical addresses leave the CPU/MMU chip • Physical memory indexed by physical addresses (Figure: virtual addresses go from the CPU to the on-chip MMU; physical addresses travel on the bus to memory and the disk controller) 34 Chanderprabhu Jain College of Higher Studies & School of Law
• 35. Paging and page tables • Virtual addresses mapped to physical addresses – Unit of mapping is called a page – All addresses in the same virtual page are in the same physical page – Page table entry (PTE) contains translation for a single page • Table translates virtual page number to physical page number – Not all virtual memory has a physical page – Not every physical page need be used • Example: – 64 KB virtual memory – 32 KB physical memory (Figure: sixteen 4 KB virtual pages mapping into eight physical frames, e.g. virtual page 0 → frame 7, 1 → 4, 4 → 0, 7 → 3, 10 → 1, 11 → 5, 12 → 6; the remaining virtual pages are unmapped) 35 Chanderprabhu Jain College of Higher Studies & School of Law
• 36. What’s in a page table entry? • Each entry in the page table contains – Valid bit: set if this logical page number has a corresponding physical frame in memory • If not valid, remainder of PTE is irrelevant – Page frame number: page in physical memory – Referenced bit: set if data on the page has been accessed – Dirty (modified) bit: set if data on the page has been modified – Protection information (Figure: PTE layout with valid bit V, referenced bit R, dirty bit D, protection bits, and the page frame number) 36 Chanderprabhu Jain College of Higher Studies & School of Law
• 37. Mapping logical => physical address • Example: 4 KB (= 4096-byte) pages and 32-bit logical addresses • Split address from CPU into two pieces – Page number (p): the upper 32 − 12 = 20 bits; index into the page table, which contains the base address of the page in physical memory – Page offset (d): the lower 12 bits; added to the base address to get the actual physical memory address • Page size = 2^d bytes, so 2^d = 4096 gives d = 12 37 Chanderprabhu Jain College of Higher Studies & School of Law
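For the slide's parameters (4 KB pages, 32-bit logical addresses) the split is a shift and a mask; the sample address below is made up:

```python
OFFSET_BITS = 12                      # 2^12 = 4096-byte pages
PAGE_MASK = (1 << OFFSET_BITS) - 1    # low 12 bits select the offset

def split(logical_addr):
    """Return (page number p, offset d) for a 32-bit logical address."""
    return logical_addr >> OFFSET_BITS, logical_addr & PAGE_MASK

p, d = split(0x00403ABC)
print(hex(p), hex(d))  # 0x403 0xabc
```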
• 38. Address translation architecture (Figure: the CPU emits (page number p, page offset d); p indexes the page table to obtain the page frame number f, and the physical address (f, d) selects the location in physical memory) 38 Chanderprabhu Jain College of Higher Studies & School of Law
• 39. Memory & paging structures (Figure: logical memories and page tables of processes P0 (pages 0–4) and P1 (pages 0–1) mapping into a shared 10-frame physical memory, with the remaining frames on a free list) 39 Chanderprabhu Jain College of Higher Studies & School of Law
• 40. Two-level page tables • Problem: page tables can be too large – 2^32 bytes in 4 KB pages needs 1 million PTEs • Solution: use multi-level page tables – “Page size” in first page table is large (megabytes) – PTE marked invalid in first page table needs no 2nd level page table • 1st level page table has pointers to 2nd level page tables • 2nd level page table has actual physical page numbers in it (Figure: 1st-level table entries point to 2nd-level page tables, whose entries hold the physical page numbers of frames in main memory) 40 Chanderprabhu Jain College of Higher Studies & School of Law
  • 41. More on two-level page tables • Tradeoffs between 1st and 2nd level page table sizes – Total number of bits indexing 1st and 2nd level table is constant for a given page size and logical address length – Tradeoff between number of bits indexing 1st and number indexing 2nd level tables • More bits in 1st level: fine granularity at 2nd level • Fewer bits in 1st level: maybe less wasted space? • All addresses in table are physical addresses • Protection bits kept in 2nd level table 41 Chanderprabhu Jain College of Higher Studies & School of Law
• 42. Two-level paging: example • System characteristics – 8 KB pages – 32-bit logical address divided into 13-bit page offset and 19-bit page number • Page number divided into: – 10-bit 1st-level index (p1) – 9-bit 2nd-level index (p2) • Logical address looks like this: | p1 = 10 bits | p2 = 9 bits | offset = 13 bits | – p1 is an index into the 1st level page table – p2 is an index into the 2nd level page table pointed to by p1 42 Chanderprabhu Jain College of Higher Studies & School of Law
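With those field widths (10-bit p1, 9-bit p2, 13-bit offset), the decomposition can be sketched as follows; the test address is constructed purely for illustration:

```python
def split_two_level(addr):
    """Decompose a 32-bit logical address into (p1, p2, offset)."""
    offset = addr & ((1 << 13) - 1)      # low 13 bits: 8 KB page offset
    p2 = (addr >> 13) & ((1 << 9) - 1)   # next 9 bits: 2nd-level index
    p1 = addr >> 22                      # top 10 bits: 1st-level index
    return p1, p2, offset

addr = (3 << 22) | (5 << 13) | 100       # p1=3, p2=5, offset=100
print(split_two_level(addr))  # (3, 5, 100)
```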
• 43. 2-level address translation example (Figure: the page table base register selects the 1st-level page table; p1 picks the entry pointing to a 2nd-level page table; p2 picks the 19-bit frame number, which is concatenated with the 13-bit offset to form the physical address into main memory) 43 Chanderprabhu Jain College of Higher Studies & School of Law
  • 44. Implementing page tables in hardware • Page table resides in main (physical) memory • CPU uses special registers for paging – Page table base register (PTBR) points to the page table – Page table length register (PTLR) contains length of page table: restricts maximum legal logical address • Translating an address requires two memory accesses – First access reads page table entry (PTE) – Second access reads the data / instruction from memory • Reduce number of memory accesses – Can’t avoid second access (we need the value from memory) – Eliminate first access by keeping a hardware cache (called a translation lookaside buffer or TLB) of recently used page table entries 44 Chanderprabhu Jain College of Higher Studies & School of Law
• 45. Translation Lookaside Buffer (TLB) • Search the TLB for the desired logical page number – Search entries in parallel – Use standard cache techniques • If desired logical page number is found, get frame number from TLB • If desired logical page number isn’t found – Get frame number from page table in memory – Replace an entry in the TLB with the logical & physical page numbers from this reference (Figure: an example TLB holding a handful of logical page # → physical frame # pairs) 45 Chanderprabhu Jain College of Higher Studies & School of Law
  • 46. Handling TLB misses • If PTE isn’t found in TLB, OS needs to do the lookup in the page table • Lookup can be done in hardware or software • Hardware TLB replacement – CPU hardware does page table lookup – Can be faster than software – Less flexible than software, and more complex hardware • Software TLB replacement – OS gets TLB exception – Exception handler does page table lookup & places the result into the TLB – Program continues after return from exception – Larger TLB (lower miss rate) can make this feasible 46 Chanderprabhu Jain College of Higher Studies & School of Law
• 47. How long do memory accesses take? • Assume the following times: – TLB lookup time = a (often zero - overlapped in CPU) – Memory access time = m • Hit ratio (h) is percentage of time that a logical page number is found in the TLB – Larger TLB usually means higher h – TLB structure can affect h as well • Effective access time (an average) is calculated as: – EAT = (m + a)h + (2m + a)(1 − h) – EAT = a + (2 − h)m • Interpretation – Reference always requires TLB lookup, 1 memory access – TLB misses also require an additional memory reference 47 Chanderprabhu Jain College of Higher Studies & School of Law
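The EAT average is easy to check numerically; m = 100 ns, a = 0 and h = 0.98 below are illustrative values, not figures from the slides:

```python
def eat(m, a, h):
    """Effective access time: hits cost m + a, misses cost 2m + a."""
    return (m + a) * h + (2 * m + a) * (1 - h)

# The closed form a + (2 - h) * m gives the same result.
print(eat(100, 0, 0.98))   # about 102.0 ns: close to one memory access
```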
  • 48. Inverted page table • Reduce page table size further: keep one entry for each frame in memory • PTE contains – Virtual address pointing to this frame – Information about the process that owns this page • Search page table by – Hashing the virtual page number and process ID – Starting at the entry corresponding to the hash result – Search until either the entry is found or a limit is reached • Page frame number is index of PTE • Improve performance by using more advanced hashing algorithms 48 Chanderprabhu Jain College of Higher Studies & School of Law
• 49. Inverted page table architecture (Figure: the pair (process ID, 19-bit page number p) is hashed to search the inverted page table, in which each entry describes one frame; the index k of the matching entry is the page frame number, concatenated with the 13-bit offset to form the physical address) 49 Chanderprabhu Jain College of Higher Studies & School of Law
  • 50. Memory Management Requirements • Relocation – programmer cannot know where the program will be placed in memory when it is executed – a process may be (often) relocated in main memory due to swapping – swapping enables the OS to have a larger pool of ready-to-execute processes – memory references in code (for both instructions and data) must be translated to actual physical memory address 50 Chanderprabhu Jain College of Higher Studies & School of Law
  • 51. Memory Management Requirements • Protection – processes should not be able to reference memory locations in another process without permission – impossible to check addresses at compile time in programs since the program could be relocated – address references must be checked at run time by hardware 51 Chanderprabhu Jain College of Higher Studies & School of Law
  • 52. Memory Management Requirements • Sharing – must allow several processes to access a common portion of main memory without compromising protection • cooperating processes may need to share access to the same data structure • better to allow each process to access the same copy of the program rather than have their own separate copy 52 Chanderprabhu Jain College of Higher Studies & School of Law
  • 53. Memory Management Requirements • Logical Organization – users write programs in modules with different characteristics • instruction modules are execute-only • data modules are either read-only or read/write • some modules are private others are public – To effectively deal with user programs, the OS and hardware should support a basic form of module to provide the required protection and sharing 53 Chanderprabhu Jain College of Higher Studies & School of Law
  • 54. Memory Management Requirements • Physical Organization – secondary memory is the long term store for programs and data while main memory holds program and data currently in use – moving information between these two levels of memory is a major concern of memory management (OS) • it is highly inefficient to leave this responsibility to the application programmer 54 Chanderprabhu Jain College of Higher Studies & School of Law
  • 55. Simple Memory Management • In this chapter we study the simpler case where there is no virtual memory • An executing process must be loaded entirely in main memory (if overlays are not used) • Although the following simple memory management techniques are not used in modern OS, they lay the ground for a proper discussion of virtual memory (next chapter) – fixed partitioning – dynamic partitioning – simple paging – simple segmentation 55 Chanderprabhu Jain College of Higher Studies & School of Law
• 56. Fixed Partitioning • Partition main memory into a set of non-overlapping regions called partitions • Partitions can be of equal or unequal sizes 56 Chanderprabhu Jain College of Higher Studies & School of Law
  • 57. Fixed Partitioning • any process whose size is less than or equal to a partition size can be loaded into the partition • if all partitions are occupied, the operating system can swap a process out of a partition • a program may be too large to fit in a partition. The programmer must then design the program with overlays – when the module needed is not present the user program must load that module into the program’s partition, overlaying whatever program or data are there 57 Chanderprabhu Jain College of Higher Studies & School of Law
• 58. Fixed Partitioning • Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This is called internal fragmentation. • Unequal-size partitions lessen these problems but they still remain... • Equal-size partitions were used in early IBM OS/MFT (Multiprogramming with a Fixed number of Tasks) 58 Chanderprabhu Jain College of Higher Studies & School of Law
  • 59. Placement Algorithm with Partitions • Equal-size partitions – If there is an available partition, a process can be loaded into that partition • because all partitions are of equal size, it does not matter which partition is used – If all partitions are occupied by blocked processes, choose one process to swap out to make room for the new process 59 Chanderprabhu Jain College of Higher Studies & School of Law
• 60. Placement Algorithm with Partitions • Unequal-size partitions: use of multiple queues – assign each process to the smallest partition within which it will fit – a queue for each partition size – tries to minimize internal fragmentation – Problem: some queues will be empty if no processes within a size range are present 60 Chanderprabhu Jain College of Higher Studies & School of Law
• 61. Placement Algorithm with Partitions • Unequal-size partitions: use of a single queue – When it’s time to load a process into main memory, the smallest available partition that will hold the process is selected – increases the level of multiprogramming at the expense of internal fragmentation 61 Chanderprabhu Jain College of Higher Studies & School of Law
  • 62. Dynamic Partitioning • Partitions are of variable length and number • Each process is allocated exactly as much memory as it requires • Eventually holes are formed in main memory. This is called external fragmentation • Must use compaction to shift processes so they are contiguous and all free memory is in one block • Used in IBM’s OS/MVT (Multiprogramming with a Variable number of Tasks) 62 Chanderprabhu Jain College of Higher Studies & School of Law
  • 63. Dynamic Partitioning: an example • A hole of 64K is left after loading 3 processes: not enough room for another process • Eventually each process is blocked. The OS swaps out process 2 to bring in process 4 63 Chanderprabhu Jain College of Higher Studies & School of Law
  • 64. Dynamic Partitioning: an example • another hole of 96K is created • Eventually each process is blocked. The OS swaps out process 1 to bring in again process 2 and another hole of 96K is created... • Compaction would produce a single hole of 256K 64 Chanderprabhu Jain College of Higher Studies & School of Law
• 65. Placement Algorithm • Used to decide which free block to allocate to a process • Goal: to reduce usage of compaction (time-consuming) • Possible algorithms: – Best-fit: choose smallest hole – First-fit: choose first hole from beginning – Next-fit: choose first hole from last placement 65 Chanderprabhu Jain College of Higher Studies & School of Law
• 66. Placement Algorithm: comments • Next-fit often leads to allocation of the largest block at the end of memory • First-fit favors allocation near the beginning: tends to create less fragmentation than Next-fit • Best-fit searches for the smallest block: the fragment left behind is as small as possible – main memory quickly forms holes too small to hold any process: compaction generally needs to be done more often 66 Chanderprabhu Jain College of Higher Studies & School of Law
  • 67. Replacement Algorithm • When all processes in main memory are blocked, the OS must choose which process to replace – A process must be swapped out (to a Blocked- Suspend state) and be replaced by a new process or a process from the Ready-Suspend queue – We will discuss later such algorithms for memory management schemes using virtual memory 67 Chanderprabhu Jain College of Higher Studies & School of Law
• 68. Buddy System • A reasonable compromise to overcome the disadvantages of both fixed and variable partitioning schemes • A modified form is used in Unix SVR4 for kernel memory allocation • Memory blocks are available in sizes 2^{K} where L <= K <= U and where – 2^{L} = smallest size of block allocatable – 2^{U} = largest size of block allocatable (generally, the entire memory available) 68 Chanderprabhu Jain College of Higher Studies & School of Law
• 69. Buddy System • We start with the entire block of size 2^{U} • When a request of size S is made: – If 2^{U-1} < S <= 2^{U} then allocate the entire block of size 2^{U} – Else, split this block into two buddies, each of size 2^{U-1} – If 2^{U-2} < S <= 2^{U-1} then allocate one of the 2 buddies – Otherwise one of the 2 buddies is split again • This process is repeated until the smallest block greater than or equal to S is generated • Two buddies are coalesced whenever both of them become unallocated 69 Chanderprabhu Jain College of Higher Studies & School of Law
  • 70. Buddy System • The OS maintains several lists of holes – the i-list is the list of holes of size 2^{i} – whenever a pair of buddies in the i-list occur, they are removed from that list and coalesced into a single hole in the (i+1)-list • Presented with a request for an allocation of size k such that 2^{i-1} < k <= 2^{i}: – the i-list is first examined – if the i-list is empty, the (i+1)-list is then examined... 70 Chanderprabhu Jain College of Higher Studies & School of Law
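The size-rounding rule above (hand out the smallest block 2^K >= S) can be sketched in a few lines; the smallest and largest block sizes are arbitrary example limits, not values from the slides:

```python
def buddy_block_size(s, smallest=2**4, largest=2**10):
    """Smallest power-of-two block (within limits) satisfying a request of s bytes."""
    size = smallest
    while size < s:
        size *= 2          # mirrors splitting from the top block downward
    if size > largest:
        raise MemoryError("request larger than the largest allocatable block")
    return size

print(buddy_block_size(100))  # 128, since 2^6 = 64 < 100 <= 128 = 2^7
```

The gap between s and the returned power of two is exactly the internal fragmentation the later slide estimates at 25% on average.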
  • 71. Example of Buddy System 71 Chanderprabhu Jain College of Higher Studies & School of Law
• 72. Buddy Systems: remarks • On average, internal fragmentation is 25% – each memory block is at least 50% occupied • Programs are not moved in memory – simplifies memory management • Most efficient when the size M of memory used by the Buddy System is a power of 2 – M = 2^{U} “bytes” where U is an integer – then the size of each block is a power of 2 – the smallest block is of size 1 – Ex: if M = 10, then the smallest block would be of size 5 72 Chanderprabhu Jain College of Higher Studies & School of Law
  • 73. Relocation • Because of swapping and compaction, a process may occupy different main memory locations during its lifetime • Hence physical memory references by a process cannot be fixed • This problem is solved by distinguishing between logical address and physical address 73 Chanderprabhu Jain College of Higher Studies & School of Law
  • 74. Address Types • A physical address (absolute address) is a physical location in main memory • A logical address is a reference to a memory location independent of the physical structure/organization of memory • Compilers produce code in which all memory references are logical addresses • A relative address is an example of logical address in which the address is expressed as a location relative to some known point in the program (ex: the beginning) 74 Chanderprabhu Jain College of Higher Studies & School of Law
• 75. Address Translation • Relative address is the most frequent type of logical address used in program modules (i.e., executable files) • Such modules are loaded in main memory with all memory references in relative form • Physical addresses are calculated “on the fly” as the instructions are executed • For adequate performance, the translation from relative to physical address must be done by hardware 75 Chanderprabhu Jain College of Higher Studies & School of Law
• 76. Simple example of hardware translation of addresses • When a process is assigned to the running state, a base register (in CPU) gets loaded with the starting physical address of the process • A bound register gets loaded with the process’s ending physical address • When a relative address is encountered, it is added to the content of the base register to obtain the physical address, which is compared with the content of the bound register • This provides hardware protection: each process can only access memory within its process image 76 Chanderprabhu Jain College of Higher Studies & School of Law
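The base/bound scheme amounts to one addition and one comparison per reference; here is a sketch with invented register values:

```python
def translate(relative_addr, base, bound):
    """Base/bound relocation with a hardware-style protection check."""
    physical = base + relative_addr
    if physical >= bound:        # bound register holds the ending physical address
        raise MemoryError("protection fault: address outside process image")
    return physical

print(hex(translate(0x100, base=0x4000, bound=0x8000)))  # 0x4100
```

Because only the base register changes when a process is swapped back in at a new location, the process's code never needs rewriting.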
  • 77. Example Hardware for Address Translation 77 Chanderprabhu Jain College of Higher Studies & School of Law
• 78. Simple Paging • Main memory is partitioned into equal fixed-sized chunks (of relatively small size) • Trick: each process is also divided into chunks of the same size called pages • The process pages can thus be assigned to the available chunks in main memory called frames (or page frames) • Consequence: a process does not need to occupy a contiguous portion of memory 78 Chanderprabhu Jain College of Higher Studies & School of Law
  • 79. Example of process loading • Now suppose that process B is swapped out 79 Chanderprabhu Jain College of Higher Studies & School of Law
• 80. Example of process loading (cont.) • When processes A and C are blocked, the pager loads a new process D consisting of 5 pages • Process D does not occupy a contiguous portion of memory • There is no external fragmentation • Internal fragmentation is confined to the last page of each process 80 Chanderprabhu Jain College of Higher Studies & School of Law
• 81. Page Tables • The OS now needs to maintain (in main memory) a page table for each process • Each entry of a page table consists of the frame number where the corresponding page is physically located • The page table is indexed by the page number to obtain the frame number • A free frame list, available for pages, is maintained 81 Chanderprabhu Jain College of Higher Studies & School of Law
  • 82. Logical address used in paging • Within each program, each logical address must consist of a page number and an offset within the page • A CPU register always holds the starting physical address of the page table of the currently running process • Presented with the logical address (page number, offset) the processor accesses the page table to obtain the physical address (frame number, offset) 82 Chanderprabhu Jain College of Higher Studies & School of Law
• 83. Logical address in paging • The logical address becomes a relative address when the page size is a power of 2 • Ex: if 16-bit addresses are used and page size = 1K, we need 10 bits for the offset and have 6 bits available for the page number • Then the 16-bit address obtained with the 10 least significant bits as offset and the 6 most significant bits as page number is a location relative to the beginning of the process 83 Chanderprabhu Jain College of Higher Studies & School of Law
  • 84. Logical address in paging • By using a page size of a power of 2, the pages are invisible to the programmer, compiler/assembler, and the linker • Address translation at run-time is then easy to implement in hardware – logical address (n,m) gets translated to physical address (k,m) by indexing the page table and appending the same offset m to the frame number k 84 Chanderprabhu Jain College of Higher Studies & School of Law
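The (n, m) → (k, m) translation can be sketched with a toy per-process page table (the page-to-frame mappings below are invented for illustration):

```python
PAGE_SIZE = 1024                         # 1 KB pages -> 10-bit offset

page_table = {0: 7, 1: 4, 2: 0, 3: 3}    # hypothetical page -> frame mapping

def translate(logical_addr):
    """Index the page table with n and append the same offset m to frame k."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]             # KeyError models an invalid page
    return frame * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 100))    # page 2 -> frame 0, physical address 100
```

With a power-of-two page size, the divmod is just a bit slice, which is why the hardware implementation is cheap.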
  • 85. Logical-to-Physical Address Translation in Paging 85 Chanderprabhu Jain College of Higher Studies & School of Law
  • 86. Logical-to-Physical Address Translation in segmentation 86 Chanderprabhu Jain College of Higher Studies & School of Law
• 87. Virtual memory • Consider a typical, large application: – There are many components that are mutually exclusive. Example: a particular function is selected depending on user choice. – Error routines and exception handlers are very rarely used. – Most programs exhibit a slowly changing locality of reference. There are two types of locality: spatial and temporal. 87 Chanderprabhu Jain College of Higher Studies & School of Law
• 88. Characteristics of Paging and Segmentation • Memory references are dynamically translated into physical addresses at run time – a process may be swapped in and out of main memory such that it occupies different regions • A process may be broken up into pieces (pages or segments) that do not need to be located contiguously in main memory • Hence: all pieces of a process do not need to be loaded in main memory during execution – computation may proceed for some time if the next instruction to be fetched (or the next data to be accessed) is in a piece located in main memory 88 Chanderprabhu Jain College of Higher Studies & School of Law
  • 89. Process Execution • The OS brings into main memory only a few pieces of the program (including its starting point) • Each page/segment table entry has a present bit that is set only if the corresponding piece is in main memory • The resident set is the portion of the process that is in main memory • An interrupt (memory fault) is generated when the memory reference is on a piece not present in main memory 89 Chanderprabhu Jain College of Higher Studies & School of Law
• 90. Process Execution (cont.) • OS places the process in the Blocked state • OS issues a disk I/O read request to bring the referenced piece into main memory • Another process is dispatched to run while the disk I/O takes place • An interrupt is issued when the disk I/O completes – this causes the OS to place the affected process in the Ready state 90 Chanderprabhu Jain College of Higher Studies & School of Law
  • 91. Advantages of Partial Loading • More processes can be maintained in main memory – only load in some of the pieces of each process – With more processes in main memory, it is more likely that a process will be in the Ready state at any given time • A process can now execute even if it is larger than the main memory size – it is even possible to use more bits for logical addresses than the bits needed for addressing the physical memory 91 Chanderprabhu Jain College of Higher Studies & School of Law
• 92. Virtual Memory: large as you wish! – Ex: 16 bits are needed to address a physical memory of 64KB – let’s use a page size of 1KB so that 10 bits are needed for offsets within a page – For the page number part of a logical address we may use a number of bits larger than 6, say 22 (a modest value!!) • The memory referenced by a logical address is called virtual memory – is maintained on secondary memory (ex: disk) – pieces are brought into main memory only when needed 92 Chanderprabhu Jain College of Higher Studies & School of Law
  • 93. Virtual Memory (cont.) – For better performance, the file system is often bypassed and virtual memory is stored in a special area of the disk called the swap space • larger blocks are used and file lookups and indirect allocation methods are not used • By contrast, physical memory is the memory referenced by a physical address – is located on DRAM • The translation from logical address to physical address is done by indexing the appropriate page/segment table with the help of memory management hardware 93 Chanderprabhu Jain College of Higher Studies & School of Law
• 94. Possibility of thrashing • To accommodate as many processes as possible, only a few pieces of each process are maintained in main memory • But main memory may be full: when the OS brings one piece in, it must swap one piece out • The OS must not swap out a piece of a process just before that piece is needed • If it does this too often, this leads to thrashing: – The processor spends most of its time swapping pieces rather than executing user instructions 94 Chanderprabhu Jain College of Higher Studies & School of Law
• 95. Locality • Temporal locality: Addresses that are referenced at some time Ts will be accessed in the near future (Ts + delta_time) with high probability. Example: execution in a loop. • Spatial locality: Items whose addresses are near one another tend to be referenced close together in time. Example: accessing array elements. • How can we exploit these characteristics of programs? Keep only the current locality in the main memory. Need not keep the entire program in the main memory. 95 Chanderprabhu Jain College of Higher Studies & School of Law
• 96. Locality and Virtual Memory • Principle of locality of references: memory references within a process tend to cluster • Hence: only a few pieces of a process will be needed over a short period of time • Possible to make intelligent guesses about which pieces will be needed in the future • This suggests that virtual memory may work efficiently (ie: thrashing should not occur too often) 96 Chanderprabhu Jain College of Higher Studies & School of Law
• 97. Space and Time (Figure: CPU cache, main memory, and secondary storage arranged along the space/time trade-off: each level away from the CPU is larger but slower) 97 Chanderprabhu Jain College of Higher Studies & School of Law
• 98. Demand paging • Main memory (physical address space) as well as user address space (virtual address space) are logically partitioned into equal chunks known as pages. Main memory pages (sometimes known as frames) and virtual memory pages are of the same size. • Virtual address (VA) is viewed as a pair (virtual page number, offset within the page). Example: Consider a virtual space of 16K, with 2K page size and an address 3045. What are the virtual page number and offset corresponding to this VA? 98 Chanderprabhu Jain College of Higher Studies & School of Law
• 99. Virtual Page Number and Offset • 3045 / 2048 = 1, so VP# = 1 • 3045 % 2048 = 3045 - 2048 = 997, so offset within page = 997 • Page size is always a power of 2. Why? 99 Chanderprabhu Jain College of Higher Studies & School of Law
• 100. Page Size Criteria • Consider the binary value of address 3045: 1011 1110 0101 • For a 16K address space the address will be 14 bits. Rewrite: 00 1011 1110 0101 • A 2K page size gives an offset range of 0–2047 (11 bits) • So Page# = 001 and offset within page = 011 1110 0101 100 Chanderprabhu Jain College of Higher Studies & School of Law
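The worked example above is easy to verify mechanically, and it also shows why power-of-two page sizes matter: the division and modulus collapse to a shift and a mask.

```python
PAGE_SIZE = 2048                   # 2 KB pages in a 16 KB virtual space
vpn, offset = divmod(3045, PAGE_SIZE)
print(vpn, offset)                 # 1 997, matching the slide

# Because 2048 = 2^11, the same split is just a bit slice:
assert (3045 >> 11, 3045 & 0x7FF) == (vpn, offset)
```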
• 101. Demand paging (contd.) • There is only one physical address space but as many virtual address spaces as the number of processes in the system. At any time physical memory may contain pages from many process address spaces. • Pages are brought into the main memory when needed and “rolled out” depending on a page replacement policy. • Consider an 8K main (physical) memory and three virtual address spaces of 2K, 3K and 4K each. Page size of 1K. The status of the memory mapping at some time is as shown. 101 Chanderprabhu Jain College of Higher Studies & School of Law
• 102. Demand Paging (contd.) (figure: an 8-frame physical address space (PAS) holding pages mapped from three logical address spaces LAS 0, LAS 1, LAS 2, including executable code space; LAS = Logical Address Space)
• 103. Issues in demand paging • How to keep track of which logical page goes where in main memory? More specifically, what are the data structures needed? – A page table, one per logical address space. • How to translate a logical address into a physical address, and when? – The address translation algorithm is applied every time a memory reference is needed. • How to avoid repeated translations? – After all, most programs exhibit good locality: “cache recent translations”
• 104. Issues in demand paging (contd.) • What if main memory is full and your process demands a new page? What is the policy for page replacement? LRU, MRU, FIFO, random? • Do we need to roll out every page that goes into main memory? No, only the ones that are modified. How to keep track of this info and such other memory management information? In the page table as special bits.
• 105. Support Needed for Virtual Memory • Memory management hardware must support paging and/or segmentation • OS must be able to manage the movement of pages and/or segments between secondary memory and main memory • We will first discuss the hardware aspects; then the algorithms used by the OS
• 106. Paging • Each page table entry contains a present bit to indicate whether the page is in main memory or not. – If it is in main memory, the entry contains the frame number of the corresponding page in main memory – If it is not in main memory, the entry may contain the address of that page on disk, or the page number may be used to index another table (often in the PCB) to obtain the address of that page on disk • Typically, each process has its own page table
• 107. Paging • A modified bit indicates if the page has been altered since it was last loaded into main memory – If no change has been made, the page does not have to be written to the disk when it needs to be swapped out • Other control bits may be present if protection is managed at the page level – a read-only/read-write bit – protection level bit: kernel page or user page (more bits are used when the processor supports more than 2 protection levels)
• 108. Page Table Structure • Page tables are variable in length (depending on process size) – they must therefore be kept in main memory instead of registers • A single register holds the starting physical address of the page table of the currently running process
• 109. Address Translation in a Paging System
• 110. Sharing Pages • If we share the same code among different users, it is sufficient to keep only one copy in main memory • Shared code must be reentrant (i.e., non-self-modifying) so that 2 or more processes can execute the same code • If we use paging, each sharing process will have a page table whose entries point to the same frames: only one copy is in main memory • But each user needs to have its own private data pages
• 111. Sharing Pages: a text editor
• 112. Translation Lookaside Buffer • Because the page table is in main memory, each virtual memory reference causes at least two physical memory accesses – one to fetch the page table entry – one to fetch the data • To overcome this problem a special cache is set up for page table entries – called the TLB (Translation Lookaside Buffer) • Contains the page table entries that have been most recently used • Works similarly to the main memory cache
• 113. Translation Lookaside Buffer • Given a logical address, the processor examines the TLB • If the page table entry is present (a hit), the frame number is retrieved and the real (physical) address is formed • If the page table entry is not found in the TLB (a miss), the page number is used to index the process page table – if the present bit is set then the corresponding frame is accessed – if not, a page fault is issued to bring the referenced page into main memory • The TLB is updated to include the new page entry
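The lookup sequence above can be sketched as a small simulation. The TLB capacity, page table contents, and eviction policy here are illustrative assumptions, not any particular hardware:

```python
class TLB:
    """Toy TLB caching recent page->frame translations (illustrative sizes)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}              # page number -> frame number
        self.hits = self.misses = 0

    def translate(self, page, page_table):
        if page in self.entries:       # TLB hit: frame number retrieved directly
            self.hits += 1
            return self.entries[page]
        self.misses += 1               # TLB miss: walk the process page table
        frame = page_table[page]       # (page-fault handling omitted for brevity)
        if len(self.entries) >= self.capacity:
            # evict the oldest-inserted entry (dicts preserve insertion order)
            self.entries.pop(next(iter(self.entries)))
        self.entries[page] = frame     # update the TLB with the new entry
        return frame

page_table = {0: 5, 1: 2, 2: 7}        # illustrative page -> frame mapping
tlb = TLB()
for p in [0, 1, 0, 0, 2, 1]:
    tlb.translate(p, page_table)
print(tlb.hits, tlb.misses)            # 3 3 — locality makes half the lookups hits
```

Note how the repeated references to pages 0 and 1 hit in the TLB: this is the locality principle from slide 96 paying off.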
• 114. Use of a Translation Lookaside Buffer
• 115. TLB: further comments • The TLB uses associative mapping hardware to interrogate all TLB entries simultaneously for a match on the page number • The TLB must be flushed each time a new process enters the Running state • The CPU uses two levels of caching on each virtual memory reference – first the TLB, to convert the logical address to the physical address – once the physical address is formed, the CPU then looks in the cache for the referenced word
• 116. Page Tables and Virtual Memory • Most computer systems support a very large virtual address space – 32 to 64 bits are used for logical addresses – If (only) 32 bits are used with 4KB pages, a page table may have 2^{20} entries • The entire page table may take up too much main memory. Hence, page tables are often also stored in virtual memory and subjected to paging – When a process is running, part of its page table must be in main memory (including the page table entry of the currently executing page)
• 117. Inverted Page Table • Another solution (PowerPC, IBM RISC System/6000) to the problem of maintaining large page tables is to use an Inverted Page Table (IPT) • We generally have only one IPT for the whole system • There is only one IPT entry per physical frame (rather than one per virtual page) – this greatly reduces the amount of memory needed for page tables • The 1st entry of the IPT is for frame #1 ... the nth entry of the IPT is for frame #n, and each of these entries contains the virtual page number • Thus this table is inverted
• 118. Inverted Page Table • The process ID together with the virtual page number can be used to search the IPT to obtain the frame # • For better performance, hashing is used to obtain a hash table entry which points to an IPT entry – A page fault occurs if no match is found – chaining is used to manage hashing overflow (in the figure, d = offset within the page)
• 119. The Page Size Issue • Page size is defined by hardware; always a power of 2 for more efficient logical-to-physical address translation. But exactly which size to use is a difficult question: – Large page size is good: a small page size means more pages per process, and more pages per process means larger page tables, hence a large portion of page tables in virtual memory – Small page size is good to minimize internal fragmentation – Large page size is good since disks are designed to efficiently transfer large blocks of data – Larger page sizes mean fewer pages in main memory; this increases the TLB hit ratio
• 120. The Page Size Issue • With a very small page size, each page matches only the code that is actually used: faults are low • As page size increases, each page contains more code that is not used: page faults rise • Page faults decrease again as we approach the point P where the size of a page equals the size of the entire process
• 121. The Page Size Issue • Page fault rate is also determined by the number of frames allocated per process • Page faults drop to a reasonable value when W frames are allocated • They drop to 0 when the number N of frames is such that the process is entirely in memory
• 122. The Page Size Issue • Page sizes from 1KB to 4KB are most commonly used • But the issue is non-trivial, hence some processors now support multiple page sizes. Ex: – Pentium supports 2 sizes: 4KB or 4MB – R4000 supports 7 sizes: 4KB to 16MB
• 123. Operating System Software • Memory management software depends on whether the hardware supports paging or segmentation or both • Pure segmentation systems are rare. Segments are usually paged -- memory management issues are then those of paging • We shall thus concentrate on issues associated with paging • To achieve good performance we need a low page fault rate
• 124. The LRU Policy • Replaces the page that has not been referenced for the longest time – By the principle of locality, this should be the page least likely to be referenced in the near future – performs nearly as well as the optimal policy • Example: A process of 5 pages with an OS that fixes the resident set size to 3
• 125. Implementation of the LRU Policy • Each page could be tagged (in the page table entry) with the time of its last reference, updated at each memory reference • The LRU page is the one with the smallest time value (which must be searched for at each page fault) • This would require expensive hardware and a great deal of overhead • Consequently very few computer systems provide sufficient hardware support for a true LRU replacement policy • Other algorithms are used instead
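Since true LRU hardware is rare, the policy is often studied in software simulation. A sketch using an ordered dictionary to track recency (the reference string below is illustrative):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under true LRU for a reference string."""
    memory = OrderedDict()                 # key order == recency order
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)       # hit: mark page most recently used
        else:
            faults += 1                    # fault: page must be brought in
            if len(memory) == frames:
                memory.popitem(last=False) # evict the least recently used page
            memory[page] = True
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(lru_faults(refs, 3))                 # 7 faults with a 3-frame resident set
```

Counting the initial fills as faults, this string produces 7 faults with 3 frames.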
• 126. The FIFO Policy • Treats the page frames allocated to a process as a circular buffer – When the buffer is full, the oldest page is replaced. Hence: first-in, first-out • This is not necessarily the same as the LRU page • A frequently used page is often among the oldest, so FIFO may repeatedly page out pages that are still in active use – Simple to implement • requires only a pointer that circles through the page frames of the process
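A matching sketch of FIFO over the same illustrative reference string shows the weakness named above: FIFO can evict a page that is still in active use and so fault more often than LRU:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO: evict the page resident the longest."""
    memory = deque()       # arrival order of resident pages
    resident = set()
    faults = 0
    for page in refs:
        if page in resident:
            continue                           # hit: FIFO order is unchanged
        faults += 1
        if len(memory) == frames:
            resident.discard(memory.popleft()) # evict the oldest resident page
        memory.append(page)
        resident.add(page)
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(fifo_faults(refs, 3))                    # 9 faults, vs 7 for LRU
```

The heavily used page 2 keeps reaching the front of the FIFO queue and being evicted, exactly the pathology the slide describes.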
• 127. UNIT- 2
  • 128. Process Concept • Process is a program in execution; forms the basis of all computation; process execution must progress in sequential fashion. • Program is a passive entity stored on disk (executable file), Process is an active entity; A program becomes a process when executable file is loaded into memory. • Execution of program is started via CLI entry of its name, GUI mouse clicks, etc. • A process is an instance of a running program; it can be assigned to, and executed on, a processor. • Related terms for Process: Job, Step, Load Module, Task, Thread.
• 129. Process Parts • A process includes three segments/sections: 1. Program: code/text. 2. Data: global variables and heap • Heap contains memory dynamically allocated during run time. 3. Stack: temporary data • Procedure/Function parameters, return addresses, local variables. • Current activity of a program includes its Context: program counter, state, processor registers, etc. • One program can be several processes: – Multiple users executing the same sequential program. – A concurrent program running as several processes.
  • 132. Process Attributes • Process ID • Parent process ID • User ID • Process state/priority • Program counter • CPU registers • Memory management information • I/O status information • Access Control • Accounting information
  • 134. Process States • Let us start with three states: 1) Running state – • the process that gets executed (single CPU); its instructions are being executed. 2) Ready state – • any process that is ready to be executed; the process is waiting to be assigned to a processor. 3) Waiting/Blocked state – • when a process cannot execute until its I/O completes or some other event occurs.
• 135. A Three-state Process Model (figure: Ready → Running on Dispatch; Running → Ready on Time-out; Running → Waiting on Event Wait; Waiting → Ready when the Event Occurs)
• 137. PROCESSES PROCESS STATE • New The process is just being put together. • Running Instructions being executed. This running process holds the CPU. • Waiting For an event (hardware, human, or another process). • Ready The process has all needed resources - waiting for CPU only. • Suspended Another process has explicitly told this process to sleep. It will be awakened when a process explicitly awakens it. • Terminated The process is being torn apart.
• 138. PROCESS CONTROL BLOCK: CONTAINS INFORMATION ASSOCIATED WITH EACH PROCESS. It's a data structure holding: • PC, CPU registers, • memory management information, • accounting (time used, ID, ...) • I/O status (such as file resources), • scheduling data (relative priority, etc.) • Process State (so running, suspended, etc. is simply a field in the PCB). PROCESSES Process State
• 139. The act of scheduling a process means changing the active PCB pointed to by the CPU. This is also called a context switch. A context switch is essentially the same as a process switch – it means that the memory as seen by one process is changed to the memory seen by another process. See Figure on Next Page (4.3) SCHEDULING QUEUES: (Process is driven by events that are triggered by needs and availability) Ready queue = contains those processes that are ready to run. I/O queue (waiting state) = holds those processes waiting for I/O service. What do the queues look like? They can be implemented as singly or doubly linked lists. See Figure Several Pages from Now (4.4) PROCESSES Scheduling Components
• 142. LONG TERM SCHEDULER • Runs seldom (when a job comes into memory) • Controls the degree of multiprogramming • Tries to balance arrival and departure rates through an appropriate job mix. SHORT TERM SCHEDULER Contains three functions: • Code to remove a process from the processor at the end of its run. a) Process may go to the ready queue or to a wait state. • Code to put a process on the ready queue – a) Process must be ready to run. b) Process placed on queue based on priority. PROCESSES Scheduling Components
• 143. SHORT TERM SCHEDULER (cont.) • Code to take a process off the ready queue and run that process (also called the dispatcher). a) Always takes the first process on the queue (no intelligence required) b) Places the process on the processor. This code runs frequently and so should be as short as possible. MEDIUM TERM SCHEDULER • Mixture of CPU and memory resource management. • Swap jobs out/in to improve the mix and to get memory. • Controls change of priority. PROCESSES Scheduling Components
• 144. INTERRUPT HANDLER • In addition to doing device work, it also readies processes, moving them, for instance, from waiting to ready. How do all these scheduling components fit together? (figure: interrupt handler and short-term scheduler interacting with the long-term and medium-term schedulers) PROCESSES Scheduling Components
• 145. First-Come, First-Served (FCFS) Scheduling Process Burst Time: P1 24, P2 3, P3 3 • Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 (0–24), P2 (24–27), P3 (27–30) • Waiting time for P1 = 0; P2 = 24; P3 = 27 • Average waiting time: (0 + 24 + 27)/3 = 17
• 146. FCFS Scheduling (Cont.) Suppose that the processes arrive in the order P2, P3, P1 • The Gantt chart for the schedule is: P2 (0–3), P3 (3–6), P1 (6–30) • Waiting time for P1 = 6; P2 = 0; P3 = 3 • Average waiting time: (6 + 0 + 3)/3 = 3 • Much better than the previous case
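The two FCFS averages above can be checked mechanically. A minimal sketch, assuming all processes arrive at time 0 as on the slides:

```python
def fcfs_waits(bursts):
    """Waiting time of each process when served in list order, all arriving at 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until everything before it finishes
        clock += burst
    return waits

w1 = fcfs_waits([24, 3, 3])          # arrival order P1, P2, P3
w2 = fcfs_waits([3, 3, 24])          # arrival order P2, P3, P1
print(w1, sum(w1) / 3)               # [0, 24, 27] 17.0
print(w2, sum(w2) / 3)               # [0, 3, 6]   3.0
```

The tenfold difference between 17 and 3 is the convoy effect: one long burst at the front delays every short job behind it.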
  • 147. Shortest-Job-First (SJF) Scheduling • Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time. • SJF is optimal – gives minimum average waiting time for a given set of processes – The difficulty is knowing the length of the next CPU request.
• 148. Example of SJF Process Arrival Time Burst Time: P1 0.0 6, P2 2.0 8, P3 4.0 7, P4 5.0 3 • SJF scheduling chart: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24) • Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
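The slide's chart effectively treats all four jobs as available at time 0 (the listed arrival times play no role in the schedule shown). Under that assumption, the waiting times can be reproduced as follows:

```python
def sjf_waits(bursts):
    """Non-preemptive SJF with all jobs available at time 0.

    Returns the waiting time of each process, indexed as in the input list.
    """
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # waits until all shorter jobs have finished
        clock += bursts[i]
    return waits

waits = sjf_waits([6, 8, 7, 3])   # bursts of P1..P4
print(waits, sum(waits) / 4)      # [3, 16, 9, 0] 7.0, matching the slide
```

Running the shortest job first is what makes SJF provably optimal for average waiting time: every reordering moves a long job ahead of a short one and can only increase the total wait.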
• 149. Determining Length of Next CPU Burst • Can only estimate the length • Can be done by using the length of previous CPU bursts, with exponential averaging: 1. t_n = actual length of the nth CPU burst 2. τ_{n+1} = predicted value for the next CPU burst 3. α, 0 ≤ α ≤ 1 4. Define: τ_{n+1} = α·t_n + (1 − α)·τ_n
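The exponential-averaging formula can be applied iteratively; the burst lengths and the initial guess τ0 = 10 below are illustrative:

```python
def predict_bursts(actual, alpha=0.5, tau0=10):
    """Successive predictions tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n."""
    preds = [tau0]                     # tau_0: initial guess before any history
    for t in actual:
        preds.append(alpha * t + (1 - alpha) * preds[-1])
    return preds

# predictions after observing bursts of length 6, 4, 6, 4
print(predict_bursts([6, 4, 6, 4], alpha=0.5, tau0=10))
# [10, 8.0, 6.0, 6.0, 5.0]
```

With α = 1/2 each prediction is the average of the last observed burst and the previous prediction, so recent history dominates while older bursts decay geometrically.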
  • 150. Multilevel Queue • Ready queue is partitioned into separate queues: – foreground (interactive) – background (batch) • Each queue has its own scheduling algorithm: – foreground – RR – background – FCFS • Scheduling must be done between the queues: – Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility of starvation. – Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; i.e., 80% to foreground in RR – 20% to background in FCFS
  • 152. Multilevel Feedback Queue • A process can move between the various queues; aging can be implemented this way. • Multilevel-feedback-queue scheduler defined by the following parameters: – number of queues – scheduling algorithms for each queue – method used to determine when to upgrade a process – method used to determine when to demote a process – method used to determine which queue a process will enter when that process needs service
• 153. Deadlock • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes • Let S and Q be two semaphores initialized to 1: P0 executes wait(S); wait(Q); ... signal(S); signal(Q); while P1 executes wait(Q); wait(S); ... signal(Q); signal(S); • Priority Inversion – a scheduling problem that arises when a lower-priority process holds a lock needed by a higher-priority process
• 154. Starvation • Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended • Order-of-arrival retention: • Weak semaphores: the thread that will access the critical region next is selected randomly – starvation is possible • Strong semaphores: the thread that will access the critical region next is selected based on its arrival time, e.g. FIFO – starvation is not possible
  • 155. Classical Problems of Synchronization • Bounded-Buffer Problem • Readers and Writers Problem • Dining-Philosophers Problem
  • 156. Bounded-Buffer Problem • N buffers, each can hold one item • Semaphore mutex initialized to the value 1 • Semaphore full initialized to the value 0 • Semaphore empty initialized to the value N.
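With those three semaphores, a producer/consumer pair can be sketched as follows. Python's `threading` module is used purely for illustration (its `acquire`/`release` correspond to the slide's wait/signal), and the buffer size and item count are arbitrary:

```python
import threading
from collections import deque

N = 4                                  # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer
empty = threading.Semaphore(N)         # counts free slots, initialized to N
full = threading.Semaphore(0)          # counts filled slots, initialized to 0
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # wait(empty): block if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()                 # signal(full): one more item available

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait(full): block if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)                        # items arrive in order: [0, 1, ..., 9]
```

Note the acquisition order in each thread: taking `empty`/`full` before `mutex` is essential, since reversing it can deadlock exactly as described on slide 153.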
  • 157. Readers-Writers Problem • A data set is shared among a number of concurrent processes – Readers – only read the data set; they do not perform any updates – Writers – can both read and write • Problem – allow multiple readers to read at the same time. – Only one single writer can access the shared data at the same time
  • 158. Dining-Philosophers Problem • Shared data – Bowl of rice (data set) – Semaphore chopstick [5] initialized to 1
• 159. Problems with Semaphores • Incorrect use of semaphore operations: – signal (mutex) .... wait (mutex) – wait (mutex) ... wait (mutex) – omitting wait (mutex) or signal (mutex) (or both)
• 160. Monitors • A high-level abstraction that provides a convenient and effective mechanism for process synchronization • Only one process may be active within the monitor at a time monitor monitor-name { // shared variable declarations procedure P1 (…) { …. } … procedure Pn (…) { …… } Initialization code ( …. ) { … } }
  • 161. Condition Variables • condition x, y; • Two operations on a condition variable: – x.wait () – a process that invokes the operation is suspended. – x.signal () – resumes one of processes (if any) that invoked x.wait ()
• 162. Types of Storage Media • Volatile storage – information stored here does not survive system crashes – Example: main memory, cache • Nonvolatile storage – information usually survives crashes – Example: disk and tape • Stable storage – information never lost – not actually achievable in practice, so approximated via replication or RAID on devices with independent failure modes • The goal is to assure transaction atomicity where failures cause loss of information on volatile storage
  • 163. Log-Based Recovery • Record to stable storage information about all modifications by a transaction • Most common is write-ahead logging – Log on stable storage, each log record describes single transaction write operation, including • Transaction name • Data item name • Old value • New value – <Ti starts> written to log when transaction Ti starts
  • 164. Log-Based Recovery Algorithm • Using the log, system can handle any volatile memory errors – Undo(Ti) restores value of all data updated by Ti – Redo(Ti) sets values of all data in transaction Ti to new values • Undo(Ti) and redo(Ti) must be idempotent – Multiple executions must have the same result as one execution • If system fails, restore state of all updated data
• 165. Checkpoints • The log could become long, and recovery could take long • Checkpoints shorten the log and recovery time • Checkpoint scheme: 1. Output all log records currently in volatile storage to stable storage 2. Output all modified data from volatile to stable storage 3. Output a log record <checkpoint> to the log on stable storage
  • 166. Concurrent Transactions • Must be equivalent to serial execution – serializability • Could perform all transactions in critical section – Inefficient, too restrictive • Concurrency-control algorithms provide serializability
  • 167. Serializability • Consider two data items A and B • Consider Transactions T0 and T1 • Execute T0, T1 atomically • Execution sequence called schedule • Atomically executed transaction order called serial schedule • For N transactions, there are N! valid serial schedules
• 168. Nonserial Schedule • A nonserial schedule allows overlapped execution – the resulting execution is not necessarily incorrect • Consider schedule S, operations Oi, Oj – they conflict if they access the same data item, with at least one being a write • If Oi and Oj are consecutive operations of different transactions and Oi and Oj don't conflict, they can be swapped to produce an equivalent schedule
  • 169. Schedule 2: Concurrent Serializable Schedule
  • 170. Locking Protocol • Ensure serializability by associating lock with each data item – Follow locking protocol for access control • Locks – Shared • Ti has shared-mode lock (S) on item Q, • Ti can read Q but not write Q – Exclusive • Ti has exclusive-mode lock (X) on Q, • Ti can read and write Q
  • 171. Two-phase Locking Protocol • Generally ensures conflict serializability • Each transaction issues lock and unlock requests in two phases – Growing – obtaining locks – Shrinking – releasing locks • Does not prevent deadlock
  • 172. Timestamp-based Protocols • Select order among transactions in advance – timestamp-ordering • Transaction Ti associated with timestamp TS(Ti) before Ti starts – TS(Ti) < TS(Tj) if Ti entered system before Tj – TS can be generated from system clock or as logical counter incremented at each entry of transaction
• 173. Timestamp-based Protocol Implementation • Data item Q gets two timestamps – W-timestamp(Q) – largest timestamp of any transaction that executed write(Q) successfully – R-timestamp(Q) – largest timestamp of a successful read(Q) – updated whenever read(Q) or write(Q) is executed • The timestamp-ordering protocol assures that any conflicting read and write are executed in timestamp order • Suppose Ti executes read(Q) – if TS(Ti) < W-timestamp(Q), Ti would read an already-overwritten value, so the read is rejected and Ti rolled back; otherwise the read executes and R-timestamp(Q) is updated
• 174. Timestamp-ordering Protocol • Suppose Ti executes write(Q) – If TS(Ti) < R-timestamp(Q), the value of Q produced by Ti was needed previously and Ti assumed it would never be produced • Write operation rejected, Ti rolled back – If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q • Write operation rejected and Ti rolled back – Otherwise, the write is executed • Any rolled back transaction Ti is assigned a new timestamp and restarted
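The read and write rules of the two slides above condense into two small checks. A sketch (the keys 'R' and 'W' stand for R-timestamp(Q) and W-timestamp(Q); the timestamp values are illustrative):

```python
def ts_read(ti, Q):
    """read(Q) by transaction with timestamp ti; False means roll back."""
    if ti < Q['W']:
        return False              # Ti would read an already-overwritten value
    Q['R'] = max(Q['R'], ti)      # record the largest successful reader
    return True

def ts_write(ti, Q):
    """write(Q) by transaction with timestamp ti; False means roll back."""
    if ti < Q['R'] or ti < Q['W']:
        return False              # obsolete write: rejected, Ti rolled back
    Q['W'] = ti
    return True

Q = {'R': 0, 'W': 0}
print(ts_read(5, Q))              # True: transaction 5 reads Q
print(ts_write(3, Q))             # False: older transaction 3 writes too late
print(ts_write(7, Q))             # True: transaction 7 may write
```

The rejected write by transaction 3 is exactly the first rule on the slide: a younger transaction (timestamp 5) already read Q, so the older write must be rolled back.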
• 175. UNIT- 3
• 176. Deadlock • Examples: Traffic jam • Dining Philosophers • Device allocation – process 1 requests tape drive 1 & gets it – process 2 requests tape drive 2 & gets it – process 1 requests tape drive 2 but is blocked – process 2 requests tape drive 1 but is blocked • Semaphores: process 1 does P(s); P(t) while process 2 does P(t); P(s)
• 177. • I/O spooling disc – disc full of spooled input – no room for subsequent output • Over-allocation of pages in a virtual memory OS – each process has an allocation of notional pages it must work within – process acquires pages one by one – normally does not use its full allocation – kernel over-allocates the total number of notional pages • more efficient use of memory • like airlines overbooking seats – deadlock may arise • all processes by mischance approach use of their full allocation • kernel cannot provide the last pages it promised • partial deadlock also possible - some processes blocked – recovery?
• 178. • Resource: – used by a single process at a single point in time – any one of the same type can be allocated • Pre-emptible: – can be taken away from a process without ill effect – no deadlocks with pre-emptible resources • Non-pre-emptible: – cannot be taken away without problems – most resources are like this
• 179. Definition: A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause. Necessary conditions for deadlock: • Mutual Exclusion: each resource is either currently assigned to one process or is available to be assigned • Hold and wait: processes currently holding resources granted earlier can request new resources • No preemption: resources previously granted cannot be forcibly taken away from a process • Circular wait: there is a circular chain of processes, each waiting for a resource held by the next
• 180. Resource Allocation Modelling using Graphs (figure: nodes represent resources and processes; arcs represent "resource requested" and "resource allocated")
• 183. • For multiple resources of the same type: • Deadlock: • A cycle is not sufficient to imply a deadlock:
• 184. Possible Strategies: • Ignore - the Ostrich or Head-in-the-Sand algorithm – try to reduce the chance of deadlock as far as is reasonable – accept that deadlocks will occur occasionally • example: kernel table sizes - max number of pages, open files etc.
• 185. • Deadlock Prevention – negate one of the necessary conditions • negating Mutual Exclusion: – example: shared use of a printer • give exclusive use of the printer to each user in turn wanting to print? • deadlock possible if exclusive access to another resource is also allowed • better to have a spooler process to which print jobs are sent – the complete output file must be generated first – example: file system actions • give a process exclusive access rights to a file directory • example: moving a file from one directory to another – possible deadlock if exclusive access to two directories is allowed simultaneously – should write code so as only to need access to one directory at a time – solution? • make resources concurrently sharable wherever possible, e.g. read-only access • most resources are inherently not sharable!
• 186. Resource Trajectory Graph
• 188. • negating Hold and Wait – a process could request all the resources it will ever need at once
• 189. • negating Circular Wait – require that a process can only acquire one resource at a time • example: moving a file from one directory to another – or require processes to acquire resources in a fixed numerical order – example ordering: 1: tape drive, 2: disc drive, 3: printer, 4: plotter
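Acquiring resources in ascending numerical order, as in the slide's example ordering, can be sketched like this (the numbering is the slide's; the helper function and use of Python locks are illustrative):

```python
import threading

# resource numbering from the slide: 1 tape drive, 2 disc drive, 3 printer, 4 plotter
resources = {n: threading.Lock() for n in (1, 2, 3, 4)}

def acquire_in_order(*numbers):
    """Lock the requested resources in ascending number order.

    Every process locking in this global order means no cycle of
    waits can ever form, so circular wait is impossible.
    """
    for n in sorted(numbers):
        resources[n].acquire()

acquire_in_order(3, 1)   # asks for printer and tape; tape (1) is locked first
print(resources[1].locked(), resources[3].locked(), resources[2].locked())
# True True False
```

Two processes that both need the tape drive and the printer will now both try the tape drive first, so one simply waits there instead of each holding one resource and wanting the other.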
• 190. • Deadlock Avoidance – deadlock possible but avoided by careful allocation of resources – avoid entering unsafe states – a state is safe if it is not deadlocked and there is a way to satisfy all requests currently pending by running the processes in some order – need to know all future requests of processes
• 191. • Example: can processes run to completion in some order? – with 10 units of resource to allocate: – if A runs first and acquires a further unit:
• 192. • avoidance using resource allocation graphs - for single-instance resources – add an extra type of arc - the claim arc - to indicate future requests – when the future request is actually made, convert this to an allocation arc
• 193. Banker’s Algorithm (Dijkstra) • Single resource – at each request, consider whether granting it will lead to an unsafe state - if so, deny – is the state after the notional grant still safe?
• 194. • Multiple resources – m types of resource, n processes
• 195. • Look for a row in R with R ≤ A, i.e. a process whose outstanding requests can all be met from the available resources • If no such row exists, the state is unsafe • Otherwise assume that process runs to completion and returns everything it holds: add its row of C to A, remove the process, and repeat
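The safety check just described (find a satisfiable row of R, release that process's current allocation C back into A, repeat) can be sketched as follows; the matrices are illustrative:

```python
def is_safe(C, R, A):
    """Banker's safety check for multiple resource types.

    C[i] = resources currently allocated to process i,
    R[i] = resources process i still requests,
    A    = vector of currently available resources.
    """
    A = list(A)                      # work on a copy of the available vector
    unfinished = set(range(len(C)))
    while unfinished:
        # look for a row in R whose every entry fits within A
        runnable = next((i for i in unfinished
                         if all(r <= a for r, a in zip(R[i], A))), None)
        if runnable is None:
            return False             # no such row: the state is unsafe
        # assume it runs to completion and returns everything it holds
        A = [a + c for a, c in zip(A, C[runnable])]
        unfinished.remove(runnable)
    return True                      # every process could finish in some order

C = [[3, 0, 1, 1], [0, 1, 0, 0], [1, 1, 1, 0]]   # current allocations
R = [[1, 1, 0, 0], [0, 1, 1, 2], [3, 1, 0, 0]]   # outstanding requests
print(is_safe(C, R, [1, 2, 2, 2]))               # True: a safe ordering exists
```

Each request would be granted only notionally first: if `is_safe` fails on the resulting state, the banker denies (delays) the request.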
• 196. Drawbacks of Banker’s Algorithm – processes rarely know in advance how many resources they will need – the number of processes changes as time progresses – resources once available can disappear – the algorithm assumes processes will return their resources within a reasonable time – processes may only get their resources after an arbitrarily long delay
• 197. • Detection and Recovery – let deadlock occur, then detect and recover somehow • Methods of Detection - single resources – search for loops in the resource allocation graph
• 198. Chapter Seven: Device Management • System Devices • Sequential Access Storage Media • Direct Access Storage Devices • Components of I/O Subsystem • Communication Among Devices • Management of I/O Requests • Storage media covered: Paper Storage Media, Magnetic Tape Storage, Magnetic Disk Storage, Optical Disc Storage
• 199. Device Management Functions • Track status of each device (such as tape drives, disk drives, printers, plotters, and terminals). • Use preset policies to determine which process will get a device and for how long. • Allocate the devices. • Deallocate the devices at 2 levels: – At process level when I/O command has been executed & device is temporarily released – At job level when job is finished & device is permanently released.
  • 200. System Devices • Differences among system’s peripheral devices are a function of characteristics of devices, and how well they’re managed by the Device Manager. • Most important differences among devices – Speeds – Degree of sharability. • By minimizing variances among devices, a …
  • 201. Dedicated Devices • Assigned to only one job at a time and serve that job for the entire time it’s active. – E.g., tape drives, printers, and plotters demand this kind of allocation scheme because it would be awkward to share them. • Disadvantage – the device must be allocated to a single user for the duration of a job’s execution. – Can be quite inefficient, especially when the device isn’t used 100% of the time.
  • 202. Shared Devices • Assigned to several processes. – E.g., disk pack (or other direct access storage device) can be shared by several processes at same time by interleaving their requests. • Interleaving must be carefully controlled by Device Manager. • All conflicts must be resolved based on predetermined policies to decide which request will be handled first.
  • 203. Virtual Devices • Combination of dedicated devices that have been transformed into shared devices. – E.g., printers are converted into sharable devices through a spooling program that reroutes all print requests to a disk. – Output is sent to the printer for printing only when all of a job’s output is complete and the printer is ready to print the entire document. – Because disks are sharable devices, this technique can convert one printer into several “virtual” printers, thus improving both its performance and use.
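The spooling idea above can be sketched as a queue that sits between the submitters and the one real printer. This is a toy illustration: an in-memory list stands in for the spool files on disk, and all class and job names are invented.

```python
# Minimal sketch of print spooling: whole jobs are queued ("spooled")
# and the single real printer drains the queue one job at a time.
from collections import deque

class Spooler:
    def __init__(self):
        self.queue = deque()              # completed jobs waiting for the printer

    def submit(self, job_name, pages):
        # the whole job is spooled before the printer sees any of it
        self.queue.append((job_name, pages))

    def drain(self):
        printed = []
        while self.queue:                 # the printer services one job at a time
            job_name, pages = self.queue.popleft()
            printed.append(f"{job_name}: {len(pages)} page(s)")
        return printed
```

Several processes can call submit concurrently, each seeing what looks like its own "virtual printer", while the physical printer only ever handles one complete job at a time.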
  • 204. Sequential Access Storage Media • Magnetic tape was used for secondary storage on early computer systems; now used for routine archiving & storing back-up data. • Records on magnetic tapes are stored serially, one after the other. • Each record can be of any length. – Length is usually determined by the application program. • Each record can be identified by its position on the tape.
  • 205. Magnetic Tapes • Data is recorded on 8 parallel tracks that run the length of the tape. • A ninth track holds a parity bit used for routine error checking. • The number of characters that can be recorded per inch is determined by the density of the tape (e.g., 1600 or 6250 bpi). [Diagram: nine-track tape with eight character tracks plus one parity track]
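The densities quoted above translate directly into reel capacity. A quick back-of-the-envelope calculation, assuming a 2,400-foot reel (a common length, not stated in the slides) and ignoring inter-record gaps:

```python
# Rough reel capacity at the recording densities mentioned above.
# The 2,400-foot reel length is an assumption for illustration.
TAPE_FEET = 2400

for bpi in (1600, 6250):
    chars = TAPE_FEET * 12 * bpi          # 12 inches per foot, one character per "bit" position
    print(f"{bpi} bpi -> {chars:,} characters (~{chars / 1e6:.0f} MB)")
```

At 1600 bpi the reel holds about 46 million characters; at 6250 bpi, about 180 million. In practice inter-record gaps reduce these figures substantially.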
  • 206. First Come First Served (FCFS) Device Scheduling Algorithm • Simplest device-scheduling algorithm. • Easy to program and essentially fair to users. • On average, it doesn’t meet any of the three goals of a seek strategy. • Remember, seek time is the most time-consuming of the functions performed here, so any algorithm that can minimize it is preferable to FCFS.
  • 207. Shortest Seek Time First (SSTF) Device Scheduling Algorithm • Uses the same underlying philosophy as shortest job next, where short jobs are processed first and longer jobs wait. • The request for the track closest to the one being served (that is, the one with the shortest distance to travel) is the next to be satisfied. • Minimizes overall seek time.
  • 208. SCAN Device Scheduling Algorithm • SCAN uses a directional bit to indicate whether the arm is moving toward the center of the disk or away from it. • Algorithm moves arm methodically from outer to inner track servicing every request in its path. • When it reaches innermost track it reverses direction and moves toward outer tracks, again servicing every request in its path.
  • 209. LOOK (Elevator Algorithm) : A Variation of SCAN • Arm doesn’t necessarily go all the way to either edge unless there are requests there. • “Looks” ahead for a request before going to service it. • Eliminates the possibility of indefinite postponement of requests in out-of-the-way places, at either edge of the disk. • As requests arrive, each is incorporated in its proper place in the queue and serviced when the arm reaches that track.
  • 210. Other Variations of SCAN • N-Step SCAN – holds all new requests until the arm starts on its way back; new requests are grouped together for the next sweep. • C-SCAN (Circular SCAN) – the arm picks up requests on its path only during the inward sweep. – When the innermost track has been reached, it returns to the outermost track and starts servicing requests that arrived during the last inward sweep. – Provides a more uniform wait time. • C-LOOK (optimization of C-SCAN) – the inward sweep stops at the last high-numbered track request, so the arm doesn’t move all the way to the last track unless required to do so. – The arm doesn’t necessarily return to the lowest-numbered track; it returns only to the lowest-numbered track that’s requested.
  • 211. Which Device Scheduling Algorithm? • FCFS works well with light loads, but as soon as load grows, service time becomes unacceptably long. • SSTF is quite popular and intuitively appealing. It works well with moderate loads but has problem of localization under heavy loads. • SCAN works well with light to moderate loads and eliminates problem of indefinite postponement. SCAN is similar to SSTF in throughput and mean service times.
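The trade-offs above are easy to see numerically by computing total head movement for the same request queue under each policy. The sketch below is illustrative only; the starting head position and track numbers are invented for the example.

```python
# Total head movement (in tracks) for three of the policies discussed above.

def fcfs(start, requests):
    total, pos = 0, start
    for track in requests:                 # serve strictly in arrival order
        total += abs(track - pos)
        pos = track
    return total

def sstf(start, requests):
    total, pos, pending = 0, start, list(requests)
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))   # closest track next
        total += abs(track - pos)
        pos = track
        pending.remove(track)
    return total

def look(start, requests):
    # elevator: sweep upward to the last requested track, then reverse
    ahead  = sorted(t for t in requests if t >= start)
    behind = sorted((t for t in requests if t < start), reverse=True)
    total, pos = 0, start
    for track in ahead + behind:
        total += abs(track - pos)
        pos = track
    return total
```

With the head at track 53 and the queue [98, 183, 37, 122, 14, 124, 65, 67], FCFS moves the head 640 tracks in total, SSTF only 236, and LOOK 299, illustrating why FCFS degrades under load while SSTF and the SCAN family stay competitive.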
  • 212. Search Strategies: Rotational Ordering • Rotational ordering optimizes search times by ordering requests once the read/write heads have been positioned. – Nothing can be done to improve the time spent moving the read/write head because it’s dependent on the hardware. • The amount of time wasted due to rotational delay, however, can be reduced. – If requests are ordered within each track so that the first sector requested on the second track is the next number higher than the one just served, rotational delay is minimized.
  • 213. Redundant Array of Inexpensive Disks (RAID) • RAID is a set of physical disk drives that is viewed as a single logical unit by the OS. • RAID assumes that several smaller-capacity disk drives are preferable to a few large-capacity disk drives because, by distributing data among several smaller disks, the system can simultaneously access requested data from multiple drives. • The system shows improved I/O performance and improved data recovery in the event of disk failure.
  • 214. RAID -2 • RAID introduces much-needed concept of redundancy to help systems recover from hardware failure. • Also requires more disk drives which increase hardware costs.
  • 215. Chanderprabhu Jain College of Higher Studies & School of Law Plot No. OCF, Sector A-8, Narela, New Delhi – 110040 (Affiliated to Guru Gobind Singh Indraprastha University and Approved by Govt of NCT of Delhi & Bar Council of India) Semester: Fifth Semester Name of the Subject: Operating System UNIT- 4
  • 216. FILE MANAGEMENT
  • 217. Operating Systems: Internals and Design Principles If there is one singular characteristic that makes squirrels unique among small mammals it is their natural instinct to hoard food. Squirrels have developed sophisticated capabilities in their hoarding. Different types of food are stored in different ways to maintain quality. Mushrooms, for instance, are usually dried before storing. This is done by impaling them on branches or leaving them in the forks of trees for later retrieval. Pine cones, on the other hand, are often harvested while green and cached in damp conditions that keep seeds from ripening. Gray squirrels usually strip outer husks from walnuts before storing. — SQUIRRELS: A WILDLIFE HANDBOOK, Kim Long
  • 218. Files • Data collections created by users • The file system is one of the most important parts of the OS to a user • Desirable properties of files: – Long-term existence: files are stored on disk or other secondary storage and do not disappear when a user logs off – Sharable between processes: files have names and can have associated access permissions that permit controlled sharing – Structure: files can be organized into hierarchical or more complex structures to reflect the relationships among files
  • 219. File Systems • Provide a means to store data organized as files as well as a collection of functions that can be performed on files • Maintain a set of attributes associated with the file • Typical operations include: – Create – Delete – Open
  • 220. File Structure • Four terms are commonly used when discussing files: – Field – Record – File – Database
  • 221. File Structure • Files can be structured as a collection of records or as a sequence of bytes • UNIX, Linux, Windows, Mac OS’s consider files as a sequence of bytes • Other OS’s, notably many IBM mainframes, adopt the collection-of-records approach; useful for DB • COBOL supports the collection-of-records file and can implement it even on systems that don’t provide such files natively.
  • 222. Structure Terms • Field – basic element of data – contains a single value – fixed or variable length • Record – collection of related fields that can be treated as a unit by some application program – one field may serve as the key • File – collection of similar records – treated as a single entity – may be referenced by name – access control restrictions usually apply at the file level • Database – collection of related data – relationships among elements of data are explicit – designed for use by a number of different applications – consists of one or more types of files
  • 223. File System Architecture • Notice that the top layer consists of a number of different file formats: pile, sequential, indexed sequential, … • These file formats are consistent with the collection-of-records approach to files and determine how file data is accessed • Even in a byte-stream oriented file system it’s possible to build files with record-based structures but it’s up to the application to design the files and build in access methods, indexes, etc. • Operating systems that include a variety of file formats provide access methods and other support automatically.
  • 224. Layered File System Architecture • File Formats – Access methods provide the interface to users • Logical I/O • Basic I/O • Basic file system • Device drivers
  • 225. Device Drivers • Lowest level • Communicates directly with peripheral devices • Responsible for starting I/O operations on a device • Processes the completion of an I/O request • Considered to be part of the operating system
  • 226. Basic File System • Also referred to as the physical I/O level • Primary interface with the environment outside the computer system • Deals with blocks of data that are exchanged with disk or other mass storage devices. – placement of blocks on the secondary storage device – buffering blocks in main memory • Considered part of the operating system
  • 227. Basic I/O Supervisor • Responsible for all file I/O initiation and termination • Control structures that deal with device I/O, scheduling, and file status are maintained • Selects the device on which I/O is to be performed • Concerned with scheduling disk and tape accesses to optimize performance
  • 228. Logical I/O • Enables users and applications to access records • Provides a general-purpose record I/O capability • Maintains basic data about files
  • 229. Logical I/O • This level is the interface between the logical commands issued by a program and the physical details required by the disk: it maps logical units of data onto the physical blocks of data that match disk requirements.
  • 230. Access Method • Level of the file system closest to the user • Provides a standard interface between applications and the file systems and devices that hold the data • Different access methods reflect different file structures and different ways of accessing and processing the data
  • 231. Elements of File Management
  • 232. File Organization and Access • File organization is the logical structuring of the records as determined by the way in which they are accessed • In choosing a file organization, several criteria are important: – short access time – ease of update – economy of storage
  • 233. File Organization Types • Five of the common file organizations are: – The pile – The sequential file – The indexed sequential file – The indexed file – The direct, or hashed, file
  • 234. The Pile • Least complicated form of file organization • Data are collected in the order they arrive • Each record …
  • 235. The Sequential File • Most common form of file structure • A fixed format is used for records • Key field uniquely identifies the record & determines storage order • Typically used in batch applications • Only organization that is easily stored on tape as well as disk
  • 236. Indexed Sequential File • Adds an index to the file to support random access • Adds an overflow file • Greatly …
  • 237. Indexed File • Records are accessed only through their indexes • Variable-length records can be employed • Exhaustive index contains one entry for every record in the main file • Partial index contains entries to records where the field of interest exists • Used mostly in applications where timeliness of information is critical • Examples would be airline reservation systems and inventory control systems
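The difference between an exhaustive and a partial index can be shown in a few lines. This is a toy sketch: the record list stands in for the main file, positions stand in for record addresses, and all field names and values are invented.

```python
# Exhaustive vs. partial index over a small "main file" of
# variable-length records (some records lack the "rush" field).

records = [
    {"id": 1, "name": "ORD-17", "rush": True},
    {"id": 2, "name": "ORD-18"},               # no "rush" field
    {"id": 3, "name": "ORD-19", "rush": False},
]

# exhaustive index: one entry for every record in the main file
exhaustive = {rec["id"]: pos for pos, rec in enumerate(records)}

# partial index: entries only for records where the field of interest exists
partial = {rec["id"]: pos for pos, rec in enumerate(records) if "rush" in rec}
```

A lookup goes through the index to a record position, never by scanning the file; the partial index is smaller but can only answer queries about records that carry the indexed field.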
  • 238. Direct or Hashed File • Access directly any block of a known address • Makes use of hashing on the key value • Often used where: – very rapid access is required – fixed-length records are used – records are always accessed one at a time Examples are: • directories • pricing tables • schedules • name lists
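Hashing on the key value, as described above, can be sketched as follows. This is a minimal illustration, assuming a fixed number of fixed-length blocks and linear probing to resolve collisions; the block count and records are invented.

```python
# Direct (hashed) file sketch: the key hashes to a block address,
# with linear probing on collisions.

N_BLOCKS = 8
blocks = [None] * N_BLOCKS                # fixed number of fixed-length slots

def store(key, record):
    addr = hash(key) % N_BLOCKS
    while blocks[addr] is not None:       # collision: probe the next block
        addr = (addr + 1) % N_BLOCKS
    blocks[addr] = (key, record)
    return addr

def fetch(key):
    addr = hash(key) % N_BLOCKS
    for _ in range(N_BLOCKS):             # probe at most once around the table
        if blocks[addr] is not None and blocks[addr][0] == key:
            return blocks[addr][1]
        addr = (addr + 1) % N_BLOCKS
    return None
```

Access time is independent of file size: any record is reached in one hash plus at most a short probe sequence, which is why the scheme suits directories, pricing tables, and similar one-record-at-a-time lookups.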
  • 239. B-Trees • A balanced tree structure with all branches of equal length • Standard method of organizing indexes for databases • Commonly used in OS file systems • Provides for efficient searching, adding, and deleting of items