CHAPTER 2:
MEMORY AND PROCESS
MANAGEMENT
2.1 Explain memory management of operating system
2.2 Explain process management of operating system
2.3 Explain a Deadlock Situation in an operating system
CLO 1:
Explain the concept of operating system, memory,
and process and file management (C2, PLO1).
MEMORY MANAGEMENT
• Memory management is concerned
with managing:
– The computer’s available pool of
memory
– Allocating space to application routines
and making sure that they do not
interfere with each other
MEMORY MANAGER
FUNCTIONS
• Keeps track of which parts of memory
are in use and which parts are not
• Coordinates how the memory hierarchy is
used
MEMORY HIERARCHY
• Main memory
– Should store currently needed program instructions and
data only
• Secondary storage
– Stores data and programs that are not actively needed
• Cache memory
– Extremely high speed
– Usually located on processor itself
– Most-commonly-used data copied to cache for faster
access
– Small amount of cache still effective for boosting
performance
MEMORY HIERARCHY
FIGURE 1: Hierarchical memory organization.
MEMORY MANAGEMENT
STRATEGIES
• Strategies divided into several categories
– Fetch strategies
• Decides which piece of data to load next
– Placement strategies
• Decides where in main memory to place incoming data
– Replacement strategies
• Decides which data to remove from main memory to make more
space
RESIDENT ROUTINE
VS
TRANSIENT ROUTINE
RESIDENT ROUTINE
• A routine that stays in memory; this part of the
program must remain in memory at all times
• Its instructions and data remain in memory and
can be accessed instantly
• Example: the Windows operating system core
TRANSIENT ROUTINE
• A routine that is loaded only as needed
• Stored on disk and read into memory only
when needed
• Examples: database programs, web browsers,
drawing applications, paint applications, image
editing programs, etc.
MEMORY SWAPPING
TECHNIQUE
• Not necessary to keep inactive processes in
memory
– Swapping
• Only put currently running process in main memory
• Others temporarily moved to secondary storage
• Maximizes available memory
• Significant overhead when switching processes
– Better yet: keep several processes in memory at
once
• Less available memory
• Much faster response times
CONTIGUOUS
VS
NONCONTIGUOUS MEMORY ALLOCATION
• Ways of organizing programs in memory
– Contiguous allocation
• Program must exist as a single block of contiguous
addresses
• Sometimes it is impossible to find a large enough block
• Low overhead
– Noncontiguous allocation
• Program divided into chunks called segments
• Each segment can be placed in different part of memory
• Easier to find “holes” in which a segment will fit
• Increased number of processes that can exist simultaneously
in memory offsets the overhead incurred by this technique
MEMORY ALLOCATION
CONTIGUOUS
 FIXED-PARTITION
 DYNAMIC
NON-CONTIGUOUS
 PAGING
 SEGMENTATION
FIXED-PARTITIONS
• The simplest approach
– to managing memory for multiple concurrent processes.
• Divides the available space into fixed-length
partitions,
– each of which holds one process.
• When a partition is free,
– a process is selected from the input queue and is
loaded into the free partition
– Best-fit? First-fit? Worst-fit?
FIXED-PARTITIONS
• Partitions can be of equal size or
unequal size
• Any process whose size is less than or
equal to a partition size
– can be loaded into that partition
• If all partitions are occupied,
– the OS can swap a process out of a
partition
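The three placement questions above (best-fit, first-fit, worst-fit) can be sketched in Python. This is a minimal illustration, not from the slides: the `(size, free)` representation of partitions and all sizes are assumptions.

```python
# Sketch of first-fit, best-fit and worst-fit partition selection.
# 'partitions' is a list of (size, free) pairs; sizes are illustrative.

def first_fit(partitions, need):
    """Return the index of the first free partition large enough, else None."""
    for i, (size, free) in enumerate(partitions):
        if free and size >= need:
            return i
    return None

def best_fit(partitions, need):
    """Return the index of the smallest free partition that still fits."""
    candidates = [(size, i) for i, (size, free) in enumerate(partitions)
                  if free and size >= need]
    return min(candidates)[1] if candidates else None

def worst_fit(partitions, need):
    """Return the index of the largest free partition that fits."""
    candidates = [(size, i) for i, (size, free) in enumerate(partitions)
                  if free and size >= need]
    return max(candidates)[1] if candidates else None

parts = [(100, True), (500, True), (200, True), (300, True)]
print(first_fit(parts, 212))  # 1 (the 500-unit partition comes first)
print(best_fit(parts, 212))   # 3 (300 units: least internal fragmentation)
print(worst_fit(parts, 212))  # 1 (500 units: largest hole)
```

Best-fit minimizes internal fragmentation per placement; first-fit is cheapest to compute; worst-fit leaves the largest leftover hole.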
• A process is either entirely in main memory or
entirely on backing store
• A program may be too large to fit in a partition.
The programmer must then design the program
with overlays
– When a needed module is not present, the user program
must load that module into the program’s partition, overlaying
whatever program or data are there
• Main memory use is inefficient: any program, no
matter how small, occupies an entire partition.
This is called internal fragmentation
FIXED-PARTITIONS
• Unequal-size partitions lessen this problem,
but it still remains.
• Equal-size partitions
– If there is an available partition, a process can
be loaded into that partition
• Because all partitions are of equal size, it does not
matter which partition is used
– If all partitions are occupied by blocked
processes, choose one process to swap out to
make room for the new process.
FIXED-PARTITIONS:
PLACEMENT ALGORITHM
• Unequal-size partitions:
use of multiple queues
– Assign each process to the
smallest partition within
which it will fit
– A queue for each partition
size
– Tries to minimize internal
fragmentation
– Problem: some queues will
be empty if no processes
within a size range are present
• Unequal-size partitions:
use of a single queue
– When it is time to load a
process into main
memory, the smallest
available partition that will
hold the process is selected
– Increases the level of
multiprogramming at the
expense of internal
fragmentation
VIRTUAL MEMORY
• Real, or physical, memory exists on RAM chips inside the computer
• Virtual memory, as its name suggests, doesn’t physically exist on a memory
chip
• It is an optimization technique and is implemented by the operating system
in order to give an application program the impression that it has more
memory than actually exists
• Virtual memory is implemented by various operating systems such as
Windows, Mac OS X, and Linux.
HOW VIRTUAL MEMORY WORKS
• Let’s say that an operating system needs 120 MB of memory in order to
hold all the running programs, but there’s currently only 50 MB of available
physical memory stored on the RAM chips
• The operating system will then set up 120 MB of virtual memory, and will
use a program called the virtual memory manager (VMM) to manage that
120 MB
• The VMM will create a file on the hard disk that is 70 MB (120 – 50) in size
to account for the extra memory that’s needed
• The O.S. will now proceed to address memory as if there were actually 120
MB of real memory stored on the RAM, even though there’s really only 50
MB
• So, to the O.S., it now appears as if the full 120 MB actually exists
• It is the responsibility of the VMM to deal with the fact that there is only 50
MB of real memory.
VIRTUAL MEMORY: PAGING
• The VMM creates a file on the hard disk that holds the extra memory that is
needed by the O.S., for example 70 MB in size
• This file is called a paging file (also known as a swap file), and plays an
important role in virtual memory
• The paging file combined with the RAM accounts for all of the memory.
• Whenever the O.S. needs a ‘block’ of memory that’s not in the real (RAM)
memory, the VMM takes a block from the real memory that hasn’t been
used recently, writes it to the paging file, and then reads the block of
memory that the O.S. needs from the paging file.
• The VMM then takes the block of memory from the paging file, and moves it
into the real memory – in place of the old block.
– This process is called swapping (also known as paging), and the blocks of memory that are
swapped are called pages.
• There are two reasons why virtual memory is important
– to allow the use of programs that are too big to physically fit in memory
– to allow for multitasking – multiple programs running at once
• Before virtual memory existed, a word processor, e-mail program, and
browser couldn’t be run at the same time unless there was enough memory
to hold all three programs at once
• This would mean that one would have to close one program in order to run
the other, but now with virtual memory, multitasking is possible even when
there is not enough memory to hold all executing programs at once.
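The swap-in/swap-out cycle described above can be sketched as a toy model. This is a minimal illustration only: the class name, the LRU-style choice of victim page, and the page names are assumptions, not how any real VMM is implemented.

```python
from collections import OrderedDict

# Toy sketch of the VMM swapping step: a fixed number of RAM frames
# plus a "paging file" standing in for the disk.

class ToyVMM:
    def __init__(self, num_frames):
        self.frames = OrderedDict()   # page -> contents, in LRU order
        self.paging_file = {}         # pages swapped out to "disk"
        self.num_frames = num_frames

    def access(self, page):
        if page in self.frames:
            self.frames.move_to_end(page)        # mark as recently used
            return "hit"
        if len(self.frames) >= self.num_frames:  # RAM full: evict LRU page
            victim, data = self.frames.popitem(last=False)
            self.paging_file[victim] = data      # write victim to paging file
        # read the needed page in (from the paging file if swapped out before)
        self.frames[page] = self.paging_file.pop(page, f"data-{page}")
        return "fault"

vmm = ToyVMM(num_frames=2)
print(vmm.access("A"), vmm.access("B"), vmm.access("A"), vmm.access("C"))
# fault fault hit fault -- "B" is least recently used, so it is swapped out for "C"
```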
PAGING
• Main memory is partitioned into equal, fixed-size blocks
• Each process is also divided into blocks of the
same size called pages
• The pages of a process can thus be assigned to
available blocks in main memory, called
frames
• Consequence: a process does not need to
occupy a contiguous portion of memory
PAGING
Now suppose that Process B is swapped out.
• When Process A and C
are blocked, the pager
loads a new Process D
consisting of 5 pages
• Process D does not
occupy a contiguous
portion of memory
• There is no external
fragmentation
• Internal fragmentation
consists only of unused
space in the last page of
each process
• The OS now needs to maintain (in main memory) a page table for
each process
• Each entry of a page table consists of the frame number where the
corresponding page is physically located
• The page table is indexed by the page number to obtain the frame
number
• A free-frame list, available for pages, is maintained
PAGING: LOGICAL ADDRESS &
PHYSICAL ADDRESS
• Within each program, each logical address must consist of a
page number and an offset within the page
– Page number: used as an index into a page table which contains
base address of each page in physical memory
– Page offset: combined with base address to define the physical
memory address that is sent to the memory unit
• A CPU register always holds the starting physical address of
the page table of the currently running process
• Presented with a logical address (page number, page offset), the
processor accesses the page table to obtain the physical address
(frame number, offset)
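The translation step just described can be shown in a few lines of Python. The 1 KB page size and the page-table contents are made up for illustration.

```python
# Splitting a logical address into (page number, offset) and mapping it
# through a page table. A 1024-byte page size is assumed for illustration.

PAGE_SIZE = 1024

def translate(logical, page_table):
    page = logical // PAGE_SIZE        # page number: index into the page table
    offset = logical % PAGE_SIZE       # offset: unchanged by translation
    frame = page_table[page]           # frame where this page resides
    return frame * PAGE_SIZE + offset  # physical address

page_table = {0: 5, 1: 2, 2: 7}       # page -> frame (illustrative values)
print(translate(2100, page_table))     # page 2, offset 52 -> 7*1024 + 52 = 7220
```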
PAGING
• When we use a paging scheme, there is no
external fragmentation:
– any free frame can be allocated to a process
that needs it
• However, there is internal fragmentation
• Example: if the page size is 2048 bytes, a process
of 72766 bytes needs 35 full pages plus 1086
bytes, so it is allocated 36 frames
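The arithmetic in the internal-fragmentation example above can be checked directly:

```python
import math

# Checking the example: 2048-byte pages, a process of 72766 bytes.
page_size = 2048
process_size = 72766

pages_needed = math.ceil(process_size / page_size)        # round up to whole frames
internal_frag = pages_needed * page_size - process_size   # wasted space in last frame

print(pages_needed)   # 36 (35 full pages plus a final partial page of 1086 bytes)
print(internal_frag)  # 962 bytes unused in the last frame
```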
SEGMENTATION
• Each process is subdivided into blocks of
unequal size called segments
• When a process gets loaded into main
memory, its different segments can be
located anywhere
• Each segment is fully packed with
instructions/data: no internal fragmentation
SEGMENTATION
• There is external fragmentation; it is reduced
when using small segments
• The OS maintains a segment table for each
process. Each entry may contain:
– The starting physical address of that segment
(base)
– The length of that segment (limit)
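The base/limit lookup described above can be sketched as follows. The segment-table values are invented for illustration; an offset beyond the segment's limit is treated as an addressing error.

```python
# Segment-table lookup: each entry holds (base, limit).
# Values in seg_table below are illustrative only.

def seg_translate(segment, offset, seg_table):
    base, limit = seg_table[segment]
    if offset >= limit:
        # referencing beyond the segment's length is an error
        raise ValueError("segmentation fault: offset out of range")
    return base + offset

seg_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}
print(seg_translate(2, 53, seg_table))   # 4300 + 53 = 4353
```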
SEGMENTATION: USER VIEW
OF A PROGRAM
SEGMENTATION: LOGICAL
VIEW OF SEGMENTATION
MAJOR SYSTEM RESOURCES
• A resource, or system resource, is any physical or virtual component
of limited availability within a computer system.
• Every device connected to a computer system is a resource.
• Every internal system component is a resource.
• System resources include:
– CPU time
– Memory
– Hard disk space
– Network throughput
– Electrical power
– External devices
– I/O operations
*Explain in your own words how each system resource type functions in a
computer system.
PROCESS STATES
• As a process executes, it changes state:
– New: the process is being created
– Running: instructions are being executed
– Waiting: the process is waiting for some
event to occur
– Ready: the process is waiting to be assigned
to a processor
– Terminated: the process has finished
execution
LIFE CYCLE
INTERRUPTS
• An interrupt is an electronic signal.
• Hardware senses the signal, saves key control information for the currently
executing program, and starts the operating system’s interrupt handler
routine. At that instant, the interrupt ends.
• The operating system then handles the interrupt.
• Subsequently, after the interrupt is processed, the dispatcher starts an
application program.
• Eventually, the program that was executing at the time of the interrupt
resumes processing.
Example of how interrupts work (Steps 1 to 6 are shown as figures).
CPU Scheduling
• Types of scheduling:
– Short-term scheduling;
• determines which of the ready processes gets CPU
resources, and for how long.
• Invoked whenever an event occurs that interrupts the current process or
provides an opportunity to preempt the current process in favor of another
• Events: clock interrupt, I/O interrupt, OS call, signal
CPU Scheduling
• Types of scheduling:
– Medium-term scheduling;
• determines when processes are to be suspended and resumed
• Part of swapping function between main memory and disk
• based on how many processes the OS wants available at any one time
• must consider memory management if no virtual memory (VM), so look at
memory requirements of swapped out processes
CPU Scheduling
• Types of scheduling:
– Long-term scheduling;
• determines which programs are admitted to the system for execution and when,
and which ones should be exited.
• Determine which programs admitted to system for processing - controls degree of
multiprogramming
• Once admitted, program becomes a process, either:
– added to queue for short-term scheduler
– swapped out (to disk), so added to queue for medium-term scheduler
FIGURE: Long-term, medium-term and short-term scheduling.
CPU Scheduler
• Selects from among the processes in memory that are
ready to execute, and allocates the CPU to one of them
• CPU scheduling decisions may take place when a
process:
– Switches from running to waiting state (nonpreemptive)
– Switches from running to ready state (preemptive)
– Switches from waiting to ready (preemptive)
– Terminates (nonpreemptive)
• All other scheduling is preemptive
• Preemptive scheduling policy interrupts processing of a
job and transfers the CPU to another job.
- The process may be pre-empted by the operating
system when:
- a new process arrives (perhaps at a higher priority), or
- an interrupt or signal occurs, or
- a (frequent) clock interrupt occurs.
CPU Scheduler
• Non-preemptive scheduling policy functions without
external interrupts.
– once a process is executing, it will continue to execute until it
terminates, or
– it makes an I/O request which would block the process, or
– it makes an operating system call.
PREEMPTIVE VS NON-
PREEMPTIVE SCHEDULING
• Preemptive
– Preemptive Scheduling is when a computer process is interrupted and
the CPU's power is given over to another process with a higher priority.
This type of scheduling occurs when a process switches from running
state to a ready state or from a waiting state to a ready state.
• Non-preemptive
– Once the CPU has been allocated to a process, the process keeps
the CPU until it releases the CPU, either by terminating or by
switching to the waiting state
• Types of scheduling algorithm:
Basic strategies
- First In First Out (FIFO)
- Round Robin (RR)
- Shortest Job First (SJF)
- Priority
Combined strategies
- Multi-level queue
- Multi-level feedback queue
CPU Scheduling
Turnaround time
 The sum of time spent in the ready queue, execution time and I/O
time.
tat = t(process completed) – t(process submitted)
 minimize, time of submission to time of completion.
Waiting time
 minimize, time spent in ready queue - affected solely by scheduling
policy
Response time
 The amount of time it takes to start responding to a request. This
criterion is important for interactive systems.
rt = t(first response) – t(submission of request)
 minimize
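The three criteria above can be illustrated with one worked example. All event times below are invented for illustration; times are in arbitrary units.

```python
# Worked example of the scheduling criteria above for a single process.
submitted, first_response, completed = 0, 3, 12
cpu_time, io_time = 5, 2

turnaround = completed - submitted          # tat = t(completed) - t(submitted) = 12
waiting = turnaround - cpu_time - io_time   # time spent in the ready queue = 5
response = first_response - submitted       # rt = t(first response) - t(submission) = 3

print(turnaround, waiting, response)        # 12 5 3
```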
CPU Scheduling
First In First Out (FIFO)
• Non-preemptive. Also known as FCFS
• Handles jobs according to their arrival time;
– the earlier they arrive, the sooner they’re served.
• Simple algorithm to implement -- uses a FIFO
queue.
• Good for batch systems;
– not so good for interactive ones.
• Turnaround time is unpredictable.
Suppose that the processes arrive in the order: P1, P2, P3. The Gantt Chart
for the schedule is:
First In First Out (FIFO)
Process Burst time
P1 24
P2 3
P3 3
P1 P2 P3
0 24 27 30
Waiting time for P1=0; P2=24; P3=27
Average waiting time = (0+24+27)/3 =17s
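The FIFO figures above can be reproduced with a short simulation (all three processes arrive at time 0, in order P1, P2, P3):

```python
# FIFO/FCFS waiting times for bursts arriving together, in order.
def fifo_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each job waits for all earlier jobs to finish
        clock += burst
    return waits

waits = fifo_waiting_times([24, 3, 3])
print(waits)                      # [0, 24, 27]
print(sum(waits) / len(waits))    # 17.0
```

Note how the long first burst (the "convoy effect") drives the average up; scheduling P1 last would give an average of only (0+3+6)/3 = 3.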
Round Robin (RR)
• FCFS with Preemption.
• Used extensively in interactive systems because it’s
easy to implement.
• Isn’t based on job characteristics but on a
predetermined slice of time that’s given to each job.
– Ensures CPU is equally shared among all active processes and
isn’t monopolized by any one job.
• Time slice is called a time quantum
– size crucial to system performance (100 ms to 1-
2 secs)
Suppose that the processes arrive in the order: P1, P2, P3, P4. Given time
quantum, Q=20s. The Gantt Chart for the schedule is:
Process Burst time
P1 53
P2 17
P3 68
P4 24
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3
0 20 37 57 77 97 117 121 134 154 162
Waiting time for P1=?; P2=?; P3=?; P4=?
Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4)/4 = ? s
Round Robin (RR)
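The round-robin exercise above can be checked with a small simulation (bursts P1=53, P2=17, P3=68, P4=24, all arriving at time 0, quantum Q=20):

```python
from collections import deque

# Round-robin: each process runs for at most one quantum, then goes to
# the back of the ready queue if it still has work left.
def rr_waiting_times(bursts, quantum):
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)       # unfinished: back of the queue
        else:
            finish[i] = clock
    # waiting time = completion time - burst time (all arrivals at 0)
    return [f - b for f, b in zip(finish, bursts)]

waits = rr_waiting_times([53, 17, 68, 24], 20)
print(waits)                      # [81, 20, 94, 97]
print(sum(waits) / len(waits))    # 73.0
```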
Shortest Job First (SJF)
• Non-preemptive or Preemptive
• Handles jobs based on length of their CPU cycle time.
– Use lengths to schedule process with shortest time.
• Optimal
– gives minimum average waiting time for a given set of
processes.
– optimal only when all of jobs are available at same time and
the CPU estimates are available and accurate.
• Doesn’t work in interactive systems because users
don’t estimate in advance CPU time required to run
their jobs.
Shortest Job First (SJF)
Preemptive
Suppose that the processes arrive in the order: P1, P2, P3, P4. The Gantt
Chart for the schedule is:
Process Arrival Time Burst time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
P1 P2 P3 P2 P4 P1
0 2 4 5 7 11 16
Waiting time for P1=?; P2=?; P3=?; P4=?
Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4)/4 = ? s
Shortest Job First (SJF)
Non-Preemptive
Suppose that the processes arrive in the order: P1, P2, P3, P4. The Gantt
Chart for the schedule is:
Process Arrival Time Burst time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
P1 P3 P2 P4
0 7 8 12 16
Waiting time for P1=?; P2=?; P3=?; P4=?
Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4)/4 = ? s
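The preemptive SJF (shortest-remaining-time-first) exercise can be filled in with a one-time-unit-at-a-time simulation. This is a sketch; the unit-step loop is an assumption made for simplicity, and ties go to the lowest-numbered process.

```python
# SRTF (preemptive SJF) for the example: (arrival, burst) =
# P1(0,7), P2(2,4), P3(4,1), P4(5,4).
def srtf_waiting_times(jobs):
    """jobs: list of (arrival, burst). Returns waiting time per job."""
    n = len(jobs)
    remaining = [b for _, b in jobs]
    finish = [0] * n
    clock = 0
    while any(r > 0 for r in remaining):
        # among arrived, unfinished jobs pick the shortest remaining time
        ready = [i for i in range(n) if jobs[i][0] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1        # run the chosen job for one time unit
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    # waiting = turnaround - burst
    return [finish[i] - jobs[i][0] - jobs[i][1] for i in range(n)]

waits = srtf_waiting_times([(0, 7), (2, 4), (4, 1), (5, 4)])
print(waits)                      # [9, 1, 0, 2]
print(sum(waits) / len(waits))    # 3.0
```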
Priority
• Non-preemptive.
• Gives preferential treatment to important jobs.
– Programs with highest priority are processed first.
– Aren’t interrupted until CPU cycles are completed or a natural
wait occurs.
• If 2+ jobs with equal priority are in READY queue,
processor is allocated to one that arrived first
– (first come first served within priority).
• Many different methods of assigning priorities by
system administrator or by Processor Manager.
The Gantt Chart for the schedule is:
Process Burst time Priority Arrival Time
P1 10 3 0.0
P2 1 1 1.0
P3 2 4 2.0
P4 1 5 3.0
P5 5 2 4.0
P1 P2 P5 P3 P4
0 10 11 16 18 19
Waiting time for P1=?; P2=?; P3=?; P4=?; P5=?
Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4 + wt P5)/5 = ? s
Priority
The Gantt Chart for the schedule is:
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
P2 P5 P1 P3 P4
0 1 6 16 18 19
Waiting time for P1=?; P2=?; P3=?; P4=?; P5=?
Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4 + wt P5)/5 = ? s
Priority
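The non-preemptive priority example with all arrivals at time 0 (lower number = higher priority) can be checked like this:

```python
# Non-preemptive priority scheduling: sort by priority value (lowest
# runs first), then accumulate waiting times like FCFS.
def priority_waiting_times(jobs):
    """jobs: list of (name, burst, priority); lower priority value runs first."""
    order = sorted(jobs, key=lambda j: j[2])
    waits, clock = {}, 0
    for name, burst, _ in order:
        waits[name] = clock
        clock += burst
    return waits

waits = priority_waiting_times(
    [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)])
print(waits)  # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / len(waits))  # 8.2
```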
Multi-level queue
Multi-level feedback queue
(MLFQ)
THREADS
• A thread is a separate part of a process.
• A process can consist of several threads,
each of which execute separately.
• For example,
– one thread could handle screen refresh and
drawing, another thread printing, another thread
the mouse and keyboard.
– This gives good response times for complex
programs.
– Windows NT is an example of an operating
system which supports multi-threading.
MULTITHREADING
• A thread is a way for program to fork or split itself into two or more
simultaneously running tasks
• Much of the software that runs on modern desktop PCs is multithreaded
• Example:
– Web browser – one thread to display images or text, another
thread retrieves data from the network
– Word processor – one thread for displaying graphics, another
thread is for responding to keystrokes from the user, another
thread for performing spelling and grammar checking.
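The word-processor example above can be sketched with Python's `threading` module. The task names are illustrative placeholders, not a real word processor; each "task" just records that it ran.

```python
import threading

# Minimal multithreading sketch: three threads doing separate tasks
# concurrently within one process, sharing the 'results' list.
results = []
lock = threading.Lock()

def worker(task_name):
    # each thread performs its own task; the lock protects shared state
    with lock:
        results.append(f"{task_name} done")

threads = [threading.Thread(target=worker, args=(name,))
           for name in ("render", "keystrokes", "spellcheck")]
for t in threads:
    t.start()
for t in threads:
    t.join()          # wait for all threads to finish

print(sorted(results))
# ['keystrokes done', 'render done', 'spellcheck done']
```

Because the threads share one address space, they communicate through the shared list directly, which is exactly the resource-sharing benefit listed below.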
SINGLE VS MULTITHREADED PROCESSES
• Web server example:
– A web server accepts client requests for web pages,
images, sound and so on.
– A web server may have several clients
concurrently accessing it
– If the web server ran as a traditional single-
threaded process, it would be able to service only one
client at a time, and the amount of time a client may
have to wait could be enormous.
SINGLE VS MULTITHREADED PROCESSES
• Solution 1
– When a server receives a request, it creates a separate process
to service that request
– The old solution, before threads became popular
– Disadvantage: time-consuming and resource-intensive; the new
process performs the same task as the existing process.
• Solution 2:
– It is more efficient to use one process with multiple threads
– Multithreaded web server process
– Create a separate thread that would listen for client requests
– When a request is made, rather than creating another process,
the server would create another thread to serve that request
– Many operating systems are multithreaded, e.g. Linux.
SINGLE VS MULTITHREADED PROCESSES
BENEFITS OF MULTITHREADING
• Responsiveness - One thread may provide rapid response while other
threads are blocked or slowed down doing intensive calculations.
• Resource sharing - By default threads share common code, data, and
other resources, which allows multiple tasks to be performed
simultaneously in a single address space.
• Economy - Creating and managing threads ( and context switches
between them ) is much faster than performing the same tasks for
processes.
• Utilization of multiprocessor architectures - A single threaded process
can only run on one CPU, no matter how many may be available, whereas
the execution of a multi-threaded application may be split amongst
available processors.
(Note that single threaded processes can still benefit from multi-processor architectures
when there are multiple processes contending for the CPU, i.e. when the load average is
above some certain threshold.)
DEADLOCK
• Process deadlock
– A process is deadlocked when it is waiting on an event which will
never happen
• System deadlock
– A system is deadlocked when one or more processes are
deadlocked
• Under normal operation, resource allocation proceeds
like this:
– Request the resource
– Use the resource
– Release the resource
NECESSARY AND SUFFICIENT
DEADLOCK CONDITIONS
• Coffman (1971) identified four (4) conditions that must
hold simultaneously for there to be a deadlock.
– Mutual exclusion condition
– Hold and wait condition
– No-preemption condition
– Circular wait condition
NECESSARY AND SUFFICIENT
DEADLOCK CONDITIONS
– Mutual exclusion condition
• the resources involved are non-sharable
• At least one resource must be held in a non-
shareable mode; that is, only one process at a time
claims exclusive control of the resource.
• If another process requests that resource, the
requesting process must be delayed until the resource
has been released.
NECESSARY AND SUFFICIENT
DEADLOCK CONDITIONS
– Hold and wait condition
• The requesting process already holds resources while waiting
for additional requested resources.
• There must exist a process that is holding a resource already
allocated to it while waiting for additional resources that are
currently being held by other processes.
– No-preemption condition
• Resources already allocated to a process cannot be
preempted.
• Resources cannot be forcibly removed from a process; they
are used to completion or released voluntarily by the process
holding them.
− Circular wait condition
• The processes in the system form a circular list or chain
where each process in the list is waiting for a resource held
by the next process in the list.
NECESSARY AND SUFFICIENT
DEADLOCK CONDITIONS
METHODS FOR HANDLING
DEADLOCKS
• The deadlock problem can be dealt with in three ways:
i. Use a protocol to prevent or avoid deadlocks, ensuring that the
system will never enter a deadlock state
ii. Allow the system to enter a deadlock state, detect it, and
recover
iii. Ignore the problem and pretend that deadlocks never occur in the
system. This solution is used by most operating systems, including UNIX
DEADLOCK PREVENTION
• Deadlock prevention is a set of methods for ensuring that at least
one of the necessary conditions cannot hold.
• Deadlock prevention for:
– Mutual exclusion: allow multiple processes to access computer
resource.
– Hold and wait: force each process to request all required resources
at once (in one shot). It cannot proceed until all resources have been
acquired. (process either acquires all resources or stops)
– No-preemption: allow a process to be aborted or its resources
reclaimed by another or by system, when competing over a resource
– Circular wait: all resource types are numbered with an integer
resource id. Processes must request resources in a single fixed
numerical order of resource id.
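The circular-wait prevention rule can be sketched with locks: if every thread acquires resources in one agreed numerical order, no cycle of waiters can form. This is an illustrative sketch (ascending order is chosen here; any single fixed order works), and the resource ids are made up.

```python
import threading

# Three resources protected by locks, identified by integer ids.
locks = {0: threading.Lock(), 1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(resource_ids):
    """Acquire the requested locks in ascending id order; return that order."""
    ordered = sorted(resource_ids)   # the ordering discipline breaks cycles
    for rid in ordered:
        locks[rid].acquire()
    return ordered

def release(resource_ids):
    for rid in resource_ids:
        locks[rid].release()

held = acquire_in_order([2, 0])   # requested out of order, acquired as 0 then 2
print(held)                        # [0, 2]
release(held)
```

Two threads that both need resources 0 and 2 will both take 0 first, so neither can hold 2 while waiting for 0 — the circular-wait condition can never arise.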
ACTIVITY
• Describe the characteristics of the
different levels in the hierarchy of
memory organization.
• Describe how the scheduling
process is performed by an
operating system
• Describe the relationship of threads
to processes
Multi-level feedback queue
(MLFQ): Example
Memory Management in Operating Systems for all
 
Os unit 3
Os unit 3Os unit 3
Os unit 3
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
OS UNIT4.pptx
OS UNIT4.pptxOS UNIT4.pptx
OS UNIT4.pptx
 
07-MemoryManagement.ppt
07-MemoryManagement.ppt07-MemoryManagement.ppt
07-MemoryManagement.ppt
 
Virtual Memory Management Part - I.pdf
Virtual Memory Management Part - I.pdfVirtual Memory Management Part - I.pdf
Virtual Memory Management Part - I.pdf
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
M20CA1030_391_2_Part2.pptx
M20CA1030_391_2_Part2.pptxM20CA1030_391_2_Part2.pptx
M20CA1030_391_2_Part2.pptx
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
Memory Management Strategies - II.pdf
Memory Management Strategies - II.pdfMemory Management Strategies - II.pdf
Memory Management Strategies - II.pdf
 
Lecture-7 Main Memroy.pptx
Lecture-7 Main Memroy.pptxLecture-7 Main Memroy.pptx
Lecture-7 Main Memroy.pptx
 
Ch4 memory management
Ch4 memory managementCh4 memory management
Ch4 memory management
 
Operating system 36 virtual memory
Operating system 36 virtual memoryOperating system 36 virtual memory
Operating system 36 virtual memory
 
16. PagingImplementIssused.pptx
16. PagingImplementIssused.pptx16. PagingImplementIssused.pptx
16. PagingImplementIssused.pptx
 
UNIT-2 OS.pptx
UNIT-2 OS.pptxUNIT-2 OS.pptx
UNIT-2 OS.pptx
 
Computer architecture virtual memory
Computer architecture virtual memoryComputer architecture virtual memory
Computer architecture virtual memory
 
Memory Management Architecture.ppt
Memory Management Architecture.pptMemory Management Architecture.ppt
Memory Management Architecture.ppt
 

Recently uploaded

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasiemaillard
 
Industrial Training Report- AKTU Industrial Training Report
Industrial Training Report- AKTU Industrial Training ReportIndustrial Training Report- AKTU Industrial Training Report
Industrial Training Report- AKTU Industrial Training ReportAvinash Rai
 
Sectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdfSectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdfVivekanand Anglo Vedic Academy
 
NLC-2024-Orientation-for-RO-SDO (1).pptx
NLC-2024-Orientation-for-RO-SDO (1).pptxNLC-2024-Orientation-for-RO-SDO (1).pptx
NLC-2024-Orientation-for-RO-SDO (1).pptxssuserbdd3e8
 
The Benefits and Challenges of Open Educational Resources
The Benefits and Challenges of Open Educational ResourcesThe Benefits and Challenges of Open Educational Resources
The Benefits and Challenges of Open Educational Resourcesaileywriter
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
 
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...Nguyen Thanh Tu Collection
 
How to Split Bills in the Odoo 17 POS Module
How to Split Bills in the Odoo 17 POS ModuleHow to Split Bills in the Odoo 17 POS Module
How to Split Bills in the Odoo 17 POS ModuleCeline George
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxShajedul Islam Pavel
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfjoachimlavalley1
 
Basic_QTL_Marker-assisted_Selection_Sourabh.ppt
Basic_QTL_Marker-assisted_Selection_Sourabh.pptBasic_QTL_Marker-assisted_Selection_Sourabh.ppt
Basic_QTL_Marker-assisted_Selection_Sourabh.pptSourabh Kumar
 
How to Break the cycle of negative Thoughts
How to Break the cycle of negative ThoughtsHow to Break the cycle of negative Thoughts
How to Break the cycle of negative ThoughtsCol Mukteshwar Prasad
 
[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online Presentation[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online PresentationGDSCYCCE
 
Palestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptxPalestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptxRaedMohamed3
 
MARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptxMARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptxbennyroshan06
 
Benefits and Challenges of Using Open Educational Resources
Benefits and Challenges of Using Open Educational ResourcesBenefits and Challenges of Using Open Educational Resources
Benefits and Challenges of Using Open Educational Resourcesdimpy50
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345beazzy04
 
Forest and Wildlife Resources Class 10 Free Study Material PDF
Forest and Wildlife Resources Class 10 Free Study Material PDFForest and Wildlife Resources Class 10 Free Study Material PDF
Forest and Wildlife Resources Class 10 Free Study Material PDFVivekanand Anglo Vedic Academy
 
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...Denish Jangid
 

Recently uploaded (20)

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 
Industrial Training Report- AKTU Industrial Training Report
Industrial Training Report- AKTU Industrial Training ReportIndustrial Training Report- AKTU Industrial Training Report
Industrial Training Report- AKTU Industrial Training Report
 
Sectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdfSectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdf
 
NLC-2024-Orientation-for-RO-SDO (1).pptx
NLC-2024-Orientation-for-RO-SDO (1).pptxNLC-2024-Orientation-for-RO-SDO (1).pptx
NLC-2024-Orientation-for-RO-SDO (1).pptx
 
The Benefits and Challenges of Open Educational Resources
The Benefits and Challenges of Open Educational ResourcesThe Benefits and Challenges of Open Educational Resources
The Benefits and Challenges of Open Educational Resources
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
 
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
50 ĐỀ LUYỆN THI IOE LỚP 9 - NĂM HỌC 2022-2023 (CÓ LINK HÌNH, FILE AUDIO VÀ ĐÁ...
 
How to Split Bills in the Odoo 17 POS Module
How to Split Bills in the Odoo 17 POS ModuleHow to Split Bills in the Odoo 17 POS Module
How to Split Bills in the Odoo 17 POS Module
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
 
Basic_QTL_Marker-assisted_Selection_Sourabh.ppt
Basic_QTL_Marker-assisted_Selection_Sourabh.pptBasic_QTL_Marker-assisted_Selection_Sourabh.ppt
Basic_QTL_Marker-assisted_Selection_Sourabh.ppt
 
How to Break the cycle of negative Thoughts
How to Break the cycle of negative ThoughtsHow to Break the cycle of negative Thoughts
How to Break the cycle of negative Thoughts
 
Operations Management - Book1.p - Dr. Abdulfatah A. Salem
Operations Management - Book1.p  - Dr. Abdulfatah A. SalemOperations Management - Book1.p  - Dr. Abdulfatah A. Salem
Operations Management - Book1.p - Dr. Abdulfatah A. Salem
 
[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online Presentation[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online Presentation
 
Palestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptxPalestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptx
 
MARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptxMARUTI SUZUKI- A Successful Joint Venture in India.pptx
MARUTI SUZUKI- A Successful Joint Venture in India.pptx
 
Benefits and Challenges of Using Open Educational Resources
Benefits and Challenges of Using Open Educational ResourcesBenefits and Challenges of Using Open Educational Resources
Benefits and Challenges of Using Open Educational Resources
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
 
Forest and Wildlife Resources Class 10 Free Study Material PDF
Forest and Wildlife Resources Class 10 Free Study Material PDFForest and Wildlife Resources Class 10 Free Study Material PDF
Forest and Wildlife Resources Class 10 Free Study Material PDF
 
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...Basic Civil Engineering Notes of Chapter-6,  Topic- Ecosystem, Biodiversity G...
Basic Civil Engineering Notes of Chapter-6, Topic- Ecosystem, Biodiversity G...
 

chapter 2 memory and process management

  • 9. MEMORY SWAPPING TECHNIQUE • Not necessary to keep inactive processes in memory – Swapping • Only put currently running process in main memory • Others temporarily moved to secondary storage • Maximizes available memory • Significant overhead when switching processes – Better yet: keep several processes in memory at once • Less available memory • Much faster response times 9
  • 10. CONTIGUOUS VS NONCONTIGUOUS MEMORY ALLOCATION • Ways of organizing programs in memory – Contiguous allocation • Program must exist as a single block of contiguous addresses • Sometimes it is impossible to find a large enough block • Low overhead – Noncontiguous allocation • Program divided into chunks called segments • Each segment can be placed in different part of memory • Easier to find “holes” in which a segment will fit • Increased number of processes that can exist simultaneously in memory offsets the overhead incurred by this technique 10
  • 11. MEMORY ALLOCATION 11 CONTIGUOUS  FIXED-PARTITION  DYNAMIC NON-CONTIGUOUS  PAGING  SEGMENTATION
  • 12. FIXED-PARTITIONS • The simplest approach to managing memory for multiple concurrent processes. • Divides the available space into fixed- length partitions, – each of which holds one process. • When a partition is free, – a process is selected from the input queue and is loaded into a free partition – Best-fit? First-fit? Worst-fit? 12
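The placement question the slide ends with (best-fit vs first-fit vs worst-fit) can be sketched as follows; the function and the partition-list representation are illustrative, not part of the deck:

```python
def place(process_size, partitions, policy="first"):
    """Pick a free partition index for a process, or None if none fits.
    `partitions` is a list of (size, free) pairs, one per fixed partition."""
    candidates = [(size, i) for i, (size, free) in enumerate(partitions)
                  if free and size >= process_size]
    if not candidates:
        return None
    if policy == "first":
        return candidates[0][1]      # lowest-numbered partition that fits
    if policy == "best":
        return min(candidates)[1]    # smallest partition that fits
    if policy == "worst":
        return max(candidates)[1]    # largest partition that fits
    raise ValueError(policy)
```

For a 212 KB process and free partitions of 100, 500, 200 and 300 KB, first-fit picks the 500 KB one (first that fits), best-fit the 300 KB one, worst-fit the 500 KB one; best-fit leaves the least internal fragmentation here.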
  • 14. FIXED-PARTITIONS • Partitions can be of equal or unequal size • Any process whose size is less than or equal to a partition's size – can be loaded into that partition • If all partitions are occupied, – the OS can swap a process out of a partition 14
  • 15. • A process is either entirely in main memory or entirely on backing store • A program may be too large to fit in a partition. The programmer must then design the program with overlays – When a needed module is not present, the user program must load that module into the program's partition, overlaying whatever program or data is there • Main memory is used inefficiently: any program, no matter how small, occupies an entire partition. This is called internal fragmentation 15 FIXED-PARTITIONS
  • 16. • Unequal-size partitions lessen this problem, but it still remains. • Equal-size partitions – If there is an available partition, a process can be loaded into that partition • Because all partitions are of equal size, it does not matter which partition is used – If all partitions are occupied by blocked processes, choose one process to swap out to make room for the new process. 16 FIXED-PARTITIONS: PLACEMENT ALGORITHM
  • 17. FIXED-PARTITIONS: PLACEMENT ALGORITHM 17 • Unequal-size partitions: use of multiple queues – Assign each process to the smallest partition within which it will fit – A queue for each partition size – Tries to minimize internal fragmentation – Problem: some queues will be empty if no processes within a size range are present
  • 18. 18 • Unequal-size partitions: use of a single queue – When it's time to load a process into main memory, the smallest available partition that will hold the process is selected – Increases the level of multiprogramming at the expense of internal fragmentation FIXED-PARTITIONS: PLACEMENT ALGORITHM
  • 19. VIRTUAL MEMORY • Real, or physical, memory exists on RAM chips inside the computer • Virtual memory, as its name suggests, doesn’t physically exist on a memory chip • It is an optimization technique and is implemented by the operating system in order to give an application program the impression that it has more memory than actually exists • Virtual memory is implemented by various operating systems such as Windows, Mac OS X, and Linux. 19
  • 20. HOW VIRTUAL MEMORY WORKS • Let’s say that an operating system needs 120 MB of memory in order to hold all the running programs, but there’s currently only 50 MB of available physical memory stored on the RAM chips • The operating system will then set up 120 MB of virtual memory, and will use a program called the virtual memory manager (VMM) to manage that 120 MB • The VMM will create a file on the hard disk that is 70 MB (120 – 50) in size to account for the extra memory that’s needed • The O.S. will now proceed to address memory as if there were actually 120 MB of real memory stored on the RAM, even though there’s really only 50 MB 20
  • 21. • So, to the O.S., it now appears as if the full 120 MB actually exists • It is the responsibility of the VMM to deal with the fact that there is only 50 MB of real memory. 21 HOW VIRTUAL MEMORY WORKS
  • 22. VIRTUAL MEMORY: PAGING • The VMM creates a file on the hard disk that holds the extra memory that is needed by the O.S., for example 70 MB in size • This file is called a paging file (also known as a swap file), and plays an important role in virtual memory • The paging file combined with the RAM accounts for all of the memory. • Whenever the O.S. needs a ‘block’ of memory that’s not in the real (RAM) memory, the VMM takes a block from the real memory that hasn’t been used recently, writes it to the paging file, and then reads the block of memory that the O.S. needs from the paging file. 22
  • 23. • The VMM then takes the block of memory from the paging file, and moves it into the real memory – in place of the old block. – This process is called swapping (also known as paging), and the blocks of memory that are swapped are called pages. • There are two reasons why virtual memory is important – to allow the use of programs that are too big to physically fit in memory – to allow for multitasking – multiple programs running at once • Before virtual memory existed, a word processor, e-mail program, and browser couldn’t be run at the same time unless there was enough memory to hold all three programs at once • This would mean that one would have to close one program in order to run the other, but now with virtual memory, multitasking is possible even when there is not enough memory to hold all executing programs at once. 23 VIRTUAL MEMORY: PAGING
  • 24. PAGING • Main memory is partitioned into equal fixed-size blocks called frames • Each process is also divided into blocks of the same size called pages • Each process page can thus be assigned to any available frame in main memory • Consequence: a process does not need to occupy a contiguous portion of memory 24
  • 25. 25 Now suppose that Process B is swapped out PAGING
  • 26. 26 • When Processes A and C are blocked, the pager loads a new Process D consisting of 5 pages • Process D does not occupy a contiguous portion of memory • There is no external fragmentation • Internal fragmentation consists only of part of the last page of each process PAGING
  • 27. • The OS now needs to maintain (in main memory) a page table for each process • Each entry of a page table consists of the frame number where the corresponding page is physically located • The page table is indexed by the page number to obtain the frame number • A free-frame list of frames available for pages is maintained 27 PAGING
  • 28. PAGING: LOGICAL ADDRESS & PHYSICAL ADDRESS • Within each program, each logical address consists of a page number and an offset within the page – Page number: used as an index into a page table which contains the base address of each page in physical memory – Page offset: combined with the base address to define the physical memory address that is sent to the memory unit • A CPU register always holds the starting physical address of the page table of the currently running process • Presented with a logical address (page number, page offset), the processor accesses the page table to obtain the physical address (frame number, offset) 28
  • 29. 29 PAGING: LOGICAL ADDRESS & PHYSICAL ADDRESS
  • 30. 30 PAGING: LOGICAL ADDRESS & PHYSICAL ADDRESS
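The page-number/offset translation described above can be sketched in a few lines; the function name, page size and page-table contents here are illustrative assumptions, not values from the slides:

```python
def translate(logical_addr, page_table, page_size=1024):
    """Split a logical address into (page number, offset), then map the
    page number through the page table to obtain the physical address."""
    page = logical_addr // page_size       # page number: high-order part
    offset = logical_addr % page_size      # page offset: low-order part
    frame = page_table[page]               # page-table lookup: page -> frame
    return frame * page_size + offset      # frame base + offset
```

For example, with pages of 1024 bytes and a page table mapping pages 0, 1, 2 to frames 5, 2, 7, logical address 1500 falls in page 1 at offset 476, so it maps to frame 2: physical address 2 × 1024 + 476 = 2524.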
  • 32. • When we use a paging scheme, there is no external fragmentation: – any free frame can be allocated to a process that needs it • However, there is internal fragmentation • Example: if the page size is 2048 bytes, a process of 72766 bytes would need 35 pages plus 1086 bytes; the last page wastes 2048 - 1086 = 962 bytes 32 PAGING
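The arithmetic in that example can be checked with a short sketch (the helper name is hypothetical): a 72766-byte process at a 2048-byte page size needs 36 frames, and only 1086 bytes of the last frame are used.

```python
import math

def paging_cost(process_bytes, page_size):
    """Number of frames a process needs, and the internal fragmentation
    (unused bytes in its last, partially filled page)."""
    pages = math.ceil(process_bytes / page_size)
    internal_frag = pages * page_size - process_bytes
    return pages, internal_frag
```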
  • 33. SEGMENTATION • Each program is subdivided into blocks of unequal size called segments • When a process gets loaded into main memory, its different segments can be located anywhere • Each segment is fully packed with instructions/data: no internal fragmentation 33
  • 34. SEGMENTATION • There is external fragmentation; it is reduced when using small segments • The OS maintains a segment table for each process. Each entry may contain: – The starting physical address of that segment (base) – The length of that segment (limit) 34
  • 36. SEGMENTATION: LOGICAL VIEW OF SEGMENTATION 36
  • 39. MAJOR SYSTEM RESOURCES • A resource, or system resource, is any physical or virtual component of limited availability within a computer system. • Every device connected to a computer system is a resource. • Every internal system component is a resource • System resources include: – CPU time – Memory – Hard disk space – Network throughput – Electrical power – External devices – I/O operations *Explain in your own words how system resource types function in a computer system. 39
  • 40.
  • 41. PROCESS STATES • As a process executes, it changes state: – New: the process is being created – Running: instructions are being executed – Waiting: the process is waiting for some event to occur – Ready: the process is waiting to be assigned to a processor – Terminated: the process has finished execution 41
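As a sketch, the legal transitions between these five states can be captured in a small lookup table (the names and helper below are illustrative, not from the deck); note that a waiting process must pass through ready before it can run again:

```python
# Legal process-state transitions from the five-state model.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},       # never straight back to running
    "terminated": set(),
}

def can_move(src, dst):
    """True if the scheduler may move a process from state src to dst."""
    return dst in TRANSITIONS.get(src, set())
```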
  • 43. INTERRUPTS • An interrupt is an electronic signal. • Hardware senses the signal, saves key control information for the currently executing program, and starts the operating system’s interrupt handler routine. At that instant, the interrupt ends. • The operating system then handles the interrupt. • Subsequently, after the interrupt is processed, the dispatcher starts an application program. • Eventually, the program that was executing at the time of the interrupt resumes processing. 43
  • 44. Example of how an interrupt works (Steps 1 to 6, shown as figures on slides 44 to 49)
  • 50.
  • 51. CPU Scheduling • Types of scheduling: – Short-term scheduling; • which determines which of the ready processes can have CPU resources, and for how long. • Invoked whenever event occurs that interrupts current process or provides an opportunity to preempt current one in favor of another • Events: clock interrupt, I/O interrupt, OS call, signal
  • 52. CPU Scheduling • Types of scheduling: – Medium-term scheduling; • determines when processes are to be suspended and resumed • Part of swapping function between main memory and disk • based on how many processes the OS wants available at any one time • must consider memory management if no virtual memory (VM), so look at memory requirements of swapped out processes
  • 53. CPU Scheduling • Types of scheduling: – Long-term scheduling; • determines which programs are admitted to the system for execution and when, and which ones should be exited. • Determine which programs admitted to system for processing - controls degree of multiprogramming • Once admitted, program becomes a process, either: – added to queue for short-term scheduler – swapped out (to disk), so added to queue for medium-term scheduler
  • 55. CPU Scheduler • Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them • CPU scheduling decisions may take place when a process: – Switches from running to waiting state (nonpreemptive) – Switches from running to ready state (preemptive) – Switches from waiting to ready (preemptive) – Terminates (nonpreemptive) • All other scheduling is preemptive
  • 56. • Preemptive scheduling policy interrupts processing of a job and transfers the CPU to another job. - The process may be pre-empted by the operating system when: - a new process arrives (perhaps at a higher priority), or - an interrupt or signal occurs, or - a (frequent) clock interrupt occurs. CPU Scheduler
  • 57. • Non-preemptive scheduling policy functions without external interrupts. – once a process is executing, it will continue to execute until it terminates, or – it makes an I/O request which would block the process, or – it makes an operating system call. CPU Scheduler
  • 58. PREEMPTIVE VS NON-PREEMPTIVE SCHEDULING • Preemptive – Preemptive scheduling is when a computer process is interrupted and the CPU is given over to another process with a higher priority. This type of scheduling occurs when a process switches from the running state to the ready state or from the waiting state to the ready state. • Non-preemptive – Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state 58
  • 59. • Types of scheduling algorithm: Basic strategies - First In First Out (FIFO) - Round Robin (RR) - Shortest Job First (SJF) - Priority Combined strategies - Multi-level queue - Multi-level feedback queue CPU Scheduling
  • 60. Turnaround time  The sum of time spent in the ready queue, execution time and I/O time. tat = t(process completed) – t(process submitted)  minimize: from time of submission to time of completion. Waiting time  minimize: time spent in the ready queue; affected solely by the scheduling policy Response time  The amount of time it takes to start responding to a request. This criterion is important for interactive systems. rt = t(first response) – t(submission of request)  minimize CPU Scheduling
  • 61. First In First Out (FIFO) • Non-preemptive. Also known as FCFS • Handles jobs according to their arrival time; – the earlier they arrive, the sooner they’re served. • Simple algorithm to implement -- uses a FIFO queue. • Good for batch systems; – not so good for interactive ones. • Turnaround time is unpredictable.
  • 62. Suppose that the processes arrive in the order: P1, P2, P3. The Gantt Chart for the schedule is: First In First Out (FIFO) Process Burst time P1 24 P2 3 P3 3 P1 P2 P3 0 24 27 30 Waiting time for P1=0; P2=24; P3=27 Average waiting time = (0+24+27)/3 =17s
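The waiting times above can be reproduced with a short sketch (hypothetical helper, assuming all jobs arrive at t=0): under FCFS, each job waits for the total burst time of every job ahead of it.

```python
def fcfs_waits(bursts):
    """FCFS/FIFO waiting times: each job waits for all jobs before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # job starts once everything ahead finishes
        elapsed += b
    return waits
```

For bursts 24, 3, 3 this gives waits 0, 24, 27 and average (0 + 24 + 27) / 3 = 17, matching the slide; note the long first job inflates everyone else's wait (the convoy effect).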
  • 63. Round Robin (RR) • FCFS with Preemption. • Used extensively in interactive systems because it's easy to implement. • Isn't based on job characteristics but on a predetermined slice of time that's given to each job. – Ensures CPU is equally shared among all active processes and isn't monopolized by any one job. • Time slice is called a time quantum – size is crucial to system performance (typically 100 ms to 1 or 2 secs)
  • 64. Suppose that the processes arrive in the order: P1, P2, P3, P4. Given time quantum, Q=20s. The Gantt Chart for the schedule is: Process Burst time P1 53 P2 17 P3 68 P4 24 P1 P2 P3 P4 P1 P3 P4 P1 P3 P3 0 20 37 57 77 97 117 121 134 154 162 Waiting time for P1=?; P2=?; P3=?; P4=?, Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4 )/4 =?s Round Robin (RR)
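As a sketch (helper name and queue representation are illustrative), the RR schedule above can be simulated directly; each job's waiting time is its completion time minus its burst time, since all jobs arrive at t=0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin with all jobs arriving at t=0.
    Returns per-job waiting times (completion time minus burst time)."""
    remaining = list(bursts)
    completion = [0] * len(bursts)
    ready = deque(range(len(bursts)))      # FIFO ready queue of job indices
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run one quantum, or less to finish
        t += run
        remaining[i] -= run
        if remaining[i]:
            ready.append(i)                # preempted: back of the queue
        else:
            completion[i] = t
    return [completion[i] - bursts[i] for i in range(len(bursts))]
```

For bursts 53, 17, 68, 24 with Q=20 this yields waits 81, 20, 94, 97 and an average of 73s, which fills in the "?" values on the slide.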
  • 65. Shortest Job First (SJF) • Non-preemptive or Preemptive • Handles jobs based on length of their CPU cycle time. – Use lengths to schedule process with shortest time. • Optimal – gives minimum average waiting time for a given set of processes. – optimal only when all of jobs are available at same time and the CPU estimates are available and accurate. • Doesn’t work in interactive systems because users don’t estimate in advance CPU time required to run their jobs.
  • 66. Shortest Job First (SJF) Preemptive Suppose that the processes arrive in the order: P1, P2, P3, P4. The Gantt Chart for the schedule is: Process Arrival Time Burst time P1 0.0 7 P2 2.0 4 P3 4.0 1 P4 5.0 4 P1 P2 P3 P2 P4 P1 0 2 4 5 7 11 16 Waiting time for P1=?; P2=?; P3=?; P4=?, Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4 )/4 =?s
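Preemptive SJF (also called shortest-remaining-time-first) can be sketched with a brute-force, one-time-unit-at-a-time simulation (helper name is illustrative): at every tick, run the arrived job with the least remaining work.

```python
def srtf_waits(arrival, burst):
    """Preemptive SJF: each time unit, run the arrived, unfinished job
    with the shortest remaining time. Returns per-job waiting times."""
    n = len(burst)
    remaining = list(burst)
    finish = [0] * n
    t = 0
    while any(remaining):
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1                         # CPU idle until next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1                  # run job i for one time unit
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting time = turnaround time - burst time
    return [finish[i] - arrival[i] - burst[i] for i in range(n)]
```

For the table above (arrivals 0, 2, 4, 5; bursts 7, 4, 1, 4) this gives waits 9, 1, 0, 2 and an average of 3s, filling in the slide's "?" values.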
  • 67. Shortest Job First (SJF) Non-Preemptive Suppose that the processes arrive in the order: P1, P2, P3, P4. The Gantt Chart for the schedule is: Process Arrival Time Burst time P1 0.0 7 P2 2.0 4 P3 4.0 1 P4 5.0 4 P1 P3 P2 P4 0 7 8 12 16 Waiting time for P1=?; P2=?; P3=?; P4=?, Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4 )/4 =?s
  • 68. Priority • Non-preemptive. • Gives preferential treatment to important jobs. – Programs with highest priority are processed first. – Aren’t interrupted until CPU cycles are completed or a natural wait occurs. • If 2+ jobs with equal priority are in READY queue, processor is allocated to one that arrived first – (first come first served within priority). • Many different methods of assigning priorities by system administrator or by Processor Manager.
  • 69. The Gantt Chart for the schedule is: Process Burst time Priority Arrival Time P1 10 3 0.0 P2 1 1 1.0 P3 2 4 2.0 P4 1 5 3.0 P5 5 2 4.0 P1 P2 P5 P3 P4 0 10 11 16 18 19 Waiting time for P1=?; P2=?; P3=?; P4=?; P5=?, Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4 + wt P5 )/5 =?s Priority
  • 70. The Gantt Chart for the schedule is: Process Burst time Priority P1 10 3 P2 1 1 P3 2 4 P4 1 5 P5 5 2 P2 P5 P1 P3 P4 0 1 6 16 18 19 Waiting time for P1=?; P2=?; P3=?; P4=?; P5=?, Average waiting time = (wt P1 + wt P2 + wt P3 + wt P4 + wt P5 )/5 =?s Priority
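For the case where all five jobs are available at once, non-preemptive priority scheduling reduces to sorting by priority number (here a lower number means higher priority). A sketch, with an illustrative helper name:

```python
def priority_waits(bursts, priorities):
    """Non-preemptive priority scheduling, all jobs available at t=0;
    a lower priority number means higher priority."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits = [0] * len(bursts)
    t = 0
    for i in order:
        waits[i] = t        # job i starts once higher-priority jobs finish
        t += bursts[i]
    return waits
```

For bursts 10, 1, 2, 1, 5 with priorities 3, 1, 4, 5, 2 the run order is P2, P5, P1, P3, P4; the waits are 6, 0, 16, 18, 1 and the average is 8.2s, filling in the slide's "?" values.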
  • 75. THREADS • A thread is a separate part of a process. • A process can consist of several threads, each of which execute separately. • For example, – one thread could handle screen refresh and drawing, another thread printing, another thread the mouse and keyboard. – This gives good response times for complex programs. – Windows NT is an example of an operating system which supports multi-threading. 75
  • 76. MULTITHREADING
• A thread is a way for a program to fork (split) itself into two or more simultaneously running tasks.
• Much of the software that runs on modern desktop PCs is multithreaded.
• Examples:
– Web browser – one thread displays images or text while another thread retrieves data from the network.
– Word processor – one thread displays graphics, another responds to keystrokes from the user, and another performs spelling and grammar checking. 76
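The split-into-tasks idea can be sketched with Python's `threading` module. The "render" and "fetch" labels below are only illustrative stand-ins for the browser threads described above, and the lock protects the shared list:

```python
import threading

results = []
results_lock = threading.Lock()

def worker(label, items):
    # each thread works through its own items, appending under a lock
    # because the results list is shared between the threads
    for item in items:
        with results_lock:
            results.append((label, item))

# two tasks of one "program", running simultaneously
t1 = threading.Thread(target=worker, args=("render", range(3)))
t2 = threading.Thread(target=worker, args=("fetch", range(3)))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))   # 6
```

Both threads share the process's address space (the same `results` list), which is exactly the resource-sharing property the later "Benefits of multithreading" slide describes.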
  • 77. SINGLE VS MULTITHREADED PROCESSES 77
  • 78. SINGLE VS MULTITHREADED PROCESSES
• Web server example:
– A web server accepts client requests for web pages, images, sound, and so on.
– A web server may have several clients concurrently accessing it.
– If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the time a client may have to wait is enormous. 78
  • 79. SINGLE VS MULTITHREADED PROCESSES
• Solution 1:
– When the server receives a request, it creates a separate process to service that request.
– This was the usual solution before threads became popular.
– Disadvantage: process creation is time-consuming and resource-intensive, and the new process performs the same task as the existing one.
• Solution 2:
– It is more efficient to use one process with multiple threads: a multithreaded web server process.
– The server creates a separate thread that listens for client requests.
– When a request is made, rather than creating another process, the server creates another thread to serve that request.
– Many operating systems are multithreaded, e.g. Linux.
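Solution 2 can be sketched without real sockets by feeding "requests" through a queue and spawning one thread per request. Everything here (the queue, the handler, the client ids) is an illustrative stand-in for a real server, not an implementation from the slides:

```python
import threading
import queue

requests = queue.Queue()
served = []
served_lock = threading.Lock()

def handle(client_id):
    # each request is served by its own thread, so a slow client
    # does not block the others (contrast with a single-threaded loop)
    with served_lock:
        served.append(client_id)

def server_loop(n_requests):
    workers = []
    for _ in range(n_requests):
        client = requests.get()                 # wait for the next request
        t = threading.Thread(target=handle, args=(client,))
        t.start()                               # thread-per-request (Solution 2)
        workers.append(t)
    for t in workers:
        t.join()

for c in ("c1", "c2", "c3"):
    requests.put(c)                             # three clients arrive
server_loop(3)
print(sorted(served))   # ['c1', 'c2', 'c3']
```

Creating a thread per request is cheaper than Solution 1's process per request because the threads share the server's address space instead of duplicating it.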
  • 80. BENEFITS OF MULTITHREADING
• Responsiveness - One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.
• Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
• Economy - Creating and managing threads (and context switching between them) is much faster than performing the same tasks for processes.
• Utilization of multiprocessor architectures - A single-threaded process can only run on one CPU, no matter how many are available, whereas the execution of a multithreaded application may be split among the available processors. (Note that single-threaded processes can still benefit from multiprocessor architectures when there are multiple processes contending for the CPU, i.e. when the load average is above a certain threshold.) 80
  • 81. DEADLOCK
• Process deadlock
– A process is deadlocked when it is waiting for an event that will never happen.
• System deadlock
– A system is deadlocked when one or more of its processes are deadlocked.
• Under normal operation, resource allocation proceeds like this:
– Request the resource
– Use the resource
– Release the resource 81
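The normal request/use/release cycle maps directly onto a lock. A minimal sketch (the "printer" lock is just a hypothetical stand-in for any exclusive resource):

```python
import threading

printer = threading.Lock()       # stand-in for any exclusively held resource

def job(name, log):
    printer.acquire()            # 1. request the resource (blocks if held)
    try:
        log.append(name)         # 2. use the resource
    finally:
        printer.release()        # 3. release it so other processes can proceed

log = []
threads = [threading.Thread(target=job, args=(f"job{i}", log)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))   # 3
```

As long as every holder eventually reaches step 3, no requester waits forever; deadlock arises only when the conditions on the next slides break this cycle.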
  • 82. NECESSARY AND SUFFICIENT DEADLOCK CONDITIONS
• Coffman (1971) identified four (4) conditions that must hold simultaneously for there to be a deadlock.
– Mutual exclusion condition
– Hold and wait condition
– No-preemption condition
– Circular wait condition 82
  • 83. NECESSARY AND SUFFICIENT DEADLOCK CONDITIONS
– Mutual exclusion condition
• The resources involved are non-sharable.
• At least one resource must be held in a non-shareable mode; that is, only one process at a time can claim exclusive control of the resource.
• If another process requests that resource, the requesting process must be delayed until the resource has been released. 83
  • 84. NECESSARY AND SUFFICIENT DEADLOCK CONDITIONS
– Hold and wait condition
• A process holds resources already allocated to it while waiting for further requested resources.
• There must exist a process that is holding at least one resource while waiting for additional resources that are currently being held by other processes. 84
  • 85. NECESSARY AND SUFFICIENT DEADLOCK CONDITIONS
– No-preemption condition
• Resources already allocated to a process cannot be preempted.
• Resources cannot be forcibly removed from a process; they are held until used to completion or released voluntarily by the process holding them.
– Circular wait condition
• The processes in the system form a circular list or chain in which each process is waiting for a resource held by the next process in the list. 85
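All four conditions can be made to hold at once with just two locks. In this sketch the main thread holds A and wants B while a worker holds B and wants A; the `timeout` arguments are only there so the demonstration can escape the deadlock instead of hanging (a real deadlocked pair would wait forever):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
b_held = threading.Event()
outcome = {}

def worker():
    with lock_b:                             # hold B (mutual exclusion) ...
        b_held.set()
        # ... while waiting for A, which the main thread holds
        # (hold and wait; neither lock can be preempted)
        outcome["worker_got_a"] = lock_a.acquire(timeout=1.0)

lock_a.acquire()                             # main thread holds A
t = threading.Thread(target=worker)
t.start()
b_held.wait()                                # worker now definitely holds B
# main waits for B while the worker waits for A: a circular wait
outcome["main_got_b"] = lock_b.acquire(timeout=0.2)
t.join()
lock_a.release()
print(outcome["worker_got_a"], outcome["main_got_b"])   # False False
```

Both acquisitions fail: each thread is waiting for a resource the other will not release, which is exactly the circular wait that completes the Coffman conditions.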
  • 86. METHODS FOR HANDLING DEADLOCKS
• The deadlock problem can be dealt with in three ways:
i. Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
ii. Allow the system to enter a deadlock state, detect it, and recover.
iii. Ignore the problem and pretend that deadlocks never occur in the system. This solution is used by most operating systems, including UNIX. 86
  • 87. DEADLOCK PREVENTION
• Deadlock prevention is a set of methods for ensuring that at least one of the four necessary conditions cannot hold.
• Deadlock prevention for:
– Mutual exclusion: make resources sharable wherever possible, so that multiple processes can access them.
– Hold and wait: force each process to request all required resources at once (in one shot). It cannot proceed until all resources have been acquired (the process either acquires all resources or none).
– No-preemption: allow a process to be aborted, or its resources to be reclaimed by another process or by the system, when processes compete over a resource.
– Circular wait: number all resource types with an integer resource id. Processes must request resources in a fixed numerical order of resource id, so no cycle of waits can form. 87
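The circular-wait prevention rule is easy to sketch: give every resource an id and always acquire in the same numerical order, no matter what order the caller names them. The numbered locks here are hypothetical resources, not any particular system's API:

```python
import threading

# hypothetical resources, numbered so every thread locks in the same order
locks = {1: threading.Lock(), 2: threading.Lock()}

def use_resources(resource_ids, log):
    ordered = sorted(resource_ids)        # enforce a fixed acquisition order
    for rid in ordered:
        locks[rid].acquire()              # ascending ids -> no cycle can form
    log.append(tuple(ordered))            # critical section: use both resources
    for rid in reversed(ordered):
        locks[rid].release()

log = []
# the two threads *name* the resources in opposite orders, which would
# risk deadlock if each acquired in its own order; sorting prevents it
t1 = threading.Thread(target=use_resources, args=([1, 2], log))
t2 = threading.Thread(target=use_resources, args=([2, 1], log))
t1.start(); t2.start()
t1.join(); t2.join()
print(log == [(1, 2), (1, 2)])   # True
```

Because both threads acquire lock 1 before lock 2, a circular wait is impossible and both always run to completion.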
  • 88. ACTIVITY
• Describe the characteristics of the different levels in the hierarchy of memory organization.
• Describe how process scheduling is performed by an operating system.
• Describe the relationship of threads to processes.