UNIT - II
PART- B
3. What is the critical-section problem? List the requirements a solution to the CS problem must
meet, and explain Peterson's solution.
Critical section problem
 Consider a system of n processes {P0, P1, …, Pn−1}
 Each process has a segment of code called its critical section
a. The process may be changing common variables, updating a table, writing a file, etc.
b. When one process is executing in its critical section, no other process may be executing in its critical section
 The critical-section problem is to design a protocol that solves this
 Each process must request permission to enter its critical section in an entry section, may
follow the critical section with an exit section, then the remainder section
Critical Section
General structure of process Pi
Requirements for the solution of Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2. Progress- If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter
the critical section next cannot be postponed indefinitely
3. Bounded Waiting- A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning the relative speed of the n processes.
Critical-Section Handling in OS
Two approaches depending on whether the kernel is preemptive or non-preemptive
1. Preemptive – allows preemption of process when running in kernel mode
2. Non-preemptive – runs until exits kernel mode, blocks, or voluntarily yields CPU
 Essentially free of race conditions in kernel mode.
Two process Solutions:
Peterson’s Solution:
 Good algorithmic description of solving the problem
 Two process solution
 Assume that the load and store machine-language instructions are atomic; that is,
cannot be interrupted
 The two processes share two variables:
a. int turn;
b. Boolean flag[2]
c. The variable turn indicates whose turn it is to enter the critical section
 The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters its CS only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
5. With a neat sketch explain how a logical address is translated into a physical address using
the paging mechanism.
PAGING
 Paging is a memory-management scheme that permits the physical address space of
a process to be non-contiguous.
 Paging avoids external fragmentation and the need for compaction, whereas
segmentation does not.
Basic Method
 The basic method for implementing paging involves
 breaking physical memory into fixed-sized blocks called frames
 breaking logical memory into blocks of the same size called pages.
 When a process is to be executed, its pages are loaded into any available memory
frames from the backing store.
 The backing store is divided into fixed-sized blocks that are of the same size as the
memory frames.
 Every address generated by the CPU is divided into two parts: a page number (p)
and a page offset (d).
 The page number is used as an index into a page table.
 The page table contains the base address of each page in physical memory.
 This base address is combined with the page offset to define the physical
memory address that is sent to the memory unit.
Figure: Paging hardware showing translation of a logical address into a physical address using
the paging mechanism.
Conversion of logical address to physical address:
 The concept of a logical address space that is bound to a separate physical address space
is central to proper memory management
o Logical address - generated by the CPU; also referred to as virtual address.
o Physical address - address seen by the memory unit.
 Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in execution-time
address-binding scheme.
 If the size of the logical address space is 2^m and the page size is 2^n, then the
high-order m−n bits of a logical address designate the page number and the n
low-order bits designate the page offset.
 Thus, the logical address is as follows:
Where p is an index into the page table and
d is the displacement within the page.
Figure: Paging model of logical and physical memory
6. Write in detail about process and process concept.
PROCESSES AND PROCESS CONCEPT
An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
Process – a program in execution; process execution must progress in sequential fashion
A process has multiple parts:
1. The program code, also called the text section
2. Current activity, including the program counter and processor registers
3. Stack containing temporary data (function parameters, return addresses, local variables)
4. Data section containing global variables
5. Heap containing memory dynamically allocated during run time
 A program is a passive entity stored on disk (executable file); a process is an active entity
 A program becomes a process when its executable file is loaded into memory
 Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
 One program can be several processes
 Consider multiple users executing the same program
Process in memory
Figure 2.1 Process in memory.
Process States
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:
1. new: The process is being created
2. running: Instructions are being executed
3. waiting: The process is waiting for some event to occur
4. ready: The process is waiting to be assigned to a processor
5. terminated: The process has finished execution
Figure 2.2 Process states
Process Control Block (PCB)
Each process is represented in the operating system by a process control block (PCB)—
also called a task control block. A PCB is shown in Figure.
1. Process state – may be new, ready, running, waiting, terminated, etc.
2. Program counter – the address of the next instruction to be executed
3. CPU registers – contents of all process-centric registers
4. CPU scheduling information- priorities, scheduling queue pointers
5. Memory-management information – memory allocated to the process. This
information may include such items as the value of the base and limit registers and
the page tables, or the segment tables, depending on the memory system used by the
operating system
6. Accounting information – the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so on.
7. I/O status information – I/O devices allocated to process, list of open files and so
on.
Figure 2.3 Process control block (PCB).
Figure: Diagram showing CPU switch from process to process
7. Write a detailed note on process scheduling.
The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.
To meet these objectives, the process scheduler selects an available process (possibly from a set
of several available processes) for program execution on the CPU.
 To maximize CPU use, quickly switch processes onto the CPU for time sharing
 For a single-processor system, there will never be more than one running process.
 If there are more processes, the rest have to wait until the CPU is free and can be
rescheduled.
Scheduling Queues
 The OS maintains scheduling queues of processes
1. Job queue – set of all processes in the system
2. Ready queue – set of all processes residing in main memory, ready and waiting
to execute
3. Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
Ready Queue and Various I/O Device Queues
Figure The ready queue and various I/O device queues.
As processes enter the system, they are put into a job queue, which consists of all
processes in the system.
The processes that are residing in main memory and are ready and waiting to execute are
kept on a list called the ready queue. This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue.
The system also includes other queues. When a process is allocated the CPU, it executes
for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event,
such as the completion of an I/O request.
Suppose the process makes an I/O request to a shared device, such as a disk. Since there
are many processes in the system, the disk may be busy with the I/O request of some other
process.
The process therefore may have to wait for the disk. The list of processes waiting for a
particular I/O device is called a device queue. Each device has its own device queue.
Representation of Process Scheduling
A common representation of process scheduling is a queueing diagram. Queueing diagram
represents queues, resources, flows.
Figure : Queueing-diagram representation of process scheduling.
Two types of queues are present:
 Ready queue and a set of device queues.
 The circles represent the resources that serve the queues, and the arrows indicate the flow
of processes in the system.
 A new process is initially put in the ready queue. It waits there until it is selected for
execution, or dispatched.
 Once the process is allocated the CPU and is executing, one of several events could
occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new child process and wait for the child’s termination.
 The process could be removed by force from the CPU, as a result of an interrupt, and be
put back in the ready queue.
The process switches from the waiting state to the ready state and is then put back in the
ready queue. This cycle repeats until the process terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.
Schedulers
The operating system selects processes from queues for scheduling. The selection process
is carried out by the appropriate scheduler.
Two types of schedulers:
1. Short Term Schedulers (STS) or CPU Scheduler
2. Long Term Schedulers (LTS) or Job Scheduler
3. Medium Term Scheduler
1. Short-term scheduler (or) CPU scheduler – selects which process should be
executed next and allocates CPU
 Sometimes the only scheduler in a system
 Short-term scheduler is invoked frequently (milliseconds)  (must be fast)
 Selects from among the processes that are ready to execute and
 allocates the CPU to one of them.
 A process may execute for only a few milliseconds before waiting for an I/O
request.
 Executes at least once every 100 milliseconds.
 Because of the short time between executions, it must be fast. For Ex: If it takes
10 milliseconds to decide to execute a process for 100 milliseconds, then 10/(100
+ 10) = 9 percent of the CPU is being used (wasted) simply for scheduling the
work.
2. Long-term scheduler (or) job scheduler – selects which processes should be
brought into the ready queue
 Long-term scheduler is invoked infrequently (seconds, minutes)  (may be
slow)
 The long-term scheduler controls the degree of multiprogramming
In general, most processes can be described as
 Either I/O bound or
 CPU bound.
 I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts
 CPU-bound process – spends more time doing computations; few very long CPU
bursts
Long-term scheduler strives for good process mix
3. Medium Term Scheduler
The long-term scheduler selects a good process mix of I/O-bound and CPU-bound
processes.
 If all processes are I/O bound - the ready queue will almost always be empty, and the
short-term scheduler will have little to do.
 If all processes are CPU bound - the I/O waiting queue will almost always be empty,
devices will go unused, and again the system will be unbalanced.
Some operating systems, such as time-sharing systems, may introduce an additional,
intermediate level of scheduling called medium-term scheduler.
The key idea behind a medium-term scheduler - it can be advantageous to remove a
process from memory (and from active contention for the CPU) and thus reduce the degree of
multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.
The process is swapped out, and is later swapped in, by the medium-term scheduler.
Swapping may be necessary to improve the process mix.
Medium-term scheduler can be added if the degree of multiprogramming needs to decrease
 Remove process from memory, store on disk, bring back in from disk to continue
execution: swapping
Figure 2.7 Addition of medium-term scheduling to the queueing diagram.
8. Explain in detail about inter process communication.
 Processes executing concurrently in the operating system may be either independent
processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other processes
executing in the system. Any process that does not share data with any other process
is independent.
 A process is cooperating if it can affect or be affected by the other processes
executing in the system.
 Cooperating process can affect or be affected by other processes, including sharing
data
 Reasons for cooperating processes:
 Information sharing - an environment must be provided to allow concurrent
access to information (shared files, etc.)
 Computation speedup - to improve speed, tasks are divided into
subtasks that execute in parallel
 Modularity - dividing the system functions into separate processes or threads
 Convenience - even an individual user may work on many tasks at the same time.
 Cooperating processes need inter-process communication (IPC)
Communications Models
 Two models of IPC
 Shared memory
 Message passing
Figure : Communications models. (a) Message passing. (b) Shared memory.
Communication Model – Shared Memory
Interprocess communication using shared memory requires communicating processes to
establish a region of shared memory that resides in the address space of the process creating the
shared-memory segment. Other processes also attach it to their address space.
They can then exchange information by reading and writing data in the shared area; the form
and location of the data are determined by these processes and are not under the operating
system's control.
The processes are also responsible for ensuring that they are not writing to the same
location simultaneously.
To illustrate the concept of cooperating processes, consider the producer–consumer
problem, which is a common paradigm for cooperating processes.
 A producer process produces information that is consumed by a consumer process. For
example, a compiler may produce assembly code that is consumed by an assembler. The
assembler, in turn, may produce object modules that are consumed by the loader.
 Two types of processes, producers and consumers, wish to communicate.
 Producers have information to send to the consumers. They place the information in a
bounded (limited size) buffer, where consumers can obtain it.
 The buffer is a shared resource; use of the buffer must be synchronized. Obviously each
access to the buffer must be done in a critical section.
 When the buffer is full producers cannot place any more items in it. They therefore go to
sleep, to be awakened when a consumer removes an item.
The producer process using shared memory.
One solution to the producer–consumer problem uses shared memory.
To allow producer and consumer processes to run concurrently, a buffer of items must be
available that can be filled by the producer and emptied by the consumer. This buffer will reside
in a region of memory that is shared by the producer and consumer processes.
A producer can produce one item while the consumer is consuming another item. The
producer and consumer must be synchronized, so that the consumer does not try to consume an
item that has not yet been produced.
The code for the producer process:
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
The code for the consumer process:
item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
The producer has a local variable next_produced in which the new item to be produced is stored.
The consumer process has a local variable next_consumed in which the item to be
consumed is stored.
This scheme allows at most BUFFER_SIZE − 1 items in the buffer at the same time.
Communication Model – Message Passing
Message passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space.
A message-passing facility provides at least two operations:
 send(message)
 receive(message)
Messages sent by a process can be either fixed or variable in size.
 If only fixed-sized messages can be sent, the system-level implementation is
straightforward, but it makes the task of programming more difficult.
 Conversely, variable-sized messages require a more complex system-level
implementation, but the programming task becomes simpler.
If processes P and Q want to communicate, they must send messages to and receive
messages from each other: ie. a communication link must exist between them.
This link can be implemented in a variety of ways, for implementing a link and the
send()/ receive() operations:
 Direct or indirect communication
 Synchronous or asynchronous communication
 Automatic or explicit buffering
Direct Communication
Each process that wants to communicate must explicitly name the recipient or sender of
the communication.
In this scheme, the send () and receive () primitives are defined as:
 send (P, message)—Send a message to process P.
 receive (Q, message)—Receive a message from process Q.
A communication link in this scheme has the following properties:
 A link is established automatically between every pair of processes that want to
communicate. The processes need to know only each other’s identity to communicate.
 A link is associated with exactly two processes.
 Between each pair of processes, there exists exactly one link.
1. This scheme exhibits symmetry in addressing; that is, both the sender process and the
receiver process must name the other to communicate.
2. A variant of this scheme employs asymmetry in addressing. Here, only the sender
names the recipient; the recipient is not required to name the sender. In this scheme,
the send() and receive() primitives are defined as follows:
a. Send (P, message)—Send a message to process P.
b. Receive (id, message)—Receive a message from any process. The variable id is
set to the name of the process with which communication has taken place.
Disadvantage in both of these schemes (symmetric and asymmetric) is the limited modularity of
the resulting process definitions.
All references to the old identifier must be found, so that they can be modified to the new
identifier. In general, any such hard-coding techniques, where identifiers must be explicitly
stated, are less desirable than techniques involving indirection.
Indirect communication
The messages are sent to and received from mailboxes, or ports. Each mailbox has a
unique identification.
A process can communicate with another process via a number of different mailboxes,
but two processes can communicate only if they have a shared mailbox.
The send () and receive () primitives are defined as follows:
 send(A, message)—Send a message to mailbox A.
 receive(A, message)—Receive a message from mailbox A.
A communication link has the following properties:
 A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
 A link may be associated with more than two processes.
 Between each pair of communicating processes, a number of different links may exist,
with each link corresponding to one mailbox.
 Suppose that processes P1, P2, and P3 all share mailbox A.
 Process P1 sends a message to A, while both P2 and P3 execute a receive() from A.
 Which process will receive the message sent by P1?
Solutions:
1. Allow a link to be associated with two processes at most.
2. Allow at most one process at a time to execute a receive () operation.
3. Allow the system to select arbitrarily which process will receive the message (that is,
either P2 or P3, but not both, will receive the message).
The operating system then must provide a mechanism that allows a process to do the following:
1. Create a new mailbox.
2. Send and receive messages through the mailbox.
3. Delete a mailbox.
Synchronization
Message passing may be blocking or non blocking— also known as synchronous and
asynchronous.
Blocking is considered synchronous
 Blocking send. The sending process is blocked until the message is received
 Blocking receive. The receiver blocks until a message is available.
Non-blocking is considered asynchronous
 Nonblocking send. The sending process sends the message and resumes operation.
 Nonblocking receive. The receiver retrieves either a valid message or a null.
Different combinations of send() and receive() are possible. When both send() and
receive() are blocking, a rendezvous takes place between the sender and the receiver.
The solution to the producer–consumer problem becomes trivial when blocking
send() and receive() statements are used.
Buffering
Messages exchanged by communicating processes reside in a temporary queue, which can be
implemented in three ways:
Zero capacity:
The queue has a maximum length of zero; the link cannot have any messages waiting in
it. So the sender must block until the recipient receives the message (rendezvous).
Bounded capacity:
The queue has finite length n; thus, at most n messages can reside in it.
If the queue is not full when a new message is sent, the message is placed in the queue
and the sender can continue execution without waiting.
If the link is full, the sender must block until space is available in the queue.
Unbounded capacity:
The queue’s length is potentially infinite; thus, any number of messages can wait in it.
The sender never blocks.
9. Write in detail about threads.
A thread is a basic unit of CPU utilization:
 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
Motivation
 Most software applications that run on modern computers are multithreaded.
 An application typically runs as a separate process with several threads of control.
 If a process has multiple threads of control, it can perform more than one task at a
time. Figure 2.10 illustrates the difference between a traditional single-threaded
process and a multithreaded process.
Figure : Single-threaded and multithreaded processes.
For example,
 A web browser might have one thread display images or text while another thread
retrieves data from the network.
 A word processor may have a thread for displaying graphics,
 Another thread for responding to keystrokes from the user,
 A third thread for performing spelling and grammar checking in the background.
When a web server receives a request, it creates a separate process to service that request.
This process-creation method was in common use before threads became popular.
Process creation is
 Time consuming and
 Resource intensive.
It is generally more efficient to use one process that contains multiple threads.
If the web-server process is multithreaded, the server will create a separate thread that
listens for client requests. When a request is made, rather than creating another process, the
server creates a new thread to service the request and resumes listening for additional requests.
Threads also play a vital role in remote procedure call (RPC) systems.
RPC servers are multithreaded. When a server receives a message, it services the
message using a separate thread. This allows the server to service several concurrent requests.
Figure : Multithreaded server architecture.
Finally, most operating-system kernels are multithreaded.
Several threads operate in the kernel, and each thread performs a specific task, such as
managing devices, managing memory, or interrupt handling.
Benefits
1. Responsiveness – Multithreading an interactive application may allow continued
execution if part of process is blocked, thereby increasing responsiveness to the user.
Especially important for user interfaces.
2. Resource Sharing – threads share resources of process, easier than shared memory
or message passing
3. Economy – cheaper than process creation, thread switching lower overhead than
context switching
4. Scalability – process can take advantage of multiprocessor architectures, where
threads may be running in parallel on different processing cores. A single-threaded
process can run on only one processor, regardless how many are available.
10. Explain in detail about deadlock detection and deadlock recovery.
DEADLOCKS
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is called
a deadlock.
Deadlock Detection
 Allow system to enter deadlock state
 Detection algorithm
 Recovery scheme
Single Instance of Each Resource Type
 Maintain wait-for graph
 Nodes are processes
 PiPjif Piis waiting forPj
 Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle,
there exists a deadlock
 An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is the
number of vertices in the graph
Resource-Allocation Graph and Wait-for Graph
Figure (a) Resource-allocation graph. (b) Corresponding
wait-for graph
Several Instances of a Resource Type
 Available: A vector of length m indicates the number of available resources of each type
 Allocation: An n × m matrix defines the number of resources of each type currently
allocated to each process
 Request: An n × m matrix indicates the current request of each process. If Request[i][j]
= k, then process Pi is requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
a) Work = Available
b) For i = 1, 2, …, n, if Allocation_i ≠ 0, then Finish[i] = false; otherwise, Finish[i]
= true
2. Find an index i such that both:
a) Finish[i] == false
b) Request_i ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocation_i
Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then Pi is deadlocked
Example of Detection Algorithm
 Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C
(6 instances)
 Snapshot at time T0:
        Allocation   Request   Available
        A B C        A B C     A B C
   P0   0 1 0        0 0 0     0 0 0
   P1   2 0 0        2 0 2
   P2   3 0 3        0 0 0
   P3   2 1 1        1 0 0
   P4   0 0 2        0 0 2
 Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
 P2 requests an additional instance of type C
Request
A B C
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
 State of system?
 Can reclaim resources held by process P0, but insufficient resources to fulfill other
processes' requests
 Deadlock exists, consisting of processes P1, P2, P3, and P4
Detection-Algorithm Usage
 When, and how often, to invoke depends on:
 How often a deadlock is likely to occur?
 How many processes will need to be rolled back?
 one for each disjoint cycle
 If detection algorithm is invoked arbitrarily, there may be many cycles in the resource
graph and so we would not be able to tell which of the many deadlocked processes
“caused” the deadlock.
Deadlock Recovery
i. Recovery using Process Termination
 Abort all deadlocked processes
 Abort one process at a time until the deadlock cycle is eliminated
 In which order should we choose to abort?
1. Priority of the process
2. How long process has computed, and how much longer to completion
3. Resources the process has used
4. Resources process needs to complete
5. How many processes will need to be terminated
6. Is process interactive or batch?
ii. Recovery using Resource Preemption
 Selecting a victim – minimize cost
 Rollback – return to some safe state, restart process for that state
 Starvation – same process may always be picked as victim, include number of rollback
in cost factor
11. Analyze FCFS, preemptive and non-preemptive versions of Shortest-Job First and
Round Robin (time slice = 2) scheduling algorithms with Gantt charts for the four
processes given. Compare their average turnaround and waiting time.
Process Arrival Time Burst Time
P1 0 10
P2 1 6
P3 2 12
P4 3 15
PART C
11. Under what circumstances do page faults occur? Describe the actions taken by the OS
when a page fault occurs.
Page fault
 A page fault occurs when a process tries to access a page that was not brought into memory.
 Access to a page marked invalid causes a page-fault trap.
 The procedure for handling this page fault is straightforward.
Figure : Page table when some pages are not in main memory.
Actions taken by the OS when the page fault occurs.
Figure : Steps in handling a page fault
1. The OS checks an internal table to determine whether the reference was a valid or
invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid but the page is not
yet in main memory, page it in.
3. Find a free frame.
4. Read the desired page into the newly allocated frame.
5. Modify the table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the illegal-address trap. The
page can be accessed now.
12. Explain thrashing and the methods to avoid thrashing.
A process that is spending more time in paging than executing is said to be
thrashing.
Cause of Thrashing:
 When memory fills up and processes start spending lots of time waiting for
their pages to page in, CPU utilization drops, causing the scheduler to add
even more processes, and the system grinds to a halt.
To limit the effect of thrashing:
Local page replacement policies can prevent one thrashing process from taking pages away
from other processes.
Figure : Thrashing
 To prevent thrashing, a process must be provided with as many frames as it needs.
 The locality model states that processes typically access memory in a given
locality, making lots of references to the same general area of memory before moving
periodically to a new locality.
 If enough frames are allocated to cover the current locality, then page faulting would occur
primarily on switches from one locality to another.
Figure : Locality in a memory-reference pattern.
Methods to avoid thrashing
Working-Set Model
 The working-set model is based on the concept of locality, and defines a working-
set window of length delta. Whatever pages are included in the most recent delta
page references are said to be in the process's working-set window, and comprise
its current working set.
Figure : Working-set model.
 The selection of delta is critical to the success of the working-set model:
 If it is too small, it does not encompass all of the pages of the current
locality.
 If it is too large, it encompasses pages that are no longer being
frequently accessed.
 The total demand, D = sum of the sizes of the working sets of all processes.
 If D > the total number of available frames, then at least one process is
thrashing, because there are not enough frames available to satisfy its
minimum working set.
 If D < the number of currently available frames, then additional processes can
be launched.
 The hard part of the working-set model is keeping track of what pages are in the
current working set, since with every reference one page enters the window and
the oldest one leaves it.
 An approximation can be made using reference bits and a timer that goes off
after a set interval of memory references:
 For example, suppose that the timer is set to go off after every 5,000
references, and that two historical reference bits can be stored in
addition to the current reference bit.
 Every time the timer goes off, the current reference bit is copied to one of
the two historical bits and then cleared.
 If any of the three bits is set, then that page was referenced within the last
15,000 references and is considered to be in that process's working set.
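As a sketch, the working set at time t is simply the set of distinct pages in the last delta references; the reference string and delta below are made up for illustration:

```python
# Working set = distinct pages in the sliding window of the last `delta`
# references ending at time t (illustrative reference string).
def working_set(refs, delta, t):
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 3, 2, 1, 4, 4, 4, 5, 6]
ws = working_set(refs, delta=4, t=9)   # last four references: 4, 4, 5, 6
```

Summing the working-set sizes over all processes gives the total demand D that is compared against the number of available frames.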
Page-Fault Frequency
 A more direct approach is to recognize that what we really want to control is the
page-fault rate, and to allocate frames based on this directly measurable value.
 If the page-fault rate exceeds a certain upper bound then that process needs
more frames, and if it is below a given lower bound, then it can afford to give up
some of its frames to other processes.
Figure : Page-fault frequency.
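The policy reduces to a simple control rule; the bounds below are made-up numbers for illustration, not values any OS prescribes:

```python
# Page-fault-frequency control: keep each process's fault rate between
# two bounds by growing or shrinking its frame allocation (sketch).
def adjust_frames(frames, fault_rate, lower=0.02, upper=0.10):
    if fault_rate > upper:
        return frames + 1   # faulting too often: needs more frames
    if fault_rate < lower and frames > 1:
        return frames - 1   # faulting rarely: can give a frame away
    return frames
```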
12. (i). Compare internal and external fragmentation with examples. (5)
(ii). What is virtual memory? Mention its advantages. (5)
BASIS FOR COMPARISON | INTERNAL FRAGMENTATION | EXTERNAL FRAGMENTATION
Basic | Occurs when fixed-size memory blocks are allocated to processes. | Occurs when variable-size memory spaces are allocated to processes dynamically.
Occurrence | When the memory assigned to a process is slightly larger than the memory requested, the free space left inside the allocated block causes internal fragmentation. | When a process is removed from memory, the free space it leaves behind causes external fragmentation.
Solution | Partition memory into variable-sized blocks and assign the best-fit block to the process. | Compaction, paging, and segmentation.
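A small numeric illustration of both kinds (the block, request, and hole sizes are arbitrary):

```python
import math

# Internal fragmentation: with fixed 4 KB blocks, a 10000-byte request
# is rounded up to 3 blocks, wasting the remainder inside the allocation.
BLOCK = 4096
request = 10000
allocated = math.ceil(request / BLOCK) * BLOCK      # 12288 bytes
internal_frag = allocated - request                 # 2288 bytes wasted

# External fragmentation: 7000 bytes are free in total, but no single
# hole can hold a 6000-byte request.
holes = [3000, 4000]
need = 6000
can_allocate = any(h >= need for h in holes)        # False
```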
VIRTUAL MEMORY
 Virtual memory is a technique that allows the execution of processes that are not
completely in memory.
 One major advantage of this scheme is that programs can be larger than
physical memory.
 Further, virtual memory abstracts main memory into an extremely large,
uniform array of storage, separating logical memory as viewed by the user
from physical memory.
Advantages of Virtual memory
 Virtual memory allows processes to share files easily and to implement shared
memory.
 System libraries can be shared by several processes through mapping of the shared
object into a virtual address space. The actual pages where the libraries reside in
physical memory are shared by all the processes.
 Similarly, virtual memory enables processes to share memory: two or more
processes can communicate through the use of shared memory.
 Virtual memory can allow pages to be shared during process creation with the fork()
system call, thus speeding up process creation.
 Virtual memory is not easy to implement, however, and may substantially decrease
performance if it is used carelessly.
13. What is demand paging? Give its use.
 Consider how an executable program might be loaded from disk into memory.
 One option is to load the entire program in physical memory at program
execution time. However, a problem with this approach is that we may not
initially need the entire program in memory.
 Demand paging loads pages only as they are needed.
 A demand-paging system is similar to a paging system with swapping, where
processes reside in secondary memory (usually a disk).
 To execute a process, it must be swapped into memory. Instead of swapping in the
entire process, only the pages that are needed are swapped in; a lazy swapper is
used for this purpose.
 A lazy swapper is a swapper that never swaps a page into memory unless that page
will be needed.
 A swapper manipulates entire processes, whereas a pager is concerned with
the individual pages of a process. The term pager is used instead of swapper,
in connection with demand paging.
Figure: Transfer of a paged memory to contiguous disk space
Use:
 When a process is to be swapped in, the pager guesses which pages will be used
before the process is swapped out again.
 It avoids reading into memory pages that will not be used anyway, decreasing the
swap time and the amount of physical memory needed.
 Some form of hardware support is needed to distinguish between the pages that are
in memory and the pages that are on the disk.
 The valid-invalid bit scheme can be used for this purpose.
 This time, however, when this bit is set to "valid", the associated page is both
legal and in memory.
 If the bit is set to "invalid", the page either is not valid (that is, not in the
logical address space of the process) or is valid but is currently on the disk.
 The page-table entry for a page that is brought into memory is set as usual,
but the page-table entry for a page that is not currently in memory is either
simply marked invalid or contains the address of the page on disk.
 While the process executes and accesses pages that are memory resident, execution
proceeds normally.
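A sketch of what the valid-invalid bit distinguishes; the entries and disk addresses below are invented for illustration:

```python
# Page-table entries under demand paging: a valid bit plus either a
# frame number (resident) or a disk location (sketch, values invented).
VALID, INVALID = 1, 0

page_table = {
    0: (VALID, 5),            # legal and in memory, in frame 5
    1: (INVALID, "disk:88"),  # legal but currently on disk -> page fault
    # page 2 is absent: not in the logical address space at all
}

def classify(page):
    if page not in page_table:
        return "illegal"
    bit, _location = page_table[page]
    return "resident" if bit == VALID else "page fault"
```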
14. Write in detail about any 2 scheduling algorithms.
Scheduling Algorithms
1. First Come, First-Served Scheduling (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue
6. Multilevel Feedback Queue
1. First- Come, First-Served (FCFS) Scheduling
In this algorithm, the process that requests the CPU first is allocated the CPU first. This
implementation of the FCFS policy is easily managed with a FIFO queue.
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
 Suppose that the processes arrive in the order:
P2 ,P3 , P1
The Gantt chart for the schedule is:
| P2 | P3 | P1 |
0    3    6    30
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
Consider one CPU-bound and many I/O-bound processes
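The two averages above can be checked with a few lines, assuming all processes arrive at time 0:

```python
# FCFS: each process waits for the total burst time of those ahead of it.
def fcfs_waiting(bursts):
    waits, clock = {}, 0
    for name, burst in bursts:
        waits[name] = clock
        clock += burst
    return waits

w1 = fcfs_waiting([("P1", 24), ("P2", 3), ("P3", 3)])  # average 17
w2 = fcfs_waiting([("P2", 3), ("P3", 3), ("P1", 24)])  # average 3
```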
Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest integer =
highest priority)
 Preemptive.
 Non preemptive.
 SJF is priority scheduling where priority is the inverse of predicted next CPU burst
time.
 Problem Starvation – low priority processes may never execute
 Solution Aging – as time progresses increase the priority of the process
Example 1
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart:
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
Average waiting time = 8.2 msec
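The 8.2 msec figure can be reproduced by sorting on the priority number (non-preemptive, all processes arriving at time 0):

```python
# Non-preemptive priority scheduling: run in order of priority number
# (smallest number = highest priority); waiting time = start time.
def priority_waiting(procs):
    waits, clock = {}, 0
    for name, burst, _prio in sorted(procs, key=lambda p: p[2]):
        waits[name] = clock
        clock += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_waiting(procs)
avg = sum(waits.values()) / len(waits)   # 8.2
```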
6. Write short notes on operations on processes.
 The processes in most systems can execute concurrently, and they may be created and
deleted dynamically. Thus, the system must provide mechanisms for:
 Process creation
 Process termination
Process Creation
 Generally, process identified and managed via a process identifier (pid)
 During the course of execution, a process may create several new processes.
 The creating process is called a parent process, and
 The new processes are called the children of that process.
 Each of these new processes may in turn create other processes, forming a tree of
processes.
 Most operating systems (including UNIX, Linux, and Windows) identify processes
according to a unique process identifier (or pid), which is typically an integer number.
Figure illustrates a typical process tree for the Linux operating system, showing the name
of each process and its pid. The init process (which always has a pid of 1) serves as the root
parent process for all user processes.
Once the system has booted, the init process can also create various user processes, such
as a web or print server, an ssh server, and the like. There are two children of init: kthreadd
and sshd.
 The kthreadd process is responsible for creating additional processes that perform tasks
on behalf of the kernel (in this situation, khelper and pdflush).
 The sshd process is responsible for managing clients that connect to the system by using
ssh (which is short for secure shell).
The login process is responsible for managing clients that directly log onto the system.
In this example, a client has logged on and is using the bash shell, which has been
assigned pid 8416.
Figure : A tree of processes on a typical Linux system.
Resource sharing options
 Parent and children share all resources
 Children share subset of parent’s resources
 Parent and child share no resources
When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a new program loaded into it.
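On UNIX these possibilities show up directly in fork()/exec()/wait(). The sketch below (UNIX-only; the exit status 7 is chosen arbitrarily) has the child run as a duplicate of the parent while the parent waits:

```python
import os

# fork() duplicates the calling process; the parent waits for the child.
pid = os.fork()
if pid == 0:
    # Child: a duplicate of the parent. An os.exec* call here would
    # instead load a new program into this process.
    os._exit(7)                      # terminate with an arbitrary status
else:
    _, status = os.waitpid(pid, 0)   # parent blocks until the child exits
    child_status = os.WEXITSTATUS(status)
```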
Process Termination
A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit() system call.
 At that point, the process may return a status value (typically an integer) to its parent
process (via the wait() system call).
 All the resources of the process—including physical and virtual memory, open files, and
I/O buffers—are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
 The child has exceeded its usage of some of the resources that it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.
If a process terminates (either normally or abnormally), then all its children must also be
terminated. This phenomenon, referred to as cascading termination, is normally initiated by the
operating system.
When a process terminates, its resources are deallocated by the operating system.
However, its entry in the process table must remain there until the parent calls wait (), because
the process table contains the process’s exit status.
A process that has terminated, but whose parent has not yet called wait(), is known as a
zombie process. All processes transition to this state when they terminate, but generally they
exist as zombies only briefly. Once the parent calls wait(), the process identifier of the
zombie process and its entry in the process table are released.
If a parent terminates without invoking wait(), its child processes are left as
orphans.
7. Write short notes on threading issues
a. Many to one
b. One to one
c. Many to many
d. Two level
a. Many-to-One
 Many user-level threads mapped to single kernel thread
 One thread blocking causes all to block
 Multiple threads may not run in parallel on a multicore system because only one may be in
the kernel at a time
 Few systems currently use this model.
Figure 2.15 Many-to-one model.
 Examples:
a. Solaris Green Threads
b. GNU Portable Threads
b. One-to-One
 Each user-level thread maps to kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 Number of threads per process sometimes restricted due to overhead
 Examples:
a. Windows
b. Linux
c. Solaris 9 and later
Figure 2.16 One-to-one model.
c. Many-to-Many Model
 Allows many user level threads to be mapped to many kernel threads
 Allows the operating system to create a sufficient number of kernel threads
 Solaris prior to version 9
 Windows with the Thread Fiber package.
Advantages
 Developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor
 When a thread performs a blocking system call, the kernel can schedule another thread
for execution
Figure 2.17 Many-to-many model.
d.Two-level Model
 Similar to Many to many, except that it allows a user thread to be bound to kernel thread
 Examples
a. IRIX
b. HP-UX
c. Tru64 UNIX
d. Solaris 8 and earlier
Figure 2.18 Two-level model

More Related Content

What's hot

Feng’s classification
Feng’s classificationFeng’s classification
Feng’s classificationNarayan Kandel
 
Cse viii-advanced-computer-architectures-06cs81-solution
Cse viii-advanced-computer-architectures-06cs81-solutionCse viii-advanced-computer-architectures-06cs81-solution
Cse viii-advanced-computer-architectures-06cs81-solutionShobha Kumar
 
program partitioning and scheduling IN Advanced Computer Architecture
program partitioning and scheduling  IN Advanced Computer Architectureprogram partitioning and scheduling  IN Advanced Computer Architecture
program partitioning and scheduling IN Advanced Computer ArchitecturePankaj Kumar Jain
 
Ch02 early system memory management
Ch02 early system  memory managementCh02 early system  memory management
Ch02 early system memory managementJacob Cadeliña
 
Unit II - 1 - Operating System Process
Unit II - 1 - Operating System ProcessUnit II - 1 - Operating System Process
Unit II - 1 - Operating System Processcscarcas
 
Computer memory management
Computer memory managementComputer memory management
Computer memory managementKumar
 
Ch9 OS
Ch9 OSCh9 OS
Ch9 OSC.U
 
Introduction of Memory Management
Introduction of Memory Management Introduction of Memory Management
Introduction of Memory Management Maitree Patel
 

What's hot (19)

CO Module 5
CO Module 5CO Module 5
CO Module 5
 
Feng’s classification
Feng’s classificationFeng’s classification
Feng’s classification
 
OS_Ch9
OS_Ch9OS_Ch9
OS_Ch9
 
Chapter 8 - Main Memory
Chapter 8 - Main MemoryChapter 8 - Main Memory
Chapter 8 - Main Memory
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
Report
ReportReport
Report
 
Cse viii-advanced-computer-architectures-06cs81-solution
Cse viii-advanced-computer-architectures-06cs81-solutionCse viii-advanced-computer-architectures-06cs81-solution
Cse viii-advanced-computer-architectures-06cs81-solution
 
Memory+management
Memory+managementMemory+management
Memory+management
 
program partitioning and scheduling IN Advanced Computer Architecture
program partitioning and scheduling  IN Advanced Computer Architectureprogram partitioning and scheduling  IN Advanced Computer Architecture
program partitioning and scheduling IN Advanced Computer Architecture
 
Main Memory
Main MemoryMain Memory
Main Memory
 
Ch02 early system memory management
Ch02 early system  memory managementCh02 early system  memory management
Ch02 early system memory management
 
Bt0070
Bt0070Bt0070
Bt0070
 
Unit II - 1 - Operating System Process
Unit II - 1 - Operating System ProcessUnit II - 1 - Operating System Process
Unit II - 1 - Operating System Process
 
Computer memory management
Computer memory managementComputer memory management
Computer memory management
 
Ch9 OS
Ch9 OSCh9 OS
Ch9 OS
 
Process Management
Process ManagementProcess Management
Process Management
 
Memory management
Memory managementMemory management
Memory management
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
Introduction of Memory Management
Introduction of Memory Management Introduction of Memory Management
Introduction of Memory Management
 

Similar to OS_UNIT_2 Study Materials (1).docx

Operating Systems Unit Two - Fourth Semester - Engineering
Operating Systems Unit Two - Fourth Semester - EngineeringOperating Systems Unit Two - Fourth Semester - Engineering
Operating Systems Unit Two - Fourth Semester - EngineeringYogesh Santhan
 
Ch2_Processes_and_process_management_1.ppt
Ch2_Processes_and_process_management_1.pptCh2_Processes_and_process_management_1.ppt
Ch2_Processes_and_process_management_1.pptMohammad Almuiet
 
Operating system Q/A
Operating system Q/AOperating system Q/A
Operating system Q/AAbdul Munam
 
Operating system - Process and its concepts
Operating system - Process and its conceptsOperating system - Process and its concepts
Operating system - Process and its conceptsKaran Thakkar
 
operating system question bank
operating system question bankoperating system question bank
operating system question bankrajatdeep kaur
 
Introduction to Operating System (Important Notes)
Introduction to Operating System (Important Notes)Introduction to Operating System (Important Notes)
Introduction to Operating System (Important Notes)Gaurav Kakade
 
OperatingSystem02..(B.SC Part 2)
OperatingSystem02..(B.SC Part 2)OperatingSystem02..(B.SC Part 2)
OperatingSystem02..(B.SC Part 2)Muhammad Osama
 
operating system for computer engineering ch3.ppt
operating system for computer engineering ch3.pptoperating system for computer engineering ch3.ppt
operating system for computer engineering ch3.pptgezaegebre1
 
OS - Process Concepts
OS - Process ConceptsOS - Process Concepts
OS - Process ConceptsMukesh Chinta
 
Unit 2 part 1(Process)
Unit 2 part 1(Process)Unit 2 part 1(Process)
Unit 2 part 1(Process)WajeehaBaig
 
OS_Unit II - Process Management_CATI.pptx
OS_Unit II - Process Management_CATI.pptxOS_Unit II - Process Management_CATI.pptx
OS_Unit II - Process Management_CATI.pptxGokhul2
 
Operating system || Chapter 3: Process
Operating system || Chapter 3: ProcessOperating system || Chapter 3: Process
Operating system || Chapter 3: ProcessAnkonGopalBanik
 

Similar to OS_UNIT_2 Study Materials (1).docx (20)

UNIT - 3 PPT(Part- 1)_.pdf
UNIT - 3 PPT(Part- 1)_.pdfUNIT - 3 PPT(Part- 1)_.pdf
UNIT - 3 PPT(Part- 1)_.pdf
 
Operating Systems Unit Two - Fourth Semester - Engineering
Operating Systems Unit Two - Fourth Semester - EngineeringOperating Systems Unit Two - Fourth Semester - Engineering
Operating Systems Unit Two - Fourth Semester - Engineering
 
Ch03- PROCESSES.ppt
Ch03- PROCESSES.pptCh03- PROCESSES.ppt
Ch03- PROCESSES.ppt
 
Ch2_Processes_and_process_management_1.ppt
Ch2_Processes_and_process_management_1.pptCh2_Processes_and_process_management_1.ppt
Ch2_Processes_and_process_management_1.ppt
 
Operating system Q/A
Operating system Q/AOperating system Q/A
Operating system Q/A
 
Operating system - Process and its concepts
Operating system - Process and its conceptsOperating system - Process and its concepts
Operating system - Process and its concepts
 
operating system question bank
operating system question bankoperating system question bank
operating system question bank
 
Chapter 3.pdf
Chapter 3.pdfChapter 3.pdf
Chapter 3.pdf
 
Introduction to Operating System (Important Notes)
Introduction to Operating System (Important Notes)Introduction to Operating System (Important Notes)
Introduction to Operating System (Important Notes)
 
OperatingSystem02..(B.SC Part 2)
OperatingSystem02..(B.SC Part 2)OperatingSystem02..(B.SC Part 2)
OperatingSystem02..(B.SC Part 2)
 
Completeosnotes
CompleteosnotesCompleteosnotes
Completeosnotes
 
operating system for computer engineering ch3.ppt
operating system for computer engineering ch3.pptoperating system for computer engineering ch3.ppt
operating system for computer engineering ch3.ppt
 
Complete Operating System notes
Complete Operating System notesComplete Operating System notes
Complete Operating System notes
 
The process states
The process statesThe process states
The process states
 
OS - Process Concepts
OS - Process ConceptsOS - Process Concepts
OS - Process Concepts
 
Process
ProcessProcess
Process
 
Unit 2 part 1(Process)
Unit 2 part 1(Process)Unit 2 part 1(Process)
Unit 2 part 1(Process)
 
OS_Unit II - Process Management_CATI.pptx
OS_Unit II - Process Management_CATI.pptxOS_Unit II - Process Management_CATI.pptx
OS_Unit II - Process Management_CATI.pptx
 
unit-2.pdf
unit-2.pdfunit-2.pdf
unit-2.pdf
 
Operating system || Chapter 3: Process
Operating system || Chapter 3: ProcessOperating system || Chapter 3: Process
Operating system || Chapter 3: Process
 

More from GayathriRHICETCSESTA

More from GayathriRHICETCSESTA (20)

nncollovcapaldo2013-131220052427-phpapp01.pdf
nncollovcapaldo2013-131220052427-phpapp01.pdfnncollovcapaldo2013-131220052427-phpapp01.pdf
nncollovcapaldo2013-131220052427-phpapp01.pdf
 
Smart material - Unit 3 (1).pdf
Smart material - Unit 3 (1).pdfSmart material - Unit 3 (1).pdf
Smart material - Unit 3 (1).pdf
 
Unit 2 notes.pdf
Unit 2 notes.pdfUnit 2 notes.pdf
Unit 2 notes.pdf
 
CS8601-IQ.pdf
CS8601-IQ.pdfCS8601-IQ.pdf
CS8601-IQ.pdf
 
CS8601-QB.pdf
CS8601-QB.pdfCS8601-QB.pdf
CS8601-QB.pdf
 
Smart material - Unit 2 (1).pdf
Smart material - Unit 2 (1).pdfSmart material - Unit 2 (1).pdf
Smart material - Unit 2 (1).pdf
 
Smart material - Unit 3 (2).pdf
Smart material - Unit 3 (2).pdfSmart material - Unit 3 (2).pdf
Smart material - Unit 3 (2).pdf
 
Annexure 2 .pdf
Annexure 2  .pdfAnnexure 2  .pdf
Annexure 2 .pdf
 
cs621-lect18-feedforward-network-contd-2009-9-24.ppt
cs621-lect18-feedforward-network-contd-2009-9-24.pptcs621-lect18-feedforward-network-contd-2009-9-24.ppt
cs621-lect18-feedforward-network-contd-2009-9-24.ppt
 
ann-ics320Part4.ppt
ann-ics320Part4.pptann-ics320Part4.ppt
ann-ics320Part4.ppt
 
Smart material - Unit 2 (1).pdf
Smart material - Unit 2 (1).pdfSmart material - Unit 2 (1).pdf
Smart material - Unit 2 (1).pdf
 
nncollovcapaldo2013-131220052427-phpapp01.pdf
nncollovcapaldo2013-131220052427-phpapp01.pdfnncollovcapaldo2013-131220052427-phpapp01.pdf
nncollovcapaldo2013-131220052427-phpapp01.pdf
 
alumni form.pdf
alumni form.pdfalumni form.pdf
alumni form.pdf
 
Semester VI.pdf
Semester VI.pdfSemester VI.pdf
Semester VI.pdf
 
cs621-lect18-feedforward-network-contd-2009-9-24.ppt
cs621-lect18-feedforward-network-contd-2009-9-24.pptcs621-lect18-feedforward-network-contd-2009-9-24.ppt
cs621-lect18-feedforward-network-contd-2009-9-24.ppt
 
ann-ics320Part4.ppt
ann-ics320Part4.pptann-ics320Part4.ppt
ann-ics320Part4.ppt
 
Semester V-converted.pdf
Semester V-converted.pdfSemester V-converted.pdf
Semester V-converted.pdf
 
Elective II.pdf
Elective II.pdfElective II.pdf
Elective II.pdf
 
Unit IV Notes.docx
Unit IV Notes.docxUnit IV Notes.docx
Unit IV Notes.docx
 
unit 1 and 2 qb.docx
unit 1 and 2 qb.docxunit 1 and 2 qb.docx
unit 1 and 2 qb.docx
 

Recently uploaded

Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxfenichawla
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Christo Ananth
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...roncy bisnoi
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...SUHANI PANDEY
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...tanu pandey
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapRishantSharmaFr
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLManishPatel169454
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01KreezheaRecto
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night StandCall Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Standamitlee9823
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 

Recently uploaded (20)

Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...
 
OS_UNIT_2 Study Materials (1).docx

UNIT - II
PART- B

3. What is a critical section problem? List the requirements to meet out CS and explain Peterson's solution for CS.

Critical section problem
 Consider a system of n processes {P0, P1, …, Pn-1}
 Each process has a critical section segment of code
   a. The process may be changing common variables, updating a table, writing a file, etc.
   b. When one process is in its critical section, no other process may be in its critical section
 The critical-section problem is to design a protocol to solve this
 Each process must ask permission to enter its critical section in the entry section, may follow the critical section with an exit section, then the remainder section

General structure of process Pi

Requirements for the solution of the Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption is made concerning the relative speed of the n processes

Critical-Section Handling in OS
Two approaches, depending on whether the kernel is preemptive or non-preemptive:
1. Preemptive – allows preemption of a process when running in kernel mode
2. Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU
 Essentially free of race conditions in kernel mode

Two-process Solutions:
Peterson's Solution:
 A good algorithmic description of solving the problem
 A two-process solution
 Assume that the load and store machine-language instructions are atomic; that is, they cannot be interrupted
 The two processes share two variables:
   a. int turn;
   b. boolean flag[2];
 The variable turn indicates whose turn it is to enter the critical section
 The flag array is used to indicate if a process is ready to enter the critical section; flag[i] = true implies that process Pi is ready

Algorithm for process Pi:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;                /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);

Provable that the three CS requirements are met:
1. Mutual exclusion is preserved: Pi enters its CS only if either flag[j] == false or turn == i
2. The progress requirement is satisfied
3. The bounded-waiting requirement is met

5. With a neat sketch explain how a logical address is translated into a physical address using the paging mechanism.
PAGING
 Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous.
 Paging avoids external fragmentation and the need for compaction, whereas segmentation does not.

Basic Method
 The basic method for implementing paging involves
    breaking physical memory into fixed-sized blocks called frames
    breaking logical memory into blocks of the same size called pages.
 When a process is to be executed, its pages are loaded into any available memory frames from the backing store.
 The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
 Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
 The page number is used as an index into a page table.
 The page table contains the base address of each page in physical memory.
 This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
Figure: Paging hardware showing translation of a logical address into a physical address.

Conversion of logical address to physical address:
 The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
   o Logical address - generated by the CPU; also referred to as a virtual address.
   o Physical address - the address seen by the memory unit.
 Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
 If the logical address space is 2^m and the page size is 2^n, then the high-order m-n bits of a logical address designate the page number and the remaining n bits represent the page offset.
 Thus, the logical address is as follows:

    | page number p (m-n bits) | page offset d (n bits) |

where p is an index into the page table and d is the displacement within the page.
Figure: Paging model of logical and physical memory

6. Write in detail about process and process concept.

PROCESSES AND PROCESS CONCEPT
An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks

Process – a program in execution; process execution must progress in sequential fashion.
A process has multiple parts:
1. The program code, also called the text section
2. Current activity, including the program counter and processor registers
3. Stack containing temporary data (function parameters, return addresses, local variables)
4. Data section containing global variables
5. Heap containing memory dynamically allocated during run time

 A program is a passive entity stored on disk (an executable file); a process is active
 A program becomes a process when the executable file is loaded into memory
 Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
 One program can be several processes
    Consider multiple users executing the same program

Process in memory
Figure 2.1 Process in memory.

Process States
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
1. new: The process is being created
2. running: Instructions are being executed
3. waiting: The process is waiting for some event to occur
4. ready: The process is waiting to be assigned to a processor
5. terminated: The process has finished execution

Figure 2.2 Process states

Process Control Block (PCB)
Each process is represented in the operating system by a process control block (PCB), also called a task control block. A PCB is shown in Figure 2.3. It contains:
1. Process state – may be new, ready, running, waiting, terminated, etc.
2. Program counter – the location of the next instruction to execute
3. CPU registers – the contents of all process-centric registers
4. CPU-scheduling information – priorities, scheduling-queue pointers
5. Memory-management information – memory allocated to the process. This information may include such items as the value of the base and limit registers and
the page tables, or the segment tables, depending on the memory system used by the operating system
6. Accounting information – the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on
7. I/O status information – I/O devices allocated to the process, the list of open files, and so on

Figure 2.3 Process control block (PCB).

Figure: Diagram showing CPU switch from process to process
7. Write a detailed note on process scheduling.

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU.
 To maximize CPU use, quickly switch processes onto the CPU for time sharing
 For a single-processor system, there will never be more than one running process
 If there are more processes, the rest have to wait until the CPU is free and can be rescheduled

Scheduling Queues
The operating system maintains scheduling queues of processes:
1. Job queue – the set of all processes in the system
2. Ready queue – the set of all processes residing in main memory, ready and waiting to execute
3. Device queues – the set of processes waiting for an I/O device
Processes migrate among the various queues.

Figure: The ready queue and various I/O device queues.
As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.

The system also includes other queues. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. Suppose the process makes an I/O request to a shared device, such as a disk. Since there are many processes in the system, the disk may be busy with the I/O request of some other process. The process therefore may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.

Representation of Process Scheduling
A common representation of process scheduling is a queueing diagram, which shows queues, resources, and flows.

Figure: Queueing-diagram representation of process scheduling.

Two types of queues are present:
 The ready queue and a set of device queues.
 The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
 A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
 Once the process is allocated the CPU and is executing, one of several events could occur:
    The process could issue an I/O request and then be placed in an I/O queue.
    The process could create a new child process and wait for the child's termination.
    The process could be removed by force from the CPU, as a result of an interrupt, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.

Schedulers
The operating system selects processes from queues for scheduling. The selection is carried out by the appropriate scheduler. There are three types of schedulers:
1. Short-Term Scheduler (STS), or CPU scheduler
2. Long-Term Scheduler (LTS), or job scheduler
3. Medium-Term Scheduler

1. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
 Sometimes the only scheduler in a system
 Invoked frequently (milliseconds), so it must be fast
 Selects from among the processes that are ready to execute and allocates the CPU to one of them
 A process may execute for only a few milliseconds before waiting for an I/O request
 Executes at least once every 100 milliseconds
 Because of the short time between executions, it must be fast
For example: if it takes 10 milliseconds to decide to execute a process for 100 milliseconds, then 10/(100 + 10) = 9 percent of the CPU is being used (wasted) simply for scheduling the work.

2. Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
 Invoked infrequently (seconds, minutes), so it may be slow
 Controls the degree of multiprogramming
In general, most processes can be described as either I/O bound or CPU bound:
 I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
 CPU-bound process – spends more time doing computations; few very long CPU bursts
The long-term scheduler strives for a good process mix.

3. Medium-term scheduler
The long-term scheduler should select a good mix of I/O-bound and CPU-bound processes:
 If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do.
 If all processes are CPU bound, the I/O waiting queues will almost always be empty, devices will go unused, and again the system will be unbalanced.
Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling called the medium-term scheduler.
The key idea behind a medium-term scheduler is that it can be advantageous to remove a process from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and later swapped in, by the medium-term scheduler. Swapping may be necessary to improve the process mix.
 The medium-term scheduler can be added if the degree of multiprogramming needs to decrease
 Remove a process from memory, store it on disk, and bring it back in from disk to continue execution: swapping

Figure 2.7 Addition of medium-term scheduling to the queueing diagram.

8. Explain in detail about inter-process communication.
 Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent.
 A process is cooperating if it can affect or be affected by the other processes executing in the system, including by sharing data.
 Reasons for providing an environment that allows process cooperation:
    Information sharing - an environment must be provided to allow concurrent access to shared information (a shared file, etc.)
    Computation speedup - to speed up a task, it is divided into subtasks that execute in parallel
    Modularity - dividing the system functions into separate processes or threads
    Convenience - even an individual user may work on many tasks at the same time
 Cooperating processes need an inter-process communication (IPC) mechanism.

Communication Models
Two models of IPC:
 Shared memory
 Message passing

Figure: Communications models. (a) Message passing. (b) Shared memory.

Communication Model – Shared Memory
Inter-process communication using shared memory requires communicating processes to establish a region of shared memory that resides in the address space of the process creating the shared-memory segment. Other processes then attach it to their address space.
They can then exchange information by reading and writing data in the shared area; this communication is not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.

To illustrate the concept of cooperating processes, consider the producer–consumer problem, which is a common paradigm for cooperating processes.
 A producer process produces information that is consumed by a consumer process. For example, a compiler may produce assembly code that is consumed by an assembler. The assembler, in turn, may produce object modules that are consumed by the loader.
 Two types of processes, producers and consumers, wish to communicate.
 Producers have information to send to the consumers. They place the information in a bounded (limited-size) buffer, where consumers can obtain it.
 The buffer is a shared resource; use of the buffer must be synchronized. Each access to the buffer must be done in a critical section.
 When the buffer is full, producers cannot place any more items in it. They therefore go to sleep, to be awakened when a consumer removes an item.

One solution to the producer–consumer problem uses shared memory. To allow producer and consumer processes to run concurrently, a buffer of items must be available that can be filled by the producer and emptied by the consumer. This buffer will reside in a region of memory that is shared by the producer and consumer processes. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
The code for the producer process:

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

The code for the consumer process:

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

The producer has a local variable next_produced in which the new item to be produced is stored. The consumer process has a local variable next_consumed in which the item to be consumed is stored. This scheme allows at most BUFFER_SIZE - 1 items in the buffer at the same time.

Communication Model – Message Passing
Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. A message-passing facility provides at least two operations:
 send(message)
 receive(message)
Messages sent by a process can be either fixed or variable in size.
 If only fixed-sized messages can be sent, the system-level implementation is straightforward, but it makes the task of programming more difficult.
 Conversely, variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler.
If processes P and Q want to communicate, they must send messages to and receive messages from each other: i.e., a communication link must exist between them. This link can be implemented in a variety of ways. The issues for implementing a link and the send()/receive() operations are:
 Direct or indirect communication
 Synchronous or asynchronous communication
 Automatic or explicit buffering

Direct Communication
Each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:
 send(P, message) – Send a message to process P.
 receive(Q, message) – Receive a message from process Q.
A communication link in this scheme has the following properties:
 A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.
 A link is associated with exactly two processes.
 Between each pair of processes, there exists exactly one link.

1. This scheme exhibits symmetry in addressing; that is, both the sender process and the receiver process must name the other to communicate.
2. A variant of this scheme employs asymmetry in addressing. Here, only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send() and receive() primitives are defined as follows:
   a. send(P, message) – Send a message to process P.
   b. receive(id, message) – Receive a message from any process. The variable id is set to the name of the process with which communication has taken place.

The disadvantage in both of these schemes (symmetric and asymmetric) is the limited modularity of the resulting process definitions: changing the identifier of a process means that all references to the old identifier must be found, so that they can be modified to the new identifier. In general, any such hard-coding techniques, where identifiers must be explicitly stated, are less desirable than techniques involving indirection.

Indirect Communication
With indirect communication, messages are sent to and received from mailboxes, or ports. Each mailbox has a unique identification. A process can communicate with another process via a number of different mailboxes, but two processes can communicate only if they have a shared mailbox. The send() and receive() primitives are defined as follows:
 send(A, message) – Send a message to mailbox A.
 receive(A, message) – Receive a message from mailbox A.

A communication link in this scheme has the following properties:
 A link is established between a pair of processes only if both members of the pair have a shared mailbox.
 A link may be associated with more than two processes.
 Between each pair of communicating processes, a number of different links may exist, with each link corresponding to one mailbox.

Suppose that processes P1, P2, and P3 all share mailbox A:
 Process P1 sends a message to A, while both P2 and P3 execute a receive() from A.
 Which process will receive the message sent by P1?
The answer depends on which of the following methods the system chooses:
1. Allow a link to be associated with at most two processes.
2. Allow at most one process at a time to execute a receive() operation.
3. Allow the system to select arbitrarily which process will receive the message (that is, either P2 or P3, but not both, will receive the message).

The operating system must also provide a mechanism that allows a process to do the following:
1. Create a new mailbox.
2. Send and receive messages through the mailbox.
3. Delete a mailbox.

Synchronization
Message passing may be blocking or non-blocking, also known as synchronous and asynchronous.
Blocking is considered synchronous:
 Blocking send. The sending process is blocked until the message is received.
 Blocking receive. The receiver blocks until a message is available.
Non-blocking is considered asynchronous:
 Non-blocking send. The sending process sends the message and resumes operation.
 Non-blocking receive. The receiver retrieves either a valid message or a null.
Different combinations of send() and receive() are possible. When both send() and receive() are blocking, a rendezvous takes place between the sender and the receiver. The solution to the producer–consumer problem becomes trivial when blocking send() and receive() statements are used.

Buffering
Messages exchanged by communicating processes reside in a temporary queue, which can be implemented in three ways:
 Zero capacity: The queue has a maximum length of zero; the link cannot have any messages waiting in it. The sender must block until the recipient receives the message (rendezvous).
 Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting. If the link is full, the sender must block until space is available in the queue.
 Unbounded capacity: The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.
9. Write in detail about threads.

A thread is a basic unit of CPU utilization:
 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.

Motivation
 Most software applications that run on modern computers are multithreaded.
 An application typically runs as a separate process with several threads of control.
 If a process has multiple threads of control, it can perform more than one task at a time.
Figure 2.10 illustrates the difference between a traditional single-threaded process and a multithreaded process.

Figure: Single-threaded and multithreaded processes.

For example:
 A web browser might have one thread display images or text while another thread retrieves data from the network.
 A word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.

In a single-threaded server, when the server receives a request, it creates a separate process to service that request. This process-creation method was in common use before threads became popular. Process creation is
 Time consuming and
 Resource intensive.
It is generally more efficient to use one process that contains multiple threads. If the web-server process is multithreaded, the server will create a separate thread that listens for client requests. When a request is made, rather than creating another process, the server creates a new thread to service the request and resumes listening for additional requests.

Threads also play a vital role in remote procedure call (RPC) systems. RPC servers are typically multithreaded. When a server receives a message, it services the message using a separate thread. This allows the server to service several concurrent requests.

Figure: Multithreaded server architecture.

Finally, most operating-system kernels are multithreaded. Several threads operate in the kernel, and each thread performs a specific task, such as managing devices, managing memory, or interrupt handling.

Benefits
1. Responsiveness – Multithreading an interactive application may allow continued execution if part of the process is blocked, thereby increasing responsiveness to the user. This is especially important for user interfaces.
2. Resource Sharing – Threads share the resources of their process; this is easier than shared memory or message passing.
3. Economy – Thread creation is cheaper than process creation, and thread switching has lower overhead than context switching.
4. Scalability – A multithreaded process can take advantage of multiprocessor architectures, where threads may run in parallel on different processing cores. A single-threaded process can run on only one processor, regardless of how many are available.

10. Explain in detail about deadlock detection and deadlock recovery.

DEADLOCKS
In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes. This situation is called a deadlock.

Deadlock Detection
A system that uses deadlock detection may:
 Allow the system to enter a deadlock state
 Run a detection algorithm
 Apply a recovery scheme

Single Instance of Each Resource Type
 Maintain a wait-for graph
    Nodes are processes
    An edge Pi -> Pj exists if Pi is waiting for Pj
 Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
 An algorithm to detect a cycle in a graph requires on the order of n^2 operations, where n is the number of vertices in the graph.

Figure: (a) Resource-allocation graph. (b) Corresponding wait-for graph.

Several Instances of a Resource Type
 Available: A vector of length m indicates the number of available resources of each type
 Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process
 Request: An n x m matrix indicates the current request of each process. If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   a) Work = Available
   b) For i = 1, 2, …, n, if Allocation_i != 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
   a) Finish[i] == false
   b) Request_i <= Work
   If no such i exists, go to step 4
3. Work = Work + Allocation_i
   Finish[i] = true
   Go to step 2
4. If Finish[i] == false for some i, 1 <= i <= n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then Pi is deadlocked.

Example of Detection Algorithm
 Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C (6 instances)
 Snapshot at time T0:

         Allocation   Request   Available
         A B C        A B C     A B C
    P0   0 1 0        0 0 0     0 0 0
    P1   2 0 0        2 0 2
    P2   3 0 3        0 0 0
    P3   2 1 1        1 0 0
    P4   0 0 2        0 0 2

 The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
 Now suppose P2 requests an additional instance of type C:

         Request
         A B C
    P0   0 0 0
    P1   2 0 2
    P2   0 0 1
    P3   1 0 0
    P4   0 0 2

 State of the system?
 We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests
 A deadlock exists, consisting of processes P1, P2, P3, and P4

Detection-Algorithm Usage
 When, and how often, to invoke the algorithm depends on:
 How often is a deadlock likely to occur?
 How many processes will need to be rolled back?
    One for each disjoint cycle
 If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph, and we would not be able to tell which of the many deadlocked processes "caused" the deadlock.

Deadlock Recovery
i. Recovery using Process Termination
 Abort all deadlocked processes
 Abort one process at a time until the deadlock cycle is eliminated
 In which order should we choose to abort?
   1. Priority of the process
   2. How long the process has computed, and how much longer until completion
   3. Resources the process has used
   4. Resources the process needs to complete
   5. How many processes will need to be terminated
   6. Is the process interactive or batch?

ii. Recovery using Resource Preemption
 Selecting a victim – minimize cost
 Rollback – return to some safe state, restart the process from that state
 Starvation – the same process may always be picked as the victim; include the number of rollbacks in the cost factor

11. Analyze FCFS, preemptive and non-preemptive versions of Shortest-Job-First, and Round Robin (time slice = 2) scheduling algorithms with Gantt charts for the four processes given. Compare their average turnaround and waiting times.

Process   Arrival Time   Burst Time
P1        0              10
P2        1              6
P3        2              12
P4        3              15
PART C
11. Under what circumstances do page faults occur? Describe the actions taken by the OS when a page fault occurs.

Page fault
 A page fault occurs if a process tries to access a page that was not brought into memory.
 Access to a page marked invalid causes a page-fault trap.
 The procedure for handling this page fault is straightforward.

Figure: Page table when some pages are not in main memory.
Actions taken by the OS when a page fault occurs:

Figure: Steps in handling a page fault

1. The OS checks an internal table to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid but the page is not yet in main memory, page it in.
3. Find a free frame.
4. Swap the page into the newly allocated frame.
5. Modify the table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the illegal-address trap. The page can now be accessed.

12. Explain thrashing and the methods to avoid thrashing.

A process that is spending more time paging than executing is said to be thrashing.

Cause of Thrashing:
 When memory fills up and processes start spending lots of time waiting for their pages to be paged in, CPU utilization drops, causing the scheduler to add in even more processes, and the system grinds to a halt.
To limit the effect of thrashing, local page-replacement policies can prevent one thrashing process from taking pages away from other processes.
Figure: Thrashing
 To prevent thrashing, a process should be provided with as many frames as it needs.
 The locality model states that processes typically access memory in a given locality, making many references to the same general area of memory before moving periodically to a new locality.
 If enough frames are allocated to hold the current locality, then page faulting occurs primarily on switches from one locality to another.
Figure: Locality in a memory-reference pattern.
Methods to avoid thrashing
Working-Set Model
 The working-set model is based on the concept of locality and defines a working-set window of length delta. Whatever pages are included in the most recent delta page references are said to be in the process's working-set window and comprise its current working set.
Figure: Working-set model.
 The selection of delta is critical to the success of the working-set model:
 If it is too small, it does not encompass all of the pages of the current locality.
 If it is too large, it encompasses pages that are no longer being frequently accessed.
 The total demand, D = sum of the sizes of the working sets of all processes.
 If D > the total number of available frames, then at least one process is thrashing, because there are not enough frames available to satisfy its minimum working set.
 If D < the number of currently available frames, then additional processes can be launched.
 The hard part of the working-set model is keeping track of which pages are in the current working set, since every reference adds a page to the set and removes an older one.
 An approximation can be made using reference bits and a timer that goes off after a set interval of memory references.
 For example, suppose that the timer is set to go off after every 5000 references (by any process), and that we can store two additional historical reference bits alongside the current reference bit.
 Every time the timer goes off, the current reference bit is copied to one of the two historical bits and then cleared.
 If any of the three bits is set, then that page was referenced within the last 15,000 references and is considered to be in that process's working set.
Page-Fault Frequency
 A more direct approach is to recognize that what we really want to control is the page-fault rate, and to allocate frames based on this directly measurable value.
 If the page-fault rate exceeds a certain upper bound, then that process needs more frames; if it falls below a given lower bound, the process can afford to give up some of its frames to other processes.
Figure: Page-fault frequency.
12. (i). Compare internal and external fragmentation with examples. (5)
(ii). What is virtual memory? Mention its advantages. (5)
Comparison: internal vs. external fragmentation
Basis
 Internal fragmentation occurs when fixed-sized memory blocks are allocated to processes.
 External fragmentation occurs when variable-sized memory spaces are allocated to processes dynamically.
Occurrence
 Internal fragmentation: when the memory assigned to a process is slightly larger than the memory it requested, the free space left inside the allocated block is internal fragmentation.
 External fragmentation: when a process is removed from memory, it leaves free space between allocations, causing external fragmentation.
Solution
 Internal fragmentation: partition memory into variable-sized blocks and assign the best-fit block to the process.
 External fragmentation: compaction, paging, and segmentation.
VIRTUAL MEMORY
 Virtual memory is a technique that allows the execution of processes that are not completely in memory.
 One major advantage of this scheme is that programs can be larger than physical memory.
 Further, virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.
Advantages of virtual memory
 Virtual memory allows processes to share files easily and to implement shared memory.
 Virtual memory is not easy to implement, however, and may substantially decrease performance if used carelessly.
 System libraries can be shared by several processes through mapping of the shared object into a virtual address space. The actual pages where the libraries reside in physical memory are shared by all the processes.
 Similarly, virtual memory enables processes to share memory: two or more processes can communicate through the use of shared memory.
 Virtual memory allows pages to be shared during process creation with the fork() system call, thus speeding up process creation.
13. What is demand paging? Give its use.
 Consider how an executable program might be loaded from disk into memory.
 One option is to load the entire program into physical memory at execution time. The problem with this approach is that we may not initially need the entire program in memory.
 Demand paging loads pages only as they are needed.
 A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a disk).
 To execute a process, it must be swapped into memory. Instead of swapping in the entire process, only the pages that are needed are swapped in. A lazy swapper is used for this purpose.
 A lazy swapper never swaps a page into memory unless that page will be needed.
 A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process. The term pager is therefore used instead of swapper in connection with demand paging.
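A minimal sketch of the lazy-pager idea (illustrative names; a real pager works on page-table entries, and the eviction policy here is assumed to be FIFO): pages are brought in only on first reference, and each such reference counts as a page fault.

```python
# Sketch: demand paging over a reference string with a fixed frame count.
def run(reference_string, num_frames):
    frames, faults = [], 0
    for page in reference_string:
        if page not in frames:       # invalid bit set → page fault
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)        # evict the oldest page (FIFO) to free a frame
            frames.append(page)      # lazily swap only the needed page in
    return faults

print(run([1, 2, 3, 1, 4], 3))       # → 4 (pages 1, 2, 3 load; 1 hits; 4 evicts 1)
```

Note that pages never referenced are never loaded, which is exactly the saving in swap time and physical memory that demand paging provides.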
Figure: Transfer of a paged memory to contiguous disk space
Use:
 When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again.
 It avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
 Some form of hardware support is needed to distinguish between the pages that are in memory and the pages that are on the disk.
 The valid–invalid bit scheme can be used for this purpose.
 This time, however, when the bit is set to "valid", the associated page is both legal and in memory.
 If the bit is set to "invalid", the page either is not valid (that is, not in the logical address space of the process) or is valid but currently on the disk.
 The page-table entry for a page that is brought into memory is set as usual, but the entry for a page that is not currently in memory is either simply marked invalid or contains the address of the page on disk.
 While the process executes and accesses pages that are memory resident, execution proceeds normally.
14. Write in detail about any two scheduling algorithms.
Scheduling Algorithms
1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue
6. Multilevel Feedback Queue
1. First-Come, First-Served (FCFS) Scheduling
In this algorithm, the process that requests the CPU first is allocated the CPU first. The FCFS policy is easily managed with a FIFO queue.
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
 Suppose instead that the processes arrive in the order: P2, P3, P1.
The Gantt chart for the schedule is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than the previous case.
 Convoy effect – short processes stuck behind a long process. Consider one CPU-bound process and many I/O-bound processes.
Priority Scheduling
 A priority number (integer) is associated with each process.
 The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
 Preemptive.
 Non-preemptive.
 SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
 Problem: Starvation – low-priority processes may never execute.
 Solution: Aging – as time progresses, increase the priority of the process.
Example 1
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart:
| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |
Average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2 msec
6. Write short notes on operations on processes.
 The processes in most systems can execute concurrently, and they may be created and deleted dynamically. Thus, the system must provide mechanisms for:
 Process creation
 Process termination
Process Creation
 Generally, a process is identified and managed via a process identifier (pid).
 During the course of execution, a process may create several new processes.
 The creating process is called a parent process, and the new processes are called the children of that process.
 Each of these new processes may in turn create other processes, forming a tree of processes.
 Most operating systems (including UNIX, Linux, and Windows) identify processes according to a unique process identifier (or pid), which is typically an integer number.
The figure illustrates a typical process tree for the Linux operating system, showing the name of each process and its pid. The init process (which always has a pid of 1) serves as the root parent process for all user processes. Once the system has booted, the init process can also create various user processes, such as a web or print server, an ssh server, and the like. There are two children of init: kthreadd and sshd.
 The kthreadd process is responsible for creating additional processes that perform tasks on behalf of the kernel (in this situation, khelper and pdflush).
 The sshd process is responsible for managing clients that connect to the system by using ssh (which is short for secure shell). The login process is responsible for managing clients that directly log onto the system. In this example, a client has logged on and is using the bash shell, which has been assigned pid 8416.
Figure: A tree of processes on a typical Linux system.
Resource-sharing options
 Parent and children share all resources
 Children share a subset of the parent's resources
 Parent and child share no resources
When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a new program loaded into it.
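The "parent waits" and "child is a duplicate" cases above can be sketched with the POSIX fork()/wait() calls, here via Python's os module (Unix-only; function name is illustrative):

```python
import os

# Sketch: fork() duplicates the process; the parent waits until the
# child terminates and collects its exit status via waitpid().
def spawn_and_wait():
    pid = os.fork()
    if pid == 0:
        # Child: a duplicate of the parent. Terminate with a status value.
        os._exit(7)
    # Parent: block until the child terminates, then decode its status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(spawn_and_wait())  # → 7
```

Until the parent's waitpid() returns, the terminated child exists as a zombie: its process-table entry is kept so the exit status can still be delivered.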
Process Termination
A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call.
 At that point, the process may return a status value (typically an integer) to its parent process (via the wait() system call).
 All the resources of the process – including physical and virtual memory, open files, and I/O buffers – are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as these:
 The child has exceeded its usage of some of the resources that it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
If a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon, referred to as cascading termination, is normally initiated by the operating system.
When a process terminates, its resources are deallocated by the operating system. However, its entry in the process table must remain there until the parent calls wait(), because the process table contains the process's exit status. A process that has terminated but whose parent has not yet called wait() is known as a zombie process. All processes transition to this state when they terminate, but generally they exist as zombies only briefly. Once the parent calls wait(), the process identifier of the zombie process and its entry in the process table are released. If a parent does not invoke wait() and instead terminates, its child processes are left as orphans.
7. Write short notes on threading issues.
a. Many-to-one
b. One-to-one
c. Many-to-many
d. Two-level
a. Many-to-One
 Many user-level threads mapped to a single kernel thread
 One thread blocking causes all to block
 Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
 Few systems currently use this model
Figure 2.15 Many-to-one model.
 Examples:
a. Solaris Green Threads
b. GNU Portable Threads
b. One-to-One
 Each user-level thread maps to a kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 The number of threads per process is sometimes restricted due to overhead
 Examples:
a. Windows
b. Linux
c. Solaris 9 and later
Figure 2.16 One-to-one model.
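As a small illustration of the one-to-one model: on Linux, each CPython threading.Thread is backed by its own kernel thread, so one thread blocking (e.g. in a system call) does not prevent the others from running. A minimal sketch:

```python
import threading

# Sketch: each Thread below maps to a separate kernel thread on Linux
# (one-to-one model), so the workers can be scheduled independently.
results = []

def worker(n):
    results.append(n * n)    # list.append is thread-safe in CPython

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                # each start() creates a kernel thread
for t in threads:
    t.join()                 # wait for all workers; none blocks the others

print(sorted(results))       # → [0, 1, 4, 9]
```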
c. Many-to-Many Model
 Allows many user-level threads to be mapped to many kernel threads
 Allows the operating system to create a sufficient number of kernel threads
 Solaris prior to version 9
 Windows with the ThreadFiber package
Advantages
 Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor
 When a thread performs a blocking system call, the kernel can schedule another thread for execution
Figure 2.17 Many-to-many model.
d. Two-Level Model
 Similar to many-to-many, except that it also allows a user thread to be bound to a kernel thread
 Examples:
a. IRIX
b. HP-UX
c. Tru64 UNIX
d. Solaris 8 and earlier