CHAPTER 6
CPU SCHEDULING
Basic Concepts
• Maximum CPU utilization obtained with
multiprogramming
• CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and I/O
wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main concern
Dispatcher
• Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart
that program
• Dispatch latency – time it takes for the dispatcher to stop one
process and start another running.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – Number of processes that complete their execution
per time unit.
• Turnaround time – amount of time to execute a particular
process.
• Waiting time – amount of time a process has been waiting in the
ready queue.
• Response time – amount of time it takes from when a request
was submitted until the first response is produced.
Multilevel Queue Scheduling
• Ready queue is partitioned into separate queues, eg:
– foreground (interactive)
– background (batch)
• These two types of process have different response-time
requirements and different scheduling needs, and they reside
permanently in a given queue.
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
Multilevel Queue Scheduling (Cont.)
 Scheduling must also be done between the queues:
 Fixed-priority scheduling: Serve all processes from the foreground
queue, then those from the background queue.
 Time slice: Each queue gets a certain amount of CPU time which it
can schedule amongst its processes; e.g., 80% to the foreground
queue in RR and 20% to the background queue in FCFS.
Multilevel Feedback Queue Scheduling
• Scheduling: A new job enters queue Q0,
which is served FCFS. When it gains the
CPU, the job receives 8 milliseconds. If it
does not finish in 8 milliseconds, the job is
moved to the tail of queue Q1.
• If queue 0 is empty, the process at the
head of queue 1 is given a quantum of 16
milliseconds. If it does not complete, it is
preempted and is put into queue 2.
• Processes in queue 2 are run on an FCFS
basis but are run only when queues 0 and
1 are empty.
Multiple-Processor Scheduling
• CPU scheduling more complex when multiple CPUs are available.
• Asymmetric multiprocessing – All scheduling decisions, I/O
processing, and other system activities are handled by a single
processor, the master server. The other processors execute only
user code.
• Symmetric multiprocessing (SMP) – Each processor is self-
scheduling. Each has its own private queue of ready processes, and
the scheduler for each processor examines its ready queue and
selects a process to execute.
Processor Affinity:
• If a process migrates to another processor, the contents of cache
memory must be invalidated on the first processor, and the cache
of the second processor must be repopulated.
Processor Affinity
• Avoid migration of processes from one processor to another and
instead attempt to keep a process running on the same processor.
This is known as processor affinity.
• Soft Affinity : When an operating system has a policy of attempting
to keep a process running on the same processor but not
guaranteeing that it will do so.
• Hard Affinity: Linux provides system calls that support hard affinity,
thereby allowing a process to specify that it is not to migrate to
other processors.
NUMA and CPU Scheduling
 In SMP, it is important to keep the workload balanced among all
processors to fully utilize the benefits of having more than one
processor.
 Otherwise, one or more processors may sit idle while other
processors have high workloads, along with lists of processes
awaiting the CPU.
Load Balancing
• Load balancing attempts to keep workload evenly distributed
across all the processors in SMP system.
• Push migration – a specific task periodically checks the load on
each processor and if it finds an imbalance, evenly distributes the
load by moving (or pushing) processes from overloaded to idle or
less-busy processors.
• Pull migration – When an idle processor pulls a waiting task from
a busy processor.
Multicore Processors
• Recent trend to place multiple processor cores on the same
physical chip, resulting in multicore processors.
• SMP systems that use multicore processors are faster and consume
less power.
• Memory Stall : When a processor accesses memory, it spends a
significant amount of time waiting for the data to become
available.
Multicore Processors
 Many recent hardware designs have implemented multithreaded
processor cores in which two (or more) hardware threads are
assigned to each core.
 If one thread stalls while waiting for memory, the core can switch
to another thread.
 From an operating-system perspective, each hardware thread
appears as a logical processor that is available to run a software
thread.
CHAPTER 4
MULTITHREADED PROGRAMMING
Multithreaded programming
• A thread is a basic unit of CPU utilization. It comprises a thread ID, a
program counter, a register set, and a stack.
• Process creation is heavyweight while thread creation is
lightweight.
• If a process has multiple threads of control, it can perform more
than one task at a time.
Single and Multithreaded Processes
If the web server ran as a single-threaded process, it would be
able to service only one client at a time, so a client might have to
wait a very long time for its request to be serviced.
If the web server ran as a multithreaded process, then when a
request is made, the server creates a new thread to service the
request and resumes listening for additional requests.
Multithreaded Server Architecture
Benefits of Multithreaded Programming
• Responsiveness – Multithreading allows a program to continue
execution even if part of the process is blocked, thereby increasing
responsiveness to the user.
• Resource Sharing – Threads share the memory and resources of
the process to which they belong, which is easier than the explicit
shared-memory or message-passing arrangements processes need.
• Economy – Because threads share the resources of the process to
which they belong, it is more economical to create and context-
switch threads than processes.
• Scalability – Threads may run in parallel on different processors,
so parallelism can be improved.
Multicore Programming
• Challenges in multicore programming:
-- Dividing activities: Examining applications to find areas that can be
divided into separate, concurrent tasks and thus can run in parallel on
individual cores.
-- Balance: Programmers must also ensure that the tasks perform
equal work of equal value.
-- Data splitting: When applications are divided into separate tasks,
the data accessed and manipulated by the tasks must be divided to
run on separate cores.
-- Data dependency: The data accessed by the tasks must be examined
for dependencies between two or more tasks.
-- Testing and debugging: When a program is running in parallel on
multiple cores, testing and debugging are inherently more difficult
than for single-threaded applications.
Concurrency vs. Parallelism
 Concurrent execution on a single-core system:
Here the processing core can execute only one thread at a time, so
the threads' execution is interleaved over time.
Concurrency supports more than one task making progress.
 Parallelism on a multicore system:
Here the threads can run in parallel, as the system can assign a
separate thread to each core.
Parallelism implies a system can perform more than one task
simultaneously.
Multithreading Models
• Two types of threads :
 User-level threads: User threads are supported above the
kernel; these are the threads that application programmers
put into their programs.
 Kernel-level threads: Kernel threads are supported within the
kernel of the OS itself.
 Multithreading models :
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One
• Many user-level threads mapped
to a single kernel thread.
• One thread blocking causes all to
block.
• Multiple threads may not run in
parallel on a multicore system
because only one may be in the
kernel at a time.
• Examples:
– Solaris Green Threads
One-to-One
• Each user-level thread maps to a kernel thread.
• Creating a user-level thread creates a kernel thread.
• More concurrency than many-to-one.
• Number of threads per process is sometimes restricted
due to overhead.
• Examples
– Windows
– Linux
– Solaris 9
Many-to-Many Model
• Allows many user-level threads to
be mapped to many kernel
threads.
• Allows the operating system to
create a sufficient number of
kernel threads. E.g.: Solaris
• Two-Level Model:
• Similar to M:M, except that it also
allows a user thread to be bound
to a kernel thread.
• E.g.: IRIX
Thread Libraries
• A thread library provides the programmer with an API for creating
and managing threads.
• Two primary ways of implementing a thread library:
– User-level library: The entire library resides in user space with
no kernel support; no support from the operating system.
– Kernel-level library: The entire library resides in kernel space
and requires support from the operating system.
• Three main thread libraries :
-- POSIX Pthreads: refers to the POSIX standard (IEEE 1003.1c)
defining an API for thread creation and synchronization. Common in
UNIX operating systems (Solaris, Linux, Mac OS X).
-- Java Threads: The Java thread API allows threads to be created
and managed directly in Java programs.
-- Win32 Threads: The Win32 thread library is a kernel-level library
available on Windows systems.
Pthreads Example
Threading Issues
 Semantics of fork() and exec() system calls :
• Two versions of fork(): one that duplicates all threads and another
that duplicates only the thread that invoked the fork() system call.
• If a thread invokes the exec() system call, the program specified in
the parameter to exec() will replace the entire process, including
all threads.
 Thread Pools :
• Create a number of threads at process startup and place them into
a pool, where they sit and wait for work.
• When a server receives a request, it awakens a thread from this
pool, if one is available, and passes it the request for service.
• Once the thread completes its service, it returns to the pool and
awaits more work.
Signal Handling
 Signals are used in UNIX systems to notify a process that a
particular event has occurred.
• Synchronous signals: They are delivered to the same process that
performed the operation that caused the signal.
E.g.: illegal memory access, division by zero.
• Asynchronous signals: When a signal is generated by an event
external to a running process, that process receives the signal
asynchronously.
E.g.: terminating a process with specific keystrokes (Ctrl-C).
 A signal handler is used to process signals.
1. Signal is generated by a particular event.
2. Signal is delivered to a process.
3. Once delivered it must be handled.
Signal Handling (Cont.)
 A signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
• Every signal has a default signal handler that is run by the kernel
when handling that signal.
• This default action can be overridden by a user-defined signal
handler that is called to handle the signal.
 Thread Specific Data :
• Threads belonging to a process share the data of the process.
• However, each thread may need its own copy of certain data. Such
data is called thread-specific data.
Process Example
Speeding up with Multiple Processes
Thread Model
CHAPTER 5
PROCESS SYNCHRONIZATION
Introduction
• Processes can execute concurrently.
– May be interrupted at any time, partially completing execution.
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes.
Producer Consumer problem
 Suppose that we wanted to provide a solution to the consumer-
producer problem that fills all the buffers.
 We can do so by having an integer counter that keeps track of the
number of full buffers. Initially, counter is set to 0.
 It is incremented by the producer after it produces a new buffer and
is decremented by the consumer after it consumes a buffer.
Producer:
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Solution using the swap instruction
• void swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}
• Solution:
do {
    flag = true;
    while (flag)
        swap(lock, flag);
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);
Consumer:
while (true) {
    while (counter == 0)
        ;   /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Race Condition:
 A situation where several processes access and manipulate the
same data concurrently, and the outcome of the execution depends
on the particular order in which the accesses take place, is called a
race condition.
 To guard against race conditions, we require that processes be
synchronized in some way.
Critical Section Problem
• Consider system of n processes {p0, p1, … pn-1}.
• Each process has a segment of code called the critical section, in
which the process may be changing common variables, updating a
table, writing a file, etc.
– When one process is in its critical section, no other may be in its
critical section.
• The critical-section problem is to design a protocol that the
processes can use to cooperate.
• Each process must ask permission to enter its critical section in the
entry section, may follow the critical section with an exit section,
and then the remainder section.
Critical Section
• General structure of process Pi
• Entry Section: A block of code executed in preparation for
entering the critical section.
• Exit Section :The code executed upon leaving the critical section.
• Remainder Section : Rest of the code is remainder section.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section,
then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and some
processes wish to enter their critical sections, then the selection of
the process that will enter next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times
that other processes are allowed to enter their critical sections
after a process has made a request to enter its critical section and
before that request is granted.
Critical-Section Handling in OS
• Two approaches, depending on whether the kernel is preemptive
or non-preemptive:
– Preemptive – allows preemption of a process when running in
kernel mode.
– Non-preemptive – a process runs until it exits kernel mode,
blocks, or voluntarily yields the CPU; such a kernel is essentially
free of race conditions on kernel data structures.
Peterson’s Solution
• A good algorithmic description of solving the problem.
Two-process solution:
• Assume that the load and store machine-language instructions are
atomic, i.e., cannot be interrupted.
• The two processes share two variables:
– int turn;
– Boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical
section.
• The flag array is used to indicate if a process is ready to enter the
critical section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;   /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Peterson’s Solution (Cont.)
• Provable that the three CS requirements are met:
1. Mutual exclusion is preserved: Pi enters its CS only if
either flag[j] == false or turn == i.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
Semaphores
• Synchronization tool that provides more sophisticated ways for
processes to synchronize their activities.
• Semaphore S – integer variable.
• Can only be accessed via two indivisible (atomic) operations:
– wait() and signal(), originally called P() and V().
• Definition of the wait() operation:
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}
• Definition of the signal() operation:
signal(S) {
    S++;
}
Semaphore Usage
 Advantages of semaphores :
• Solves critical section problem.
• Decides the order of execution of process.
• Resource management and can solve various synchronization
problems.
 Types of semaphores :
• Counting semaphore – integer value can range over an unrestricted
domain.
• Binary semaphore – integer value can range only between 0 and 1.
 Semaphore Implementation:
• In a multiprogramming system, busy waiting wastes CPU cycles that
some other process might be able to use productively. This type of
semaphore is called a spinlock because the process "spins" while
waiting for the lock.
Semaphore Implementation with no Busy waiting
• With each semaphore there is an associated waiting queue.
• Each semaphore has two data items:
– value (of type integer)
– pointer to the list of waiting processes.
• Two operations:
– block – place the process invoking the operation on the
appropriate waiting queue.
– wakeup – remove one of the processes in the waiting queue and
place it in the ready queue.
typedef struct {
    int value;
    struct process *list;
} semaphore;
Implementation with no Busy waiting (Cont.)
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1:
        P0                  P1
    wait(S);            wait(Q);
    wait(Q);            wait(S);
      ...                 ...
    signal(S);          signal(Q);
    signal(Q);          signal(S);
• Starvation – indefinite blocking
– A process may never be removed from the semaphore queue in
which it is suspended.
• Priority Inversion – Scheduling problem when a lower-priority
process holds a lock needed by a higher-priority process.
– Solved via a priority-inheritance protocol.
Synchronization Hardware
• In a single-processor environment, processes do not execute
concurrently.
• As a result, mutual exclusion can be achieved by disabling
interrupts before entering the critical section and enabling them
after the process has exited the critical section.
• But in multiprocessor environments, several processes execute
concurrently on different processors.
• To disable interrupts, the instruction must be sent to all the
processors; this is a time-consuming task and decreases the
efficiency of the system.
Solution to Critical-section Problem Using Locks
do {
    acquire lock
    /* critical section */
    release lock
    /* remainder section */
} while (TRUE);
Test and set Instructions
• Test-and-set instructions are simple hardware instructions that
solve the critical-section problem by providing mutual exclusion in
an easy and efficient way in a multiprocessor environment.
Bounded-waiting Mutual Exclusion with test and set
do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;
    /* critical section */
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;
    /* remainder section */
} while (true);
Monitors
• A high-level abstraction that provides a convenient and effective
mechanism for process synchronization.
• Abstract data type; internal variables are accessible only by code
within the monitor's procedures.
• Only one process may be active within the monitor at a time.
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    procedure Pn (…) { …. }
    initialization code (…) { … }
}
Module2 MultiThreads.ppt
Module2 MultiThreads.ppt

More Related Content

Similar to Module2 MultiThreads.ppt

EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OSAJAL A J
 
Multicore processor.pdf
Multicore processor.pdfMulticore processor.pdf
Multicore processor.pdfrajaratna4
 
OperatingSystemFeature.pptx
OperatingSystemFeature.pptxOperatingSystemFeature.pptx
OperatingSystemFeature.pptxCharuJain396881
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxakhilagajjala
 
Chip Multithreading Systems Need a New Operating System Scheduler
Chip Multithreading Systems Need a New Operating System Scheduler Chip Multithreading Systems Need a New Operating System Scheduler
Chip Multithreading Systems Need a New Operating System Scheduler Sarwan ali
 
Hardware Multithreading.pdf
Hardware Multithreading.pdfHardware Multithreading.pdf
Hardware Multithreading.pdfrajaratna4
 
Parallel and Distributed Computing chapter 3
Parallel and Distributed Computing chapter 3Parallel and Distributed Computing chapter 3
Parallel and Distributed Computing chapter 3AbdullahMunir32
 
CSA unit5.pptx
CSA unit5.pptxCSA unit5.pptx
CSA unit5.pptxAbcvDef
 
Advanced processor principles
Advanced processor principlesAdvanced processor principles
Advanced processor principlesDhaval Bagal
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.pptshreesha16
 
Types of operating system.................
Types of operating system.................Types of operating system.................
Types of operating system.................harendersin82880
 
Unit 2 part 2(Process)
Unit 2 part 2(Process)Unit 2 part 2(Process)
Unit 2 part 2(Process)WajeehaBaig
 
Parallel Computing
Parallel ComputingParallel Computing
Parallel ComputingMohsin Bhat
 

Similar to Module2 MultiThreads.ppt (20)

EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OS
 
Lecture 3 threads
Lecture 3   threadsLecture 3   threads
Lecture 3 threads
 
Operating System Overview.pdf
Operating System Overview.pdfOperating System Overview.pdf
Operating System Overview.pdf
 
Multicore processor.pdf
Multicore processor.pdfMulticore processor.pdf
Multicore processor.pdf
 
Threads.ppt
Threads.pptThreads.ppt
Threads.ppt
 
OperatingSystemFeature.pptx
OperatingSystemFeature.pptxOperatingSystemFeature.pptx
OperatingSystemFeature.pptx
 
Ch04 threads
Ch04 threadsCh04 threads
Ch04 threads
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptx
 
Chip Multithreading Systems Need a New Operating System Scheduler
Chip Multithreading Systems Need a New Operating System Scheduler Chip Multithreading Systems Need a New Operating System Scheduler
Chip Multithreading Systems Need a New Operating System Scheduler
 
OS Thr schd.ppt
OS Thr schd.pptOS Thr schd.ppt
OS Thr schd.ppt
 
Hardware Multithreading.pdf
Hardware Multithreading.pdfHardware Multithreading.pdf
Hardware Multithreading.pdf
 
Parallel and Distributed Computing chapter 3
Parallel and Distributed Computing chapter 3Parallel and Distributed Computing chapter 3
Parallel and Distributed Computing chapter 3
 
CSA unit5.pptx
CSA unit5.pptxCSA unit5.pptx
CSA unit5.pptx
 
Advanced processor principles
Advanced processor principlesAdvanced processor principles
Advanced processor principles
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.ppt
 
Types of operating system.................
Types of operating system.................Types of operating system.................
Types of operating system.................
 
Unit 2 part 2(Process)
Unit 2 part 2(Process)Unit 2 part 2(Process)
Unit 2 part 2(Process)
 
Parallel Computing
Parallel ComputingParallel Computing
Parallel Computing
 
parallel-processing.ppt
parallel-processing.pptparallel-processing.ppt
parallel-processing.ppt
 
Parallel processing
Parallel processingParallel processing
Parallel processing
 

Recently uploaded

Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,
Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,
Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,Pooja Nehwal
 
Call Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...Suhani Kapoor
 
Gaya Call Girls #9907093804 Contact Number Escorts Service Gaya
Gaya Call Girls #9907093804 Contact Number Escorts Service GayaGaya Call Girls #9907093804 Contact Number Escorts Service Gaya
Gaya Call Girls #9907093804 Contact Number Escorts Service Gayasrsj9000
 
定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一
定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一
定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一ss ss
 
VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...
VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...
VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...Suhani Kapoor
 
(MEGHA) Hinjewadi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune E...
(MEGHA) Hinjewadi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune E...(MEGHA) Hinjewadi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune E...
(MEGHA) Hinjewadi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune E...ranjana rawat
 
Thane Escorts, (Pooja 09892124323), Thane Call Girls
Thane Escorts, (Pooja 09892124323), Thane Call GirlsThane Escorts, (Pooja 09892124323), Thane Call Girls
Thane Escorts, (Pooja 09892124323), Thane Call GirlsPooja Nehwal
 
WhatsApp 9892124323 ✓Call Girls In Khar ( Mumbai ) secure service - Bandra F...
WhatsApp 9892124323 ✓Call Girls In Khar ( Mumbai ) secure service -  Bandra F...WhatsApp 9892124323 ✓Call Girls In Khar ( Mumbai ) secure service -  Bandra F...
WhatsApp 9892124323 ✓Call Girls In Khar ( Mumbai ) secure service - Bandra F...Pooja Nehwal
 
(办理学位证)韩国汉阳大学毕业证成绩单原版一比一
(办理学位证)韩国汉阳大学毕业证成绩单原版一比一(办理学位证)韩国汉阳大学毕业证成绩单原版一比一
(办理学位证)韩国汉阳大学毕业证成绩单原版一比一C SSS
 
Pallawi 9167673311 Call Girls in Thane , Independent Escort Service Thane
Pallawi 9167673311  Call Girls in Thane , Independent Escort Service ThanePallawi 9167673311  Call Girls in Thane , Independent Escort Service Thane
Pallawi 9167673311 Call Girls in Thane , Independent Escort Service ThanePooja Nehwal
 
(ZARA) Call Girls Jejuri ( 7001035870 ) HI-Fi Pune Escorts Service
(ZARA) Call Girls Jejuri ( 7001035870 ) HI-Fi Pune Escorts Service(ZARA) Call Girls Jejuri ( 7001035870 ) HI-Fi Pune Escorts Service
(ZARA) Call Girls Jejuri ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
定制宾州州立大学毕业证(PSU毕业证) 成绩单留信学历认证原版一比一
定制宾州州立大学毕业证(PSU毕业证) 成绩单留信学历认证原版一比一定制宾州州立大学毕业证(PSU毕业证) 成绩单留信学历认证原版一比一
定制宾州州立大学毕业证(PSU毕业证) 成绩单留信学历认证原版一比一ga6c6bdl
 
Russian Call Girls Kolkata Chhaya 🤌 8250192130 🚀 Vip Call Girls Kolkata
Russian Call Girls Kolkata Chhaya 🤌  8250192130 🚀 Vip Call Girls KolkataRussian Call Girls Kolkata Chhaya 🤌  8250192130 🚀 Vip Call Girls Kolkata
Russian Call Girls Kolkata Chhaya 🤌 8250192130 🚀 Vip Call Girls Kolkataanamikaraghav4
 
Russian Call Girls In South Delhi Delhi 9711199012 💋✔💕😘 Independent Escorts D...
Russian Call Girls In South Delhi Delhi 9711199012 💋✔💕😘 Independent Escorts D...Russian Call Girls In South Delhi Delhi 9711199012 💋✔💕😘 Independent Escorts D...
Russian Call Girls In South Delhi Delhi 9711199012 💋✔💕😘 Independent Escorts D...nagunakhan
 
Vip Noida Escorts 9873940964 Greater Noida Escorts Service
Vip Noida Escorts 9873940964 Greater Noida Escorts ServiceVip Noida Escorts 9873940964 Greater Noida Escorts Service
Vip Noida Escorts 9873940964 Greater Noida Escorts Serviceankitnayak356677
 
Hifi Defence Colony Call Girls Service WhatsApp -> 9999965857 Available 24x7 ...
Hifi Defence Colony Call Girls Service WhatsApp -> 9999965857 Available 24x7 ...Hifi Defence Colony Call Girls Service WhatsApp -> 9999965857 Available 24x7 ...
Hifi Defence Colony Call Girls Service WhatsApp -> 9999965857 Available 24x7 ...srsj9000
 
(办理学位证)加州州立大学北岭分校毕业证成绩单原版一比一
(办理学位证)加州州立大学北岭分校毕业证成绩单原版一比一(办理学位证)加州州立大学北岭分校毕业证成绩单原版一比一
(办理学位证)加州州立大学北岭分校毕业证成绩单原版一比一Fi sss
 

Recently uploaded (20)

Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,
Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,
Call Girls In Andheri East Call 9892124323 Book Hot And Sexy Girls,
 
Call Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Bhavna Call 7001035870 Meet With Nagpur Escorts
 
VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
VIP Call Girls Hitech City ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
 
Gaya Call Girls #9907093804 Contact Number Escorts Service Gaya
Gaya Call Girls #9907093804 Contact Number Escorts Service GayaGaya Call Girls #9907093804 Contact Number Escorts Service Gaya
Gaya Call Girls #9907093804 Contact Number Escorts Service Gaya
 
定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一
定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一
定制(Salford学位证)索尔福德大学毕业证成绩单原版一比一
 
VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...
VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...
VIP Call Girls Kavuri Hills ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With ...
 

Module2 MultiThreads.ppt

Basic Concepts
• Maximum CPU utilization obtained with multiprogramming
• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main concern
Dispatcher
• Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another running.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible.
• Throughput – number of processes that complete their execution per time unit.
• Turnaround time – amount of time to execute a particular process.
• Waiting time – amount of time a process has been waiting in the ready queue.
• Response time – amount of time from when a request was submitted until the first response is produced.
Multilevel Queue Scheduling
• Ready queue is partitioned into separate queues, e.g.:
– foreground (interactive)
– background (batch)
• These two types of process have different response-time requirements and different scheduling needs. Processes reside permanently in a given queue.
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
Multilevel Queue Scheduling
• Scheduling must also be done between the queues:
– Fixed priority scheduling: serve all from foreground, then from background.
– Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS.
Multilevel Feedback Queue Scheduling
• Scheduling: a new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to the tail of queue Q1.
• If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2.
• Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available.
• Asymmetric multiprocessing – all scheduling decisions, I/O processing, and other system activities are handled by a single processor, the master server. The other processors execute only user code.
• Symmetric multiprocessing (SMP) – each processor is self-scheduling. Each has its own private queue of ready processes; the scheduler for each processor examines its ready queue and selects a process to execute.
• Processor affinity: if a process migrates to another processor, the contents of cache memory must be invalidated on the first processor, and the cache of the second processor must be repopulated.
Processor Affinity
• Avoid migration of processes from one processor to another and instead attempt to keep a process running on the same processor. This is known as processor affinity.
• Soft affinity: the operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so.
• Hard affinity: systems such as Linux provide system calls that support hard affinity, allowing a process to specify that it is not to migrate to other processors.
NUMA and CPU Scheduling
• In SMP, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor.
• Otherwise, one or more processors may sit idle while other processors have high workloads, along with lists of processes awaiting the CPU.
Load Balancing
• Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system.
• Push migration – a specific task periodically checks the load on each processor; if it finds an imbalance, it evenly distributes the load by moving (pushing) processes from overloaded to idle or less-busy processors.
• Pull migration – an idle processor pulls a waiting task from a busy processor.
Multicore Processors
• Recent trend is to place multiple processor cores on the same physical chip, resulting in multicore processors.
• SMP systems that use multicore processors are faster and consume less power.
• Memory stall: when a processor accesses memory, it spends a significant amount of time waiting for the data to become available.
Multicore Processors (Cont.)
• Many recent hardware designs have implemented multithreaded processor cores in which two (or more) hardware threads are assigned to each core.
• If one thread stalls while waiting for memory, the core can switch to another thread.
• From an operating-system perspective, each hardware thread appears as a logical processor that is available to run a software thread.
Multithreaded Programming
• A thread is a basic unit of CPU utilization. It comprises a thread ID, a program counter, a register set, and a stack.
• Process creation is heavy-weight while thread creation is light-weight.
• If a process has multiple threads of control, it can perform more than one task at a time.
Single and Multithreaded Processes
• If a web server ran as a single-threaded process, it would be able to service only one client at a time, and a client might have to wait a very long time for its request to be serviced.
• If the web server ran as a multithreaded process, then when a request is made, the server creates a new thread to service the request and resumes listening for additional requests — the multithreaded server architecture.
Benefits of Multithreaded Programming
• Responsiveness – allows a program to continue execution even if part of the process is blocked, thereby increasing responsiveness to the user.
• Resource sharing – threads share the memory and resources of the process to which they belong, which is easier than explicit shared-memory or message-passing mechanisms.
• Economy – because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
• Scalability – threads may run in parallel on different processors, so parallelism can be improved.
Multicore Programming
• Challenges in multicore programming:
– Dividing activities: examining applications to find areas that can be divided into separate, concurrent tasks that can run in parallel on individual cores.
– Balance: programmers must ensure that the tasks perform equal work of equal value.
– Data splitting: as applications are divided into separate tasks, the data accessed and manipulated by the tasks must be divided to run on separate cores.
– Data dependency: the data accessed by the tasks must be examined for dependencies between two or more tasks.
– Testing and debugging: when a program is running in parallel on multiple cores, testing and debugging are inherently more difficult than for single-threaded applications.
Concurrency vs. Parallelism
• Concurrent execution on a single-core system: the processing core can execute only one thread at a time, so the threads' execution is interleaved. Concurrency supports more than one task making progress.
• Parallelism on a multicore system: the threads can run in parallel, as the system assigns a separate thread to each core. Parallelism implies a system can perform more than one task simultaneously.
Multithreading Models
• Two types of threads:
– User-level threads: user threads are supported above the kernel; these are the threads that application programmers put into their programs.
– Kernel-level threads: kernel threads are supported within the kernel of the OS itself.
• Multithreading models:
– Many-to-One
– One-to-One
– Many-to-Many
Many-to-One
• Many user-level threads mapped to a single kernel thread.
• One thread blocking causes all to block.
• Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time.
• Example: Solaris Green Threads
One-to-One
• Each user-level thread maps to a kernel thread.
• Creating a user-level thread creates a kernel thread.
• More concurrency than many-to-one.
• Number of threads per process is sometimes restricted due to overhead.
• Examples: Windows, Linux, Solaris 9
Many-to-Many Model
• Allows many user-level threads to be mapped to many kernel threads.
• Allows the operating system to create a sufficient number of kernel threads. E.g.: Solaris
• Two-level model: similar to many-to-many, except that it also allows a user thread to be bound to a kernel thread. E.g.: IRIX
Thread Libraries
• A thread library provides the programmer with an API for creating and managing threads.
• Two primary ways of implementing a thread library:
– User-level library: the entire library resides in user space with no kernel support; no support from the operating system.
– Kernel-level library: the entire library resides in kernel space; requires support from the operating system.
• Three main thread libraries:
– POSIX Pthreads: refers to the POSIX standard (IEEE 1003.1c) defining an API for thread creation and synchronization. Common in UNIX operating systems (Solaris, Linux, Mac OS X).
– Java threads: the Java thread API allows threads to be created and managed directly in Java programs.
– Win32 threads: the Win32 thread library is a kernel-level library available on Windows systems.
Threading Issues
• Semantics of fork() and exec() system calls:
– Two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked the fork() system call.
– If a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process, including all threads.
• Thread pools:
– Create a number of threads at process startup and place them into a pool, where they sit and wait for work.
– When a server receives a request, it awakens a thread from this pool (if one is available) and passes it the request for service.
– Once the thread completes its service, it returns to the pool and awaits more work.
Signal Handling
• Signals are used in UNIX systems to notify a process that a particular event has occurred.
– Synchronous signals: delivered to the same process that performed the operation that caused the signal. E.g.: illegal memory access, division by zero.
– Asynchronous signals: when a signal is generated by an event external to a running process, that process receives the signal asynchronously. E.g.: terminating a process with specific keystrokes (Ctrl+C).
• A signal handler is used to process signals:
1. Signal is generated by a particular event.
2. Signal is delivered to a process.
3. Once delivered, it must be handled.
Signal Handling (Cont.)
• A signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler
• Every signal has a default signal handler that is run by the kernel when handling that signal.
• This default action can be overridden by a user-defined signal handler that is called to handle the signal.
• Thread-specific data:
– Threads belonging to a process share the data of the process.
– Each thread may need its own copy of certain data; we call such data thread-specific data.
Speeding up with Multiple Processes
Introduction
• Processes can execute concurrently.
– They may be interrupted at any time, partially completing execution.
• Concurrent access to shared data may result in data inconsistency.
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
Producer Consumer Problem
• Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers.
• We can do so by having an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0.
• It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
• Producer:
    while (true) {
        /* produce an item in next_produced */
        while (counter == BUFFER_SIZE)
            ; /* do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }
Solution Using the Swap Instruction
• swap() exchanges two values atomically:
    void swap(boolean &a, boolean &b) {
        boolean temp = a;
        a = b;
        b = temp;
    }
• Solution:
    do {
        flag = true;
        while (flag)
            swap(lock, flag);
        /* critical section */
        lock = false;
        /* remainder section */
    } while (true);
Producer Consumer Problem (Cont.)
• Consumer:
    while (true) {
        while (counter == 0)
            ; /* do nothing */
        next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        /* consume the item in next_consumed */
    }
• Race condition:
– Several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place; this is called a race condition.
– To guard against race conditions, we require that the processes be synchronized in some way.
Critical Section Problem
• Consider a system of n processes {P0, P1, …, Pn-1}.
• Each process has a segment of code called its critical section, in which the process may be changing common variables, updating a table, writing a file, etc.
– When one process is in its critical section, no other may be in its critical section.
• The critical-section problem is to design a protocol that the processes can use to cooperate.
• Each process must ask permission to enter its critical section in the entry section, may follow the critical section with an exit section, and ends with the remainder section.
Critical Section
• General structure of process Pi:
– Entry section: a block of code executed in preparation for entering the critical section.
– Exit section: the code executed upon leaving the critical section.
– Remainder section: the rest of the code.
Solution to Critical-Section Problem
1. Mutual exclusion – if process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress – when no process is executing in its critical section, any process that requests entry into its critical section must be permitted to enter without indefinite delay.
3. Bounded waiting – a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Critical-Section Handling in OS
• Two approaches, depending on whether the kernel is preemptive or non-preemptive:
– Preemptive – allows preemption of a process when running in kernel mode.
– Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU; essentially free of race conditions in kernel mode.
Peterson's Solution
• Good algorithmic description of solving the problem; a two-process solution.
• Assume that the load and store machine-language instructions are atomic, i.e., cannot be interrupted.
• The two processes share two variables:
– int turn;
– boolean flag[2];
• The variable turn indicates whose turn it is to enter the critical section.
• The flag array is used to indicate if a process is ready to enter the critical section; flag[i] = true implies that process Pi is ready.
Algorithm for Process Pi
    do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j)
            ;
        /* critical section */
        flag[i] = false;
        /* remainder section */
    } while (true);
Peterson's Solution (Cont.)
• It is provable that the three critical-section requirements are met:
1. Mutual exclusion is preserved: Pi enters its critical section only if either flag[j] = false or turn = i.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
Semaphores
• Synchronization tool that provides more sophisticated ways for processes to synchronize their activities.
• Semaphore S – an integer variable.
• Can only be accessed via two indivisible (atomic) operations: wait() and signal(), originally called P() and V().
• Definition of the wait() operation:
    wait(S) {
        while (S <= 0)
            ; // busy wait
        S--;
    }
• Definition of the signal() operation:
    signal(S) {
        S++;
    }
Semaphore Usage
• Advantages of semaphores:
– Solve the critical-section problem.
– Decide the order of execution of processes.
– Resource management; can solve various synchronization problems.
• Types of semaphores:
– Counting semaphore – integer value can range over an unrestricted domain.
– Binary semaphore – integer value can range only between 0 and 1.
• Semaphore implementation:
– In a multiprogramming system, busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is called a spinlock, i.e., the process spins while waiting for the lock.
Semaphore Implementation with No Busy Waiting
• With each semaphore there is an associated waiting queue.
• Each entry in a waiting queue has two data items:
– value (of type integer)
– pointer to the next record in the list
• Two operations:
– block – place the process invoking the operation on the appropriate waiting queue.
– wakeup – remove one of the processes in the waiting queue and place it in the ready queue.
    typedef struct {
        int value;
        struct process *list;
    } semaphore;
Implementation with No Busy Waiting (Cont.)
    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }
Deadlock and Starvation
• Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1:
    P0:            P1:
    wait(S);       wait(Q);
    wait(Q);       wait(S);
    ...            ...
    signal(S);     signal(Q);
    signal(Q);     signal(S);
• Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended.
• Priority inversion – a scheduling problem when a lower-priority process holds a lock needed by a higher-priority process; solved via the priority-inheritance protocol.
Synchronization Hardware
• In a single-processor environment, processes do not execute concurrently.
• As a result, mutual exclusion can be achieved by disabling interrupts before entering the critical section and enabling them after the process has exited the critical section.
• But in multiprocessor environments, several processes execute concurrently on different processors.
• To disable interrupts, the instruction must be sent to all the processors, which is time consuming and decreases the efficiency of the system.
Solution to Critical-Section Problem Using Locks
    do {
        acquire lock
            critical section
        release lock
            remainder section
    } while (TRUE);
Test-and-Set Instructions
• Test-and-set instructions are simple hardware instructions that solve the critical-section problem by providing mutual exclusion in an easy and efficient way in a multiprocessor environment.
Bounded-Waiting Mutual Exclusion with test_and_set
    do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = test_and_set(&lock);
        waiting[i] = false;
        /* critical section */
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = false;
        else
            waiting[j] = false;
        /* remainder section */
    } while (true);
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
• An abstract data type; internal variables are accessible only by code within the monitor's procedures.
• Only one process may be active within the monitor at a time.
    monitor monitor-name {
        // shared variable declarations
        procedure P1 (…) { …. }
        procedure Pn (…) { …… }
        initialization code (…) { … }
    }