Process Synchronization (Galvin)
Outline
CHAPTER OBJECTIVES
To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
To present both software and hardware solutions of the critical-section problem.
To examine several classical process-synchronization problems.
To explore several tools that are used to solve process synchronization problems.
BACKGROUND
THE CRITICAL SECTION PROBLEM
PETERSON'S SOLUTION
SYNCHRONIZATION HARDWARE
MUTEX LOCKS
SEMAPHORES
o Semaphore Usage
o Semaphore Implementation
o Deadlocks and Starvation
o Priority Inversion
CLASSIC PROBLEMS OF SYNCHRONIZATION
o The Bounded-Buffer Problem
o The Readers–Writers Problem
o The Dining-Philosophers Problem
MONITORS
o Monitor Usage
o Dining-Philosophers Solution
o Using Monitors
o Implementing a Monitor
o Using Semaphores
o Resuming Processes within a Monitor
SYNCHRONIZATION EXAMPLES
o Synchronization in Windows
o Synchronization in Linux
o Synchronization in Solaris
o Pthreads Synchronization
ALTERNATIVE APPROACHES
o Transactional Memory
o OpenMP
o Functional Programming Languages
Contents
A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a
logical address space (that is, both code and data) or be allowed to share data only through files or messages. The former case is achieved through the
use of threads, discussed in Chapter 4. Concurrent access to shared data may result in data inconsistency, however. In this chapter, we discuss various
mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.
BACKGROUND
We’ve already seen that processes can execute concurrently or in parallel. Section 3.2.2 introduced the role of process scheduling and
described how the CPU scheduler switches rapidly between processes to provide concurrent execution. This means that one process may
only partially complete execution before another process is scheduled. In fact, a process may be interrupted at any point in its instruction
stream, and the processing core may be assigned to execute instructions of another process. Additionally, Section 4.2 introduced parallel
execution, in which two instruction streams (representing different processes) execute simultaneously on separate processing cores. In this
chapter, we explain how concurrent or parallel execution can contribute to issues involving the integrity of data shared by several processes.
In Chapter 3, we developed a model of a system consisting of cooperating sequential processes or threads, all running asynchronously and
possibly sharing data. We illustrated this model with the producer–consumer problem, which is representative of operating systems.
Specifically, in Section 3.4.1, we described how a bounded buffer could be used to enable processes to share memory.
Coming to the bounded-buffer problem, as we pointed out, our original solution allowed at most BUFFER_SIZE − 1 items in the buffer at the
same time. Suppose we want to modify the algorithm to remedy this deficiency. One possibility is to add an integer variable counter,
initialized to 0. counter is incremented every time we add a new item to the buffer and is decremented every time we remove one item from
the buffer. The code for the producer and consumer processes can be modified as follows:
Although the producer and consumer routines shown above are correct separately, they may not function correctly when executed
concurrently. As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes
concurrently execute the statements “counter++” and “counter--”. Following the execution of these two statements, the value of the variable
counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute
separately.
Note: Page 205 of the 9th edition (which we have read well) shows why the value of counter may be incorrect. It is due to the way the
statements "counter++" and "counter--" are implemented in assembly (and hence machine language) on a typical machine. Since we
know it well, we don't clutter the content here. The following starts after that part in the book.
We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. A situation like
this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular
order in which the access takes place, is called a race condition. To guard against the race condition above, we need to ensure that only one
process at a time can be manipulating the variable counter. To make such a guarantee, we require that the processes be synchronized in
some way.
Situations such as the one just described occur frequently in operating systems as different parts of the system manipulate resources.
Furthermore, as we have emphasized in earlier chapters, the growing importance of multicore systems has brought an increased emphasis
on developing multithreaded applications. In such applications, several threads—which are quite possibly sharing data—are running in
parallel on different processing cores. Clearly, we want any changes that result from such activities not to interfere with one another.
Because of the importance of this issue, we devote a major portion of this chapter to process synchronization and coordination among
cooperating processes.
THE CRITICAL SECTION PROBLEM
We begin our consideration of process synchronization by discussing the so-called critical-section
problem. Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of
code, called a critical section, in which the process may be changing common variables, updating a
table, writing a file, and so on. The important feature of the system is that, when one process is
executing in its critical section, no other process is allowed to execute in its critical section. That is, no
two processes are executing in their critical sections at the same time. The critical-section problem is to
design a protocol that the processes can use to cooperate. Each process must request permission to
enter its critical section. The section of code implementing this request is the entry section. The critical
section may be followed by an exit section. The remaining code is the remainder section. The general
structure of a typical process Pi is shown in Figure 5.1. The entry section and exit section are enclosed in boxes to highlight these important segments of
code.
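Figure 5.1 itself did not survive extraction; the structure it shows is, in pseudocode:

```
do {
    entry section       /* request permission to enter */

        critical section    /* manipulate shared data */

    exit section        /* grant the next process permission */

        remainder section
} while (true);
```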
A solution to the critical-section problem must satisfy the following three requirements:
Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that
are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be
postponed indefinitely.
Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that request is granted.
At a given point in time, many kernel-mode processes may be active in the operating system. As a result, the code implementing an operating system
(kernel code) is subject to several possible race conditions. Consider as an example a kernel data structure that maintains a list of all open files in the
system. This list must be modified when a new file is opened or closed (adding the file to the list or removing it from the list). If two processes were to
open files simultaneously, the separate updates to this list could result in a race condition. Other kernel data structures that are prone to possible race
conditions include structures for maintaining memory allocation, for maintaining process lists, and for interrupt handling. It is up to kernel developers to
ensure that the operating system is free from such race conditions.
Two general approaches are used to handle critical sections in operating systems: preemptive kernels and non-preemptive kernels. A preemptive kernel
allows a process to be preempted while it is running in kernel mode. A non-preemptive kernel does not allow a process running in kernel mode to be
preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. Obviously, a non-preemptive kernel
is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time. We cannot say the same about
preemptive kernels, so they must be carefully designed to ensure that shared kernel data are free from race conditions. Preemptive kernels are
especially difficult to design for SMP architectures, since in these environments it is possible for two kernel-mode processes to run simultaneously on
different processors.
PETERSON’S SOLUTION
We now illustrate a classic software-based solution to the critical-section problem known as Peterson’s solution. Because of the way modern
computer architectures perform basic machine-language instructions, such as load and store, there are no guarantees that Peterson’s solution will
work correctly on such architectures. However, we present the solution because it provides a good algorithmic description of solving the critical-section
problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and
bounded waiting.