LECTURE NOTES
ADVANCED OPERATING SYSTEMS
Unit I & II
CGAC/CS Department
UNIT II
• Inter Process Communication – Race Condition
– Critical Region – Mutual Exclusion – Sleep
and Wakeup – Semaphores – Mutexes –
Message Passing.
• Classical IPC Problems: The Dining Philosophers
Problem – The Readers and Writers Problem –
The Sleeping Barber Problem – The Producer-
Consumer Problem.
1. What’s an Operating System?
Operating System
A set of programs¹ that act as an intermediary
between a computer's user(s) and hardware.
¹ stored in firmware (ROM) and/or software
OS...User View
All users share similar hardware interfaces
Keyboard, Mouse, Monitor, Printer, Speakers, CD-ROM
Three types of users by software interface:
1. End-User (Icons, Folder Navigation, MS Word Application)
2. System Administrator: shell interface
Shell Scripts (Keyboard only) for dir, copy, ftp, cd
3. Programmer: Application Programming Interface
• GUI stuff
• Logical stuff (math.h, file.h, ioctl.h)
Obviously, the three are interdependent
This course is equally concerned with needs of all three
labs focus on (3), but “User” generally refers to all three
OS...Conceptual View
Operating System is the (only) intermediary
between Users and Hardware.
[Figure 1-1 in text: User A and User B programs
sit above the OS, which sits above the Hardware.]
OS...Physical View
[Figure: the User's I/O devices (mouse, keyboard,
monitor, speakers, printer) and the disk connect via
the System Bus to the CPU and Memory, which hold
the OS and User Programs.]
Reality is a little more complicated
than the conceptual view...
OS Roles...Three Roles
• Resource allocator.
Manages / allocates / referees hardware resources
CPU time, memory, video / audio out, network access, printer
queues, etc.
• Control program.
– Starts / stops user programs (scheduling, preemption)
– Directs I/O devices (using device drivers).
• Kernel
– The one program running at all times (all else being
user programs).
OS Goals
1. Protect users from each other
2. Make the computer convenient to use
3. Execute user programs
4. Use hardware efficiently
5. Make programs easier to write
6. Protect hardware from user
(different goals matter most to the End User,
the Programmer, and the Sys Admin)
Batch System
Only one job runs at a time
– Job fully “owns” CPU and all hardware
– CPU underutilized. Card reader 20 cards/sec.
– input, processing, output
Reduce setup time by batching similar jobs
– Operator separates instructions / data
– Minimizes next stack of cards to be read
Invention of disk....Automatic job sequencing –
auto transfer from one job to another on disk.
– Resident monitor is born
• First rudimentary operating system.
• Less waiting on I/O
Memory Layout: Simple Batch System
OS loaded to a consistent place
in main memory (high or low);
size of OS is predictable
User program (only one)
uses up to all remaining memory
Multiprogrammed Batch Systems
• Several jobs are kept in main
memory at the same time
• CPU multiplexed among them
• OS must make decisions
• one job waits on I/O, so...
• schedule another job
• but what are priorities?
• Core challenges haven’t changed
in 50 years; basic programs look a
lot like this:
input, processing, output.
OS Features for Multiprogramming
• Memory management – allocate memory to
several jobs at same time
• CPU scheduling – choose among several jobs
ready to run
• I/O routine supplied by the system
– when a program needs to wait on I/O, it has a well-
known means of informing the system
– system has a well-known means of rescheduling
another job during the I/O
• Allocation of devices (disk, more memory...)
– Only one job “owns” hardware at a given time.
Time-Sharing Systems (Unix)
Interaction between user and system
– beyond simple input, processing, output.
– when OS finishes the execution of one command, it
seeks next “control statement” from keyboard
– On-line system must be available for users to access data
and code.
CPU still multiplexes several jobs
– But more sophisticated....need greater degree of
multiprogramming to support many users
– jobs kept in memory and on disk
– CPU allocated to a job only if it’s in memory
– Jobs swapped in/out of memory from/to disk.
Concurrency
I/O devices and CPU can execute concurrently.
• Each device controller has a local memory buffer
Device only talks to that buffer (not to main bus)
While device is doing its thing, CPU is oblivious to device
and finds other useful work to do.
• Device controller informs CPU that it has finished its
operation by causing an interrupt.
CPU drops everything it’s doing
CPU moves data from device controller’s local buffers to
main memory; now local buffer can be refilled.
Sidepoint: How does this relate to parallel processing?
6. Network Structures
• Local Area Networks (LAN)
– usually within a building
– speeds 10 Mbit/s – 1 Gbit/s
• Ethernet & FDDI protocols
• bus, ring, and star topologies.
– may interoperate with other LAN(s) via Gateway.
• Wide Area Networks (WAN)
– as big as the world (Arpanet -> Internet)
– disparate speeds, protocols
• Point to Point Protocol (PPP) for Modem access to Internet
– leased phone lines, fiber, satellite links, etc.
– requires many communications processors (CPs)
Local Area Network Structure
Wide Area Network Structure
3. OS Structures
How OS software is organized.
1. System Components
• Process Management
• Main Memory Management
• File Management
• I/O System Management
• Secondary Storage Management
• Networking
• Protection System
• Command-Interpreter System
Process Management
• A process is a program in execution.
• A process needs certain resources, including
CPU time, memory, files, and I/O devices, to
accomplish its task.
• OS is responsible for the following activities in
connection with process management.
– Process creation and deletion.
– Process suspension and resumption.
– Process synchronization
– Interprocess communication
Main Memory Management
• Memory is a large array of bytes, each with its
own address.
• Main memory is a volatile storage device. It loses
its contents in the case of system failure.
• OS is responsible for the following activities in
connections with memory management:
– Keep track of which parts of memory are currently
being used and by what process.
– Decide which processes to load when memory space
becomes available.
– Allocate and deallocate memory space as needed
3. System Calls
• Interface between a running program (process) and the OS
– Generally available as assembly-language instructions.
– Languages defined to replace assembly language for systems
programming allow system calls to be made directly (C, C++,
Perl) via a well-known API.
• Three general methods to pass parameters between a
running program and the OS:
– CPU registers.
– Store parameters to a table in memory, and pass the table
address as a parameter in a register.
– The program pushes (stores) the parameters onto the stack,
and the operating system pops them off the stack.
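In practice a programmer rarely issues a trap directly; the C library wrapper places the parameters in registers per the platform ABI and traps to the kernel. A minimal sketch (assuming a POSIX system; the helper name `say` is ours, not a standard call):

```c
/* Sketch: invoking a system call through its C library wrapper.
   write(fd, buf, count) passes its three parameters in CPU
   registers before trapping, and returns the bytes written. */
#include <unistd.h>
#include <string.h>
#include <assert.h>

ssize_t say(const char *msg) {       /* hypothetical helper for illustration */
    return write(STDOUT_FILENO, msg, strlen(msg));
}
```

The return value travels back from the kernel the same way, via a register.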
Multitasked Process Control
E.g., user runs one
interpreter and three
processes concurrently.
All four tasks execute
independently unless they
explicitly coordinate their
own execution
File Manipulation
create, delete, open, read, write, reposition
Device Management
request device, release, open, close,
read, write, reposition
Information Maintenance
current time, date
UNIX System Structure
• UNIX – limited by hardware functionality, the
original UNIX OS had limited structure.
• UNIX consists of two separable parts.
– Systems programs
– The kernel
• Consists of everything below the system-call interface and
above the physical hardware
• Provides the file system, CPU scheduling, memory
management, and other operating-system functions; a large
number of functions for one level.
Chapter 4: Processes
Programs in Execution...
& how they get there.
1. What’s a Process?
• An OS executes a variety of programs:
– or Jobs (Batch systems)
– or Tasks, Executables (Time-shared systems)
– Textbook uses job and process almost interchangeably.
• Process = a program in execution
– execution must progress in a sequential fashion.
• A process includes:
– ordered CPU instructions (text): the part on disk,
the result of a static compile
– data section: data changes during execution
– program counter: indicates which instruction
is current for a given user
– stack
Process State
• As a process executes, it changes state
• Typical valid states (OS dependent):
– new: the process is being created.
– running: instructions are being executed.
– waiting: the process is waiting for some event
to occur, usually involving I/O.
– ready: the process is waiting to be assigned to
a processor (CPU).
– terminated: the process has finished execution.
Process State Transitions
Process Control Block (PCB)
OS data structure
Stores info associated with each process:
• Process state
• Program counter
• CPU register values
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
Process Control Block (PCB)
One of these data structures
will be allocated for each
process when the process is
initially scheduled.
Throughout the process’
lifecycle, different kernel
subsystems will read/write
different sections of PCB data
Also a construct in Labs
CPU Switch Between Processes
P0 has the CPU;
then P1;
then P0 again.
During switch,
PCB is crucial
enabler for the OS
to schedule the
next process and
retain knowledge
required to restore
previous process to
execution at some
later time.
2. Process Scheduling Queues
• Job queue – set of all processes in the system.
• Ready queue – set of all processes residing in main
memory, ready and waiting to execute.
• Device queues – set of processes waiting for an I/O
device.
Ready Queue & I/O Device Queues
Process Scheduling
Schedulers
• Long-term scheduler (or job scheduler) – selects
which processes should be initially brought into the
ready queue.
– why wouldn’t we bring a program in?
• Short-term scheduler (or CPU scheduler) – selects
which process should be next to run, the next time the
CPU (or a CPU) becomes available
Medium-Term Scheduling
If a process is so "busy" waiting on I/O, or so low in
priority that it never gets to run anyway, it is really
wasting memory; swap its memory out to disk to support
a higher degree of multiprogramming.
Schedulers (Cont.)
• Short-term scheduler is invoked very frequently
(milliseconds) ⇒ must be fast
– Processes can be described as either:
• I/O-bound or bursty: spends more time doing I/O than
computations; many short CPU bursts. E.g.: word processor
• CPU-bound: spends more time doing computations; few but very
long CPU bursts. E.g.: engineering analysis
• Long-term scheduler is invoked very infrequently
(seconds, minutes) ⇒ may be slow
– controls the degree of multiprogramming.
Context Switch
• When CPU switches to another process, the system
must save the state of the old process and load the
saved state for the new process.
• Context-switch time is overhead; the system does no
useful work while switching.
– This is a major tradeoff in system design.
– Too much v. Too little context switching
• and what about realtime OS constraints?
• Overhead time depends on hardware
– E.g., multiple register sets on some chips can obviate
the need to copy data to and from the PCB, reducing
the overhead.
3. Process Creation
• Parent process can create child processes, which, in
turn can create other processes
• This forms a tree of processes.
• Three resource sharing possibilities:
– Parent and children share all resources.
– Children share subset of parent’s resources.
– Parent and child share no resources.
• Execution considerations:
– Parent and children execute concurrently.
– Parent waits until children terminate.
Process Creation (Cont.)
• Address space
– Child duplicate of parent
UNIX: fork system call
– Child has a program loaded into it
UNIX: exec system call
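A minimal sketch of the fork half of this pair (assuming a POSIX system): the child is a duplicate of the parent's address space, runs from the point of the fork, and reports a status back through wait.

```c
/* Sketch of UNIX process creation: fork() duplicates the parent;
   the child exits with its own status (a real child would usually
   call exec here to load a new program); the parent waits. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <assert.h>

int spawn_and_wait(void) {
    pid_t pid = fork();
    if (pid < 0) return -1;            /* fork failed */
    if (pid == 0)
        _exit(42);                     /* child: its own exit status */
    int status;
    waitpid(pid, &status, 0);          /* parent blocks until child exits */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The exit value 42 is arbitrary; it only shows the parent observing the child's termination.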
Processes Tree in UNIX
Process Termination
• Case 1: Process voluntarily finishes (exit).
– Output data from child to parent (via wait).
– Process’ resources are deallocated by OS
• Case 2: Parent terminates child (abort). Why?
– Child has exceeded allocated resources.
– Task assigned to child is no longer required.
– Parent is exiting, and termination should cascade to its children.
4. Cooperating Processes
• Independent processes cannot affect or be affected by
any other processes.
• Cooperating processes can affect or be affected by
the execution of their peer processes
• Advantages of process cooperation
– Information sharing
– Computation speed-up (on parallel system)
– Modularity (software maintenance issue)
– Convenience (edit / print at same time)
Producer-Consumer Problem
• Paradigm for cooperating processes
• producer process produces information that is
consumed by a consumer process.
– unbounded-buffer places no limit on the size of the
buffer. Producer can always produce.
– bounded-buffer assumes a fixed buffer size.
Producer may have to wait until consumer has
consumed in order to produce more.
Bounded-Buffer: Shared-Memory Solution
• Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0; // next producer slot
int out = 0; // next consumer slot
• Solution is correct, but can only use
BUFFER_SIZE-1 elements
Bounded-Buffer: Producer Process
item nextProduced;
while (1) {
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
}
Bounded-Buffer: Consumer Process
item nextConsumed;
while (1) {
while (in == out)
; /* do nothing */
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
}
// (in-out) indicates Producer lead on consumer.
// in == out means Consumer can’t consume
// in == out-1 means producer can’t produce
// what does “do nothing” mean?
// have we used all slots? (but: semaphores, later)
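The slot-counting questions above can be checked directly. A single-process sketch of the same ring buffer (the `put`/`get` names are ours) shows why only BUFFER_SIZE-1 slots are ever usable: `in == out` must be reserved to mean "empty".

```c
/* Single-process sketch of the shared ring buffer above. With the
   (in + 1) % BUFFER_SIZE == out "full" test, one slot is sacrificed
   so that in == out can unambiguously mean "empty". */
#include <assert.h>

#define BUFFER_SIZE 10
static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;            /* next producer / consumer slot */

int put(int v) {                       /* 0 on success, -1 if full */
    if ((in + 1) % BUFFER_SIZE == out) return -1;
    buffer[in] = v;
    in = (in + 1) % BUFFER_SIZE;
    return 0;
}

int get(int *v) {                      /* 0 on success, -1 if empty */
    if (in == out) return -1;
    *v = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 0;
}
```

Filling the buffer until `put` refuses shows exactly BUFFER_SIZE-1 successful insertions.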
5. Interprocess Communication
• Mechanism for processes to synchronize actions.
• Message system – processes communicate with each other
without resorting to shared variables.
• IPC facility provides two operations:
– send (message) – message size fixed or variable
– receive (message)
• If P and Q wish to communicate, they must:
– establish a communication link between them
– exchange messages via send/receive
• Implementation of communication link
– physical (e.g., shared memory, hardware bus)...transparent here.
– logical (e.g., logical properties)
Implementation Questions
• How are links established?
• Can more than two processes share link?
• Can processes share more than one link?
• What is the capacity of a link?
• Is link message size fixed or variable?
• Is link uni- or bi-directional?
Direct Communication
• Processes must name each other explicitly:
– send (P, message) – send a message to process P
– receive(Q, message) – receive a message from process Q
• Properties:
– Links established automatically (though the processes must
know about each other via some other means).
– One link == one pair of communicating processes.
– May be unidirectional, but is usually bi-directional.
Indirect Communication
• Messages are directed and received from mailboxes
(or ports).
– Each mailbox has a unique id.
– Processes communicate only if they share a port.
• Properties:
– Process must explicitly connect to a port.
– A link may be associated with many processes.
– Each pair of processes may participate in several
disparate links.
– Link may be uni- or bi-directional.
Indirect Communication (cont.)
• Mailbox sharing
– P1, P2, and P3 share mailbox A.
– P1, sends; P2 and P3 receive.
– Who gets the message?
• Solutions
– Allow a link to be associated with at most two processes.
– Allow only one process at a time to execute a receive operation.
– Allow the system to select arbitrarily the receiver. Sender is notified
who the receiver was.
Synchronization
Message passing may be blocking or non-blocking.
– Blocking is considered synchronous
– Non-blocking is considered asynchronous
• send and receive primitives may be either
blocking or non-blocking.
Remote Procedure Calls
• Remote procedure calls (RPC) abstract
procedure calls between distributed
(networked) processes
• Stubs: client-side proxy for the server procedure
– The stub locates the server and marshals parameters.
• The server-side skeleton receives this
message, unmarshals the parameters, and
performs the procedure on the server.
Java RMI
• Remote Method Invocation (RMI) is a Java
mechanism similar to RPCs.
• RMI allows a Java program on one machine to invoke
a method on a remote object.
Chapter 5: Threads
Concurrent Tasks Within a Process
1. Overview
• So far we’ve only talked of a Process as a
single-threaded task to be executed, with:
• resources (file table, memory for text/data)
• state (stack, registers, program counter)
• What if we allow state to be “cloned,” so that
potentially many subtasks could execute
concurrently?
• Analogy of fire department / sports team
• each member shares same “program”
• yet each occupies different space, and members must share
resources
1. Overview (cont.)
Code (text),
data and files
don’t change
in a threaded
scenario.
However,
threads allow
us to have
multiple
instances of
registers, stack,
and PC.
Benefits
• Responsiveness
(cancel button; web browser JPEGs & mail)
• Resource Sharing
(e.g. Shared.java)
• Economy
(Solaris 30X creation, 5X context switch)
• Utilization of MP Architectures
Two Types of Threads
• User Threads (very lightweight)
– Thread management done by user-level threads library
– Very low overhead, but limited benefit
- POSIX Pthreads
- Mach C-threads
- Solaris UI-threads
• Kernel Threads (less lightweight)
– Thread management done by the kernel
– Can block on I/O & take advantage of Parallelism.
• Windows 95/98/NT/2000/XP
• Solaris
• Linux
2. Multithreading Models
• Many-to-One
– Many user-level threads mapped to single kernel
thread
• One-to-One
– Each user-level thread maps to kernel thread.
• Many-to-Many
– Thread pooling....can be best of both worlds.
Many-to-One Model
Good News:
Little Overhead
Bad News:
• One thread
blocks siblings, so
don’t allow user
threads to do I/O
operations.
One-to-one Model
Each thread can block & be CPU-scheduled independently
Many-to-Many Model
• A given User
thread will only
block one kernel
thread, giving
other user threads
the chance to run.
• Since multiple
kernel threads are
available,
parallelism is
possible.
8. Java Threads
• Java threads may be created by:
– Extending Thread class
– Implementing the Runnable interface
• Depending on JVM implementation,
scheduling can be done by JVM or mapped to
native Thread library.
Java Thread States
[State diagram: Ready, Running, Suspended, Asleep,
Blocked, and Monitor states; yield() is called by the
thread itself, while suspension is caused by another
thread.]
Chapter 7: Synchronization
Making things work...
...at the same time
1. Background
• Concurrent access may corrupt shared data
– Maintaining data consistency requires mechanisms to ensure
orderly execution of cooperating processes.
• Example: Shared-memory solution to bounded-buffer
problem (Chapter 4) allows at most n – 1 items in
buffer at the same time.
– A solution where all N buffers are used is not simple.
– Suppose: modify producer-consumer code by adding a
variable counter, initialized to 0 and incremented each time
a new item is added to the buffer...
Bounded-Buffer II...Shared Data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0; // NEW
Bounded-Buffer II...Producer
item nextProduced;
while (1) {
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Bounded-Buffer II...Consumer
item nextConsumed;
while (1) {
while (counter == 0)
; /* do nothing */
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
}
Bounded Buffer II...Atomicity
• The statements
counter++;
counter--;
must be performed atomically.
• Atomic operation means an operation that completes
in its entirety without interruption.
Bounded Buffer II...Assembly
• The statement counter++ may be implemented in
machine language as:
register1 = counter
register1 = register1 + 1
counter = register1
• The statement counter-- may be implemented as:
register2 = counter
register2 = register2 - 1
counter = register2
Bounded Buffer II...Interleaving
• If both producer and consumer attempt to
update buffer concurrently, the assembly
language statements may get interleaved.
• Interleaving depends on how the producer and
consumer processes are scheduled.
We can’t assume scheduling always works in our favor...
Bounded Buffer
• Assume counter is initially 5. One possible scenario for
interleaving is:
producer: r1 = counter (register1 = 5)
producer: r1 = r1 + 1 (register1 = 6)
consumer: r2 = counter (register2 = 5)
consumer: r2 = r2 – 1 (register2 = 4)
producer: counter = r1 (counter = 6)
consumer: counter = r2 (counter = 4)
• Depending on scheduling, the value of counter may be 4, 5,
or 6, but the correct result is 5.
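One standard way to rule out this interleaving in practice (a sketch using POSIX threads, not part of the original slides) is to wrap the three-instruction sequences in a mutex, so each increment or decrement runs to completion:

```c
/* Sketch: making counter++ / counter-- atomic with a POSIX mutex.
   Without the lock, the register sequences above can interleave and
   lose updates; with it, the final value is exact. */
#include <pthread.h>
#include <assert.h>

static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                     /* atomic w.r.t. the consumer */
        pthread_mutex_unlock(&lock);
    }
    return 0;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter--;
        pthread_mutex_unlock(&lock);
    }
    return 0;
}

int run_both(void) {
    pthread_t p, c;
    pthread_create(&p, 0, producer, 0);
    pthread_create(&c, 0, consumer, 0);
    pthread_join(p, 0);
    pthread_join(c, 0);
    return counter;                    /* equal increments and decrements */
}
```

With the lock, counter ends at exactly 0 no matter how the scheduler interleaves the two threads.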
Race Condition
• Race condition: several processes access and
manipulate shared data concurrently, such that
the final value of the shared data depends on
which process finishes last.
• To prevent race conditions, concurrent
processes must be synchronized.
2. Critical-Section Problem
• n processes compete to use some shared data
• Each process has a code segment, called a
critical section (CS), which accesses shared
data.
• Problem – ensure that:
– when one process is executing its CS,
– no other process is allowed to execute its CS
Critical-Section...Solution
– Assume each process executes at a nonzero speed.
– No assumption concerning relative speeds of the n processes.
Useful solution requires three elements:
1. Mutual Exclusion. If Pi is executing its CS, no other
processes can be executing their CSs.
2. Progress. If no process is executing its CS and at least one
process wishes to enter its CS, the selection of next process to
enter can’t be postponed indefinitely.
3. Bounded Waiting. A fair bound must exist on the number
of times that other processes are allowed to enter their CSs
after a process has made a request to enter its CS and before
its request is granted.
Initial Attempts to Solve Problem
• Only 2 processes, P0 and P1
• General structure of process Pi (other process Pj)
do {
entry section
critical section
exit section
remainder section
} while (1);
• Processes may share some common variables to
synchronize their actions.
Algorithm 1...Tag Team
• Shared variables:
– int turn = i; ⇒ Pi can enter its critical section initially
• Process Pi
do {
while (turn != i)
; // wait
critical section
turn = j;
remainder section
} while (1);
• Satisfies mutual exclusion, but not progress
– what if P0 has to enter its CS twice as often as P1?
Algorithm 2...Politeness
• Shared variables
boolean flag[2]; // initially both flags false
flag[i] == true ⇒ Pi wants to enter its CS
• Process Pi
do {
flag[i] = true;
while (flag[j])
; // no-op
critical section
flag[i] = false;
remainder section
} while (1);
• Satisfies mutual exclusion, but not progress
– what if both say "no, you go first" at the same time?
Algorithm 3...Polite Tag-Team
• Combine the variables of algorithms 1 and 2.
• Process Pi
do {
flag[i] = true; turn = j;
while (flag[j] && turn == j)
; // no-op
critical section
flag[i] = false;
remainder section
} while (1);
• Meets all three requirements
– solves the CS problem for two processes.
– Makes the race condition work for us
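The pseudocode above can be made runnable. A sketch (not from the slides) using C11 atomics, since a plain-variable busy wait is not well defined on modern hardware:

```c
/* Runnable sketch of Algorithm 3 for two threads. Sequentially
   consistent atomics stand in for the "shared variables" of the
   pseudocode so the busy-wait loop behaves as written. */
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <assert.h>

static atomic_bool flag[2];
static atomic_int turn;
static int shared = 0;                 /* protected by the algorithm */

static void enter(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);      /* flag[i] = true */
    atomic_store(&turn, j);            /* turn = j: defer to the other */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                              /* busy wait */
}

static void leave(int i) {
    atomic_store(&flag[i], false);
}

static void *worker(void *arg) {
    int i = (int)(long)arg;
    for (int k = 0; k < 100000; k++) {
        enter(i);
        shared++;                      /* critical section */
        leave(i);
    }
    return 0;
}

int run_peterson(void) {
    pthread_t t0, t1;
    pthread_create(&t0, 0, worker, (void *)0L);
    pthread_create(&t1, 0, worker, (void *)1L);
    pthread_join(t0, 0); pthread_join(t1, 0);
    return shared;                     /* no lost updates */
}
```

Every increment survives because at most one thread is ever inside the critical section.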
4. Semaphores (Synchronization tool)
• So far each solution to the critical section problem has
required busy waiting...until now.
• Semaphore S: integer variable
– can only be accessed via two atomic operations
wait(S):
while S <= 0 do no-op;
S--;
signal(S):
S++;
No-op means busy waiting without the "busy"
Critical Section of n Processes
• Shared data:
semaphore mutex = 1; // one; important!
• Process Pi:
do {
wait(mutex);
critical section
signal(mutex);
remainder section
} while (1);
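This structure maps directly onto POSIX semaphores. A sketch (assuming Linux-style unnamed semaphores, with threads standing in for the n processes):

```c
/* Sketch of the n-process critical section with a POSIX semaphore
   initialized to 1. Each worker wraps its critical section in
   sem_wait / sem_post, mirroring wait(mutex) / signal(mutex). */
#include <semaphore.h>
#include <pthread.h>
#include <assert.h>

static sem_t mutex;
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        sem_wait(&mutex);              /* entry section */
        shared++;                      /* critical section */
        sem_post(&mutex);              /* exit section */
    }
    return 0;
}

int run_cs(int nthreads) {
    pthread_t t[8];
    sem_init(&mutex, 0, 1);            /* one; important! */
    for (int i = 0; i < nthreads; i++) pthread_create(&t[i], 0, worker, 0);
    for (int i = 0; i < nthreads; i++) pthread_join(t[i], 0);
    sem_destroy(&mutex);
    return shared;
}
```

Unlike the busy-waiting algorithms, a blocked sem_wait caller consumes no CPU.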
• To run S2 in P2 only after S1 in P1, share a
semaphore synch initialized to 0:
in process P1:
S1;
signal(synch);
in process P2:
wait(synch);
S2;
P2 will execute S2 only after P1 has invoked
signal(synch), which is after S1.
Semaphore...Approach
• Define a semaphore as a record
type semaphore = record
value :integer;
L: list of process;
end;
Assume two simple operations:
block suspends the process that invokes it.
wakeup(P) resumes execution of blocked process P.
Semaphore...Implementation
wait(S):
S.value--;
if (S.value < 0) {
add this process to S.L;
block;
}
signal(S):
S.value++;
if (S.value < 1) {
remove a process P from S.L;
wakeup(P);
}
Semaphore as General Synchronization Tool
• Pj executes B only after Pi executes A
• Use a semaphore flag initialized to 0
• Code:
Pi:             Pj:
...             ...
A               wait(flag)
signal(flag)    B
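A runnable sketch of this ordering pattern (POSIX, Linux assumed; `pi`/`pj` and the order-recording array are ours):

```c
/* Sketch: enforcing "A before B" across two threads with a
   semaphore initialized to 0. Pj blocks in wait(flag) until
   Pi has executed A and signaled. */
#include <semaphore.h>
#include <pthread.h>
#include <assert.h>

static sem_t flag;
static int order[2];
static int next_slot = 0;

static void *pi(void *arg) {
    (void)arg;
    order[next_slot++] = 'A';          /* statement A */
    sem_post(&flag);                   /* signal(flag) */
    return 0;
}

static void *pj(void *arg) {
    (void)arg;
    sem_wait(&flag);                   /* blocks until A has run */
    order[next_slot++] = 'B';          /* statement B */
    return 0;
}

int run_order(void) {
    pthread_t ti, tj;
    sem_init(&flag, 0, 0);
    pthread_create(&tj, 0, pj, 0);     /* start Pj first on purpose */
    pthread_create(&ti, 0, pi, 0);
    pthread_join(ti, 0); pthread_join(tj, 0);
    sem_destroy(&flag);
    return order[0] == 'A' && order[1] == 'B';
}
```

Even though Pj is started first, B always follows A.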
Deadlock and Starvation
• Deadlock: two (or more) processes waiting (indefinitely) for
an event that can only be triggered by one of the waiting
processes.
– Let S and Q be two semaphores initialized to 1:
P0:              P1:
wait(S);         wait(Q);
wait(Q);         wait(S);
...              ...
signal(S);       signal(Q);
signal(Q);       signal(S);
• Starvation: indefinite blocking. A process may never
be removed from the semaphore queue in which it is
suspended. Someone must signal().
Two Types of Semaphores
• Counting semaphore – integer value can
range over an unrestricted domain.
• Binary semaphore – integer value can range
only between 0 and 1
– can be simpler to implement in hardware
– can be used to implement a counting semaphore
Making a Binary Semaphore Count
• Data structures:
binary-semaphore S1, S2;
int C;
• Initialization:
S1 = 1 (true); S2 = 0 (false);
C = initial value of the counting semaphore S

Implementing S
wait:
wait(S1); C--;
if (C < 0) {
signal(S1);
wait(S2);
}
signal(S1);
signal:
wait(S1); C++;
if (C < 1)
signal(S2);
else
signal(S1);

Notes:
• S1 protects C. C is the "gateway" to S2, which is the real
lock: if C is already taken, we wait on S2. In either case,
signal S1; if we've waited, we signal twice on S1 (see ***).
• In signal, S1 again protects C. If we're not last (C < 1),
signal S2, since someone's waiting; ***they'll relinquish C
when they're done. Else, signal S1 to relinquish our own
hold on C.
5. Classic Synchronization Problems
• Totally solvable in Section 7.5:
– Bounded-Buffer
– Readers and Writers
• Partially solved in Section 7.5:
– Dining-Philosophers
Bounded-Buffer III
• Third time’s a charm
– use all buffer slots
– AND no busy waiting
• Shared data
semaphore full, empty, mutex;
Initially:
full = 0, empty = n, mutex = 1
(how full are we? how many empty slots?
how many can see the buffer at once?)
Bounded-Buffer... Producer
do {
…
produce an item in nextp
…
wait(empty);
wait(mutex);
…
add nextp to buffer
…
signal(mutex);
signal(full);
} while (1);
Bounded-Buffer ...Consumer
do {
wait(full)
wait(mutex);
…
remove item from buffer to nextc
…
signal(mutex);
signal(empty);
…
consume the item in nextc
…
} while (1);
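The producer and consumer loops above can be run directly. A sketch with POSIX threads and semaphores (Linux assumed; buffer size, item count, and the checksum are our choices for illustration):

```c
/* Runnable sketch of the semaphore bounded buffer: empty counts
   free slots, full counts filled slots, mutex guards the buffer. */
#include <semaphore.h>
#include <pthread.h>
#include <assert.h>

#define N 5
#define ITEMS 1000

static int buf[N], in = 0, out = 0;
static long consumed_sum = 0;
static sem_t full, empty, mutex;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty);              /* wait for a free slot */
        sem_wait(&mutex);
        buf[in] = i; in = (in + 1) % N;     /* add nextp to buffer */
        sem_post(&mutex);
        sem_post(&full);               /* one more filled slot */
    }
    return 0;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);               /* wait for a filled slot */
        sem_wait(&mutex);
        consumed_sum += buf[out]; out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);              /* one more free slot */
    }
    return 0;
}

long run_pc(void) {
    pthread_t p, c;
    sem_init(&full, 0, 0); sem_init(&empty, 0, N); sem_init(&mutex, 0, 1);
    pthread_create(&p, 0, producer, 0);
    pthread_create(&c, 0, consumer, 0);
    pthread_join(p, 0); pthread_join(c, 0);
    return consumed_sum;               /* 1 + 2 + ... + ITEMS */
}
```

Note that all N slots are usable here, and neither side busy-waits: the semaphores do the counting that the earlier shared-memory version did by hand.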
Readers-Writers Problem
• Shared data
semaphore mutex, wrt;
Initially
mutex = 1, wrt = 1, readcount = 0
• how many can access readcount? (mutex)
• how many can write at once? (wrt)
Readers-Writers...Writer
wait(wrt);
…
writing is performed
…
signal(wrt);
writer doesn’t care how many are trying to read.
Just need to get the write lock.
Readers-Writers...Reader
wait(mutex);
readcount++;
if (readcount == 1)
wait(wrt);
signal(mutex);
…
read
…
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt);
signal(mutex);
Readers want to cooperate so that many can read
concurrently. So they share the write lock to ensure
that no writer comes along while any one reader is
around. First in / last out readers control the
write lock.
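A runnable sketch of this first readers-writers solution (POSIX, Linux assumed; the integer `db` stands in for the database):

```c
/* Sketch: readers share the wrt lock (first in locks, last out
   unlocks); a writer needs wrt exclusively. */
#include <semaphore.h>
#include <pthread.h>
#include <assert.h>

static sem_t mutex, wrt;
static int readcount = 0;
static int db = 0;                     /* the shared "database" */

static void *writer(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        sem_wait(&wrt);
        db++;                          /* writing is performed */
        sem_post(&wrt);
    }
    return 0;
}

static void *reader(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        sem_wait(&mutex);
        if (++readcount == 1) sem_wait(&wrt);  /* first reader locks out writers */
        sem_post(&mutex);
        (void)db;                      /* read: safe, no writer can be active */
        sem_wait(&mutex);
        if (--readcount == 0) sem_post(&wrt);  /* last reader lets writers in */
        sem_post(&mutex);
    }
    return 0;
}

int run_rw(void) {
    pthread_t w, r1, r2;
    sem_init(&mutex, 0, 1); sem_init(&wrt, 0, 1);
    pthread_create(&w, 0, writer, 0);
    pthread_create(&r1, 0, reader, 0);
    pthread_create(&r2, 0, reader, 0);
    pthread_join(w, 0); pthread_join(r1, 0); pthread_join(r2, 0);
    return db;                         /* every write applied exactly once */
}
```

The two readers overlap freely with each other; only the writer is serialized against them.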
Dining-Philosophers Problem
• Shared data
semaphore chopstick[5];
Initially all values are 1
Need two chopsticks to eat
Only have one each...
Must cooperate with
neighbors.
Dining-Philosophers Problem
Philosopher i:
do {
wait(chopstick[i])
wait(chopstick[(i+1) % 5])
// eat (cooperatively)
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
// think (independently)
} while (1);
This assures Mutual Exclusion ✓
BUT: Deadlock ✗, Starvation ✗
Inter-process Communication
• Race Conditions: two or more processes are reading and
writing on shared data and the final result depends on who runs
precisely when
• Mutual exclusion: making sure that if one process is accessing
shared memory, the other will be excluded from doing the
same thing
• Critical region: the part of the program where shared variables
are accessed
Race Conditions
Two processes want to access shared memory at the same time
Critical Regions (1)
Four conditions to provide mutual exclusion
1. No two processes simultaneously in critical region
2. No assumptions made about speeds or numbers of CPUs
3. No process running outside its critical region may block
another process
4. No process must wait forever to enter its critical region
Critical Regions (2)
Mutual exclusion using critical regions
Producer-Consumer (Bounded Buffer) Problem
• Formalizes programs that use a buffer (queue)
• Two processes, producer and consumer, share a fixed-size buffer
• The producer puts an item into the buffer
• The consumer takes an item out of the buffer
• What happens when the producer wants to put an item into the buffer
while the buffer is already full?
• Or when the consumer wants to consume an item from the buffer when
the buffer is empty?
[Figure: producer → Buffer (max size = 10) → consumer]
Mutual Exclusion with Sleep and wakeup
• Solution is to use sleep and wakeup
• Sleep: a system call that causes the caller process to block
• Wakeup: a system call that wakes up a process (given as parameter)
• When the producer wants to put an item to the buffer and the buffer is full
then it sleeps
• When the consumer wants to remove an item from the buffer and the buffer
is empty, then it sleeps.
Sleep and Wakeup
What problems exist in this solution?
•Consumer is running
•It checks count when count == 0
•Scheduler decides to run Producer
just before consumer sleeps
•Producer inserts an item and
increments the count
•Producer notices that count is 1, and
issues a wakeup call.
•Since consumer is not sleeping yet,
the wakeup signal is lost
•Scheduler decides to run the
consumer
•Consumer sleeps
•Producer is scheduled, which runs N
times, and after filling up the buffer it
sleeps
•Both processes sleep forever (or until
the prince OS comes and sends a kiss
signal to kill both)
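The lost wakeup happens because the count is tested and the sleep entered as two separate steps. A sketch (not from the slides) of how this is avoided in practice: pthread condition variables atomically release the mutex and sleep, and the count is re-checked on every wakeup.

```c
/* Sketch: sleep/wakeup without the lost-wakeup race, via a POSIX
   condition variable. The count test and the sleep both happen
   while holding the mutex, so no wakeup can slip between them. */
#include <pthread.h>
#include <assert.h>

#define N 10
static int count = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&m);
        while (count == N)             /* re-checked after every wakeup */
            pthread_cond_wait(&not_full, &m);  /* atomically unlocks + sleeps */
        count++;                       /* insert item */
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
    }
    return 0;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&not_empty, &m);
        count--;                       /* remove item */
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&m);
    }
    return 0;
}

int run_sw(void) {
    pthread_t p, c;
    pthread_create(&p, 0, producer, 0);
    pthread_create(&c, 0, consumer, 0);
    pthread_join(p, 0); pthread_join(c, 0);
    return count;                      /* insertions and removals balance */
}
```

Neither side can sleep forever: the check-and-sleep is one atomic step from the other thread's point of view.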
Deadlock Illustrated
Classical IPC Problems
• Dining philosophers problem (Dijkstra)
– Models processes competing for exclusive access to a limited
number of resources such as I/O devices
• Readers and writers problem (Courtois et al.)
– Models access to a database (both read and write)
• Sleeping barber problem
– Models queuing situations such as a multi-person helpdesk
with a computerized call waiting system for holding a
limited number of incoming calls
Dining Philosophers (1)
• 5 Philosophers around a table and 5 forks
• Philosophers eat/think
• Eating needs 2 forks
• Pick one fork at a time (first the right fork
then the left one and eat)
What problems may occur in this case?
Dining Philosophers (1)
• All philosophers may take their right fork at
the same time and block when the left forks
are not available.
• Solution (like collision detection in the Ethernet
protocol):
• pick left fork,
• if the right fork is not available then
release left fork and wait for sometime
There is still a problem!!!!
Dining Philosophers (1)
• What happens when all philosophers do the
same thing at the same time?
• This situation is called starvation: every
philosopher keeps doing something, but no
progress is ever made
• Solution is to use a mutex_lock before
taking the forks and release the lock after
putting the forks back on the table
Is this a good solution?
No, because only one philosopher can eat at a time, but there
are enough forks for two!!!
118
Dining Philosophers (3)
Solution to dining philosophers problem (part 1)
119
Dining Philosophers (4)
Solution to dining philosophers problem (part 2)
120
Readers and Writers Problem
• Assume that there is a database, and processes compete for reading from and
writing to the database
• Multiple processes may read the database without any problem
• A process can write to the database only if there are no other processes
reading or writing the database
• Here are the basic steps of the r/w problem, assuming that rc is the reader count
(the number of processes currently reading the database)
– A reader who gains access to the database increments rc (when rc=1, it will lock
the database against writers)
– A reader that finishes reading will decrement rc (when the rc=0 it will unlock the
database so that a writer can proceed)
– A writer can have access to the database when rc = 0 and it will lock the database
for other readers or writers
– Readers will access the database only when there are no writers (but there may be
other readers)
121
The Readers and Writers Problem
A solution to the readers and writers problem
What is the problem with
this solution?
The writer will starve when
there is a constant supply of
readers !!!!
Solution is to queue new
readers behind the current
writer at the expense of
reduced concurrency
122
The Sleeping Barber Problem (1)
• There is one barber and n chairs for waiting
customers
• If there are no customers, then the barber sits in
his chair and sleeps (as illustrated in the picture)
• When a new customer arrives and the barber is
sleeping, the customer wakes up the barber
• When a new customer arrives and the barber is
busy, the customer sits on a chair if one is
available; otherwise (when all the chairs are full)
he leaves.
123
The Sleeping Barber Problem (2)
Solution to sleeping barber problem.
124
Semaphores and Classical
Synchronization Problems
125
An Example Synchronization Problem
126
The Producer-Consumer Problem
• An example of the pipelined model
– One thread produces data items
– Another thread consumes them
• Use a bounded buffer between the threads
• The buffer is a shared resource
– Code that manipulates it is a critical section
• Must suspend the producer thread if the buffer
is full
• Must suspend the consumer thread if the buffer
is empty
127
Is this busy-waiting solution
correct?
thread producer {
while(1){
// Produce char c
while (count==n) {
no_op
}
buf[InP] = c
InP = InP + 1 mod n
count++
}
}
thread consumer {
while(1){
while (count==0) {
no_op
}
c = buf[OutP]
OutP = OutP + 1 mod n
count--
// Consume char
}
}
(Figure: circular buffer with slots 0, 1, 2, …, n-1)
Global variables:
char buf[n]
int InP = 0 // place to add
int OutP = 0 // place to get
int count
128
This code is incorrect!
• The “count” variable can be corrupted:
– Increments or decrements may be lost!
– Possible Consequences:
• Both threads may spin forever
• Buffer contents may be over-written
• What is this problem called?
129
This code is incorrect!
• The “count” variable can be corrupted:
– Increments or decrements may be lost!
– Possible Consequences:
• Both threads may spin forever
• Buffer contents may be over-written
• What is this problem called? Race Condition
• Code that manipulates count must be made into
a ??? and protected using ???
130
This code is incorrect!
• The “count” variable can be corrupted:
– Increments or decrements may be lost!
– Possible Consequences:
• Both threads may spin forever
• Buffer contents may be over-written
• What is this problem called? Race Condition
• Code that manipulates count must be made into
a critical section and protected using mutual
exclusion!
131
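The race on count is easy to provoke. In this sketch (hypothetical names, CPython threads), two threads increment a shared counter with no protection; because `count += 1` is a non-atomic read-modify-write, increments may be lost on any given run, and the only safe guarantee is the upper bound.

```python
# Two threads hammer an unprotected shared counter. Increments may be
# lost depending on how the interpreter interleaves the threads; the
# final value never exceeds the intended total, but may fall short.
import threading

ITERS = 100_000
count = 0

def bump():
    global count
    for _ in range(ITERS):
        count += 1          # read-modify-write: NOT atomic

threads = [threading.Thread(target=bump) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# count is anywhere from "some updates lost" up to 2 * ITERS
```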
Some more problems with this code
• What if buffer is full?
– Producer will busy-wait
– On a single CPU system the consumer will not be
able to empty the buffer
• What if buffer is empty?
– Consumer will busy-wait
– On a single CPU system the producer will not be
able to fill the buffer
• We need a solution based on blocking!
132
thread consumer {
  while(1) {
    if (count==0) {
      sleep(empty)
    }
    c = buf[OutP]
    OutP = OutP + 1 mod n
    count--
    if (count == n-1)
      wakeup(full)
    // Consume char
  }
}
Producer/Consumer with Blocking – 1st attempt
thread producer {
  while(1) {
    // Produce char c
    if (count==n) {
      sleep(full)
    }
    buf[InP] = c;
    InP = InP + 1 mod n
    count++
    if (count == 1)
      wakeup(empty)
  }
}
(Figure: circular buffer with slots 0, 1, 2, …, n-1)
Global variables:
char buf[n]
int InP = 0 // place to add
int OutP = 0 // place to get
int count
133
thread consumer {
  while(1) {
    if (count==0) {
      sleep(empty)
    }
    c = buf[OutP]
    OutP = OutP + 1 mod n
    count--
    if (count == n-1)
      wakeup(full)
    // Consume char
  }
}
Use a mutex to fix the race condition in this code
thread producer {
  while(1) {
    // Produce char c
    if (count==n) {
      sleep(full)
    }
    buf[InP] = c;
    InP = InP + 1 mod n
    count++
    if (count == 1)
      wakeup(empty)
  }
}
(Figure: circular buffer with slots 0, 1, 2, …, n-1)
Global variables:
char buf[n]
int InP = 0 // place to add
int OutP = 0 // place to get
int count
134
Problems
• Sleeping while holding the mutex causes
deadlock !
• Releasing the mutex then sleeping opens up a
window during which a context switch might
occur … again risking deadlock
• How can we release the mutex and sleep in a
single atomic operation?
• We need a more powerful synchronization
primitive
135
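A primitive with exactly the atomic "release the mutex and sleep" behavior asked for above already exists: a condition variable's wait() releases the underlying lock and blocks in one step, then reacquires the lock on wakeup. As an illustration (this is not the Blitz code), here is the bounded buffer written with Python's `threading.Condition`:

```python
# Bounded buffer using a condition variable. cond.wait() atomically
# releases the mutex and sleeps, closing the lost-wakeup window.
import threading
from collections import deque

n = 4
buf = deque()
cond = threading.Condition()     # one mutex + a wait queue

def produce(item):
    with cond:                   # lock the mutex
        while len(buf) == n:
            cond.wait()          # atomically: unlock + sleep
        buf.append(item)
        cond.notify_all()        # wake any sleepers

def consume():
    with cond:
        while len(buf) == 0:
            cond.wait()
        item = buf.popleft()
        cond.notify_all()
        return item

results = []
cons = threading.Thread(
    target=lambda: results.extend(consume() for _ in range(10)))
cons.start()
for i in range(10):
    produce(i)
cons.join()
```

With one producer and one consumer the items come out in FIFO order.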
Semaphores
• An abstract data type that can be used for
condition synchronization and mutual exclusion
What is the difference between mutual exclusion
and condition synchronization?
136
Semaphores
• An abstract data type that can be used for
condition synchronization and mutual exclusion
• Condition synchronization
– wait until condition holds before proceeding
– signal when condition holds so others may proceed
• Mutual exclusion
– only one at a time in a critical section
137
Semaphores
• An abstract data type
– containing an integer variable (S)
– Two operations: Wait (S) and Signal (S)
• Alternative names for the two operations
– Wait(S) = Down(S) = P(S)
– Signal(S) = Up(S) = V(S)
• Blitz names its semaphore operations Down and
Up
138
Classical Definition of Wait and
Signal
Wait(S)
{
  while S <= 0 do noop; /* busy wait! */
  S = S - 1;            /* now S >= 0 */
}

Signal(S)
{
  S = S + 1;
}
139
Problems with classical definition
• Waiting threads hold the CPU
– Waste of time on single-CPU systems
– Requires preemption to avoid deadlock
140
Blocking implementation of
semaphores
Semaphore S has a value, S.val, and a thread list, S.list.
Wait (S)
  S.val = S.val - 1
  If S.val < 0          /* |S.val| = number of waiting threads */
  { add calling thread to S.list;
    block;              /* sleep */
  }

Signal (S)
  S.val = S.val + 1
  If S.val <= 0
  { remove a thread T from S.list;
    wakeup (T);
  }
141
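A sketch of this blocking implementation in Python (illustrative, not the Blitz code): the thread list holds one Event per waiting thread, and an Event set before its wait() is remembered, which is what makes this sleep/wakeup pairing immune to the lost-wakeup problem.

```python
# Blocking semaphore: a value protected by a mutex, plus a list of
# waiting threads (here, one Event per waiter).
import threading

class Sem:
    def __init__(self, value=0):
        self.val = value                 # S.val
        self.mutex = threading.Lock()    # protects val and the list
        self.waiters = []                # S.list

    def wait(self):                      # Down / P
        self.mutex.acquire()
        self.val -= 1
        if self.val < 0:                 # |val| = number of waiters
            ev = threading.Event()
            self.waiters.append(ev)
            self.mutex.release()
            ev.wait()                    # block ("sleep"); a set() that
        else:                            # arrives first is remembered
            self.mutex.release()

    def signal(self):                    # Up / V
        with self.mutex:
            self.val += 1
            if self.val <= 0:            # someone is waiting: wake one
                self.waiters.pop(0).set()

s = Sem(1)
s.wait()            # succeeds immediately, val -> 0
done = []
t = threading.Thread(target=lambda: (s.wait(), done.append("got it")))
t.start()
s.signal()          # wakes (or pre-credits) the second wait
t.join()
```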
Implementing semaphores
• Wait () and Signal () are assumed to be atomic
How can we ensure that they are atomic?
142
Implementing semaphores
• Wait () and Signal () are assumed to be atomic
How can we ensure that they are atomic?
• Implement Wait() and Signal() as system calls?
– how can the kernel ensure Wait() and Signal() are
completed atomically?
– Same solutions as before
• Disable interrupts, or
• Use TSL-based mutex
143
Semaphores with interrupt
disabling
Signal(semaphore sem)
DISABLE_INTS
sem.val++
if (sem.val <= 0) {
th = remove next
thread from sem.L
wakeup(th)
}
ENABLE_INTS
struct semaphore {
int val;
list L;
}
Wait(semaphore sem)
DISABLE_INTS
sem.val--
if (sem.val < 0){
add thread to sem.L
sleep(thread)
}
ENABLE_INTS
144
Blitz code for Semaphore.wait
method Wait ()
  var oldIntStat: int
  oldIntStat = SetInterruptsTo (DISABLED)
  if count == 0x80000000
    FatalError ("Semaphore count underflowed during 'Wait' operation")
  endIf
  count = count - 1
  if count < 0
    waitingThreads.AddToEnd (currentThread)
    currentThread.Sleep ()
  endIf
  oldIntStat = SetInterruptsTo (oldIntStat)
endMethod
146
But what is currentThread.Sleep ()?
• If sleep stops a thread from executing, how,
where, and when does it return?
– which thread enables interrupts following sleep?
– the thread that called sleep shouldn’t return until
another thread has called signal !
– … but how does that other thread get to run?
– … where exactly does the thread switch occur?
• Trace down through the Blitz code until you find
a call to switch()
– Switch is called in one thread but returns in another!
150
Look at the following Blitz source
code
• Thread.c
– Thread.Sleep ()
– Run (nextThread)
• Switch.s
– Switch (prevThread, nextThread)
151
Blitz code for Semaphore.signal
method Signal ()
  var oldIntStat: int
      t: ptr to Thread
  oldIntStat = SetInterruptsTo (DISABLED)
  if count == 0x7fffffff
    FatalError ("Semaphore count overflowed during 'Signal' operation")
  endIf
  count = count + 1
  if count <= 0
    t = waitingThreads.Remove ()
    t.status = READY
    readyList.AddToEnd (t)
  endIf
  oldIntStat = SetInterruptsTo (oldIntStat)
endMethod
152
Semaphores using atomic
instructions
• Implementing semaphores with interrupt disabling only works on
uniprocessors
– What should we do on a multiprocessor?
• As we saw earlier, hardware provides special atomic instructions for
synchronization
– test and set lock (TSL)
– compare and swap (CAS)
– etc
• Semaphore can be built using atomic instructions
1. build mutex locks from atomic instructions
2. build semaphores from mutex locks
156
Building spinning mutex locks
using TSL
Mutex_lock:
TSL REGISTER,MUTEX | copy mutex to register and set mutex to 1
CMP REGISTER,#0 | was mutex zero?
JZE ok | if it was zero, mutex is unlocked, so return
JMP mutex_lock | try again
ok: RET | return to caller; enter critical section
Mutex_unlock:
MOVE MUTEX,#0 | store a 0 in mutex
RET | return to caller
157
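Python cannot issue a real TSL instruction, but `Lock.acquire(blocking=False)` is an atomic grab-if-free that can stand in for it, so the spin structure of mutex_lock above can be modeled as below. This is a sketch of the control flow only; under the GIL it does not demonstrate true multiprocessor spinning.

```python
# Spin mutex modeled on the TSL assembly above, with a nonblocking
# lock acquire standing in for the atomic test-and-set.
import threading

class SpinMutex:
    def __init__(self):
        self._flag = threading.Lock()    # stands in for the MUTEX word

    def lock(self):
        # TSL + JZE loop: retry until the atomic grab succeeds
        while not self._flag.acquire(blocking=False):
            pass                         # busy-wait (JMP mutex_lock)

    def unlock(self):
        self._flag.release()             # MOVE MUTEX,#0

m = SpinMutex()
total = 0

def worker():
    global total
    for _ in range(10_000):
        m.lock()
        total += 1                       # critical section
        m.unlock()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```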
Using Mutex Locks to Build
Semaphores
• How would you modify the Blitz code to do this?
158
What if you had a blocking mutex lock?
Problem: Implement a counting semaphore
Up ()
Down ()
...using just Mutex locks
• Goal: Make use of the mutex lock’s blocking
behavior rather than reimplementing it for the
semaphore operations
159
How about this solution?
var cnt: int = 0 -- Signal count
var m1: Mutex = unlocked -- Protects access to “cnt”
m2: Mutex = locked -- Locked when waiting
Down ():
Lock(m1)
cnt = cnt – 1
if cnt<0
Lock(m2)
Unlock(m1)
else
Unlock(m1)
endIf
Up():
Lock(m1)
cnt = cnt + 1
if cnt<=0
Unlock(m2)
endIf
Unlock(m1)
160
How about this solution then?
var cnt: int = 0 -- Signal count
var m1: Mutex = unlocked -- Protects access to “cnt”
m2: Mutex = locked -- Locked when waiting
Down ():
Lock(m1)
cnt = cnt – 1
if cnt<0
Unlock(m1)
Lock(m2)
else
Unlock(m1)
endIf
Up():
Lock(m1)
cnt = cnt + 1
if cnt<=0
Unlock(m2)
endIf
Unlock(m1)
162
Classical Synchronization problems
• Producer Consumer (bounded buffer)
• Dining philosophers
• Sleeping barber
• Readers and writers
163
Producer consumer problem
• Also known as the bounded buffer problem
(Figure: a ring of 8 buffers shared between the threads, with InP marking
the next slot to fill and OutP the next slot to empty)
Producer and consumer are separate threads
164
Is this a valid solution?
thread producer {
while(1){
// Produce char c
while (count==n) {
no_op
}
buf[InP] = c
InP = InP + 1 mod n
count++
}
}
thread consumer {
while(1){
while (count==0) {
no_op
}
c = buf[OutP]
OutP = OutP + 1 mod n
count--
// Consume char
}
}
(Figure: circular buffer with slots 0, 1, 2, …, n-1)
Global variables:
char buf[n]
int InP = 0 // place to add
int OutP = 0 // place to get
int count
165
Does this solution work?
thread producer {
  while(1) {
    // Produce char c...
    down(empty_buffs)
    buf[InP] = c
    InP = InP + 1 mod n
    up(full_buffs)
  }
}
thread consumer {
  while(1) {
    down(full_buffs)
    c = buf[OutP]
    OutP = OutP + 1 mod n
    up(empty_buffs)
    // Consume char...
  }
}
Global variables
semaphore full_buffs = 0;
semaphore empty_buffs = n;
char buf[n];
int InP, OutP;
166
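Transcribed to Python's counting semaphores, the same solution runs as follows. With a single producer and a single consumer the two semaphores suffice on their own; with several of each, InP and OutP would also need mutex protection.

```python
# Bounded buffer with two counting semaphores: empty_buffs counts free
# slots, full_buffs counts filled slots.
import threading

n = 4
buf = [None] * n
InP = OutP = 0
empty_buffs = threading.Semaphore(n)   # free slots
full_buffs = threading.Semaphore(0)    # filled slots
out = []

def producer(items):
    global InP
    for c in items:
        empty_buffs.acquire()          # down(empty_buffs)
        buf[InP] = c
        InP = (InP + 1) % n
        full_buffs.release()           # up(full_buffs)

def consumer(k):
    global OutP
    for _ in range(k):
        full_buffs.acquire()           # down(full_buffs)
        out.append(buf[OutP])
        OutP = (OutP + 1) % n
        empty_buffs.release()          # up(empty_buffs)

data = list(range(20))
t = threading.Thread(target=consumer, args=(len(data),))
t.start()
producer(data)
t.join()
```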
Producer consumer problem
• What is the shared state in the last solution?
• Does it apply mutual exclusion? If so, how?
8 Buffers
InP
OutP
Consumer
Producer
Producer and consumer
are separate threads
167
Problems with solution
• What if we have multiple producers and
multiple consumers?
– Producer-specific and consumer-specific data
becomes shared
– We need to define and protect critical sections
– You’ll do this in the second part of the current Blitz
project, using the mutex locks you built!
168
Dining philosophers problem
• Five philosophers sit at a table
• One chopstick between each philosopher
(need two to eat)
• Why do they need to synchronize?
• How should they do it?
while(TRUE) {
Think();
Grab first chopstick;
Grab second chopstick;
Eat();
Put down first chopstick;
Put down second chopstick;
}
Each philosopher is
modeled with a thread
169
Is this a valid solution?
#define N 5
Philosopher() {
while(TRUE) {
Think();
take_chopstick(i);
take_chopstick((i+1)% N);
Eat();
put_chopstick(i);
put_chopstick((i+1)% N);
}
}
170
Problems
• Potential for deadlock !
171
Working towards a solution …
#define N 5
Philosopher() {
while(TRUE) {
Think();
take_chopstick(i);
take_chopstick((i+1)% N);
Eat();
put_chopstick(i);
put_chopstick((i+1)% N);
}
}
take_chopsticks(i)
put_chopsticks(i)
172
Working towards a solution …
#define N 5
Philosopher() {
while(TRUE) {
Think();
take_chopsticks(i);
Eat();
put_chopsticks(i);
}
}
173
Taking chopsticks
// only called with mutex set!
test(int i) {
if (state[i] == HUNGRY &&
state[LEFT] != EATING &&
state[RIGHT] != EATING){
state[i] = EATING;
signal(sem[i]);
}
}
int state[N]
semaphore mutex = 1
semaphore sem[N]   // one per philosopher, initially 0
take_chopsticks(int i) {
wait(mutex);
state [i] = HUNGRY;
test(i);
signal(mutex);
wait(sem[i]);
}
174
Putting down chopsticks
// only called with mutex set!
test(int i) {
if (state[i] == HUNGRY &&
state[LEFT] != EATING &&
state[RIGHT] != EATING){
state[i] = EATING;
signal(sem[i]);
}
}
int state[N]
semaphore mutex = 1
semaphore sem[N]   // one per philosopher, initially 0
put_chopsticks(int i) {
wait(mutex);
state [i] = THINKING;
test(LEFT);
test(RIGHT);
signal(mutex);
}
175
Dining philosophers
• Is the previous solution correct?
• What does it mean for it to be correct?
• Is there an easier way?
176
The sleeping barber problem
177
The sleeping barber problem
• Barber:
– While there are people waiting for a hair cut, put one in the
barber chair, and cut their hair
– When done, move to the next customer
– Else go to sleep, until someone comes in
• Customer:
– If barber is asleep wake him up for a haircut
– If someone is getting a haircut wait for the barber to become
free by sitting in a chair
– If all chairs are full, leave the barbershop
178
Designing a solution
• How will we model the barber and customers?
• What state variables do we need?
– .. and which ones are shared?
– …. and how will we protect them?
• How will the barber sleep?
• How will the barber wake up?
• How will customers wait?
• What problems do we need to look out for?
179
Is this a good solution?
Barber Thread:
while true
Wait(customers)
Lock(lock)
numWaiting = numWaiting-1
Signal(barbers)
Unlock(lock)
CutHair()
endWhile
Customer Thread:
Lock(lock)
if numWaiting < CHAIRS
numWaiting = numWaiting+1
Signal(customers)
Unlock(lock)
Wait(barbers)
GetHaircut()
else -- give up & go home
Unlock(lock)
endIf
const CHAIRS = 5
var customers: Semaphore
barbers: Semaphore
lock: Mutex
numWaiting: int = 0
180
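A runnable Python sketch of this barbershop. The closing-time sentinel that lets the barber thread exit is scaffolding added for the demo, not part of the slide's solution.

```python
# Sleeping barber: customers semaphore counts waiting people (barber
# sleeps on it), barbers semaphore signals "chair is free", a lock
# protects num_waiting.
import threading

CHAIRS = 5
customers = threading.Semaphore(0)
barbers = threading.Semaphore(0)
lock = threading.Lock()
num_waiting = 0
haircuts = 0
turned_away = 0
shop_open = True

def barber():
    global num_waiting, haircuts
    while True:
        customers.acquire()          # Wait(customers): sleep if none
        with lock:
            if not shop_open and num_waiting == 0:
                return               # closing-time sentinel (demo only)
            num_waiting -= 1
        barbers.release()            # Signal(barbers)
        haircuts += 1                # CutHair()

def customer():
    global num_waiting, turned_away
    with lock:
        if num_waiting < CHAIRS:
            num_waiting += 1
            customers.release()      # Signal(customers)
        else:
            turned_away += 1         # shop full: give up & go home
            return
    barbers.acquire()                # Wait(barbers), then GetHaircut()

b = threading.Thread(target=barber)
b.start()
arrivals = [threading.Thread(target=customer) for _ in range(20)]
for t in arrivals:
    t.start()
for t in arrivals:
    t.join()
with lock:
    shop_open = False
customers.release()                  # wake the barber so it can exit
b.join()
```

Every arriving customer is either served or turned away; nobody is left waiting.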
The readers and writers problem
• Multiple readers and writers want to access a
database (each one is a thread)
• Multiple readers can proceed concurrently
• Writers must synchronize with readers and
other writers
Goals:
– only one writer at a time !
– when someone is writing, there must be no readers !
181
Designing a solution
• How will we model the readers and writers?
• What state variables do we need?
– .. and which ones are shared?
– …. and how will we protect them?
• How will the writers wait?
• How will the writers wake up?
• How will readers wait?
• How will the readers wake up?
• What problems do we need to look out for?
182
Is this a valid solution to readers &
writers?
Reader Thread:
while true
Lock(mut)
rc = rc + 1
if rc == 1
Wait(db)
endIf
Unlock(mut)
... Read shared data...
Lock(mut)
rc = rc - 1
if rc == 0
Signal(db)
endIf
Unlock(mut)
... Remainder Section...
endWhile
var mut: Mutex = unlocked
db: Semaphore = 1
rc: int = 0
Writer Thread:
while true
...Remainder Section...
Wait(db)
...Write shared data...
Signal(db)
endWhile
183
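The reader/writer code above in Python, instrumented to check the two goals (the state_lock, violations list, and activity flags are demo scaffolding, not part of the algorithm): readers may overlap each other, but a writer excludes everyone.

```python
# Readers/writers: mut protects the reader count rc; the first reader
# locks the db semaphore against writers, the last reader releases it.
import threading

mut = threading.Lock()               # protects rc
db = threading.Semaphore(1)          # "the database"
rc = 0                               # reader count
active_readers = 0                   # instrumentation
writer_active = False                # instrumentation
violations = []
state_lock = threading.Lock()        # instrumentation only

def reader():
    global rc, active_readers
    with mut:
        rc += 1
        if rc == 1:
            db.acquire()             # first reader locks out writers
    with state_lock:                 # ... read shared data ...
        active_readers += 1
        if writer_active:
            violations.append("reader overlapped a writer")
    with state_lock:
        active_readers -= 1
    with mut:
        rc -= 1
        if rc == 0:
            db.release()             # last reader lets writers in

def writer():
    global writer_active
    db.acquire()
    with state_lock:                 # ... write shared data ...
        if writer_active or active_readers:
            violations.append("writer not exclusive")
        writer_active = True
    with state_lock:
        writer_active = False
    db.release()

threads = ([threading.Thread(target=reader) for _ in range(10)] +
           [threading.Thread(target=writer) for _ in range(3)])
for t in threads:
    t.start()
for t in threads:
    t.join()
```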
Readers and writers solution
• Does the previous solution have any problems?
– is it “fair”?
– can any threads be starved? If so, how could this
be fixed?
– … and how much confidence would you have in
your solution?
184
Quiz
• What is a race condition?
• How can we protect against race conditions?
• Can locks be implemented simply by reading
and writing to a binary variable in memory?
• How can a kernel make synchronization-
related system calls atomic on a uniprocessor?
– Why wouldn’t this work on a multiprocessor?
• Why is it better to block rather than spin on a
uniprocessor?
• Why is it sometimes better to spin rather than
block?
185
Quiz
• When faced with a concurrent programming
problem, what strategy would you follow in
designing a solution?
• What does all of this have to do with Operating
Systems?

More Related Content

Similar to Advanced_OS_Unit 1 & 2.ppt

Bba i-introduction to computer-u-3-functions operating systems
Bba  i-introduction to computer-u-3-functions operating systemsBba  i-introduction to computer-u-3-functions operating systems
Bba i-introduction to computer-u-3-functions operating systemsRai University
 
introduce computer .pptx
introduce computer .pptxintroduce computer .pptx
introduce computer .pptxSHUJEHASSAN
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxakhilagajjala
 
Operating System-Introduction
Operating System-IntroductionOperating System-Introduction
Operating System-IntroductionShipra Swati
 
Introduction of os and types
Introduction of os and typesIntroduction of os and types
Introduction of os and typesPrakash Sir
 
Operating System / System Operasi
Operating System / System Operasi                   Operating System / System Operasi
Operating System / System Operasi seolangit4
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OSAJAL A J
 
operatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxoperatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxkrishnajoshi70
 
Operating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdfOperating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdfFahanaAbdulVahab
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.pptshreesha16
 
Mba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsMba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsRai University
 
Mba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsMba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsRai University
 

Similar to Advanced_OS_Unit 1 & 2.ppt (20)

Bba i-introduction to computer-u-3-functions operating systems
Bba  i-introduction to computer-u-3-functions operating systemsBba  i-introduction to computer-u-3-functions operating systems
Bba i-introduction to computer-u-3-functions operating systems
 
introduce computer .pptx
introduce computer .pptxintroduce computer .pptx
introduce computer .pptx
 
Os concepts
Os conceptsOs concepts
Os concepts
 
CSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptxCSE3120- Module1 part 1 v1.pptx
CSE3120- Module1 part 1 v1.pptx
 
Operating System
Operating SystemOperating System
Operating System
 
Operating System-Introduction
Operating System-IntroductionOperating System-Introduction
Operating System-Introduction
 
OS Content.pdf
OS Content.pdfOS Content.pdf
OS Content.pdf
 
Operating system
Operating systemOperating system
Operating system
 
Unit 4
Unit  4Unit  4
Unit 4
 
Introduction of os and types
Introduction of os and typesIntroduction of os and types
Introduction of os and types
 
Operating System
Operating SystemOperating System
Operating System
 
Operating System / System Operasi
Operating System / System Operasi                   Operating System / System Operasi
Operating System / System Operasi
 
Ch1 introduction
Ch1   introductionCh1   introduction
Ch1 introduction
 
Operating System Overview.pdf
Operating System Overview.pdfOperating System Overview.pdf
Operating System Overview.pdf
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OS
 
operatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxoperatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptx
 
Operating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdfOperating Systems PPT 1 (1).pdf
Operating Systems PPT 1 (1).pdf
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.ppt
 
Mba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsMba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systems
 
Mba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systemsMba i-ifm-u-3 operating systems
Mba i-ifm-u-3 operating systems
 

More from DuraisamySubramaniam1 (7)

3-Requirements.ppt
3-Requirements.ppt3-Requirements.ppt
3-Requirements.ppt
 
6-Design.ppt
6-Design.ppt6-Design.ppt
6-Design.ppt
 
4-ProjectPlanning.ppt
4-ProjectPlanning.ppt4-ProjectPlanning.ppt
4-ProjectPlanning.ppt
 
5-Architecture.ppt
5-Architecture.ppt5-Architecture.ppt
5-Architecture.ppt
 
2-SoftwareProcess.ppt
2-SoftwareProcess.ppt2-SoftwareProcess.ppt
2-SoftwareProcess.ppt
 
7-CodingAndUT.ppt
7-CodingAndUT.ppt7-CodingAndUT.ppt
7-CodingAndUT.ppt
 
1-Intro.ppt
1-Intro.ppt1-Intro.ppt
1-Intro.ppt
 

Recently uploaded

Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxDr.Ibrahim Hassaan
 
Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........LeaCamillePacle
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfUjwalaBharambe
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxLigayaBacuel1
 
ROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationAadityaSharma884161
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxiammrhaywood
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 

Recently uploaded (20)

OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptx
 
Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
Planning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptxPlanning a health career 4th Quarter.pptx
Planning a health career 4th Quarter.pptx
 
ROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint Presentation
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 

Advanced_OS_Unit 1 & 2.ppt

  • 1. LECTURE NOTES Unit I & II CGAC/CS Department ADVANCED OPERATING SYSTEMS
  • 2. UNIT II • Inter Process Communication – Race condition – Critical Region – Mutual Exclusion – Sleep and wakeup – Semaphores – Mutexes – Message Passing. Classical IPC Problems : The Dining Philosophers Problem – The Readers and Writers Problem – The Sleeping Barber Problem – Producer Consumer problem. 2
  • 3. 1. What’s an Operating System? Operating System A set of programs1 that act as an intermediary between a computer’s user(s) and hardware. 1 stored in firmware (ROM) and/or software
  • 4. OS...User View All users share similar hardware interfaces Keyboard, Mouse, Monitor, Printer, Speakers, CDRom Three types of users by software interface: 1. End-User (Icons, Folder Navigation, MS Word Application) 2. System Administrator: shell interface Shell Scripts (Keyboard only) for dir, copy, ftp, cd 3. Programmer: Application Programming Interface • GUI stuff • Logical stuff (math.h, file.h, ioctl.h) Obviously, the three are interdependent This course is equally concerned with needs of all three labs focus on (3), but “User” generally refers to all three
  • 5. Hardware OS...Conceptual View OS User A Programs User A Operating System is the (only) intermediary between Users and Hardware Figure 1-1 in text... User B Programs User B Users OS Hardware
  • 7. OS Roles...Three Roles • Resource allocator. Manages / allocates / referees hardware resources CPU time, memory, video / audio out, network access, printer queues, etc. • Control program. – Starts / stops user programs (scheduling, preemption) – Directs I/O devices (using device drivers). • Kernel – The one program running at all times (all else being user programs).
  • 8. OS Goals 1. Protect users from each other 2. Make the computer convenient to use 3. Execute user programs 4. Use hardware efficiently 5. Make programs easier to write 6. Protect hardware from user End User Programmer Sys Admin
  • 9. Batch System Only one job runs at a time – Job fully “owns” CPU and all hardware – CPU underutilized. Card reader 20 cards/sec. – input, processing, output Reduce setup time by batching similar jobs – Operator separates instructions / data – Minimizes next stack of cards to be read Invention of disk....Automatic job sequencing – auto transfer from one job to another on disk. – Resident monitor is born • First rudimentary operating system. • Less waiting on I/O
  • 10. Memory Layout: Simple Batch System OS loaded to a consistent place in main memory (high or low); size of OS is predictable User program (only one) uses up to all remaining memory
  • 11. Multiprogrammed Batch Systems • Several jobs are kept in main memory at the same time • CPU multiplexed among them • OS must make decisions • one job waits on I/O, so... • schedule another job • but what are priorities? • Core challenges haven’t changed in 50 years; basic programs look a lot like this: input, processing, output.
  • 12. OS Features for Multiprogramming • Memory management – allocate memory to several jobs at same time • CPU scheduling – choose among several jobs ready to run • I/O routine supplied by the system – when a program needs to wait on I/O, it has a well- known means of informing the system – system has a well-known means of rescheduling another job during the I/O • Allocation of devices (disk, more memory...) – Only one job “owns” hardware at a given time.
  • 13. Time-Sharing Systems (Unix) Interaction between user and system – beyond simple input, processing, output. – when OS finishes the execution of one command, it seeks next “control statement” from keyboard – On-line system must be available for users to access data and code. CPU still multiplexes several jobs – But more sophisticated....need greater degree of multiprogramming to support many users – jobs kept in memory and on disk – CPU allocated to a job only if it’s in memory – Jobs swapped in/out of memory from/to disk.
  • 14. Concurrency I/O devices and CPU can execute concurrently. • Each device controller has a local memory buffer Device only talks to that buffer (not to main bus) While device is doing its thing, CPU is oblivious to device and finds other useful work to do. • Device controller informs CPU that it has finished its operation by causing an interrupt. CPU drops everything it’s doing CPU moves data from device controller’s local buffers to main memory; now local buffer can be refilled. Sidepoint: How does this relate to parallel processing?
  • 15. 6. Network Structures • Local Area Networks (LAN) – usually within a building – speeds 10 MBit – 1 GBit • Ethernet & FDDI protocols • bus, ring, and star topologies. – may interoperate with other LAN(s) via Gateway. • Wide Area Networks (WAN) – as big as the world (Arpanet -> Internet) – disparate speeds, protocols • Point to Point Protocol (PPP) for Modem access to Internet – leased phone lines, fiber, satellite links, etc. – requires many communications processors (CPs)
  • 16. Local Area Network Structure
  • 17. Wide Area Network Structure
  • 18. 3. OS Structures How OS software is organized.
  • 19. 1. System Components • Process Management • Main Memory Management • File Management • I/O System Management • Secondary Storage Management • Networking • Protection System • Command-Interpreter System
  • 20. Process Management • A process is a program in execution. • A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. • OS is responsible for the following activities in connection with process management. – Process creation and deletion. – Process suspension and resumption. – Process synchronization – Interprocess communication
  • 21. Main Memory Management • Memory is a large array of bytes, each with its own address. • Main memory is a volatile storage device. It loses its contents in the case of system failure. • OS is responsible for the following activities in connections with memory management: – Keep track of which parts of memory are currently being used and by what process. – Decide which processes to load when memory space becomes available. – Allocate and deallocate memory space as needed
  • 22. 3. System Calls • Interface between a running program(process) and OS – Generally available as assembly-language instructions. – Languages defined to replace assembly language for systems programming allow system calls to be made directly (C, C++, Perl) via a well-known API. • Three general methods to pass parameters between a running program and the OS: – CPU registers. – Store parameters to a table in memory, and pass the table address as a parameter in a register. – Push (store) the parameters onto the stack by the program, and pop off the stack by operating system.
  • 23. Multitasked Process Control E.g., user runs one interpreter and three processes concurrently. All four tasks execute independently unless they explicitly coordinate their own execution
  • 24. File Manipulation create, delete, open, read, write, reposition Device Management request the device, release, open, close, read, write, reposition Information Maintenance Current time, date
  • 25. UNIX System Structure • UNIX – limited by hardware functionality, the original UNIX OS had limited structure. • UNIX consists of two separable parts. – Systems programs – The kernel • Consists of everything below the system-call interface and above the physical hardware • Provides the file system, CPU scheduling, memory management, and other operating-system functions; a large number of functions for one level.
  • 26. Chapter 4: Processes Programs in Execution... & how they get there.
  • 27. 1. What’s a Process? • An OS executes a variety of programs: – or Jobs (Batch systems) – or Tasks, Executables (Time-shared systems) – Textbook uses job and process almost interchangeably. • Process = a program in execution – execution must progress in a sequential fashion. • A process includes: – ordered CPU instructions (text) – data section – program counter – stack This is the part on disk... result of static compile... data changes during execution. This changes to indicate which instruction is current for a given user.
  • 28. Process State • As a process executes, it changes state • Typical valid states (OS dependent): – new: The process is being created. – running: Instructions are being executed. – waiting: The process is waiting for some event to occur, usually involving I/O. – ready: The process is waiting to be assigned to a processor (CPU) – terminated: The process has finished execution
  • 30. Process Control Block (PCB) OS data structure Stores info associated with each process: • Process state • Program counter • CPU register values • CPU scheduling information • Memory-management information • Accounting information • I/O status information
  • 31. Process Control Block (PCB) One of these data structures will be allocated for each process when the process is initially scheduled. Throughout the process’ lifecycle, different kernel subsystems will read/write different sections of PCB data Also a construct in Labs
  • 32. CPU Switch Between Processes P0 has the CPU; then P1; then P0 again. During switch, PCB is crucial enabler for the OS to schedule the next process and retain knowledge required to restore previous process to execution at some later time.
  • 33. 2. Process Scheduling Queues • Job queue – set of all processes in the system. • Ready queue – set of all processes residing in main memory, ready and waiting to execute. • Device queues – set of processes waiting for an I/O device.
  • 34. Ready Queue & I/O Device Queues
  • 36. Schedulers • Long-term scheduler (or job scheduler) – selects which processes should be initially brought into the ready queue. – why wouldn’t we bring a program in? • Short-term scheduler (or CPU scheduler) – selects which process should be next to run, the next time the CPU (or a CPU) becomes available
  • 37. Medium Term Scheduling If a process is so “busy” waiting on I/O, or so low in priority that it never gets to run anyway, it’s really wasting memory, so copy its memory to disk to support higher degree of multiprogramming.
  • 38. Schedulers (Cont.) • Short-term scheduler is invoked very frequently (milliseconds) => must be fast – Processes can be described as either: • I/O-bound or Bursty – spends more time doing I/O than computations, many short CPU bursts. E.g.: word processor • CPU-bound – spends more time doing computations; few but very long CPU bursts. E.g.: engineering analysis • Long-term scheduler is invoked very infrequently (seconds, minutes) => may be slow. – controls the degree of multiprogramming.
  • 39. Context Switch • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process. • Context-switch time is overhead; the system does no useful work while switching. – This is a major tradeoff in system design. – Too much v. Too little context switching • and what about realtime OS constraints? • Overhead time depends on hardware – E.g. register sets on some chips can obviate the need to copy data to and from the PCB, making the overhead of less effect.
  • 40. 3. Process Creation • Parent process can create child processes, which, in turn can create other processes • This forms a tree of processes. • Three resource sharing possibilities: – Parent and children share all resources. – Children share subset of parent’s resources. – Parent and child share no resources. • Execution considerations: – Parent and children execute concurrently. – Parent waits until children terminate.
  • 41. Process Creation (Cont.) • Address space – Child duplicate of parent UNIX: fork system call – Child has a program loaded into it UNIX: exec system call
  • 43. Process Termination • Case 1: Process voluntarily finishes (exit). – Output data from child to parent (via wait). – Process’ resources are deallocated by OS • Case 2: Parent terminates child (abort). Why? – Child has exceeded allocated resources. – Task assigned to child is no longer required. – Parent is exiting & desire is to cascade
  • 44. 4. Cooperating Processes • Independent processes cannot affect or be affected by any other processes. • Cooperating processes can affect or be affected by the execution of their peer processes • Advantages of process cooperation – Information sharing – Computation speed-up (on parallel system) – Modularity (software maintenance issue) – Convenience (edit / print at same time)
  • 45. Producer-Consumer Problem • Paradigm for cooperating processes • producer process produces information that is consumed by a consumer process. – unbounded-buffer places no limit on the size of the buffer. Producer can always produce. – bounded-buffer assumes a fixed buffer size. Producer may have to wait until consumer has consumed in order to produce more.
  • 46. Bounded-Buffer: Shared-Memory Solution • Shared data #define BUFFER_SIZE 10 typedef struct { . . . } item; item buffer[BUFFER_SIZE]; int in = 0; // next producer slot int out = 0; // next consumer slot • Solution is correct, but can only use BUFFER_SIZE-1 elements
  • 47. Bounded-Buffer: Producer Process item nextProduced; while (1) { while (((in + 1) % BUFFER_SIZE) == out) ; /* do nothing */ buffer[in] = nextProduced; in = (in + 1) % BUFFER_SIZE; }
  • 48. Bounded-Buffer: Consumer Process item nextConsumed; while (1) { while (in == out) ; /* do nothing */ nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; } // (in-out) indicates Producer lead on consumer. // in == out means Consumer can’t consume // in == out-1 means producer can’t produce // what does “do nothing” mean? // have we used all slots? (but: semaphores, later)
  • 49. 5. Interprocess Communication • Mechanism for processes to synchronize actions. • Message system – processes communicate with each other without resorting to shared variables. • IPC facility provides two operations: – send (message) – message size fixed or variable – receive (message) • If P and Q wish to communicate, they must: – establish a communication link between them – exchange messages via send/receive • Implementation of communication link – physical (e.g., shared memory, hardware bus)...transparent here. – logical (e.g., logical properties)
  • 50. Implementation Questions • How are links established? • Can more than two processes share link? • Can processes share more than one link? • What is the capacity of a link? • Is link message size fixed or variable? • Is link uni- or bi-directional?
  • 51. Direct Communication • Processes must name each other explicitly: – send (P, message) – send a message to process P – receive(Q, message) – receive a message from process Q • Properties: – Links established automatically (though the processes must know about each other via some other means). – One link == one pair of communicating processes. – May be unidirectional, but is usually bi-directional.
  • 52. Indirect Communication • Messages are directed and received from mailboxes (or ports). – Each mailbox has a unique id. – Processes communicate only if they share a port. • Properties: – Process must explicitly connect to a port. – A link may be associated with many processes. – Each pair of processes may participate in several disparate links. – Link may be uni- or bi-directional.
  • 53. Indirect Communication • Mailbox sharing – P1, P2, and P3 share mailbox A. – P1, sends; P2 and P3 receive. – Who gets the message? • Solutions – Allow a link to be associated with at most two processes. – Allow only one process at a time to execute a receive operation. – Allow the system to select arbitrarily the receiver. Sender is notified who the receiver was.
  • 54. Synchronization Message passing may be blocking or non-blocking. – Blocking is considered synchronous – Non-blocking is considered asynchronous • send and receive primitives may be either blocking or non-blocking.
  • 55. Remote Procedure Calls • Remote procedure calls (RPC) abstract procedure calls between distributed (networked) processes • Stubs – client-side proxy for server procedure – Stub locates server and marshalls parameters. • The server-side Skeleton receives this message, unmarshalls parameters, and performs the procedure on the server.
  • 56. Java RMI • Remote Method Invocation (RMI) is a Java mechanism similar to RPCs. • RMI allows a Java program on one machine to invoke a method on a remote object.
  • 57. Chapter 5: Threads Concurrent Tasks Within a Process
  • 58. 1. Overview • So far we’ve only talked of a Process as a single-threaded task to be executed, with: • resources (file table, memory for text/data) • state (stack, registers, program counter) • What if we allow state to be “cloned,” so that potentially many subtasks could execute concurrently? • Analogy of fire department / sports team • each member shares same “program” • yet each occupies different space, and members must share resources
  • 59. 1. Overview Code (text), data and files don’t change in a threaded scenario. However, threads allow us to have multiple instances of registers, stack, and PC.
  • 60. Benefits • Responsiveness (cancel button; web browser JPEGs & mail) • Resource Sharing (e.g. Shared.java) • Economy (Solaris 30X creation, 5X context switch) • Utilization of MP Architectures
  • 61. Two Types of Threads • User Threads (very lightweight) – Thread management done by user-level threads library – Very low overhead, but limited benefit - POSIX Pthreads - Mach C-threads - Solaris UI-threads • Kernel Threads (less lightweight) – Thread management done by the kernel – Can block on I/O & take advantage of Parallelism. • Windows 95/98/NT/2000/XP • Solaris • Linux
  • 62. 2. Multithreading Models • Many-to-One – Many user-level threads mapped to single kernel thread • One-to-One – Each user-level thread maps to kernel thread. • Many-to-Many – Thread pooling....can be best of both worlds.
  • 63. Many-to-One Model Good News: Little Overhead Bad News: • One thread blocks siblings, so don’t allow user threads to do I/O operations.
  • 64. One-to-one Model Each thread can block & be CPU-scheduled independently
  • 65. Many-to-Many Model • A given User thread will only block one kernel thread, giving other user threads the chance to run. • Since multiple kernel threads are available, parallelism is possible.
  • 66. 8. Java Threads • Java threads may be created by: – Extending Thread class – Implementing the Runnable interface • Depending on JVM implementation, scheduling can be done by JVM or mapped to native Thread library.
  • 68. Chapter 7: Synchronization Making things work... ...at the same time
  • 69. 1. Background • Concurrent access may corrupt shared data – Maintaining data consistency requires mechanisms to ensure orderly execution of cooperating processes. • Example: Shared-memory solution to bounded-buffer problem (Chapter 4) allows at most n – 1 items in buffer at the same time. – A solution where all N buffers are used is not simple. – Suppose: modify producer-consumer code by adding a variable counter, initialized to 0 and incremented each time a new item is added to the buffer...
  • 70. Bounded-Buffer II...Shared Data #define BUFFER_SIZE 10 typedef struct { . . . } item; item buffer[BUFFER_SIZE]; int in = 0; int out = 0; int counter = 0; // NEW
  • 71. Bounded-Buffer II...Producer item nextProduced; while (1) { while (counter == BUFFER_SIZE) ; /* do nothing */ buffer[in] = nextProduced; in = (in + 1) % BUFFER_SIZE; counter++; }
  • 72. Bounded-Buffer II...Consumer item nextConsumed; while (1) { while (counter == 0) ; /* do nothing */ nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; counter--; }
  • 73. Bounded Buffer II...Atomicity • The statements counter++; counter--; must be performed atomically. • Atomic operation means an operation that completes in its entirety without interruption.
  • 74. Bounded Buffer II...Assembly • The statement counter++ may be implemented in machine language as: register1 = counter register1 = register1 + 1 counter = register1 • The statement counter-- may be implemented as: register2 = counter register2 = register2 - 1 counter = register2
  • 75. Bounded Buffer II...Interleaving • If both producer and consumer attempt to update buffer concurrently, the assembly language statements may get interleaved. • Interleaving depends on how the producer and consumer processes are scheduled. We can’t assume scheduling always works in our favor...
  • 76. Bounded Buffer • Assume counter is initially 5. One possible scenario for interleaving is: producer: r1 = counter (register1 = 5) producer: r1 = r1 + 1 (register1 = 6) consumer: r2 = counter (register2 = 5) consumer: r2 = r2 – 1 (register2 = 4) producer: counter = r1 (counter = 6) consumer: counter = r2 (counter = 4) • Depending on scheduling, the value of counter may be 4, 5, or 6, but the correct result is 5.
  • 77. Race Condition • Race condition: several processes access and manipulate shared data concurrently, such that the final value of the shared data depends on which process finishes last. • To prevent race conditions, concurrent processes must be synchronized.
  • 78. 2. Critical-Section Problem • n processes compete to use some shared data • Each process has a code segment, called a critical section (CS), which accesses shared data. • Problem – ensure that: – when one process is executing its CS, – no other process is allowed to execute its CS
  • 79. Critical-Section...Solution • Assume each process executes at a nonzero speed • No assumption concerning relative speeds of the n processes. Useful solution requires three elements: 1. Mutual Exclusion. If Pi is executing its CS, no other processes can be executing their CSs. 2. Progress. If no process is executing its CS and at least one process wishes to enter its CS, the selection of next process to enter can’t be postponed indefinitely. 3. Bounded Waiting. A fair bound must exist on the number of times that other processes are allowed to enter their CSs after a process has made a request to enter its CS and before its request is granted.
  • 80. • Only 2 processes, P0 and P1 • General structure of process Pi (other process Pj) do { entry section critical section exit section remainder section } while (1); • Processes may share some common variables to synchronize their actions. Initial Attempts to Solve Problem
  • 81. Algorithm 1...Tag Team • Shared variables: – int turn = i; => Pi can enter its critical section initially • Process Pi do { while (turn != i) { // wait } ; critical section turn = j; remainder section } while (1); • Satisfies mutual exclusion, but not progress – what if Pi has to enter its CS twice as often as Pj?
  • 82. Algorithm 2...Politeness • Shared variables boolean flag[2]; // initially both flags false. flag[i] == true => Pi wants to enter CS • Process Pi do { flag[i] = true; while (flag[j]) { do no op }; critical section flag[i] = false; remainder section } while (1); • Satisfies mutual exclusion, but not progress – what if both say “no, you go first” at the same time?
  • 83. Algorithm 3...Polite Tag-Team • Combine variables of algorithms 1 and 2. • Process Pi do { flag[i] = true; turn = j; while (flag[j] && turn == j); critical section flag[i] = false; remainder section } while (1); • Meets all three requirements – solves CS problem for two processes. – Makes the race condition work for us
  • 84. 4. Semaphores (Synchronization tool) • So far each solution to critical section problem has required busy waiting...until now. • Semaphore S – integer variable – can only be accessed via two atomic operations wait(S): while S <= 0 do no-op; S--; signal(S): S++; No-op means busy waiting without the “busy”
  • 85. Critical Section of n Processes • Shared data: semaphore mutex = 1; // one; important! • Process Pi: do { wait(mutex); critical section signal(mutex); remainder section } while (1);
  • 86. In process P1: S1; signal(synch); In process P2: wait(synch); S2; P2 will execute S2 only after P1 has invoked signal(synch), which is after S1;
  • 87. Semaphore...Approach • Define a semaphore as a record type semaphore = record value :integer; L: list of process; end; Assume two simple operations: block suspends the process that invokes it. wakeup(P) resumes execution of blocked process P.
  • 88. Semaphore...Implementation wait(S): S.value--; if (S.value < 0) { add this process to S.L; block; } signal(S): S.value++; if (S.value < 1) { remove a process P from S.L; wakeup(P); }
  • 89. Semaphore as General Synchronization Tool • Pj executes B only after Pi executes A • Use semaphore flag initialized to 0 • Code: Pi: A; signal(flag); Pj: wait(flag); B
  • 90. Deadlock and Starvation • Deadlock – two (or more) processes waiting (indefinitely) for an event that can only be triggered by one of the waiting processes. – Let S and Q be two semaphores initialized to 1: P0: wait(S); wait(Q); ... signal(S); signal(Q); P1: wait(Q); wait(S); ... signal(Q); signal(S); • Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended. Someone must signal().
  • 91. Two Types of Semaphores • Counting semaphore – integer value can range over an unrestricted domain. • Binary semaphore – integer value can range only between 0 and 1 – can be simpler to implement in hardware – can be used to implement a counting semaphore
  • 92. Making a Binary Semaphore Count • Data structures: binary-semaphore S1, S2; int C; • Initialization: S1 = true(1); S2 = false(0); C = initial value of counting semaphore S
  • 93. Implementing S S1 = 1, S2 = 0 wait wait(S1); C--; if (C < 0) { signal(S1); wait(S2); } signal(S1); signal wait(S1); C++; if (C < 1) signal(S2); else signal(S1); S1 protects C C is the “gateway” to S2, which is the real lock... if C already taken, we wait on S2. In either case, signal S1. If we’ve waited, signal twice on S1 (see ***, below) Again, S1 protects C If we’re not last, signal S2 since someone’s waiting...***they’ll relinquish C when they’re done. Else, signal S1 to relinquish our own hold on C.
  • 94. 5. Classic Synchronization Problems • Totally solvable in Section 7.5: – Bounded-Buffer – Readers and Writers • Partially solved in Section 7.5: – Dining-Philosophers
  • 95. Bounded-Buffer III • Third time’s a charm – use all buffer slots – AND no busy waiting • Shared data semaphore full, empty, mutex; Initially: full = 0, empty = n, mutex = 1 how full are we? how empty? how many can see buffer at once?
  • 96. Bounded-Buffer... Producer do { … produce an item in nextp … wait(empty); wait(mutex); … add nextp to buffer … signal(mutex); signal(full); } while (1);
  • 97. Bounded-Buffer ...Consumer do { wait(full) wait(mutex); … remove item from buffer to nextc … signal(mutex); signal(empty); … consume the item in nextc … } while (1);
  • 98. Readers-Writers Problem • Shared data semaphore mutex, wrt; Initially mutex = 1, wrt = 1, readcount = 0 • how many can access readcount? (mutex) • how many can write at once? (wrt)
  • 99. Readers-Writers...Writer wait(wrt); … writing is performed … signal(wrt); writer doesn’t care how many are trying to read. Just need to get the write lock.
  • 100. Readers-Writers...Reader wait(mutex); readcount++; if (readcount == 1) wait(wrt); signal(mutex); … read … wait(mutex); readcount--; if (readcount == 0) signal(wrt); signal(mutex): Readers want to cooperate so that many can read concurrently. So they share the write lock to ensure that no writer comes along while any one reader is around. First in / Last Out control write lock.
  • 101. Dining-Philosophers Problem • Shared data semaphore chopstick[5]; Initially all values are 1 Need two chopsticks to eat Only have one each... Must cooperate with neighbors.
  • 102. Dining-Philosophers Problem Philosopher i: do { wait(chopstick[i]) wait(chopstick[(i+1) % 5]) // eat (cooperatively) signal(chopstick[i]); signal(chopstick[(i+1) % 5]); // think (independently) } while (1); This assures Mutual Exclusion, BUT permits Deadlock and Starvation
  • 103. 103 Inter-process Communication • Race Conditions: two or more processes are reading and writing on shared data and the final result depends on who runs precisely when • Mutual exclusion: making sure that if one process is accessing a shared memory, the other will be excluded from doing the same thing • Critical region: the part of the program where shared variables are accessed
  • 105. 105 Interprocess Communication Race Conditions Two processes want to access shared memory at same time
  • 106. 106 Critical Regions (1) Four conditions to provide mutual exclusion 1. No two processes simultaneously in critical region 2. No assumptions made about speeds or numbers of CPUs 3. No process running outside its critical region may block another process 4. No process must wait forever to enter its critical region
  • 107. 107 Critical Regions (2) Mutual exclusion using critical regions
  • 108. 108 Producer Consumer (bounded buffer) Problem • Formalizes the programs that use a buffer (queue) • Two processes: producer and consumer that share a fixed size buffer • Producer puts an item to the buffer • Consumer takes out an item from the buffer Buffer producer consumer Max size = 10
  • 109. 109 Producer Consumer (bounded buffer) Problem • Formalizes the programs that use a buffer (queue) • Two processes: producer and consumer that share a fixed size buffer • Producer puts an item to the buffer • Consumer takes out an item from the buffer • What happens when the producer wants to put an item to the buffer while the buffer is already full? Buffer producer consumer Max size = 10
  • 110. 110 Producer Consumer (bounded buffer) Problem • Formalizes the programs that use a buffer (queue) • Two processes: producer and consumer that share a fixed size buffer • Producer puts an item to the buffer • Consumer takes out an item from the buffer • What happens when the producer wants to put an item to the buffer while the buffer is already full? • OR when the consumer wants to consume an item from the buffer when the buffer is empty? Buffer producer consumer Max size = 10
  • 111. 111 Mutual Exclusion with Sleep and wakeup • Solution is to use sleep and wakeup • Sleep: a system call that causes the caller process to block • Wakeup: a system call that wakes up a process (given as parameter) • When the producer wants to put an item to the buffer and the buffer is full then it sleeps • When the consumer wants to remove an item from the buffer and the buffer is empty, then it sleeps.
  • 112. 112 Sleep and Wakeup What problems exist in this solution? •Consumer is running •It checks count when count == 0 •Scheduler decides to run Producer just before consumer sleeps •Producer inserts an item and increments the count •Producer notices that count is 1, and issues a wakeup call. •Since consumer is not sleeping yet, the wakeup signal is lost •Scheduler decides to run the consumer •Consumer sleeps •Producer is scheduled, which runs N times, and after filling up the buffer it sleeps •Both processes sleep forever (or until the prince OS comes and sends a kiss signal to kill both)
  • 114. 114 Classical IPC Problems • Dining philosophers problem (Dijkstra) – Models processes competing for exclusive access to a limited number of resources such as I/O devices • Readers and writers problem (Courtois et al.) – Models access to a database (both read and write) • Sleeping barber problem – Models queuing situations such as a multi-person helpdesk with a computerized call waiting system for holding a limited number of incoming calls
  • 115. 115 Dining Philosophers (1) • 5 Philosophers around a table and 5 forks • Philosophers eat/think • Eating needs 2 forks • Pick one fork at a time (first the right fork then the left one and eat) What problems may occur in this case?
  • 116. 116 Dining Philosophers (1) • All philosophers may take their right fork at the same time and block when the left forks are not available. • Solution: (like collision detection in the Ethernet protocol) • pick left fork, • if the right fork is not available then release left fork and wait for some time There is still a problem!!!!
• 117. 117 Dining Philosophers (1) • What happens when all philosophers do the same thing at the same time? • This situation is called starvation: unlike deadlock, nobody is blocked, everybody keeps doing something, yet no progress is made • Solution is to use a mutex_lock before taking the forks and release the lock after putting the forks back on the table Is this a good solution? No, because only one philosopher can eat at a time but there are enough forks for two!!!
  • 118. 118 Dining Philosophers (3) Solution to dining philosophers problem (part 1)
  • 119. 119 Dining Philosophers (4) Solution to dining philosophers problem (part 2)
• 120. 120 Readers and Writers Problem • Assume that there is a database, and processes compete for reading from and writing to the database • Multiple processes may read the database without any problem • A process can write to the database only if there are no other processes reading or writing the database • Here are the basic steps of the r/w problem, assuming that rc is the reader count (processes currently reading the database) – A reader who gains access to the database increments rc (when rc=1, it will lock the database against writers) – A reader that finishes reading will decrement rc (when rc=0 it will unlock the database so that a writer can proceed) – A writer can have access to the database when rc = 0 and it will lock the database against other readers or writers – Readers will access the database only when there are no writers (but there may be other readers)
• 121. 121 The Readers and Writers Problem A solution to the readers and writers problem What is the problem with this solution? The writer will starve when there is a constant supply of readers !!!! Solution is to queue new readers behind the current writer, at the expense of reduced concurrency
• 122. 122 The Sleeping Barber Problem (1) • There is one barber, and n chairs for waiting customers •If there are no customers, then the barber sits in his chair and sleeps (as illustrated in the picture) •When a new customer arrives and the barber is sleeping, then he will wake up the barber •When a new customer arrives and the barber is busy, then he will sit in one of the chairs if any is available, otherwise (when all the chairs are full) he will leave.
  • 123. 123 The Sleeping Barber Problem (2) Solution to sleeping barber problem.
• 126. 126 The Producer-Consumer Problem • An example of the pipelined model – One thread produces data items – Another thread consumes them • Use a bounded buffer between the threads • The buffer is a shared resource – Code that manipulates it is a critical section • Must suspend the producer thread if the buffer is full • Must suspend the consumer thread if the buffer is empty
  • 127. 127 Is this busy-waiting solution correct? thread producer { while(1){ // Produce char c while (count==n) { no_op } buf[InP] = c InP = InP + 1 mod n count++ } } thread consumer { while(1){ while (count==0) { no_op } c = buf[OutP] OutP = OutP + 1 mod n count-- // Consume char } } 0 1 2 n-1 … Global variables: char buf[n] int InP = 0 // place to add int OutP = 0 // place to get int count
  • 128. 128 This code is incorrect! • The “count” variable can be corrupted: – Increments or decrements may be lost! – Possible Consequences: • Both threads may spin forever • Buffer contents may be over-written • What is this problem called?
• 130. 130 This code is incorrect! • The “count” variable can be corrupted: – Increments or decrements may be lost! – Possible Consequences: • Both threads may spin forever • Buffer contents may be over-written • What is this problem called? Race Condition • Code that manipulates count must be made into a critical section and protected using mutual exclusion!
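The corruption happens because count++ and count-- each compile to a load, an arithmetic step, and a store. This sketch replays one bad interleaving by hand (the _reg variables stand in for CPU registers; no real threads are needed to show the lost update):

```python
# count++ / count-- are really load / add / store; interleaving two of
# them can lose an update. Replayed deterministically, no threads.

count = 5

# Producer starts count++ :
producer_reg = count        # load  (reads 5)
# Context switch: consumer runs count-- to completion:
consumer_reg = count        # load  (also reads 5)
count = consumer_reg - 1    # store (count is now 4)
# Producer resumes with its stale register value:
count = producer_reg + 1    # store (count is now 6)

print(count)   # 6, but one increment plus one decrement should leave 5
```

The consumer's decrement has been lost entirely; which update survives depends only on scheduling, which is the defining symptom of a race condition.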
  • 131. 131 Some more problems with this code • What if buffer is full? – Producer will busy-wait – On a single CPU system the consumer will not be able to empty the buffer • What if buffer is empty? – Consumer will busy-wait – On a single CPU system the producer will not be able to fill the buffer • We need a solution based on blocking!
  • 132. 132 0 thread consumer { 1 while(1) { 2 if(count==0) { 3 sleep(empty) 4 } 5 c = buf[OutP] 6 OutP = OutP + 1 mod n 7 count--; 8 if (count == n-1) 9 wakeup(full) 10 // Consume char 11 } 12 } Producer/Consumer with Blocking – 1st attempt 0 thread producer { 1 while(1) { 2 // Produce char c 3 if (count==n) { 4 sleep(full) 5 } 6 buf[InP] = c; 7 InP = InP + 1 mod n 8 count++ 9 if (count == 1) 10 wakeup(empty) 11 } 12 } 0 1 2 n-1 … Global variables: char buf[n] int InP = 0 // place to add int OutP = 0 // place to get int count
  • 133. 133 0 thread consumer { 1 while(1) { 2 if(count==0) { 3 sleep(empty) 4 } 5 c = buf[OutP] 6 OutP = OutP + 1 mod n 7 count--; 8 if (count == n-1) 9 wakeup(full) 10 // Consume char 11 } 12 } Use a mutex to fix the race condition in this code 0 thread producer { 1 while(1) { 2 // Produce char c 3 if (count==n) { 4 sleep(full) 5 } 6 buf[InP] = c; 7 InP = InP + 1 mod n 8 count++ 9 if (count == 1) 10 wakeup(empty) 11 } 12 } 0 1 2 n-1 … Global variables: char buf[n] int InP = 0 // place to add int OutP = 0 // place to get int count
  • 134. 134 Problems • Sleeping while holding the mutex causes deadlock ! • Releasing the mutex then sleeping opens up a window during which a context switch might occur … again risking deadlock • How can we release the mutex and sleep in a single atomic operation? • We need a more powerful synchronization primitive
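One primitive that answers the slide's question directly is the condition variable: its wait() releases the mutex and blocks the caller as a single atomic step. This is a different mechanism from the semaphores the following slides build, shown here only as a contrast, using Python's threading.Condition on a one-slot bounded buffer:

```python
# A condition variable's wait() releases the lock and sleeps as one
# atomic operation, closing the lost-wakeup window. Minimal sketch
# with Python's threading module and a one-slot buffer.

import threading

buf = []
N = 1
cond = threading.Condition()            # owns an internal mutex

def producer():
    for c in "ab":
        with cond:                      # lock the mutex
            while len(buf) == N:
                cond.wait()             # atomically: unlock + sleep
            buf.append(c)
            cond.notify_all()           # wakeup cannot be lost now

def consumer(out):
    for _ in range(2):
        with cond:
            while len(buf) == 0:
                cond.wait()
            out.append(buf.pop(0))
            cond.notify_all()

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t1.start(); t2.start(); t1.join(); t2.join()
print(out)   # ['a', 'b']
```

Note the while (not if) around each wait(): a woken thread re-checks the condition, so spurious or extra wakeups are harmless.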
  • 135. 135 Semaphores • An abstract data type that can be used for condition synchronization and mutual exclusion What is the difference between mutual exclusion and condition synchronization?
  • 136. 136 Semaphores • An abstract data type that can be used for condition synchronization and mutual exclusion • Condition synchronization – wait until condition holds before proceeding – signal when condition holds so others may proceed • Mutual exclusion – only one at a time in a critical section
  • 137. 137 Semaphores • An abstract data type – containing an integer variable (S) – Two operations: Wait (S) and Signal (S) • Alternative names for the two operations – Wait(S) = Down(S) = P(S) – Signal(S) = Up(S) = V(S) • Blitz names its semaphore operations Down and Up
• 138. 138 Classical Definition of Wait and Signal Wait(S) { while S <= 0 do noop; /* busy wait! */ S = S – 1; /* S >= 0 */ } Signal (S) { S = S + 1; }
• 139. 139 Problems with classical definition • Waiting threads hold the CPU – Waste of time in single CPU systems – Requires preemption to avoid deadlock
  • 140. 140 Blocking implementation of semaphores Semaphore S has a value, S.val, and a thread list, S.list. Wait (S) S.val = S.val - 1 If S.val < 0 /* negative value of S.val */ { add calling thread to S.list; /* is # waiting threads */ block; /* sleep */ } Signal (S) S.val = S.val + 1 If S.val <= 0 { remove a thread T from S.list; wakeup (T); }
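The blocking algorithm above can be sketched in Python as a stand-in for the kernel implementation. Here a threading.Event plays the role of block/wakeup for one thread, and an ordinary Lock makes Wait and Signal atomic in place of disabling interrupts (all names are illustrative):

```python
# Blocking semaphore following the slide's algorithm: a negative value
# of S.val counts the number of waiting threads.

import threading
from collections import deque

class Semaphore:
    def __init__(self, value=0):
        self.val = value
        self.list = deque()             # blocked threads (as Events)
        self._lock = threading.Lock()   # makes Wait/Signal atomic

    def wait(self):
        self._lock.acquire()
        self.val -= 1
        if self.val < 0:                # must block
            ev = threading.Event()      # stands in for "this thread"
            self.list.append(ev)
            self._lock.release()
            ev.wait()                   # sleep until Signal wakes us
        else:
            self._lock.release()

    def signal(self):
        with self._lock:
            self.val += 1
            if self.val <= 0:           # someone is waiting
                self.list.popleft().set()   # wakeup one thread

s = Semaphore(1)
s.wait()
print(s.val)   # 0: resource taken, nobody waiting
```

If signal() runs before a late waiter reaches ev.wait(), the Event is already set and wait() returns immediately, so no wakeup can be lost.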
  • 141. 141 Implementing semaphores • Wait () and Signal () are assumed to be atomic How can we ensure that they are atomic?
  • 142. 142 Implementing semaphores • Wait () and Signal () are assumed to be atomic How can we ensure that they are atomic? • Implement Wait() and Signal() as system calls? – how can the kernel ensure Wait() and Signal() are completed atomically? – Same solutions as before • Disable interrupts, or • Use TSL-based mutex
  • 143. 143 Semaphores with interrupt disabling Signal(semaphore sem) DISABLE_INTS sem.val++ if (sem.val <= 0) { th = remove next thread from sem.L wakeup(th) } ENABLE_INTS struct semaphore { int val; list L; } Wait(semaphore sem) DISABLE_INTS sem.val-- if (sem.val < 0){ add thread to sem.L sleep(thread) } ENABLE_INTS
  • 145. 145 Blitz code for Semaphore.wait method Wait () var oldIntStat: int oldIntStat = SetInterruptsTo (DISABLED) if count == 0x80000000 FatalError ("Semaphore count underflowed during 'Wait‘ operation") EndIf count = count – 1 if count < 0 waitingThreads.AddToEnd (currentThread) currentThread.Sleep () endIf oldIntStat = SetInterruptsTo (oldIntStat) endMethod
  • 149. 149 But what is currentThread.Sleep ()? • If sleep stops a thread from executing, how, where, and when does it return? – which thread enables interrupts following sleep? – the thread that called sleep shouldn’t return until another thread has called signal ! – … but how does that other thread get to run? – … where exactly does the thread switch occur? • Trace down through the Blitz code until you find a call to switch() – Switch is called in one thread but returns in another!
  • 150. 150 Look at the following Blitz source code • Thread.c – Thread.Sleep () – Run (nextThread) • Switch.s – Switch (prevThread, nextThread)
  • 151. 151 Blitz code for Semaphore.signal method Signal () var oldIntStat: int t: ptr to Thread oldIntStat = SetInterruptsTo (DISABLED) if count == 0x7fffffff FatalError ("Semaphore count overflowed during 'Signal' operation") endIf count = count + 1 if count <= 0 t = waitingThreads.Remove () t.status = READY readyList.AddToEnd (t) endIf oldIntStat = SetInterruptsTo (oldIntStat) endMethod
• 155. 155 Semaphores using atomic instructions • Implementing semaphores with interrupt disabling only works on uniprocessors – What should we do on a multiprocessor? • As we saw earlier, hardware provides special atomic instructions for synchronization – test and set lock (TSL) – compare and swap (CAS) – etc • Semaphores can be built using atomic instructions 1. build mutex locks from atomic instructions 2. build semaphores from mutex locks
  • 156. 156 Building spinning mutex locks using TSL Mutex_lock: TSL REGISTER,MUTEX | copy mutex to register and set mutex to 1 CMP REGISTER,#0 | was mutex zero? JZE ok | if it was zero, mutex is unlocked, so return JMP mutex_lock | try again Ok: RET | return to caller; enter critical section Mutex_unlock: MOVE MUTEX,#0 | store a 0 in mutex RET | return to caller
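The TSL loop above, sketched in Python. There is no TSL instruction at this level, so Lock.acquire(blocking=False) stands in for the atomic test-and-set: in one indivisible step it either observes "unlocked" and sets "locked", or reports failure (this is a sketch of the structure, not of the hardware):

```python
import threading

class SpinMutex:
    def __init__(self):
        self._flag = threading.Lock()   # the memory word TSL would hit

    def lock(self):
        # TSL REGISTER,MUTEX + CMP + JZE, rolled into one atomic call:
        while not self._flag.acquire(blocking=False):
            pass                        # JMP mutex_lock: busy-wait

    def unlock(self):
        self._flag.release()            # MOVE MUTEX,#0

m = SpinMutex()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        m.lock()
        counter += 1                    # critical section
        m.unlock()

ts = [threading.Thread(target=worker) for _ in range(4)]
for t in ts: t.start()
for t in ts: t.join()
print(counter)   # 4000: no increments lost under the spin lock
```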
  • 157. 157 Using Mutex Locks to Build Semaphores • How would you modify the Blitz code to do this?
  • 158. 158 What if you had a blocking mutex lock? Problem: Implement a counting semaphore Up () Down () ...using just Mutex locks • Goal: Make use of the mutex lock’s blocking behavior rather than reimplementing it for the semaphore operations
  • 159. 159 How about this solution? var cnt: int = 0 -- Signal count var m1: Mutex = unlocked -- Protects access to “cnt” m2: Mutex = locked -- Locked when waiting Down (): Lock(m1) cnt = cnt – 1 if cnt<0 Lock(m2) Unlock(m1) else Unlock(m1) endIf Up(): Lock(m1) cnt = cnt + 1 if cnt<=0 Unlock(m2) endIf Unlock(m1)
  • 161. 161 How about this solution then? var cnt: int = 0 -- Signal count var m1: Mutex = unlocked -- Protects access to “cnt” m2: Mutex = locked -- Locked when waiting Down (): Lock(m1) cnt = cnt – 1 if cnt<0 Unlock(m1) Lock(m2) else Unlock(m1) endIf Up(): Lock(m1) cnt = cnt + 1 if cnt<=0 Unlock(m2) endIf Unlock(m1)
  • 162. 162 Classical Synchronization problems • Producer Consumer (bounded buffer) • Dining philosophers • Sleeping barber • Readers and writers
  • 163. 163 Producer consumer problem • Also known as the bounded buffer problem 8 Buffers InP OutP Consumer Producer Producer and consumer are separate threads
  • 164. 164 Is this a valid solution? thread producer { while(1){ // Produce char c while (count==n) { no_op } buf[InP] = c InP = InP + 1 mod n count++ } } thread consumer { while(1){ while (count==0) { no_op } c = buf[OutP] OutP = OutP + 1 mod n count-- // Consume char } } 0 1 2 n-1 … Global variables: char buf[n] int InP = 0 // place to add int OutP = 0 // place to get int count
  • 165. 165 Does this solution work? 0 thread producer { 1 while(1){ 2 // Produce char c... 3 down(empty_buffs) 4 buf[InP] = c 5 InP = InP + 1 mod n 6 up(full_buffs) 7 } 8 } 0 thread consumer { 1 while(1){ 2 down(full_buffs) 3 c = buf[OutP] 4 OutP = OutP + 1 mod n 5 up(empty_buffs) 6 // Consume char... 7 } 8 } Global variables semaphore full_buffs = 0; semaphore empty_buffs = n; char buff[n]; int InP, OutP;
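The down/up solution on this slide, made runnable with Python's threading primitives. One addition beyond the slide: a mutex around the buffer indices, which the single producer/consumer pair does not strictly need but which keeps the sketch honest for the multi-producer case discussed shortly:

```python
import threading

n = 4
buf = [None] * n
InP = OutP = 0
empty_buffs = threading.Semaphore(n)   # free slots
full_buffs = threading.Semaphore(0)    # filled slots
mutex = threading.Lock()               # protects InP/OutP (extra)
consumed = []

def producer(items):
    global InP
    for c in items:
        empty_buffs.acquire()       # down(empty_buffs)
        with mutex:
            buf[InP] = c
            InP = (InP + 1) % n     # InP = InP + 1 mod n
        full_buffs.release()        # up(full_buffs)

def consumer(count):
    global OutP
    for _ in range(count):
        full_buffs.acquire()        # down(full_buffs)
        with mutex:
            c = buf[OutP]
            OutP = (OutP + 1) % n
        empty_buffs.release()       # up(empty_buffs)
        consumed.append(c)          # consume char

p = threading.Thread(target=producer, args=("abcdefgh",))
c = threading.Thread(target=consumer, args=(8,))
p.start(); c.start(); p.join(); c.join()
print("".join(consumed))   # abcdefgh
```

The semaphores do the blocking that the sleep/wakeup attempt got wrong: a full buffer blocks the producer inside down(empty_buffs), and no up() can ever be lost.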
  • 166. 166 Producer consumer problem • What is the shared state in the last solution? • Does it apply mutual exclusion? If so, how? 8 Buffers InP OutP Consumer Producer Producer and consumer are separate threads
  • 167. 167 Problems with solution • What if we have multiple producers and multiple consumers? – Producer-specific and consumer-specific data becomes shared – We need to define and protect critical sections – You’ll do this in the second part of the current Blitz project, using the mutex locks you built!
• 168. 168 Dining philosophers problem • Five philosophers sit at a table • One chopstick between each philosopher (need two to eat) • Why do they need to synchronize? • How should they do it? while(TRUE) { Think(); Grab first chopstick; Grab second chopstick; Eat(); Put down first chopstick; Put down second chopstick; } Each philosopher is modeled with a thread
  • 169. 169 Is this a valid solution? #define N 5 Philosopher() { while(TRUE) { Think(); take_chopstick(i); take_chopstick((i+1)% N); Eat(); put_chopstick(i); put_chopstick((i+1)% N); } }
  • 171. 171 Working towards a solution … #define N 5 Philosopher() { while(TRUE) { Think(); take_chopstick(i); take_chopstick((i+1)% N); Eat(); put_chopstick(i); put_chopstick((i+1)% N); } } take_chopsticks(i) put_chopsticks(i)
  • 172. 172 Working towards a solution … #define N 5 Philosopher() { while(TRUE) { Think(); take_chopsticks(i); Eat(); put_chopsticks(i); } }
• 173. 173 Taking chopsticks // only called with mutex set! test(int i) { if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING){ state[i] = EATING; signal(sem[i]); } } int state[N] semaphore mutex = 1 semaphore sem[N] take_chopsticks(int i) { wait(mutex); state[i] = HUNGRY; test(i); signal(mutex); wait(sem[i]); }
• 174. 174 Putting down chopsticks // only called with mutex set! test(int i) { if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING){ state[i] = EATING; signal(sem[i]); } } int state[N] semaphore mutex = 1 semaphore sem[N] put_chopsticks(int i) { wait(mutex); state[i] = THINKING; test(LEFT); test(RIGHT); signal(mutex); }
  • 175. 175 Dining philosophers • Is the previous solution correct? • What does it mean for it to be correct? • Is there an easier way?
• 177. 177 The sleeping barber problem • Barber: – While there are people waiting for a hair cut, put one in the barber chair, and cut their hair – When done, move to the next customer – Else go to sleep, until someone comes in • Customer: – If barber is asleep, wake him up for a haircut – If someone is getting a haircut, wait for the barber to become free by sitting in a chair – If the chairs are all full, leave the barbershop
  • 178. 178 Designing a solution • How will we model the barber and customers? • What state variables do we need? – .. and which ones are shared? – …. and how will we protect them? • How will the barber sleep? • How will the barber wake up? • How will customers wait? • What problems do we need to look out for?
  • 179. 179 Is this a good solution? Barber Thread: while true Wait(customers) Lock(lock) numWaiting = numWaiting-1 Signal(barbers) Unlock(lock) CutHair() endWhile Customer Thread: Lock(lock) if numWaiting < CHAIRS numWaiting = numWaiting+1 Signal(customers) Unlock(lock) Wait(barbers) GetHaircut() else -- give up & go home Unlock(lock) endIf const CHAIRS = 5 var customers: Semaphore barbers: Semaphore lock: Mutex numWaiting: int = 0
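The slide's pseudocode can be sketched with Python threads. The barber loops forever, so it runs as a daemon thread; one liberty is taken for checkability: the haircut is tallied inside the lock before Signal(barbers), so the final counts are deterministic:

```python
import threading

CHAIRS = 5
customers = threading.Semaphore(0)   # customers waiting for service
barbers = threading.Semaphore(0)     # barber ready for a customer
lock = threading.Lock()              # protects num_waiting
num_waiting = 0
served = []                          # completed haircuts, for checking
turned_away = 0

def barber():
    global num_waiting
    while True:
        customers.acquire()          # Wait(customers): sleep if shop empty
        with lock:
            num_waiting -= 1
            served.append(1)         # CutHair(), tallied before signaling
        barbers.release()            # Signal(barbers)

def customer():
    global num_waiting, turned_away
    with lock:
        if num_waiting < CHAIRS:
            num_waiting += 1
            customers.release()      # Signal(customers): wake the barber
            admitted = True
        else:
            turned_away += 1         # shop full: give up and go home
            admitted = False
    if admitted:
        barbers.acquire()            # Wait(barbers), then GetHaircut()

threading.Thread(target=barber, daemon=True).start()
cs = [threading.Thread(target=customer) for _ in range(8)]
for t in cs: t.start()
for t in cs: t.join()
print(len(served) + turned_away)   # 8: every customer served or sent home
```

With only 5 chairs and 8 near-simultaneous arrivals, some customers may be turned away; either way, every customer is accounted for and num_waiting returns to 0.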
  • 180. 180 The readers and writers problem • Multiple readers and writers want to access a database (each one is a thread) • Multiple readers can proceed concurrently • Writers must synchronize with readers and other writers – only one writer at a time ! – when someone is writing, there must be no readers ! Goals:
  • 181. 181 Designing a solution • How will we model the readers and writers? • What state variables do we need? – .. and which ones are shared? – …. and how will we protect them? • How will the writers wait? • How will the writers wake up? • How will readers wait? • How will the readers wake up? • What problems do we need to look out for?
  • 182. 182 Is this a valid solution to readers & writers? Reader Thread: while true Lock(mut) rc = rc + 1 if rc == 1 Wait(db) endIf Unlock(mut) ... Read shared data... Lock(mut) rc = rc - 1 if rc == 0 Signal(db) endIf Unlock(mut) ... Remainder Section... endWhile var mut: Mutex = unlocked db: Semaphore = 1 rc: int = 0 Writer Thread: while true ...Remainder Section... Wait(db) ...Write shared data... Signal(db) endWhile
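The reader/writer pseudocode above maps directly onto Python primitives: writers go straight through the db semaphore, while the first reader in locks it and the last reader out releases it (a sketch; the small thread counts are only for the demonstration):

```python
import threading

mut = threading.Lock()               # protects rc
db = threading.Semaphore(1)          # locks the database itself
rc = 0                               # number of active readers
data = []                            # the "database"

def reader(snapshots):
    global rc
    with mut:
        rc += 1
        if rc == 1:
            db.acquire()             # first reader locks out writers
    snapshots.append(list(data))     # read shared data (readers overlap)
    with mut:
        rc -= 1
        if rc == 0:
            db.release()             # last reader out lets writers in

def writer(x):
    db.acquire()                     # one writer at a time, no readers
    data.append(x)                   # write shared data
    db.release()

snapshots = []
ts = [threading.Thread(target=writer, args=(i,)) for i in range(3)]
ts += [threading.Thread(target=reader, args=(snapshots,)) for _ in range(3)]
for t in ts: t.start()
for t in ts: t.join()
print(sorted(data))   # [0, 1, 2]
```

Each snapshot is whatever the database held while no writer was active; the interleaving of readers and writers varies from run to run, which is exactly the fairness question the next slide raises.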
  • 183. 183 Readers and writers solution • Does the previous solution have any problems? – is it “fair”? – can any threads be starved? If so, how could this be fixed? – … and how much confidence would you have in your solution?
• 184. 184 Quiz • What is a race condition? • How can we protect against race conditions? • Can locks be implemented simply by reading and writing to a binary variable in memory? • How can a kernel make synchronization- related system calls atomic on a uniprocessor? – Why wouldn’t this work on a multiprocessor? • Why is it better to block rather than spin on a uniprocessor? • Why is it sometimes better to spin rather than block?
  • 185. 185 Quiz • When faced with a concurrent programming problem, what strategy would you follow in designing a solution? • What does all of this have to do with Operating Systems?