Processes and Process Management
College of Technology and Engineering
Computer Science Department
Jimma University
• The process concept
• The threads concept
• Inter-process communication
• Process scheduling
• Scheduling algorithms
• Deadlocks
2.1 Process concept
A process is a program in execution.
A process needs resources such as:
CPU time
Memory
Files
I/O devices
Resources are allocated to a process either:
when it is created, or
while it is executing.
A process can be:
an operating-system process, which executes system code, or
a user process, which executes user code.
The OS is responsible for (process management):
creation and deletion of processes
scheduling of processes
provision of mechanisms for synchronization and communication
deadlock handling for processes
2.1.1 The process
A process is more than a program, because it also includes the current activity, represented by:
the value of the program counter
the contents of the processor's registers
A process also has:
a stack (containing temporary data such as subroutine parameters, return addresses, and temporary variables)
a data section containing global variables
Two processes may be associated with the same program; they are considered two separate execution sequences.
E.g. a user may invoke many copies of the editor program.
Each is a separate process: their text sections are equivalent, but their data sections will vary.
2.1.2 Process state
As a process executes, it changes state.
Each process may be in one of the following states:
New: the process is being created
Running: instructions are being executed
Waiting: the process is waiting for some event to occur (such as I/O completion)
Ready: the process is waiting to be assigned to a processor
Terminated: the process has finished execution
2.1.3 Process control block
Each process is represented in the OS by a process control block (PCB).
Fig: Process control block. Fields (top to bottom): pointer, process state, process number, program counter, registers, memory limit, list of open files, ...
A PCB contains information specific to each process, such as:
Process state
Program counter: indicates the address of the next instruction to be executed for this process
CPU registers: these vary in number and type depending on the computer architecture (e.g. the accumulator and general-purpose registers); this state must be saved when an interrupt occurs
CPU-scheduling info: e.g. process priority, pointers to scheduling queues
Memory-management info: e.g. the values of the base and limit registers, page tables
I/O status info: list of I/O devices allocated to this process
Fig: Diagram showing CPU switch from process to process. While process P0 executes, P1 is idle; on an interrupt or system call, the CPU saves P0's state into PCB0 and reloads the saved state from PCB1, and P1 executes while P0 is idle. The next interrupt or system call saves the state into PCB1 and reloads from PCB0.
2.2 Process scheduling
The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
2.2.1 Scheduling queues
Processes that are in main memory and are ready to execute are kept on a list called the ready queue (often stored as a linked list).
The ready-queue header contains pointers to the first and last PCBs in the list.
Each PCB has a pointer field that points to the next process in the ready queue.
The list of processes waiting for a particular I/O device is called a device queue.
Each device has its own device queue.
2.2.2 Schedulers
The OS selects processes from these queues in some fashion.
The selection is carried out by the appropriate scheduler.
Fig: Queueing-diagram representation of process scheduling. A process in the ready queue is dispatched to the CPU; it leaves the CPU when it issues an I/O request (entering an I/O queue until the I/O completes), when its time slice expires, when it forks a child (waiting until the child executes), or when it waits for an interrupt (rejoining the ready queue when the interrupt occurs).
The long-term scheduler (job scheduler) selects jobs from the pool (on disk) and loads them into memory for execution.
It executes less frequently.
It controls the degree of multiprogramming.
The short-term scheduler (CPU scheduler) selects from the ready processes and allocates the CPU to one of them.
It executes very frequently.
Most processes are either I/O bound or CPU bound:
An I/O-bound process spends more of its time doing I/O than doing computation.
A CPU-bound process generates I/O requests infrequently (most of its time is spent on computation).
The long-term scheduler should select a good process mix of I/O-bound and CPU-bound processes.
Some operating systems, such as time-sharing systems, introduce an intermediate level of scheduling (the medium-term scheduler).
Fig: Medium-term scheduling added to the queueing diagram. Partially executed, swapped-out processes can be swapped back in to the ready queue; processes cycle between the ready queue, the CPU, and the I/O waiting queues, and may be swapped out before they end.
2.2.3 Context switch
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process.
Context-switch time is pure overhead: the system does no useful work while switching.
Context-switch speed varies from machine to machine, depending on memory speed and the number of registers.
Typical speeds range from 1 microsecond to 1000 microseconds.
2.3 Operations on processes
The processes in the system can execute concurrently, and must be created and deleted dynamically.
So an operating system must provide mechanisms for process creation and termination.
Process creation
A process may create several new processes, via a create-process system call, during the course of execution.
A parent process creates child processes, which in turn create other processes, forming a tree of processes.
When a process creates a subprocess, the subprocess may be able to:
obtain its resources directly from the operating system, or
be constrained to a subset of the resources of the parent process.
Alternatively, parent and children may share all resources.
When a process creates a new process, two
possibilities exist in terms of execution:
The parent continues to execute concurrently with its
children.
The parent waits until some or all of its children have
terminated.
Two possibilities in terms of the address space of
the new process:
The child process is a duplicate of the parent process.
The child process has a program loaded into it.
Example: UNIX
In UNIX, each process is identified by its process identifier (a unique integer).
A new process is created by the fork system call.
The new process consists of a copy of the address space of the original process.
This mechanism allows the parent process to communicate easily with its child process.
Both processes (the parent and the child) continue execution at the instruction after the fork, with one difference:
the return code from the fork is zero for the new (child) process,
whereas the (nonzero) process identifier of the child is returned to the parent.
The exec system call is used after a fork by one of the two
processes to replace the process memory space with a new
program.
The exec system call loads a binary file into memory
(destroying the memory image of the program containing the
exec system call) and starts its execution.
The parent can then create more children, or, if it has
nothing else to do while the child runs, it can issue a wait
system call to move itself off the ready queue until the
termination of the child.
Process termination
A process terminates when it executes its last statement and asks the operating system to delete it (via exit).
The child may return output data to its parent (which the parent collects via wait).
The process's resources are deallocated by the operating system.
A parent may terminate the execution of its children (via abort), for example when:
the child has exceeded its allocated resources,
the task assigned to the child is no longer required, or
the parent itself is exiting.
Some operating systems do not allow a child to continue if its parent terminates.
When all children (and their descendants) are terminated along with the parent, this is called cascading termination.
2.4 Cooperating Processes
The concurrent processes executing in the operating system
may be either
independent processes or
cooperating processes.
A process is independent if it cannot affect or be affected
by the other processes executing in the system - any
process that does not share any data (temporary or
persistent) with any other process is independent.
A process is cooperating if it can affect or be affected by
the other processes executing in the system- any process
that shares data with other processes is a cooperating
process.
Reasons for providing an environment that allows process
cooperation:
Information sharing: Since several users may be interested in the same
piece of information (for instance, a shared file), we must provide an
environment to allow concurrent access to these types of resources.
Computation speedup: If we want a particular task to run faster, we
must break it into subtasks, each of which will be executing in parallel
with the others. Notice that such a speedup can be achieved only if the
computer has multiple processing elements (such as CPUs or I/O
channels).
Modularity: We may want to construct the system in a modular fashion,
dividing the system functions into separate processes
Convenience: Even an individual user may have many tasks to work on at
one time. For instance, a user may be editing, printing, and compiling in
parallel.
Concurrently executing (cooperating) processes need:
to communicate with each other (interprocess communication), and
to synchronize their actions (process synchronization).
The producer-consumer problem
A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
E.g.:
a print program produces characters that are consumed by the printer driver;
a compiler produces assembly code, which is consumed by an assembler.
To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer.
A producer can produce one item while the consumer is consuming another item.
The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
The unbounded-buffer producer-consumer problem places no limit on the size of the buffer.
The consumer may have to wait for new items, but the producer can always produce new items.
The bounded-buffer producer-consumer problem assumes a fixed buffer size.
The consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
The buffer can be provided either:
by the operating system, through an IPC facility, or
by the application programmer, with the use of shared memory.
2.5 Interprocess communication
The previous example showed how cooperating processes can communicate in a shared-memory environment: they share a common buffer pool, and the code implementing the buffer is written explicitly by the application programmer.
Another way is for the operating system to provide the means for cooperating processes to communicate with each other via an interprocess-communication (IPC) facility.
Cont...
Interprocess communication is best provided by a message system.
An IPC facility provides at least two operations:
send(message) - the message size may be fixed or variable
receive(message)
If processes P and Q wish to communicate, they need to:
establish a communication link between them, and
exchange messages via send/receive.
Implementation of the communication link can be:
physical (e.g. shared memory, hardware bus, network), or
logical (e.g. logical properties).
Naming
Processes that want to communicate must have a way to refer to each other, via either:
direct communication, or
indirect communication.
Direct communication
Each process must explicitly name the recipient or sender of the communication:
send(P, message): send a message to process P
receive(Q, message): receive a message from process Q
Properties of the communication link:
Links are established automatically.
A link is associated with exactly one pair of communicating processes.
Between each pair there exists exactly one link.
The link may be unidirectional, but is usually bidirectional.
Example: producer-consumer
The producer process is defined as:
repeat
    produce an item in nextp
    send(consumer, nextp);
until false;
The consumer process is defined as:
repeat
    receive(producer, nextc);
    consume the item in nextc
until false;
The above example exhibits symmetry in addressing: both the sender and the receiver name each other.
With asymmetric addressing, only the sender names the recipient:
send(P, message): send a message to process P
receive(id, message): receive a message from any process; the variable id is set to the name of the process with which communication has taken place.
Indirect communication
Messages are sent to and received from mailboxes (also called ports).
Each mailbox has a unique id.
Processes can communicate only if they share a mailbox.
Properties of the communication link:
A link is established only if the processes share a common mailbox.
A link may be associated with many processes.
Each pair of processes may share several communication links.
A link may be unidirectional or bidirectional.
Operations:
create a new mailbox
send and receive messages through the mailbox
destroy a mailbox
The primitives are defined as:
send(A, message): send a message to mailbox A
receive(A, message): receive a message from mailbox A
Mailbox sharing
Suppose P1, P2, and P3 share mailbox A:
P1 sends; P2 and P3 execute a receive.
Who gets the message?
Solutions:
Allow a link to be associated with at most two processes.
Allow only one process at a time to execute a receive operation.
Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.
Buffering
A queue of messages is attached to the link; it is implemented in one of three ways:
1. Zero capacity - 0 messages: the sender must wait for the receiver (rendezvous).
2. Bounded capacity - finite length of n messages: the sender must wait if the link is full.
3. Unbounded capacity - infinite length: the sender never waits.
In the nonzero-capacity cases, a process does not
know whether a message has arrived at its
destination after the send operation is completed.
If this information is crucial for the computation,
the sender must communicate explicitly with the
receiver to find out whether the latter received
the message.
E.g. process P sends a message to process Q and can continue its execution only after the message is received.
Process P executes the sequence:
send(Q, message);
receive(Q, message);
Process Q executes:
receive(P, message);
send(P, "acknowledgment");
Such processes are said to communicate synchronously.
Sockets
A socket is defined as an endpoint for communication.
It is identified by the concatenation of an IP address and a port number:
the socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8.
Communication takes place between a pair of sockets.
Remote procedure calls
A remote procedure call (RPC) abstracts procedure calls between processes on networked systems.
Stubs are client-side proxies for the actual procedure on the server:
The client-side stub locates the server and marshals the parameters.
The server-side stub receives this message, unpacks the marshaled parameters, and performs the procedure on the server.
Remote Method Invocation
Remote Method Invocation (RMI) is a Java
mechanism similar to RPCs.
RMI allows a Java program on one machine to
invoke a method on a remote object.
Threads
In traditional operating systems, each process has an address space and a single thread of control.
But it is often desirable to have multiple threads of control in the same address space, running in parallel as if they were separate processes (except for the shared address space).
Processes and threads
The process abstraction combines two concepts:
Concurrency: each process is a sequential execution stream of instructions.
Protection: each process defines an address space; the address space identifies all addresses that can be touched by the program.
Threads
Key idea: separate the concept of concurrency from protection.
A thread is a sequential execution stream of instructions.
A process defines the address space, which may be shared by multiple threads.
Threads can execute on different cores of a multicore CPU (parallelism for performance) and can communicate with other threads by updating memory.
A thread represents an abstract entity that executes a sequence of instructions:
It has its own set of CPU registers.
It has its own stack.
There is no thread-specific heap or data segment (unlike a process).
Threads are lightweight:
Creating a thread is more efficient than creating a process.
Communication between threads is easier than between processes.
Context switching between threads requires fewer CPU cycles and memory references than switching between processes.
Threads only track a subset of process state (they share the list of open files, the pid, ...).
Examples:
OS-supported: Windows threads, Sun's LWPs, POSIX threads
Language-supported: Modula-3, Java (these are possibly going the way of the Dodo)
Threads vs processes
Threads:
A thread has no data segment or heap.
A thread cannot live on its own; it must live within a process.
There can be more than one thread in a process; the first thread calls main and has the process's stack.
If a thread dies, its stack is reclaimed.
Inter-thread communication is via memory.
Each thread can run on a different physical processor.
Creation and context switch are inexpensive.
Processes:
A process has code/data/heap and other segments.
There must be at least one thread in a process.
Threads within a process share code/data/heap and share I/O, but each has its own stack and registers.
If a process dies, its resources are reclaimed and all its threads die.
Inter-process communication is via the OS and data copying.
Each process can run on a different physical processor.
Creation and context switch are expensive.
Fig (a): three processes, each with its own address space and a single thread of control.
Fig (b): a single process with three threads of control sharing the same address space.
As with multiprogramming, the CPU switches rapidly back and forth among the threads.
Like a process, a thread can be in the running, blocked, ready, or terminated state.
In multithreading, a process starts with a single thread and has the ability to create new threads by calling a library procedure, thread_create.
When a thread has finished its work, it can exit by calling thread_exit.
One thread can wait for a specific thread to exit by calling a procedure, thread_wait, which blocks the calling thread until that specific thread has exited.
The call thread_yield allows a thread to voluntarily give up the CPU to let another thread run.
This call is important because, unlike with processes, there are no clock interrupts within the process to enforce time sharing among its threads.