A system consists of processes that execute system code or user code. An operating system makes a computer more productive by switching the CPU between processes. A process is a program in execution that has a program counter, stack, data section, and state. The operating system manages processes using process control blocks and by moving processes between scheduling queues. Context switching allows the CPU to save and load process states when switching between processes.
Ch03- PROCESSES.ppt
2. A system consists of a collection of processes: operating system
processes executing system code, and user processes executing user
code. By switching the CPU between processes, the operating
system can make the computer more productive.
Process Concept
Process Scheduling
Operations on Processes
Cooperating Processes
Interprocess Communication
Communication in Client-Server Systems
3. An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Even if the user can execute only one program at a time, the operating system may
need to support its own internal programmed activities, such as memory
management. All of these activities are called processes.
The textbook uses the terms job and process almost interchangeably
4. Process – a program in execution;
process execution must progress in a
sequential fashion
A process includes:
program counter – represents the current
activity of the process, along with the contents
of the processor's registers
stack – contains temporary data (such
as method parameters, return addresses, and
local variables)
data section – contains the global
variables
5. A PROGRAM by itself is not a
process. A program is a PASSIVE
entity, such as the contents of a
file stored on disk, whereas a
process is an ACTIVE entity, with
a program counter specifying the
next instruction to execute and a set
of associated resources.
Although two processes may be
associated with the same program,
they are considered two separate
execution sequences.
For example, several users may be
running different copies of the mail
program. Each is a separate
process; although the text sections are
equivalent, the data sections
vary.
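This separation of shared text and per-process data can be demonstrated with a minimal POSIX `fork` sketch (assuming a Unix-like system; the variable name `counter` is illustrative). Both processes run the same program text, but the child's modification of its data never appears in the parent:

```python
import os

counter = 0  # a global in the "data section"

pid = os.fork()
if pid == 0:              # child: same text section, its own copy of the data
    counter += 100
    os._exit(counter)     # report the child's value via its exit status
# parent: its copy of counter is untouched by the child
_, status = os.waitpid(pid, 0)
child_value = os.WEXITSTATUS(status)
```

After the fork there are two separate execution sequences: `counter` is still 0 in the parent, while `child_value` recovered from the child's exit status is 100.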
7. As a process executes, it changes state.
The state of a process is defined in part by the current
activity of that process. Each process may be in one of the
following states:
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur (such as I/O completion or
reception of a signal)
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
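The five states and the legal moves between them can be sketched as a small transition table. This is a hypothetical sketch (the transition set below follows the standard five-state diagram, not code from the slides):

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions of the classic five-state process model
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

def can_transition(src, dst):
    """Return True if the five-state model allows moving from src to dst."""
    return dst in TRANSITIONS[src]
```

For instance, a waiting process cannot be dispatched directly: it must first move back to the ready queue.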
9. Each process is represented in the operating system by a
PCB (process control block). It contains many pieces of information
associated with a specific process; it serves as the repository for that
process:
Process state – the state may be new, ready, running, waiting, etc.
Program counter – indicates the address of the next instruction to
be executed for this process
CPU registers – vary in number and type depending on the
computer architecture. They include accumulators, index
registers, stack pointers, and general-purpose registers.
10. CPU-scheduling
information – includes the
process priority, pointers
to scheduling queues, etc.
Memory-management
information – includes the
page tables, segment tables,
etc.
Accounting information –
includes the amount of
CPU and real time used,
time limits, job or process
numbers, etc.
I/O status information –
the list of I/O devices
allocated to this process, a
list of open files, etc.
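The PCB fields listed above can be gathered into one record, here as a minimal sketch using a Python dataclass (field names are illustrative, not a real OS structure):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Process control block: the repository of information for one process."""
    pid: int
    state: str = "new"            # process state: new, ready, running, ...
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    page_table: dict = field(default_factory=dict)  # memory-management info
    cpu_time_used: float = 0.0    # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"               # the OS updates the PCB as the process moves
```

Every state change, dispatch, or I/O allocation the later slides describe is recorded by updating fields of this block.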
13. The objective of multiprogramming is to have some process running at all
times, so as to maximize CPU utilization. The objective of time sharing is to
switch the CPU among processes so frequently that users can interact with
each program while it is running.
SCHEDULING QUEUES
Job queue – the set of all processes in the system
Ready queue – the set of all processes residing in main memory, ready and
waiting to execute; it is generally stored as a linked list.
The ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB is extended to include a pointer field that points to the next PCB in
the ready queue.
Device queues – the set of processes waiting for an I/O device; each device has
its own device queue.
Processes migrate among the various queues
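The linked-list ready queue described above can be sketched directly: a header holding pointers to the first and final PCBs, and a next-pointer field in each PCB (a minimal illustration, not an actual kernel structure):

```python
class PCB:
    def __init__(self, pid):
        self.pid = pid
        self.next = None          # pointer field to the next PCB in the queue

class ReadyQueue:
    """Header holds pointers to the first and final PCBs in the list."""
    def __init__(self):
        self.head = None
        self.tail = None

    def enqueue(self, pcb):
        # link the PCB in at the tail of the list
        if self.tail is None:
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb
            self.tail = pcb

    def dequeue(self):
        # unlink and return the PCB at the head of the list
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None:
                self.tail = None
            pcb.next = None
        return pcb

rq = ReadyQueue()
for pid in (1, 2, 3):
    rq.enqueue(PCB(pid))
first = rq.dequeue()              # the process that waited longest
```

Dispatching a process is just a dequeue from the head; a preempted process is enqueued again at the tail.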
14. A new process is initially put in the
ready queue.
It waits in the ready queue until it
is selected for execution.
Once the process is assigned to the
CPU and is executing, one of the
following events may take place:
1. The process could issue an I/O
request and then be placed in an
I/O queue.
2. The process could create a new
subprocess and wait for its
termination.
3. The process could be removed
forcibly from the CPU, as a result of
an interrupt, and be put back in the
ready queue.
17. A process migrates between the various scheduling queues
throughout its lifetime. The operating system must select, for
scheduling purposes, processes from these queues in some
fashion. The selection is carried out by the appropriate
SCHEDULER.
Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue and loads them
into memory for execution
Short-term scheduler (or CPU scheduler) – selects which
process should be executed next and allocates the CPU to
it
18. The primary distinction between
these two schedulers is the
frequency of their execution.
The short-term scheduler is invoked
very frequently (milliseconds)
and must be fast, since it selects a new
process for the CPU often. A
process may execute for only a few
milliseconds before waiting for an
I/O request.
The long-term scheduler is invoked
very infrequently (seconds,
minutes) and may be slow, because
minutes may elapse between the creation of
new processes.
The long-term scheduler controls
the degree of multiprogramming,
i.e. the number of processes in
memory.
It can afford to take more time to
select a process for execution.
19. Processes can be described as
either:
I/O-bound process – spends more time
doing I/O than computation; has many short
CPU bursts
CPU-bound process – spends more time
doing computations; generates I/O
requests infrequently
The long-term scheduler must select a good
JOB MIX of I/O-bound jobs and CPU-
bound jobs.
If all the processes are I/O bound, then
the ready queue will almost always be
empty and the short-term scheduler will
have little to do.
If all the processes are CPU bound, the I/O
waiting queues will almost always be empty.
The system gives the best performance
when a good mix of both types of jobs is
present.
21. The medium-term scheduler
removes processes from
memory (and from active
contention for the CPU) and thus
reduces the degree of
multiprogramming.
At some later time, a process
can be reintroduced into
memory and its execution
continued where it left off.
This scheme is called SWAPPING.
The process is swapped out and is
later swapped in by the medium-
term scheduler.
Swapping may be necessary to
improve the process mix or to
free up memory.
23. When the CPU switches to another process, the system must save the state of
the old process and load the saved state of the new process. This task is
known as CONTEXT SWITCHING.
The context of a process is represented in its PCB; it includes
the values of the CPU registers, the process state, and memory-management
information.
Context-switch time is pure overhead; the system does no useful work while
switching.
Its speed varies from machine to machine, depending on the memory
speed, the number of registers that must be copied, etc.
The time is dependent on hardware support.
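The save-then-load sequence of a context switch can be sketched with PCBs modeled as plain dictionaries (a hypothetical illustration; real switches run in privileged kernel code and save hardware registers):

```python
def context_switch(old_pcb, new_pcb, cpu):
    """Save the old process's context into its PCB, then load the new one."""
    # 1. save the state of the old process
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["pc"] = cpu["pc"]
    old_pcb["state"] = "ready"
    # 2. load the saved state of the new process
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["pc"]
    new_pcb["state"] = "running"

cpu = {"registers": {"r0": 7}, "pc": 104}
p1 = {"registers": {}, "pc": 0, "state": "running"}
p2 = {"registers": {"r0": 1}, "pc": 200, "state": "ready"}
context_switch(p1, p2, cpu)
```

Note that no instruction of either process executes inside `context_switch`: the whole routine is the overhead the slide describes.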
24. Operations on processes
The processes in the system can
execute concurrently, so the OS
must provide mechanisms for
process creation and termination.
A parent process creates CHILD
processes, which, in turn, create
other processes, forming a tree of
processes.
Resource sharing (three possibilities):
Parent and children share all resources
Children share a subset of the parent's
resources
Parent and child share no resources
When a process creates a new
process, two possibilities exist in
terms of execution:
Parent and children execute concurrently
Parent waits until children terminate
25. There are two possibilities in terms
of the address space of the new
process:
Child is a duplicate of the parent
Child has a new program loaded into it
For example, in UNIX:
The fork system call creates a new process
that consists of a copy of the address space of
the original process; this allows the
parent to communicate easily with the
child process.
The exec system call is used after a fork to
replace the process' memory space with a
new program.
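The fork-then-exec pattern looks like this in a minimal sketch (assuming a Unix-like system; the `-c "raise SystemExit(7)"` child program is just a stand-in for any new program):

```python
import os
import sys

pid = os.fork()
if pid == 0:
    # child: replace this process's memory space with a new program.
    # execvp never returns on success; the child now runs the new program.
    os.execvp(sys.executable, [sys.executable, "-c", "raise SystemExit(7)"])
    os._exit(127)   # reached only if exec itself failed
# parent: wait for the child and collect its exit status
_, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)
```

The parent keeps running its original program; only the child's address space is replaced, and the parent learns the outcome (`exit_code` is 7 here) through `waitpid`.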
27. A process terminates when it executes its last statement and asks
the operating system to delete it (exit):
Output data from the child is returned to
the parent (via the wait system call)
The process' resources are deallocated by
the operating system, including
physical and virtual memory, open
files, and I/O buffers
A parent may terminate the execution of a
child process (abort) for a variety of
reasons, such as:
The child has exceeded its allocated resources
The task assigned to the child is no longer
required
The parent is exiting:
some operating systems do not allow a
child to continue if its parent
terminates.
All children must then be terminated;
this is called cascading
termination, and it is normally
initiated by the operating system
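Both ends of termination, a child exiting normally and a parent aborting a child, can be sketched with POSIX calls (assuming a Unix-like system; the 60-second sleep stands in for a long-running task that is "no longer required"):

```python
import os
import signal
import time

# Case 1: the child exits normally; the parent collects its status via wait.
pid1 = os.fork()
if pid1 == 0:
    os._exit(0)                    # last statement: ask the OS to delete us
_, status1 = os.waitpid(pid1, 0)

# Case 2: the parent aborts the child (its task is no longer required).
pid2 = os.fork()
if pid2 == 0:
    time.sleep(60)                 # child would otherwise run a long time
    os._exit(0)
os.kill(pid2, signal.SIGTERM)      # parent forcibly terminates the child
_, status2 = os.waitpid(pid2, 0)

normal_exit = os.WIFEXITED(status1)     # True: child 1 exited on its own
was_killed = os.WIFSIGNALED(status2)    # True: child 2 died from a signal
```

In both cases the parent's `waitpid` call is what lets the OS finish deallocating the child's resources.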
28. The concurrent processes
executing in the operating system
may be either independent or
cooperating processes.
An independent process cannot
affect or be affected by the
execution of another process. Any
process that does not share data
with any other process is
independent.
A cooperating process can affect
or be affected by the execution of
another process; this is the case
when a process shares data with
other processes.
29. Advantages of process cooperation:
Information sharing – several users may
be interested in the same piece of
information, for example a shared file
Computation speed-up – a task is broken
up into subtasks, so that each one of
them executes in parallel with the others
Modularity – we construct the system in a
modular fashion, dividing the system
functions into separate processes or
threads
Convenience – an individual user may have
many tasks on which to work at one time;
for example, a user may be editing, printing,
and compiling in parallel.
An EXAMPLE of such cooperation is the producer-
consumer problem.
30. The unbounded buffer places no practical
limit on the size of the buffer. The
consumer may have to wait for a new
item, but the producer can always produce
new items.
The bounded buffer assumes that there is
a fixed buffer size: the consumer must
wait if the buffer is empty, and the
producer must wait if the buffer is full.
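The bounded-buffer behavior maps directly onto a blocking queue. A minimal sketch with threads standing in for the two processes (the buffer size 3 and the item count are arbitrary choices for illustration):

```python
import queue
import threading

buffer = queue.Queue(maxsize=3)   # bounded buffer with a fixed size of 3
consumed = []

def producer(n):
    for i in range(n):
        buffer.put(i)             # blocks while the buffer is full

def consumer(n):
    for _ in range(n):
        consumed.append(buffer.get())   # blocks while the buffer is empty

t_prod = threading.Thread(target=producer, args=(10,))
t_cons = threading.Thread(target=consumer, args=(10,))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```

Dropping the `maxsize` argument gives the unbounded variant: `put` then never blocks, and only the consumer ever waits.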
31. IPC is a mechanism for processes to
communicate and to synchronize
their actions.
Message system – processes
communicate with each other
without resorting to shared
variables.
The IPC facility provides two
operations:
send(message) – message size fixed or
variable
receive(message)
If P and Q wish to communicate,
they need to:
establish a communication link between
them
exchange messages via send/receive
Implementation of the communication
link:
physical (e.g., shared memory, hardware
bus)
logical (e.g., logical properties)
32. Cooperating processes can
communicate in a shared-memory
environment. This requires that
the processes share a common
buffer pool. Another way to achieve
the same effect is for the operating
system to provide a means for co-
operating processes to
communicate with each other via
an interprocess communication
(IPC) facility.
IPC provides a mechanism to allow
processes to communicate and to
synchronize their actions without
sharing the same address space.
IPC is best provided by a message-
passing system.
33. The function of a message system is
to allow processes to communicate
with one another without needing
to resort to shared data.
Messages sent by a process can be of
fixed or variable size.
When messages are of fixed size, the
system-level implementation is
straightforward.
Variable-size messages require a
more complex system-level
implementation.
If processes P and Q want to
communicate, they must send
messages to and receive messages from each
other via a communication link.
The link can be implemented in a
number of ways.
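One concrete implementation of such a link is a POSIX pipe. A minimal sketch (assuming a Unix-like system) in which a child process Q sends a message that the parent process P receives:

```python
import os

r, w = os.pipe()                  # a unidirectional communication link
pid = os.fork()
if pid == 0:                      # process Q: the sender
    os.close(r)                   # Q only writes, so close the read end
    os.write(w, b"hello from Q")  # send(message)
    os.close(w)
    os._exit(0)
# process P: the receiver
os.close(w)                       # P only reads, so close the write end
message = os.read(r, 1024)        # receive(message); blocks until data arrives
os.close(r)
os.waitpid(pid, 0)
```

The pipe here is a one-way link; two pipes (or a socket) would give the bi-directional link most IPC systems provide.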
34. Design choices for a message-passing system:
DIRECT OR INDIRECT
COMMUNICATION
SYMMETRIC OR ASYMMETRIC
COMMUNICATION
AUTOMATIC OR EXPLICIT
BUFFERING
SEND BY COPY OR SEND BY
REFERENCE
FIXED-SIZE OR VARIABLE-SIZE
MESSAGES
35. Processes that want to
communicate must have a way to
refer to each other. They can use
either direct or indirect
communication.
In direct communication, processes
must name each other explicitly:
send(P, message) – send a message to
process P
receive(Q, message) – receive a message
from process Q
Properties of the communication link:
Links are established automatically
A link is associated with exactly one pair
of communicating processes
Between each pair there exists exactly
one link
The link may be unidirectional, but is
usually bi-directional
36. In indirect communication, messages are sent to and received from mailboxes (also referred to as
ports):
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of the communication link:
A link is established only if the processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
A link may be unidirectional or bi-directional
37. A mailbox is owned either by a process or by the operating system.
Operations:
create a new mailbox
send and receive messages through the mailbox
destroy a mailbox
The primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
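These mailbox primitives can be sketched with an in-memory table of queues (a toy model; real mailboxes live in the kernel or a message broker, and mailbox id "A" is just the slide's example name):

```python
from collections import deque

mailboxes = {}            # mailbox id -> queue of pending messages

def create(mid):
    """Create a new, empty mailbox."""
    mailboxes[mid] = deque()

def send(mid, message):
    """send(A, message): append the message to mailbox A's queue."""
    mailboxes[mid].append(message)

def receive(mid):
    """receive(A, message): take the oldest message, or None if empty."""
    q = mailboxes[mid]
    return q.popleft() if q else None

def destroy(mid):
    """Destroy the mailbox and discard any pending messages."""
    del mailboxes[mid]

create("A")
send("A", "ping")
msg = receive("A")        # any process sharing mailbox A could receive this
```

Because the sender and receiver only name the mailbox, not each other, many processes can share link A, which is exactly what raises the "who gets the message?" question on the next slide.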
38. Mailbox sharing:
P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?
The answer depends on which scheme
we use:
Allow a link to be associated with at most
two processes
Allow only one process at a time to
execute a receive operation
Allow the system to select the receiver
arbitrarily; the sender is notified who the
receiver was
39. Communication between processes takes place through calls to the send and
receive primitives.
Message passing may be either blocking or non-blocking.
Blocking is considered synchronous:
A blocking send blocks the sending process until the message is received by the
receiving process or by a mailbox
A blocking receive blocks the receiver until a message is available
Non-blocking is considered asynchronous:
A non-blocking send has the sender send the message and resume operation
A non-blocking receive has the receiver retrieve either a valid message or null
Different combinations of send and receive are possible.
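The non-blocking receive ("a valid message or null") can be sketched with `queue.Queue`, using `None` as the null result (the link and helper names are illustrative):

```python
import queue

link = queue.Queue()      # the message queue attached to the link

def receive_nonblocking():
    """Non-blocking receive: a valid message, or None if nothing is waiting."""
    try:
        return link.get_nowait()
    except queue.Empty:
        return None

empty_result = receive_nonblocking()   # no message yet -> None
link.put("msg")                        # non-blocking send: sender resumes at once
got = receive_nonblocking()            # now a valid message is returned
```

A blocking receive is the same call without `_nowait`: `link.get()` simply suspends the caller until a message is available.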
40. Whether the communication is
direct or indirect, messages
exchanged by the communicating
processes reside in a temporary
queue attached to the link. Such a
queue is implemented in one of
three ways:
1. Zero capacity – the queue has maximum
length 0; thus the link cannot have any
messages waiting in it. The sender must
block until the recipient receives the message.
2. Bounded capacity – the queue has finite
length n, so at most n messages can reside in it.
If the queue is not full when a new message is
sent, the message is placed in the queue;
if the link is full, the sender must wait.
3. Unbounded capacity – the queue has infinite
length; any number of messages can wait
in it. The sender never waits.