3. MULTIPLE PROCESSOR SCHEDULING
A multiprocessor is a system that has more than one
processor.
With multiprocessors, load sharing becomes feasible, but
it also makes scheduling more complex.
Just as no single policy can be declared the best scheduling
algorithm for a single processor, there is no best
scheduling solution for multiprocessor systems.
The best scheduling algorithm depends on the
scenario.
4. MULTIPLE PROCESSOR SCHEDULING
The scheduling process of a multiprocessor system is more complex
than that of a single-processor system for the following
reasons.
• Load balancing is a problem, since more than one processor
is present.
• Processes executing simultaneously may require access to
shared data.
• Cache affinity should be considered in scheduling.
5. Characteristics of a Good Scheduling
Algorithm:
The algorithm must be flexible enough to make process migration
decisions when there is a change in the system
load.
A good scheduling algorithm has the property of
being scalable: it remains effective as the
number of nodes in the system increases.
The algorithm should not be affected by the failure of
one or more nodes of the system.
6. Categories of Multiprocessors
Loosely coupled / distributed multiprocessors: processors
are independent of each other. Each processor has its own
memory and I/O channels.
Functionally specialized processors: there is a master/main
processor, which is a general-purpose processor. It controls the
other processors in the system and provides services to
them.
Tightly coupled multiprocessors: all processors share
common memory. Also termed homogeneous.
7. Techniques for multiprocessor Scheduling
Multiprocessor scheduling can be done in two ways:
Asymmetric Scheduling
Symmetric Scheduling
8. Symmetric multiprocessor
Processors are self-scheduling.
Each processor checks the ready queue and selects a process to execute.
If one processor goes down, the rest of the system keeps working.
There are two variants of symmetric multiprocessing:
Symmetrical scheduling with global queues.
Symmetrical scheduling with private queues.
9. Symmetrical Scheduling with global queues:
If processes wait in a common/global queue, the scheduler on
each processor checks this global queue and selects a process to
execute.
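The global-queue variant can be sketched as a small simulation (illustrative names and a toy "execute" step, not a real kernel scheduler): every processor thread self-schedules by pulling its next process from one shared ready queue.

```python
import queue
import threading

def run_smp_global(num_processors, processes):
    """Symmetric scheduling with one global ready queue:
    every processor pulls its next process from the same shared queue."""
    ready_queue = queue.Queue()          # global ready queue shared by all CPUs
    for p in processes:
        ready_queue.put(p)

    executed = []                        # (cpu_id, process) pairs, for inspection
    lock = threading.Lock()

    def cpu(cpu_id):
        while True:
            try:
                proc = ready_queue.get_nowait()   # each CPU self-schedules
            except queue.Empty:
                return                            # no work left
            with lock:
                executed.append((cpu_id, proc))   # "execute" the process

    workers = [threading.Thread(target=cpu, args=(i,)) for i in range(num_processors)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return executed

log = run_smp_global(4, [f"P{i}" for i in range(10)])
```

Note that the shared queue must be thread-safe (here `queue.Queue` provides the locking); contention on this single queue is the main drawback of the global-queue design.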
10. Symmetrical Scheduling with private queues
If processors in the system have their own private ready queues, then each processor
checks its own private ready queue to select a process.
Symmetric multiprocessing is used by most modern operating systems, e.g. Linux,
Windows, macOS.
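The private-queue variant can be sketched similarly (an illustrative round-robin admission rule; a real OS would also rebalance load between the private queues):

```python
from collections import deque

def run_smp_private(num_processors, processes):
    """Symmetric scheduling with private queues: each processor has its
    own ready queue and schedules only from it (no shared-queue contention)."""
    # New processes are admitted round-robin here; this placement rule
    # is an assumption for illustration only.
    queues = [deque() for _ in range(num_processors)]
    for i, p in enumerate(processes):
        queues[i % num_processors].append(p)

    executed = {}
    for cpu_id, q in enumerate(queues):
        order = []
        while q:
            order.append(q.popleft())   # each CPU checks only its own queue
        executed[cpu_id] = order
    return executed

result = run_smp_private(2, ["P0", "P1", "P2", "P3", "P4"])
# CPU 0 runs P0, P2, P4; CPU 1 runs P1, P3
```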
11. Asymmetric Multiprocessor
One processor (the master processor) handles all scheduling decisions and other system
activities.
The master processor runs operating system code, and the other (slave) processors
execute only user code.
If the master processor fails, the whole system comes to a halt; if a slave goes down,
the rest of the system keeps working.
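The master/slave structure can be sketched as follows (a toy dispatch plan with hypothetical names; the round-robin choice of slave is an assumption for illustration):

```python
def asymmetric_schedule(num_slaves, user_processes):
    """Asymmetric multiprocessing: the master (CPU 0) runs all OS code and
    assigns user processes to slave CPUs 1..num_slaves; slaves never make
    scheduling decisions themselves."""
    MASTER = 0
    assignment = {MASTER: ["<operating system>"]}   # only the master runs kernel code
    for cpu in range(1, num_slaves + 1):
        assignment[cpu] = []
    for i, proc in enumerate(user_processes):
        slave = 1 + (i % num_slaves)    # master picks a slave for each user process
        assignment[slave].append(proc)
    return assignment

plan = asymmetric_schedule(3, ["P0", "P1", "P2", "P3"])
```

Because every decision flows through CPU 0, this structure is simple (no data-sharing conflicts between schedulers) but makes the master both a bottleneck and a single point of failure, as the slide notes.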
12. Processor Affinity
A process has an affinity (attachment) for the processor on which it is
currently running.
When a process runs on a specific processor, it has effects on
the cache memory.
The data most recently accessed by the process populate the
cache of that processor.
13. Processor Affinity
When a process migrates to another processor, the cache contents
it built up on the first processor become useless to it,
and the cache of the second processor must be repopulated.
Because of the high cost of invalidating and repopulating
caches, most SMP systems try to avoid migrating processes from
one processor to another.
This is known as processor affinity.
15. Soft Affinity
When an operating system has a policy of keeping a
process running on the same processor but does not
guarantee that it will do so, the situation is called soft
affinity.
16. Hard AFFINITY
◦When an operating system has a policy of
attempting to keep a process running on the
same processor with guarantee that it will do
so, we have a situation known as hard affinity.
◦Some Linux systems implement soft affinity and
provide system calls
like sched_setaffinity() that also support hard
affinity.
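On Linux, the sched_setaffinity() call mentioned above is exposed in Python through the `os` module, so hard affinity can be demonstrated directly (Linux-only; the `hasattr` guard lets the sketch no-op elsewhere):

```python
import os

# Linux-only: pin the calling process (pid 0) to a single CPU,
# i.e. hard affinity via the sched_setaffinity() system call.
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)        # current set of CPUs we may run on
    os.sched_setaffinity(0, {min(allowed)})  # pin ourselves to one CPU
    assert os.sched_getaffinity(0) == {min(allowed)}
    os.sched_setaffinity(0, allowed)         # restore the original CPU mask
```

Pinning like this guarantees the process keeps its warm cache on that CPU, at the cost of forgoing load balancing for it.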
17. Assignment
NUMA (Non-Uniform Memory Access) is a computer memory design used in multiprocessing,
where the memory access time depends on the memory location relative to the processor:
a processor can access its own local memory faster than non-local memory (memory
local to another processor or memory shared between processors).
Write a short note on the purpose and need of the NUMA architecture.
18. Load BALANCING
Load balancing distributes load among the processors of a
distributed computing system.
Excess load from one CPU is distributed to others that
have less load, within defined limits, so that no processor
is overloaded or left idle.
Load balancing helps maximize
performance and throughput, which is the ultimate goal
of distributed computing.
20. Static load balancing
While distributing load, the current state of the system is not
taken into account.
These algorithms are simpler.
Two types:
1. Deterministic
2. Probabilistic
21. Static load balancing
1. Deterministic: the properties of nodes and
processes are taken into account for the
allocation of processes to nodes,
e.g. when tasks are independent and their
execution times are known in advance.
It is difficult to achieve good results in practice.
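As a sketch of a deterministic rule under the assumption the slide names (independent tasks with known execution times), one common heuristic places the longest tasks first on the currently least-loaded node; the task set and node count below are illustrative:

```python
def deterministic_placement(task_times, num_nodes):
    """Static deterministic placement: execution times are known in
    advance, so greedily assign each task (longest first) to the node
    with the smallest predicted load."""
    loads = [0] * num_nodes                      # predicted load per node
    placement = {}
    for task, t in sorted(task_times.items(), key=lambda kv: -kv[1]):
        node = loads.index(min(loads))           # least-loaded node so far
        placement[task] = node
        loads[node] += t
    return placement, loads

placement, loads = deterministic_placement(
    {"A": 7, "B": 5, "C": 4, "D": 3, "E": 1}, num_nodes=2)
# both nodes end up with a predicted load of 10
```

The placement is computed once, before execution, which is exactly why it cannot react to the actual runtime state of the system.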
22. Static load balancing
2. Probabilistic: statistical
attributes* of the CPUs are taken into account
to make process placement rules.
It also does not give better performance.
*Statistical attributes: idle time, busy time
23. Dynamic load balancing
While distributing load, the current state of the
system is taken into account.
Workloads are dynamically redistributed away from
overloaded nodes toward underloaded nodes.
Harder to design, but gives better performance.
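The redistribution step can be sketched as follows (loads measured in abstract "units of work" and a one-unit migration step are assumptions for illustration):

```python
def rebalance(node_loads, threshold=1):
    """Dynamic load balancing: inspect the *current* load of each node
    and migrate one unit of work at a time from the most loaded node
    to the least loaded one, until they differ by at most `threshold`."""
    loads = dict(node_loads)
    migrations = []                       # (from_node, to_node) moves performed
    while True:
        hi = max(loads, key=loads.get)    # most loaded node right now
        lo = min(loads, key=loads.get)    # least loaded node right now
        if loads[hi] - loads[lo] <= threshold:
            return loads, migrations
        loads[hi] -= 1
        loads[lo] += 1
        migrations.append((hi, lo))

balanced, moves = rebalance({"n1": 9, "n2": 2, "n3": 4})
# loads converge to 5 units on every node after 4 migrations
```

Unlike the static schemes above, the decision here depends entirely on the loads observed at run time.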
25. Dynamic load balancing
1. Centralized: the task of handling requests for
process scheduling is carried out by a centralized
server node.
The benefit is performance, as all information is held
at a single node, but it suffers from a reliability
problem (a single point of failure).
Another problem arises as the number of
requests increases, since the server becomes a bottleneck.
26. Dynamic load balancing
2. Distributed: the task of handling requests for
process scheduling is carried out by each individual
node of the system.
Unlike the centralized approach, there is no need to hold
all information at a single node.
Hence it is faster.
27. Dynamic load balancing
Two types of Distributed load balancing
◦A. Cooperative: scheduling is done with the
cooperation of the nodes in the system; however,
its drawback is complexity overhead.
◦B. Non-cooperative: scheduling is done by
each individual node in the system, as nodes act as
autonomous entities.
28. Issues in load balancing
Load estimation policy: determines the load of a node.
Process transfer policy: decides whether a process should execute
locally or remotely.
State information exchange policy: governs the exchange of load information
among nodes.
Location policy: selects the node to which a process should be migrated.
Priority assignment policy: determines the priorities of processes.
Migration limit policy: determines the limit on the number of times a
process may migrate.
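Two of these policies can be combined in a minimal sketch (the threshold value, node names, and load figures are illustrative assumptions):

```python
def schedule_process(local_node, loads, threshold=4):
    """A simple combination of two of the policies above:
    - process transfer policy: run locally while local load < threshold
    - location policy: otherwise migrate to the least-loaded node"""
    if loads[local_node] < threshold:
        return local_node                 # transfer policy says: stay local
    return min(loads, key=loads.get)      # location policy picks the target

loads = {"n1": 6, "n2": 1, "n3": 3}
target = schedule_process("n1", loads)    # n1 is overloaded, so migrate to n2
```

A full load balancer would layer the remaining policies on top, e.g. a migration limit counter per process and a periodic state information exchange to keep `loads` current.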