Process, Threads, Symmetric Multiprocessing and Microkernels in Operating System
Student’s Name | Matric No.
Aniyah binti Amirhussin B031310042
Azwana binti Ahmad B031310071
Daliah binti Daud B031310491
Goh Yu Fern B031310113
Nik Siti Noor Fadhillah binti Md Saad B031310496
Rahilda Nadhirah Norizzaty binti Rahiddin B031310111
Also known as task.
Execution of an individual program.
Contains program code and its current activity
Can be traced to list the sequence of instructions that execute.
Depending on the operating system (OS), a process may
be made up of multiple threads of execution that
execute instructions concurrently.
While a program is executing, its process is represented in a
data structure known as the PCB.
A PCB is created for each process.
The creation and management of PCBs is done by the OS.
The PCB contains sufficient information that it is possible to
interrupt a running process and later resume
execution as if there had been no interruption.
Process = Program code + Associated data + Process Control Block (PCB)
When a process is interrupted, its state field is changed to another value,
for example Blocked or Ready.
The current values of the processor registers are saved in the appropriate
fields of the corresponding PCB.
The OS is then free to put another
process in the Running state.
The dispatcher is an OS program that moves the processor from one
process to another.
It prevents a single process from monopolizing processor time.
It decides which process goes next according to a scheduling algorithm.
The CPU executes instructions from the dispatcher
while switching from process A to process B.
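The dispatcher's job can be sketched as a tiny round-robin scheduler. This is an illustrative simulation, not any real OS's code: each entry pairs a process ID with its remaining work, and the dispatcher runs the head of the ready queue for one time quantum before preempting it.

```python
from collections import deque

def dispatch(ready_queue, quantum=2):
    """Run the process at the head of the ready queue for one quantum."""
    pid, remaining = ready_queue.popleft()
    executed = min(quantum, remaining)
    remaining -= executed
    if remaining > 0:
        ready_queue.append((pid, remaining))  # preempted: back to Ready
        return (pid, "ready")
    return (pid, "exit")  # normal completion

ready = deque([("A", 3), ("B", 1)])
trace = []
while ready:
    trace.append(dispatch(ready))
print(trace)  # [('A', 'ready'), ('B', 'exit'), ('A', 'exit')]
```

Process A needs 3 units but the quantum is 2, so it is preempted once; B finishes within its first quantum.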
* Process spawning: a process (the parent) creates another process, called the child.
* A process is created on submission of a batch job, or when a user logs on.
* A process may also be created to provide a service such as printing.
* A process terminates when a batch job issues a Halt instruction (an OS service call for
termination) or when the user logs off.
* Termination also occurs when the user quits an application, or on errors and faults.
Reasons for Process Termination
* Normal completion
* Time limit exceeded
* Memory unavailable
* Protection error (e.g. attempting to write to a read-only file)
* I/O failure
o A process may be in one of two states: Running or Not Running.
o With only these two states, the dispatcher cannot simply select the process that has been
in the queue the longest, because that process may be blocked.
o Solution: split Not Running into two states:
i) Ready – prepared to execute when given the opportunity
ii) Blocked/Waiting – the process cannot execute until some
event occurs, such as the completion of an I/O operation
4. Suspended Process
The processor is faster than I/O, so many processes may be waiting for I/O.
Swap these processes to disk to free up memory.
A process moves from the Blocked state to a Suspend state when it is swapped to disk.
Two new states:
- Blocked/Suspend: blocked processes which have
been swapped out to disk
- Ready/Suspend: ready processes which have been
swapped out to disk
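The states above can be modelled as a small state machine. This is a simplified sketch, not the full seven-state model: the transition table lists only a subset of the legal transitions, with illustrative event names.

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"
    READY_SUSPEND = "ready/suspend"
    BLOCKED_SUSPEND = "blocked/suspend"
    EXIT = "exit"

# Allowed transitions: event name -> (from_state, to_state)
TRANSITIONS = {
    "admit":    (State.NEW, State.READY),
    "dispatch": (State.READY, State.RUNNING),
    "timeout":  (State.RUNNING, State.READY),
    "wait":     (State.RUNNING, State.BLOCKED),
    "event":    (State.BLOCKED, State.READY),
    "swap_out": (State.BLOCKED, State.BLOCKED_SUSPEND),
    "swap_in":  (State.READY_SUSPEND, State.READY),
    "release":  (State.RUNNING, State.EXIT),
}

def step(state, event):
    """Apply one transition, rejecting any event illegal in this state."""
    src, dst = TRANSITIONS[event]
    if state is not src:
        raise ValueError(f"cannot {event} from {state}")
    return dst

s = State.NEW
for e in ["admit", "dispatch", "wait", "swap_out"]:
    s = step(s, e)
print(s.value)  # blocked/suspend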
The OS maintains tables for managing processes and resources.
1. Memory tables
- Allocation of main and secondary memory to processes
- Protection attributes for access to shared memory regions
- Information needed to manage virtual memory
2. I/O tables
- I/O device is available or assigned
- Status of I/O operation
- Location in main memory being used as the
source or destination of the I/O transfer
3. File tables
- Existence of files
- Location on secondary memory
- Current Status
- Sometimes this information is maintained by a file-management system.
4. Process tables
- Where process is located
- Attributes necessary for its management
- Process ID
- Process state
- Location in memory
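The four kinds of control tables can be pictured as simple cross-referenced dictionaries. All field names and values here are assumptions for illustration only, not any real OS's layout.

```python
# One entry per table kind, with the process table tying them together.
memory_tables = {1: {"main": (0x4000, 0x1000), "secondary": "swap:12"}}
io_tables = {"disk0": {"assigned_to": 1, "status": "busy"}}
file_tables = {"/tmp/log": {"location": "blk 724", "status": "open"}}
process_tables = {
    1: {"state": "running", "memory": memory_tables[1]}  # cross-reference
}

# The process table entry links a process to its resources in other tables.
print(process_tables[1]["memory"]["main"])  # (16384, 4096)
```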
1. Modes of Execution
i. User Mode
* Less-privileged mode
* User programs typically execute in this mode
ii. System mode, Control mode or Kernel Mode
* More-privileged mode
* Kernel of the OS
2. Process Creation
Assign a unique process identifier.
Allocate space for the process.
Initialize the process control block.
Set up the appropriate linkages.
E.g.: add the new process to the linked list used for the scheduling queue.
Create or expand other data structures.
E.g.: maintain an accounting file.
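The creation steps above can be sketched in a few lines. This is a toy simulation: the PID counter, the base-address scheme, and the PCB fields are all invented for illustration.

```python
from itertools import count

_next_pid = count(1)
ready_queue = []          # linkage used for scheduling
process_table = {}

def create_process(image_size):
    pid = next(_next_pid)                 # 1. assign a unique identifier
    base = 0x1000 * pid                   # 2. allocate space (toy scheme)
    pcb = {"pid": pid, "state": "ready",  # 3. initialize the PCB
           "base": base, "size": image_size}
    process_table[pid] = pcb
    ready_queue.append(pid)               # 4. link into the ready queue
    return pid

pid = create_process(4096)
print(pid, process_table[pid]["state"])  # 1 ready
```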
A thread is the smallest unit of processing that can be
performed in an OS.
Each thread has an execution state (Running, Ready, etc.)
and its own execution stack.
In most modern operating systems, a thread exists
within a process – that is, a single process may contain
multiple threads.
On a single processor, multithreading generally occurs
as in multitasking: the processor switches between different threads.
This context switching generally happens frequently
enough that the user perceives the threads or tasks to
be running at the same time.
On a multiprocessor or multi-core system, the
threads or tasks actually do run at the same time, with
each processor or core running a particular thread or task.
• Multithreading: when OS supports multiple threads of
execution within a single process.
• Single threading: when the OS does not recognize the
concept of thread.
• MS-DOS supports a single thread.
• Traditional UNIX supports multiple user processes but only supports one
thread per process.
• Windows 2000, Solaris, Linux, Mach, and OS/2 support multiple threads per process.
Thread Control Block contains a register image, thread priority and thread state information
Threads minimize context-switching time.
Use of threads provides concurrency within a process.
Economy: it is more economical to create and context-switch
threads than processes.
Threads allow utilization of multiprocessor architectures to a greater
scale and efficiency.
Three key states: Running, Ready, Blocked.
There is no per-thread Suspend state, since all threads of a process share the same
address space.
Suspending a process involves suspending all threads
of the process.
Termination of a process terminates all threads
within the process.
Operations associated with a change in thread state:
i. Spawn – create (spawn) another thread
ii. Block – wait for an event
iii. Unblock – move to the Ready state when the event occurs
iv. Finish – deallocate the thread's register context and stacks
1. User Level Threads – (user managed thread)
Diagram Of User-level Thread
o All thread management is done by the application.
o The kernel is not aware of the existence of threads.
o The OS only schedules the process, not the threads within it.
o Programmer using a thread library to manage threads
User-level threads can be implemented on an operating system that does not
support threads natively.
Implementing user-level threads does not require modification of the operating
system; everything is managed by the thread library.
Simple representation: each thread is represented by a thread ID,
program counter, registers and stack, all stored in the user process's address space.
Simple management: creating new threads, switching threads and
synchronization between threads can be done without intervention of the kernel.
Fast and efficient: switching threads is much less expensive
than a system call.
There is a lack of coordination between threads and the operating system
kernel. A process gets one time slice whether it has 1 thread or 10,000
threads within it. It is up to each thread to give up control to other threads.
If one thread makes a blocking system call, the entire process can be
blocked in the kernel, even if other threads in the same process are ready to run.
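The user-level model can be sketched with a tiny cooperative scheduler built on Python generators. The kernel sees a single flow of control; the "threads" switch only when they explicitly yield, which mirrors the library-managed scheduling described above. Names and structure are illustrative.

```python
from collections import deque

def worker(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # voluntarily give up control to the scheduler

def run(threads):
    """Round-robin over generator-based 'threads' until all finish."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)           # resume the thread until its next yield
            ready.append(t)   # still runnable: back of the ready queue
        except StopIteration:
            pass              # thread finished
```

```python
log = []
run([worker("A", 2, log), worker("B", 1, log)])
print(log)  # ['A:0', 'B:0', 'A:1']
```

Note the key limitation from the text: if a worker called a blocking OS function instead of yielding, the whole scheduler (the whole process) would stall.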
2. Kernel-Level Threads
(OS managed threads acting on kernel, an OS core)
All thread management is done by the kernel.
Kernel maintains context information for the process
and the threads.
There is no thread library; instead, there is an API to the kernel's thread facility.
Switching between threads requires the kernel.
Scheduling is done on a thread basis.
The kernel can simultaneously schedule multiple threads from the
same process on multiple processors.
If one thread in a process is blocked, the kernel can schedule
another thread of the same process.
Kernel routines themselves can be multithreaded.
Kernel threads are generally slower to create and manage than
user threads.
Transfer of control from one thread to another within the same
process requires a mode switch to the kernel.
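Python's `threading` module is backed by kernel-level threads, so it can demonstrate the key advantage above: a thread blocked in a system call (here, `sleep`) does not prevent the kernel from scheduling another thread of the same process.

```python
import threading
import time

results = []

def blocker():
    time.sleep(0.2)        # blocks in the kernel
    results.append("blocker done")

def worker():
    results.append("worker ran while blocker slept")

b = threading.Thread(target=blocker)
w = threading.Thread(target=worker)
b.start(); w.start()
b.join(); w.join()
print(results)
```

With user-level threads and no kernel support, the blocking call would have stalled the worker as well.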
User Level VS Kernel Level Threads

User Level Threads:
- Faster to create and manage.
- Implemented by a thread library at the user level.
- Generic: can run on any OS.
- A multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads:
- Slower to create and manage.
- The OS supports creation of kernel threads.
- Specific to the OS.
- Kernel routines themselves can be multithreaded.
1. Many-to-One Relationship
Many user-level threads mapped to a single kernel thread.
2. Many-to-Many Relationship
Allows many user-level threads to be mapped to many kernel threads.
Allows the operating system to create a sufficient
number of kernel threads.
3. One-to-One Relationship
Each user-level thread maps to one kernel thread.
Allows another thread to run when one thread makes a blocking call.
Each CPU has equal access to resources.
Each CPU determines what to run using a standard scheduling algorithm.
The kernel can execute on any processor,
- allowing portions of the kernel to execute in parallel.
Typically each processor does self-scheduling from
the pool of available processes or threads.
[Diagram: SMP organization – processors Proc 1–4, each paired with memory Mem 1–4, connected by a shared interconnect]
1. Simultaneous concurrent processes or threads
2. Scheduling
3. Synchronization
4. Memory management
5. Reliability and fault tolerance
From multiple boards on a shared bus to multiple
processors inside a single chip.
1. Private data are used by a single processor.
2. Shared data are used by multiple processors.
Caching shared data
reduces latency to shared data, memory
bandwidth demand for shared data, and interconnect traffic,
but introduces the cache coherence problem.
Fault-tolerance support is straightforward for shared resources,
for example memory and I/O.
Scheduling involves synchronization of access to the global ready queue.
E.g. a process must be dispatched on only one processor at a time.
Processors: CPU1, CPU2, CPU3, …
When a processor accesses the ready queue:
1. While one processor (e.g. CPU1) accesses the ready queue, all
other processors (CPU2, CPU3, …) must wait;
2. The accessing processor (e.g. CPU1) removes a process
from the ready queue, and dispatches the process on itself;
3. Just before dispatching, that processor makes the ready
queue available again for use by the other CPUs.
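The protocol above boils down to mutual exclusion on the shared ready queue. As a sketch, the worker threads below stand in for CPUs, and a mutex guarantees that only one "processor" manipulates the queue at a time, so no process is ever dispatched twice.

```python
import threading
from collections import deque

ready_queue = deque(range(100))   # 100 ready process IDs
queue_lock = threading.Lock()
dispatched = [[] for _ in range(4)]  # what each "CPU" dispatched

def cpu(cpu_id):
    while True:
        with queue_lock:                 # all other CPUs wait here
            if not ready_queue:
                return
            pid = ready_queue.popleft()  # remove a process from the queue
        dispatched[cpu_id].append(pid)   # dispatch it on this CPU

cpus = [threading.Thread(target=cpu, args=(i,)) for i in range(4)]
for t in cpus: t.start()
for t in cpus: t.join()

total = sorted(p for lst in dispatched for p in lst)
print(total == list(range(100)))  # True: each process dispatched exactly once
```

Releasing the lock immediately after the `popleft` corresponds to step 3: the queue is available again before the process actually runs.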
This approach structures the operating system by removing all
nonessential portions of the kernel and implementing
them as system- and user-level programs.
The microkernel provides minimal process and memory management, plus a
communication facility.
Communication between components of the OS is
provided by message passing.
Extending the OS becomes much easier.
Any changes to the kernel tend to be fewer, since
the kernel is smaller.
Provides more security and reliability.
The drawback is poorer performance, due to increased system
overhead from message passing.
Operating systems developed in the mid to late 1950s were designed with
little concern about structure.
The problems caused by mutual dependence and interaction were grossly underestimated.
In these monolithic operating systems, virtually any procedure can
call any other procedure – the approach
Modular programming techniques were needed to handle this scale of software development.
Layered operating systems were developed in which functions are
organized hierarchically and interaction only takes place between
adjacent layers.
Most or all of the layers execute in kernel mode.
Major changes in one layer can have numerous effects on code in
adjacent layers - many difficult to trace.
And security is difficult to build in because of the many interactions
between adjacent layers.
In a microkernel, only absolutely essential core OS functions should be in
the kernel.
Less essential services and applications are built on the
microkernel and execute in user mode.
Common characteristic is that many services that traditionally
have been part of the OS are now external subsystems that
interact with the kernel and with each other;
These include device drivers, file systems, virtual memory
manager, windowing system, and security services.
The microkernel functions as a message exchange:
It validates messages,
Passes them between components,
Grants access to hardware.
The microkernel also performs a protection function;
it prevents message passing unless exchange is allowed.
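The message-exchange role described above can be sketched as follows. The component names, the allowed-exchange table, and the message format are all hypothetical, chosen only to illustrate validation, routing, and the protection function.

```python
# Which (sender, receiver) exchanges the microkernel permits.
ALLOWED = {("app", "file_system"), ("file_system", "device_driver")}
mailboxes = {"app": [], "file_system": [], "device_driver": []}

def send(src, dst, payload):
    """Validate a message, check the exchange is allowed, then pass it on."""
    if not isinstance(payload, dict):          # validate the message
        raise ValueError("malformed message")
    if (src, dst) not in ALLOWED:              # protection function
        raise PermissionError(f"{src} may not message {dst}")
    mailboxes[dst].append({"from": src, "data": payload})

send("app", "file_system", {"op": "read", "path": "/tmp/x"})
try:
    send("app", "device_driver", {"op": "raw_io"})
except PermissionError:
    print("blocked")  # the disallowed exchange is prevented

print(len(mailboxes["file_system"]))  # 1
```

Here the app may talk to the file system but not directly to the device driver; only the file system may forward work to the driver.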