2. What is a Thread
In an operating system (OS), a thread is the smallest unit of execution within a process.
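As a minimal sketch of this idea, the following Python snippet starts several threads inside one process; because they belong to the same process, they all share the same memory (here, the `results` list). The names `worker` and `task-N` are illustrative, not part of any standard API.

```python
import threading

results = []

def worker(name):
    # Each worker runs as a separate thread within the same
    # process, so all threads share the one `results` list.
    results.append((name, threading.current_thread().name))

threads = [threading.Thread(target=worker, args=(f"task-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 3: one entry per thread, all written to shared memory
```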
3. Thread Scheduling
Thread scheduling can be defined as the process of
determining the order and timing of execution for
individual threads within a system. Each thread
represents a sequence of instructions that needs to be
executed, and the scheduler is responsible for making
decisions about which thread should run next and for
how long.
4. A key distinction in thread scheduling is between user-level and kernel-
level threads.
User-Level Threads:
• User-level threads are managed by a thread library, and the kernel
is unaware of them.
• The thread creation, scheduling, and management are handled by
the application or a user-level library, without involving the
operating system.
• The thread library decides which thread of the process runs on
which lightweight process (LWP) and for how long.
5. Kernel-Level Threads:
• Kernel-level threads are managed by the operating system kernel,
which is responsible for their creation, scheduling, and management.
• Lightweight processes act as intermediaries between user-level
threads and kernel-level threads.
Example: When you open a new tab to visit a website, the browser
creates a user-level thread (representing the tab) and associates it
with a lightweight process. The lightweight process communicates
with the operating system kernel through kernel-level threads to
perform tasks like fetching data over the network, managing local
storage, and rendering graphics on the screen.
6. Contention Scope:
• The word "contention" here refers to the competition
among user-level threads for access to kernel
resources.
• It is defined by the application developer using the
thread library.
7. Types:
1. Process contention scope: The contention takes
place among threads within the same process. (The
priority is specified by the application developer
during thread creation.)
2. System contention scope: The contention takes
place among all threads in the system.
8. Understanding the priority
levels and scheduling algorithms
is essential for effective thread
management. Proper task
allocation and CPU utilization
are key factors in achieving
optimal performance.
10. Preemptive Scheduling:
• In preemptive scheduling, the operating
system has the ability to interrupt a currently
running thread and allocate the CPU to
another thread.
• When a higher-priority thread becomes runnable,
the scheduler initiates a context switch.
11. Non-Preemptive Scheduling:
• In non-preemptive (or cooperative) scheduling, a
running thread continues execution until it
completes its task.
• The operating system does not forcibly interrupt
the running thread.
• Non-preemptive scheduling can be simpler to
implement but may lead to less responsive
systems, especially if a high-priority thread is
waiting for a lower-priority thread to finish.
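The contrast above can be sketched with a toy single-CPU simulation. The task set below (names, arrival times, burst lengths, and priorities) is hypothetical data for illustration only; a lower priority number means more urgent.

```python
# Each task is (name, arrival_time, burst, priority); lower priority = more urgent.
tasks = [("low", 0, 4, 2), ("high", 1, 2, 1)]

def nonpreemptive(tasks):
    # A running task keeps the CPU until it finishes, even if a
    # higher-priority task arrives in the meantime.
    time, order = 0, []
    for name, arrival, burst, _ in sorted(tasks, key=lambda t: t[1]):
        time = max(time, arrival) + burst
        order.append((name, time))  # (task, completion time)
    return order

def preemptive(tasks):
    # At every tick the highest-priority runnable task gets the CPU,
    # interrupting (preempting) whatever was running before.
    remaining = {name: burst for name, _, burst, _ in tasks}
    time, order = 0, []
    while remaining:
        ready = [t for t in tasks if t[0] in remaining and t[1] <= time]
        if not ready:
            time += 1
            continue
        name = min(ready, key=lambda t: t[3])[0]
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            order.append((name, time))
    return order

print(nonpreemptive(tasks))  # [('low', 4), ('high', 6)] — high waits for low
print(preemptive(tasks))     # [('high', 3), ('low', 6)] — high preempts low
```

The high-priority task finishes much earlier under preemption, which is exactly the responsiveness gap described above.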
12. Scheduling Algorithms
Various scheduling algorithms such as
Round Robin, Shortest Job First, and
Multi-Level Feedback Queue offer
different approaches to task
dispatching. Each algorithm has its own
impact on system performance and
fairness.
13. Round Robin:
• This algorithm follows a simple, cyclic approach,
allocating a fixed time slice to each task in a
circular manner.
• It ensures fairness by providing equal opportunities
to all tasks, preventing any single task from
monopolizing the CPU.
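A minimal sketch of the cyclic time-slicing described above, using a queue of hypothetical tasks with made-up burst times:

```python
from collections import deque

def round_robin(bursts, quantum):
    # bursts: {task: remaining CPU time}. Each task runs for at most
    # `quantum` units per turn, then goes to the back of the queue.
    queue = deque(bursts.items())
    completion, time = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # not finished: re-queue
        else:
            completion.append((name, time))        # finished: record time
    return completion

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# [('C', 5), ('B', 8), ('A', 9)] — every task gets regular turns
```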
14. Shortest Job First:
• SJF prioritizes tasks based on their burst time,
executing the shortest job first.
• However, predicting the exact burst time in
practical systems can be challenging, making it
sensitive to inaccurate estimations.
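A non-preemptive SJF sketch, assuming (as the bullet warns is unrealistic) that burst times are known exactly and all jobs arrive at time 0:

```python
def sjf(bursts):
    # Non-preemptive SJF: run the shortest known job first.
    # In real systems the burst times would only be estimates.
    time, order = 0, []
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        time += burst
        order.append((name, time))  # (job, completion time)
    return order

jobs = {"A": 6, "B": 2, "C": 4}  # hypothetical burst times
print(sjf(jobs))  # [('B', 2), ('C', 6), ('A', 12)]
```

Note that if the estimate for "B" had been wrong and it actually ran longer than "C", the schedule would no longer be optimal, which is the sensitivity the text describes.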
15. Multi-Level Feedback Queue:
• MLFQ operates with multiple priority levels, typically High,
Medium, and Low. Each priority level represents a different
queue, and tasks move between these queues based on their
behavior.
• A task from the highest-priority queue is given the CPU to
execute. If the task yields or finishes before its time
quantum expires, it stays at the same priority level or may
be promoted to a higher one. If it uses up its entire time
quantum, it is demoted to a lower-priority queue.
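A simplified MLFQ sketch along these lines: three queues (High, Medium, Low) with growing time quanta, where a task that uses its full quantum is demoted. The quanta and task set are made up for illustration, and this sketch omits I/O, so tasks are never promoted back up.

```python
from collections import deque

def mlfq(tasks, quanta=(1, 2, 4)):
    # Three priority queues with increasing time quanta.
    # Using a full quantum demotes a task one level (bounded at the bottom).
    queues = [deque(), deque(), deque()]
    for name, burst in tasks:
        queues[0].append((name, burst))  # every task starts at High
    time, finished = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        time += run
        if remaining > run:
            # Used the whole quantum without finishing: demote.
            queues[min(level + 1, 2)].append((name, remaining - run))
        else:
            finished.append((name, time))
    return finished

print(mlfq([("A", 6), ("B", 1)]))
# [('B', 2), ('A', 7)] — the short task finishes quickly; the long one sinks
```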
17. Thread Synchronization
Proper synchronization mechanisms
such as mutexes and semaphores
are essential for avoiding race
conditions and ensuring data
integrity. Synchronization
mechanisms should be designed to
avoid deadlock situations.
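As a minimal example of a mutex preventing a race condition, the snippet below guards a shared counter with Python's `threading.Lock`. Without the lock, the read-modify-write of `counter` could interleave across threads and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write of `counter` atomic
        # with respect to the other threads, preventing lost updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: every increment is counted
```

Because every thread acquires the same single lock, this pattern also cannot deadlock; deadlock risks arise when code acquires multiple locks in inconsistent orders.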
18. Conclusion
Optimizing thread scheduling is
essential for achieving system
efficiency and responsiveness. By
understanding scheduling algorithms
and adapting to evolving workloads,
we can ensure optimal resource
utilization and performance.