2. Lecture Contents
Lecture 1: Processes and Threads
Lecture 2: Thread Functionality
Lecture 3: Types of Threads
Lecture 4: Combined Approaches
FACULTY OF COMPUTER SCIENCE, UNIVERSITY OF COMPUTER STUDIES
3. Lecture 1: Processes and Threads
Processes and Threads
Multithreading
4. Learning Objectives
To distinguish between processes and threads.
To explain the basic design issues for threads.
5. Processes and Threads
The unit of resource ownership is referred to as a process or task.
The unit of dispatching is referred to as a thread or lightweight process.
6. Processes and Threads
Traditional processes have two characteristics:
Resource Ownership
A process includes a virtual address space to hold the process image, plus ownership of resources.
The OS provides protection to prevent unwanted interference between processes with respect to resources.
Scheduling/Execution
A process follows an execution path that may be interleaved with other processes.
A process has an execution state (Running, Ready, etc.) and a dispatching priority, and is scheduled and dispatched by the OS.
7. Multithreading
Multithreading: the ability of an OS to support multiple, concurrent paths of execution within a single process.
Figure 4.1 Threads and Processes
8. Multithreading
Single-Threaded Approach
A single execution path per process, in which the concept of a thread is not recognized, is referred to as a single-threaded approach.
There is no distinction between threads and processes.
MS-DOS supports a single user process and a single thread.
Some UNIX variants support multiple user processes but only one thread per process.
Figure 4.2 Single-Threaded Process Model
9. Multithreading
Multithreaded Approach
There is a single process control block and user address space, but a separate stack for each thread.
The Java run-time environment is a single process with multiple threads.
Multiple processes and threads are found in Windows, Solaris, and many modern versions of UNIX.
Figure 4.2 Multithreaded Process Model
10. Multithreading
Multithreading Environment
A process has:
A virtual address space which holds the process image
Protected access to processors, other processes (for inter-process communication), files, and I/O resources (devices and channels)
Each thread within a process has:
An execution state (Running, Ready, etc.)
A saved thread context when not running
An execution stack
Some per-thread static storage for local variables
Access to the memory and resources of its process
11. Multithreading
Benefits of Multithreading
It takes less time to create a new thread than a process.
It takes less time to terminate a thread than a process.
Switching between two threads takes less time than switching between processes.
Threads can communicate with each other without invoking the kernel.
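The last benefit follows from threads sharing one address space. Here is a minimal Python sketch (not from the lecture) in which a producer thread hands data to a consumer through an ordinary in-memory object; no kernel IPC mechanism such as a pipe or message queue is involved (the Event is used only to order the two threads):

```python
import threading

shared = []                  # lives in the process's address space
done = threading.Event()

def producer():
    for i in range(3):
        shared.append(i * 10)   # a plain memory write, not an IPC call
    done.set()                  # let the consumer proceed

def consumer(out):
    done.wait()                 # wait until the producer finishes
    out.extend(shared)          # read the very same memory directly

result = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(result,))
t1.start(); t2.start()
t1.join(); t2.join()
print(result)                   # -> [0, 10, 20]
```

Two separate processes would instead need a kernel-mediated channel to exchange this list.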
12. Multithreading
Thread Use in a Single-User System
Foreground and background work
Speed of execution
Modular program structure
Asynchronous processing
13. Summary
In this lecture, you learned about:
A process is the unit of resource ownership, and a thread is the unit of dispatching.
Multiple threads within the same process can execute simultaneously on different processors.
14. Lecture 2: Thread Functionality
Thread States
Thread Synchronization
15. Learning Objectives
To explain the four basic thread operations associated with a change in thread state.
To describe the synchronization of threads.
16. Thread States
The key states for a thread are Running, Ready, and Blocked (there is no per-thread Suspend state, since suspension applies to the process as a whole).
There are four basic thread operations associated with a change in thread state:
Spawn: A thread within a process may spawn another thread within the same process, providing an instruction pointer and argument for the new thread.
Block: When a thread needs to wait for an event, it blocks (saving its registers, program counter, and stack pointer).
Unblock: When the event for which a thread is blocked occurs, the thread is moved to the Ready queue.
Finish: When a thread completes, its register context and stacks are deallocated.
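The four operations can be sketched with Python's threading module; this is an illustrative mapping, not part of the lecture: Spawn corresponds to creating and starting a Thread, Block to Event.wait, Unblock to Event.set, and Finish to the target function returning (reclaimed via join):

```python
import threading

event = threading.Event()
log = []

def worker(name):
    log.append(f"{name}: running")   # the spawned thread is dispatched
    event.wait()                     # Block: wait for the event
    log.append(f"{name}: resumed")   # runs once unblocked

t = threading.Thread(target=worker, args=("T1",))  # Spawn
t.start()
event.set()                          # Unblock: waiting thread becomes Ready
t.join()                             # Finish: wait for completion
print(log)                           # -> ['T1: running', 'T1: resumed']
```

The log order is deterministic because both appends happen sequentially inside the same worker thread, regardless of when the main thread sets the event.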
17. Thread States
Remote Procedure Call (RPC) Using a Single Thread
Figure 4.3 Remote Procedure Call (RPC) Using Threads
18. Thread States
Remote Procedure Call (RPC) Using One Thread per Server
Figure 4.3 Remote Procedure Call (RPC) Using Threads
19. Thread States
Multithreading on a Uniprocessor
Figure 4.4 Multithreading Example on a Uniprocessor
20. Thread Synchronization
It is necessary to synchronize the activities of the various threads:
All threads of a process share the same address space and other resources.
Any alteration of a resource by one thread affects the other threads in the same process.
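A minimal sketch of why this matters, using Python's threading.Lock (an illustration, not from the lecture): four threads increment one shared counter, and because `counter += 1` is a read-modify-write on shared memory, the lock is what guarantees no update is lost when the threads interleave:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:            # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # -> 40000
```

Without the lock the increments from different threads could interleave and overwrite one another, leaving the counter below 40000.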
21. Summary
In this lecture, you learned about:
Threads, which have execution states and may synchronize with one another.
Multiprogramming, which enables the interleaving of multiple threads within multiple processes on a uniprocessor.
22. Lecture 3: Types of Threads
User-Level Threads (ULTs)
Kernel-Level Threads (KLTs)
23. Learning Objectives
To illustrate the difference between user-level threads and kernel-level threads.
To describe the relationship between user-level threads and processes.
24. Types of Threads
Note: we are talking about threads for user processes; both ULTs and KLTs execute application code in user mode. The OS kernel may also have threads of its own, but that is not what we are discussing here.
User-Level Threads Kernel-Level Threads
25. User-Level Threads (ULTs)
In a pure ULT facility:
Thread management is done by the application.
The kernel is not aware of the existence of threads.
The threads library contains code for:
creating and destroying threads
passing messages and data between threads
scheduling thread execution
saving and restoring thread contexts
Figure 4.5 (a) Pure User-Level Threads
26. User-Level Threads (ULTs)
Relationships Between ULT States and Process States
Figure 4.6 Examples of the Relationships between User-Level Thread States and Process States
Possible transitions from 4.6(a):
Case 1: 4.6(a) → 4.6(b) [system call]
Case 2: 4.6(a) → 4.6(c) [clock interrupt]
Case 3: 4.6(a) → 4.6(d) [thread transition]
27. User-Level Threads (ULTs)
Advantages
Thread switching does not require kernel-mode privileges (no mode switches).
Scheduling can be application specific.
ULTs can run on any OS.
28. User-Level Threads (ULTs)
Disadvantages
In a typical OS, many system calls are blocking. As a result, when a ULT executes a blocking system call, not only is that thread blocked, but all of the threads within the process are blocked.
In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing.
Overcoming these drawbacks
Jacketing: converting a blocking system call into a nonblocking system call.
Writing an application as multiple processes rather than multiple threads.
29. Kernel-Level Threads (KLTs)
In a pure KLT facility:
Thread management is done by the kernel.
No thread management is done by the application; there is simply an API to the kernel thread facility.
Windows is an example of this approach.
Figure 4.5 (b) Pure Kernel-Level Threads
30. Kernel-Level Threads (KLTs)
Advantages
The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
If one thread in a process is blocked, the kernel can schedule another thread of the same process.
Kernel routines themselves can be multithreaded.
31. Kernel-Level Threads (KLTs)
Disadvantages
The transfer of control from one thread to another within the same process requires a mode switch to the kernel.

Operation     User-Level Threads   Kernel-Level Threads   Processes
Null Fork     34                   948                    11,300
Signal Wait   37                   441                    1,840
Table 4.1 Thread and Process Operation Latencies (μs)
32. Summary
In this lecture, you learned about:
ULTs, in which thread management is done by the application.
KLTs, in which thread management is done by the kernel.
Advantages and disadvantages of both thread types, and ways of overcoming the problems.
33. Lecture 4: Combined Approaches
Combined Approaches (User-Level Threads and Kernel-Level Threads)
Jacketing
Multicore and Multithreading
34. Learning Objectives
To describe an approach that combines user-level threads and kernel-level threads.
To explain the relationship between threads and processes, and how a multicore system supports a single application with multiple threads.
35. Combined Approaches
In a combined system:
Thread creation is done completely in user space.
Thread scheduling and synchronization are done in user or kernel space.
The multiple ULTs from a single application are mapped onto some number of KLTs.
Multiple threads within the same application can run in parallel on multiple processors (many-to-many).
A blocking system call need not block the entire process.
Solaris is a good example of this approach.
Figure 4.5 (c) Combined
36. Combined Approaches

Threads : Processes | Description | Example Systems
1:1 | Each thread of execution is a unique process with its own address space and resources. | Traditional UNIX implementations
M:1 | A process defines an address space and dynamic resource ownership. Multiple threads may be created and executed within that process. | Windows NT, Solaris, Linux, OS/2, OS/390, MACH
1:M | A thread may migrate from one process environment to another. This allows a thread to be easily moved among distinct systems. | Ra (Clouds), Emerald
M:M | Combines attributes of M:1 and 1:M cases. | TRIX

Table 4.2 Relationship between Threads and Processes
37. Jacketing
The purpose of jacketing is to convert a blocking system call into a nonblocking system call:
Instead of directly calling a system I/O routine, a thread calls an application-level I/O jacket routine.
Within this jacket routine is code that checks whether the I/O device is busy.
If it is, the thread enters the Blocked state and passes control to another thread.
When this thread is later given control again, the jacket routine checks the I/O device again.
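The steps above can be sketched as a toy cooperative ULT scheduler in Python. Everything here is illustrative (the `Device`, `jacket_read`, and the round-robin loop are all hypothetical): generators stand in for user-level threads, `yield` stands in for passing control to another thread, and the jacket routine polls the device instead of issuing a blocking call:

```python
import collections

class Device:
    """Hypothetical I/O device that reports busy for a few polls."""
    def __init__(self, busy_for):
        self.busy_for = busy_for
    def is_busy(self):
        self.busy_for -= 1
        return self.busy_for >= 0
    def read(self):
        return "data"

def jacket_read(device):
    # Jacket routine: check the device instead of blocking on it.
    while device.is_busy():
        yield                              # pass control to another thread
    return device.read()                   # device free: safe to read now

def io_thread(device, results):
    data = yield from jacket_read(device)  # may yield several times
    results.append(("io", data))

def cpu_thread(results):
    for i in range(2):
        results.append(("cpu", i))
        yield                              # be cooperative

results = []
ready = collections.deque([io_thread(Device(busy_for=2), results),
                           cpu_thread(results)])
while ready:                               # round-robin user-level scheduler
    t = ready.popleft()
    try:
        next(t)
        ready.append(t)                    # still runnable: requeue
    except StopIteration:
        pass                               # thread finished
print(results)                             # -> [('cpu', 0), ('cpu', 1), ('io', 'data')]
```

While the I/O "thread" waits for the device, the CPU-bound thread keeps making progress inside the same process, which is exactly what jacketing buys a pure ULT system.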
38. Multicore and Multithreading
Multithreading and multicore chips have the potential to improve the performance of applications that have large amounts of parallelism.
Gaming, simulations, etc. are examples.
Performance doesn't necessarily scale linearly with the number of cores.
Amdahl's Law
Speedup depends on the amount of code that must be executed sequentially:

Speedup = (time to execute program on a single processor) / (time to execute program on N parallel processors)
        = 1 / ((1 - f) + f/N)

where f is the fraction of the code that is parallelizable.
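The formula is easy to evaluate directly. A small Python sketch (the function name is ours, not from the lecture) showing how sub-linear the scaling is: with 90% of the code parallelizable, 8 cores give well under an 8x speedup, and even infinitely many cores cannot exceed 1/(1-f) = 10x:

```python
def amdahl_speedup(f, n):
    """Amdahl's Law: f = parallelizable fraction, n = number of processors."""
    return 1.0 / ((1.0 - f) + f / n)

print(round(amdahl_speedup(0.9, 8), 2))   # -> 4.71, far below the ideal 8x
```

The serial fraction (1 - f) dominates as n grows, which is why Figure 4.7's curves flatten out.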
39. Multicore and Multithreading
Figure 4.7 Performance Effect of Multiple Cores
(a) Speedup with 0%, 2%, 5%, and 10% sequential portions; (b) Speedup with overheads
40. Summary
In this lecture, you learned about:
The combined approach, in which multiple threads within the same application can run in parallel on multiple processors.
Multicore systems, which support a single application with multiple threads but raise performance issues.