ch4_EN_BK_Threads.pdf
- 2.
Operating System Concepts Silberschatz, Galvin and Gagne ©2018
Chapter 4: Threads
q Overview
q Multicore Programming
q Multithreading Models
q Thread Libraries
q Implicit Threading
q Threading Issues
q Operating System Examples
- 3.
Objectives
q Identify the basic components of a thread, and contrast threads and
processes
q Describe the benefits and challenges of designing multithreaded
applications
q Illustrate different approaches to implicit threading including thread
pools, fork-join, and Grand Central Dispatch
q Describe how the Windows and Linux operating systems represent
threads
q Design multithreaded applications using the Pthreads, Java, and
Windows threading APIs
- 4.
Motivation
q Most modern applications are multithreaded
q Threads run within application
q Multiple tasks within the application can be implemented by separate
threads
o Update display
o Fetch data
o Spell checking
o Answer a network request
q Process creation is heavy-weight while thread creation is light-weight
q Can simplify code, increase efficiency
q Kernels are generally multithreaded
- 7.
Benefits
q Responsiveness – may allow continued execution if part of process is
blocked, especially important for user interfaces
q Resource Sharing – threads share resources of process, easier than
shared memory or message passing (IPC)
q Economy – cheaper than process creation, thread switching lower
overhead than context switching
q Scalability – process can take advantage of multicore architectures
- 8.
Multicore Programming
q Multicore or multiprocessor systems putting pressure on
programmers, challenges include:
o Dividing activities
o Balance
o Data splitting
o Data dependency
o Testing and debugging
q Parallelism implies a system can perform more than one task
simultaneously
q Concurrency supports more than one task making progress
o Single processor / core, scheduler providing concurrency
- 9.
Concurrency vs. Parallelism
q Concurrent execution on single-core system:
q Parallelism on a multi-core system:
- 10.
Multicore Programming
q Types of parallelism
o Data parallelism – distributes subsets of the same data across multiple
cores, same operation on each
o Task parallelism – distributing threads across cores, each thread
performing unique operation
- 12.
Amdahl’s Law
q Identifies performance gains from adding additional cores to an
application that has both serial and parallel components
o S is serial portion
o N processing cores
o Speedup ≤ 1 / (S + (1 − S) / N)
o That is, if application is 75% parallel / 25% serial, moving from 1 to 2
cores results in speedup of 1.6 times
o As N approaches infinity, speedup approaches 1/S
q Serial portion of an application has disproportionate effect on
performance gained by adding additional cores
q But does the law take into account contemporary multicore systems?
- 14.
User Threads and Kernel Threads
q User threads - management done by user-level threads library
q Three primary thread libraries:
o POSIX Pthreads
o Windows threads
o Java threads
q Kernel threads - supported by the Kernel
q Examples – virtually all general purpose operating systems, including:
o Windows, Linux, Mac OS X
o iOS, Android
- 17.
Many-to-One
q Many user-level threads mapped to single kernel thread
q One thread blocking causes all to block
q Multiple threads may not run in parallel on multicore system because
only one may be in kernel at a time
q Few systems currently use this model
q Examples:
o Solaris Green Threads
o GNU Portable Threads
- 18.
One-to-One
q Each user-level thread maps to one kernel thread
q Creating a user-level thread creates a kernel thread
q More concurrency than many-to-one
q Number of threads per process sometimes restricted due to overhead
q Examples
o Windows
o Linux
- 19.
Many-to-Many Model
q Allows many user level threads to be mapped to many kernel threads
q Allows the operating system to create a sufficient number of kernel
threads
q Windows with the ThreadFiber package
q Otherwise not very common
- 20.
Two-level Model
q Similar to Many-to-Many, except that it allows a user thread to be
bound to kernel thread
- 21.
Thread Libraries
q Thread library provides programmer with API for creating and
managing threads
q Two primary ways of implementing
o Library entirely in user space
o Kernel-level library supported by the OS
- 22.
Pthreads
q May be provided either as user-level or kernel-level
q A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
q Specification, not implementation
q API specifies behavior of the thread library, implementation is up to the
developers of the library
q Common in UNIX operating systems (Linux & Mac OS X)
- 25.
Pthreads Code for Joining 10 Threads
- 28.
Java Threads
q Java threads are managed by the JVM
q Typically implemented using the threads model provided by
underlying OS
q Java threads may be created by:
o Extending Thread class
o Implementing the Runnable interface
o Standard practice is to implement Runnable interface
- 29.
Java Threads
q Implementing Runnable interface:
q Creating a thread:
q Waiting on a thread:
- 30.
Java Executor Framework
q Rather than explicitly creating threads, Java also allows thread
creation around the Executor interface:
q The Executor is used as follows:
- 33.
Implicit Threading
q Growing in popularity: as the number of threads increases, program
correctness becomes harder to ensure with explicit threads
q Creation and management of threads done by compilers and run-time
libraries rather than programmers
q Five methods explored
o Thread Pools
o Fork-Join
o OpenMP
o Grand Central Dispatch
o Intel Threading Building Blocks
- 34.
Thread Pools
q Create a number of threads in a pool where they await work
q Advantages:
o Usually slightly faster to service a request with an existing thread than
create a new thread
o Allows the number of threads in the application(s) to be bound to the size
of the pool
o Separating task to be performed from mechanics of creating task allows
different strategies for running task
4 i.e., Tasks could be scheduled to run periodically
q Windows API supports thread pools:
- 35.
Java Thread Pools
q Three factory methods for creating thread pools in Executors class:
- 37.
Fork-Join Parallelism
q Multiple threads (tasks) are forked, and then joined.
- 42.
Fork-Join Parallelism in Java
q The ForkJoinTask is an abstract base class
q RecursiveTask and RecursiveAction classes extend
ForkJoinTask
q RecursiveTask returns a result (via the return value from the
compute() method)
q RecursiveAction does not return a result
- 43.
OpenMP
q Set of compiler directives
and an API for C, C++,
FORTRAN
q Provides support for parallel
programming in shared-
memory environments
q Identifies parallel regions –
blocks of code that can run
in parallel
q #pragma omp parallel
q Create as many threads as
there are cores
- 45.
Grand Central Dispatch
q Apple technology for macOS and iOS operating systems
q Extensions to C, C++ and Objective-C languages, API, and run-time
library
q Allows identification of parallel sections
q Manages most of the details of threading
q Block is in “^{ }”:
^{ printf("I am a block"); }
q Blocks placed in dispatch queue
o Assigned to available thread in thread pool when removed from queue
- 46.
Grand Central Dispatch
q Two types of dispatch queues:
o serial – blocks removed in FIFO order, queue is per process, called main
queue
4 Programmers can create additional serial queues within program
o concurrent – removed in FIFO order but several may be removed at a
time
4 Four system wide queues divided by quality of service:
– QOS_CLASS_USER_INTERACTIVE
– QOS_CLASS_USER_INITIATED
– QOS_CLASS_UTILITY
– QOS_CLASS_BACKGROUND
- 47.
Grand Central Dispatch
q For the Swift language a task is defined as a closure – similar to a
block, minus the caret
q Closures are submitted to the queue using the dispatch_async()
function:
- 48.
Intel Threading Building Blocks (TBB)
q Template library for designing parallel C++ programs
q A serial version of a simple for loop
q The same for loop written using TBB with parallel_for
statement:
- 49.
Threading Issues
q Semantics of fork() and exec() system calls
q Signal handling
o Synchronous and asynchronous
q Thread cancellation of target thread
o Asynchronous or deferred
q Thread-local storage
q Scheduler Activations
- 50.
Semantics of fork() and exec()
q Does fork() duplicate only the calling thread or all threads?
o Some UNIXes have two versions of fork
q exec() usually works as normal – replace the running process
including all threads
- 51.
Signal Handling
q Signals are used in UNIX systems to notify a process that a
particular event has occurred.
q A signal handler is used to process signals
o Signal is generated by particular event
o Signal is delivered to a process
o Signal is handled by one of two signal handlers
4 default
4 user-defined
q Every signal has a default handler that kernel runs when handling
signal
o User-defined signal handler can override default
o For a single-threaded process, the signal is delivered to the process
- 52.
Signal Handling (Cont.)
q Where should a signal be delivered for multi-threaded?
o Deliver the signal to the thread to which the signal applies
o Deliver the signal to every thread in the process
o Deliver the signal to certain threads in the process
o Assign a specific thread to receive all signals for the process
- 53.
Thread Cancellation
q Terminating a thread before it has finished
q Thread to be canceled is target thread
q Two general approaches:
o Asynchronous cancellation
terminates the target thread
immediately
o Deferred cancellation allows the
target thread to periodically
check if it should be cancelled
q Pthread code to create and
cancel a thread:
- 54.
Thread Cancellation (Cont.)
q Invoking thread cancellation requests cancellation, but actual
cancellation depends on thread state
q If thread has cancellation disabled, cancellation remains pending until
thread enables it
q Default type is deferred
o Cancellation only occurs when thread reaches cancellation point
4 i.e., pthread_testcancel()
4 Then cleanup handler is invoked
q On Linux systems, thread cancellation is handled through signals
- 55.
Thread Cancellation in Java
q Deferred cancellation uses the interrupt() method, which sets the
interrupted status of a thread.
q A thread can then check to see if it has been interrupted:
- 56.
Thread-Local Storage
q Thread-local storage (TLS) allows each thread to have its own copy
of data
q Useful when you do not have control over the thread creation process
(i.e., when using a thread pool)
q Different from local variables
o Local variables visible only during single function invocation
o TLS visible across function invocations
q Similar to static data
o TLS is unique to each thread
- 57.
Scheduler Activations
q Both M:M and Two-level models require communication to maintain
the appropriate number of kernel threads allocated to the application
q Typically use an intermediate data structure between user and kernel
threads – lightweight process (LWP)
o Appears to be a virtual processor on which process can schedule user
thread to run
o Each LWP attached to kernel thread
o How many LWPs to create?
q Scheduler activations provide upcalls - a
communication mechanism from the kernel
to the upcall handler in the thread library
q This communication allows an application to
maintain the correct number of kernel threads
- 59.
Windows Threads
q Windows API – primary API for Windows applications
q Implements the one-to-one mapping, kernel-level
q Each thread contains
o A thread id
o Register set representing state of processor
o Separate user and kernel stacks for when thread runs in user mode or
kernel mode
o Private data storage area used by run-time libraries and dynamic link
libraries (DLLs)
q The register set, stacks, and private storage area are known as the
context of the thread
- 60.
Windows Threads (Cont.)
q The primary data structures of a thread include:
o ETHREAD (executive thread block) – includes pointer to process to
which thread belongs and to KTHREAD, in kernel space
o KTHREAD (kernel thread block) – scheduling and synchronization info,
kernel-mode stack, pointer to TEB, in kernel space
o TEB (thread environment block) – thread id, user-mode stack, thread-
local storage, in user space
- 62.
Linux Threads
q Linux refers to them as tasks rather than threads
q Thread creation is done through clone() system call
q clone() allows a child task to share the address space of the
parent task (process)
o Flags control behavior
q struct task_struct points to process data structures (shared or
unique)