Operating System 20
Threads
Prof Neeraj Bhargava
Vaibhav Khanna
Department of Computer Science
School of Engineering and Systems Sciences
Maharshi Dayanand Saraswati University Ajmer
Motivation
• Most modern applications are multithreaded
• Threads run within the application
• Multiple tasks within the application can be implemented by separate threads
– Update display
– Fetch data
– Spell checking
– Answer a network request
• Process creation is heavy-weight while thread creation is
light-weight
• Can simplify code, increase efficiency
• Kernels are generally multithreaded
Multithreaded Server Architecture
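This slide is a figure in the original deck; in the usual picture, a client sends a request, the server creates a worker thread to service it, and the listener resumes accepting further requests. A minimal thread-per-request sketch in C with POSIX Pthreads (the echo service, port number, and handle_request() helper are illustrative assumptions, not part of the original slides):

```c
/*
 * Thread-per-request server sketch (POSIX, Pthreads).
 * Assumptions: a simple echo service on port 5000; handle_request()
 * is a hypothetical helper standing in for real request processing.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static void handle_request(int client_fd) {
    char buf[256];
    ssize_t n = read(client_fd, buf, sizeof(buf));   /* read one request */
    if (n > 0)
        write(client_fd, buf, (size_t)n);            /* echo it back */
}

static void *worker(void *arg) {
    int client_fd = (int)(long)arg;
    handle_request(client_fd);      /* service this client */
    close(client_fd);
    return NULL;
}

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    for (;;) {
        /* (1) client sends a request; (2) create a thread to service it;
           (3) resume listening for additional requests */
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0)
            continue;
        pthread_t tid;
        pthread_create(&tid, NULL, worker, (void *)(long)client_fd);
        pthread_detach(tid);        /* thread cleans itself up on exit */
    }
}
```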
Benefits
• Responsiveness – may allow continued execution
if part of process is blocked, especially important
for user interfaces
• Resource Sharing – threads share resources of
process, easier than shared memory or message
passing
• Economy – cheaper than process creation, thread
switching lower overhead than context switching
• Scalability – process can take advantage of
multiprocessor architectures
Multicore Programming
• Multicore or multiprocessor systems are putting pressure on programmers; challenges include:
– Dividing activities
– Balance
– Data splitting
– Data dependency
– Testing and debugging
• Parallelism implies a system can perform more than
one task simultaneously
• Concurrency supports more than one task making
progress
– Single processor / core, scheduler providing concurrency
Multicore Programming (Cont.)
• Types of parallelism
– Data parallelism – distributes subsets of the same data across multiple cores, same operation on each (see the sketch after this list)
– Task parallelism – distributing threads across cores, each thread performing a unique operation
• As # of threads grows, so does
architectural support for threading
– CPUs have cores as well as hardware threads
– Consider Oracle SPARC T4 with 8 cores, and 8
hardware threads per core
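A minimal data-parallelism sketch in C with Pthreads: the same operation (summing) is applied to different subsets of one array, one subset per thread. The array contents, thread count, and chunking are assumptions for illustration; task parallelism would instead give each thread a different operation.

```c
/* Data parallelism sketch: each thread sums its own chunk of the array.
 * Assumptions: 4 threads, array size divisible by the thread count. */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double data[N];
static double partial[NTHREADS];

static void *sum_chunk(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;              /* each thread writes only its own slot */
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_chunk, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);  /* expect 1000000.0 */
    return 0;
}
```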
Concurrency vs. Parallelism
• Concurrent execution on single-core system:
• Parallelism on a multi-core system:
Single and Multithreaded Processes
Amdahl’s Law
• Identifies performance gains from adding additional cores to an
application that has both serial and parallel components
• S is the serial portion
• N is the number of processing cores (formula and worked example below)
• That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores results in a speedup of 1.6 times
• As N approaches infinity, speedup approaches 1 / S
• Serial portion of an application has a disproportionate effect on performance gained by adding additional cores
• But does the law take into account contemporary multicore systems?
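The formula behind these bullets is the standard statement of Amdahl's Law (shown as an image in the original slide), with the 75% / 25% example worked out:

```latex
% Amdahl's Law: upper bound on speedup for serial fraction S and N cores
\[
  \mathrm{speedup} \;\le\; \frac{1}{S + \dfrac{1 - S}{N}}
\]
% Slide's example: S = 0.25, N = 2
\[
  \mathrm{speedup} \;\le\; \frac{1}{0.25 + \dfrac{0.75}{2}}
                 \;=\; \frac{1}{0.625} \;=\; 1.6
\]
% As N grows without bound, the bound approaches 1/S.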
User Threads and Kernel Threads
• User threads - management done by user-level threads library
• Three primary thread libraries (a minimal Pthreads sketch follows this list):
– POSIX Pthreads
– Windows threads
– Java threads
• Kernel threads - Supported by the Kernel
• Examples – virtually all general purpose operating systems,
including:
– Windows
– Solaris
– Linux
– Tru64 UNIX
– Mac OS X
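A minimal POSIX Pthreads sketch, showing thread creation and join through the user-level library API listed above; the runner() function and the summation it performs are illustrative assumptions.

```c
/* Create one thread, wait for it, and collect its result. */
#include <pthread.h>
#include <stdio.h>

static void *runner(void *param) {
    int n = *(int *)param;
    static long sum;              /* result handed back via the return pointer */
    sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    return &sum;
}

int main(void) {
    pthread_t tid;
    int n = 10;
    void *result;

    pthread_create(&tid, NULL, runner, &n);   /* library call, not a fork() */
    pthread_join(tid, &result);               /* block until the thread exits */
    printf("sum of 1..%d = %ld\n", n, *(long *)result);
    return 0;
}
```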
Thread Structure
• A thread, sometimes called a lightweight
process (LWP), is a basic unit of resource
utilization, and consists of a program counter,
a register set, and a stack.
• It shares with peer threads its code section,
data section, and operating-system resources
such as open files and signals, collectively
known as a task.
Thread Structure
• A traditional or heavyweight process is equal to a task with
one thread.
• A task does nothing if no threads are in it, and a thread
must be in exactly one task.
• The extensive sharing makes CPU switching among peer
threads and the creation of threads inexpensive, compared
with context switches among heavyweight processes.
• Although a thread context switch still requires a register set
switch, no memory-management-related work need be
done.
• Like any parallel processing environment, multithreading a
process may introduce concurrency control problems that
require the use of critical sections or locks.
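Because peer threads share the data section, an unsynchronized update such as counter++ is a race. A minimal sketch of guarding it with a critical section using a Pthreads mutex; the counter, loop count, and thread count are assumptions for illustration.

```c
/* Two threads increment a shared counter inside a mutex-protected
 * critical section. Without the lock, updates could be lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared data section */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);             /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);           /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* always 2000000 with the lock */
    return 0;
}
```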
User Level Thread Management
• Also, some systems implement user-level threads in user-level libraries rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel.
• Switching between user-level threads can be done independently
of the operating system and, therefore, very quickly.
• Thus, blocking a thread and switching to another thread is a
reasonable solution to the problem of how a server can handle
many requests efficiently.
• User-level threads do have disadvantages, however. For instance, if
the kernel is single-threaded, then any user-level thread executing a
system call will cause the entire task to wait until the system call
returns.
Advantages of Threads over Processes
• With multiple processes, each process operates independently of
the others; each process has its own program counter, stack
register, and address space.
• This type of organization is useful when the jobs performed by the
processes are unrelated.
• Multiple processes can perform the same task as well. For instance,
multiple processes can provide data to remote machines in a
network file system implementation.
• However, it is more efficient to have one process containing
multiple threads serve the same purpose.
• In the multiple process implementation, each process executes the
same code but has its own memory and file resources.
• One multi-threaded process uses fewer resources than multiple redundant processes, including memory, open files, and CPU scheduling.
Threads vs. Processes
• A thread within a process executes sequentially, and each thread
has its own stack and program counter.
• Threads can create child threads, and can block waiting for system
calls to complete; if one thread is blocked, another can run.
However, unlike processes, threads are not independent of one
another.
• Because all threads can access every address in the task, a thread can read or write over any other thread's stack (see the sketch below).
• This structure does not provide protection between threads. Such
protection, however, should not be necessary.
• Whereas processes may originate from different users, and may be
hostile to one another, only a single user can own an individual task
with multiple threads.
• The threads, in this case, probably would be designed to assist one
another, and therefore would not require mutual protection.
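A short sketch of the point about shared address space: since all threads of a task share one address space, a peer thread can write through a pointer directly into another thread's stack. The variable names and the value written are assumptions for illustration.

```c
/* A peer thread writes into main's stack frame via a shared pointer;
 * there is no protection between threads of the same task. */
#include <pthread.h>
#include <stdio.h>

static int *shared_ptr;                 /* will point into main's stack */

static void *peer(void *arg) {
    (void)arg;
    *shared_ptr = 42;                   /* writes into another thread's stack */
    return NULL;
}

int main(void) {
    int local_on_stack = 0;             /* lives on main's stack */
    shared_ptr = &local_on_stack;

    pthread_t tid;
    pthread_create(&tid, NULL, peer, NULL);
    pthread_join(tid, NULL);

    printf("local_on_stack = %d\n", local_on_stack);   /* prints 42 */
    return 0;
}
```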
Assignment
• Explain Multithreaded Server Architecture and
its benefits
• Explain the concept of multicore programming
• Explain the advantages of threads over
processes
