  • Besides memory state and processor state, there is also execution state. In this class we use a simple three-state model of process execution: at all times a process is in exactly one of the three states. We assume a single processor, so there is only ever one process in the running state; there are queues associated with the waiting and ready states (more on this later). As an example, consider a process making state transitions when it performs I/O: a READ operation causes two process transitions, one from the running to the waiting state and one from the ready to the running state.

Ch04 Presentation Transcript

  • Chapter 4: Threads
  • Chapter 4: Threads
    - Overview
    - Multithreading Models
    - Threading Issues
    - Pthreads
    - Windows XP Threads
    - Linux Threads
    - Java Threads
  • Thread: Introduction
    - Each process has (1) its own address space and (2) a single thread of control.
    - The process model embodies two concepts: (1) resource grouping and (2) execution.
    - Sometimes it is useful to separate the two.
  • Unit of Resource Ownership
    - A process owns an address space, open files, child processes, accounting information, signal handlers, and so on.
    - If these are grouped together in the form of a process, they can be managed more easily.
  • Unit of Dispatching
    - A path of execution consists of:
      - Program counter: which instruction is running.
      - Registers: hold the current working variables.
      - Stack: contains the execution history, with one entry for each procedure called but not yet returned.
      - State.
    - Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.
    - Threads are also called lightweight processes.
  • It is better to distinguish between the two concepts.
    - A heavyweight process bundles the per-process items (address space and global variables, open files, child processes, accounting information, signal handlers) together with the per-thread items (program counter, registers, stack, state).
    - With multiple threads per process, the two are split: the unit of resource ownership (address space and globals, open files, child processes, accounting information, signal handlers) is shared, while each lightweight unit of dispatch keeps its own program counter, registers, stack, and state.
  • Threads allow you to multiplex which of these resources?
    1. CPU
    2. Memory
    3. PCBs
    4. Open files
    5. User authentication structures
  • Thread
    - A thread is a basic unit of CPU utilization. It consists of a thread ID, a program counter, a register set, and a stack.
    - A thread shares the following with its peer threads (all the other threads in the same task): the code section, the data section, and any OS resources available to the task.
    - The first thread starts execution with int main(int argc, char *argv[]).
    - To the scheduling part of the OS, threads appear just like any other process.
    - Threads allow multiple execution paths in the same process environment (see the sketch below).
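    A minimal Pthreads sketch (not part of the slides), assuming a POSIX system; the names worker and shared_message are mine. The string lives in the shared data section, while each thread's local variable lives on its own private stack.

      #include <pthread.h>
      #include <stdio.h>

      /* Shared data section: visible to every thread in the process. */
      const char *shared_message = "hello from the shared data section";

      static void *worker(void *arg) {
          int my_id = (int)(long)arg;          /* lives on this thread's private stack */
          printf("thread %d sees: %s\n", my_id, shared_message);
          return NULL;
      }

      int main(int argc, char *argv[]) {       /* the first thread starts execution here */
          (void)argc; (void)argv;              /* unused */
          pthread_t tid[3];
          for (long i = 0; i < 3; i++)
              pthread_create(&tid[i], NULL, worker, (void *)i);   /* new execution paths */
          for (int i = 0; i < 3; i++)
              pthread_join(tid[i], NULL);      /* wait for the peer threads to finish */
          return 0;
      }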
  • Threads vs. Processes
    Threads:
    - A thread has no data segment or heap of its own.
    - A thread cannot live on its own; it must live within a process.
    - There can be more than one thread in a process; the first thread calls main and uses the process's stack.
    - Inexpensive creation and inexpensive context switching.
    - If a thread dies, its stack is reclaimed.
    - Inter-thread communication is via shared memory.
    Processes:
    - A process has code, data, heap, and other segments.
    - There must be at least one thread in a process.
    - Threads within a process share code, data, heap, and I/O, but each has its own stack and registers.
    - Expensive creation and expensive context switching.
    - If a process dies, its resources are reclaimed and all its threads die.
    - Inter-process communication goes through the OS and involves data copying.
  • Processes vs. Threads: (a) three threads, each running in a separate address space; (b) three threads sharing the same address space.
  • Context switch time for which entity is greater?
    1. Process
    2. Thread
  • The Thread Model: each thread has its own stack.
  • Single and Multithreaded Processes A traditional heavy weight process is same as task with one thread. It has a single thread of control. If a process is multi thread, then that means more than one part of the thread is executing at one time. Multi threading can be useful in programs such as web browsers where you can wish to download a file , view an animation and print something at the same time.Operating System Concepts 4.13 Silberschatz, Galvin and Gagne
  • Single and Multithreaded Processes (figure)
  • Implementing Threads
    - Processes define an address space; threads share that address space.
    - The Process Control Block (PCB) contains process-specific information: owner, PID, heap pointer, priority, active thread, and pointers to thread information.
    - The Thread Control Block (TCB) contains thread-specific information: stack pointer, PC, thread state (running, ...), register values, a pointer to the PCB, and so on.
    - The process's address space holds the code, initialized data, heap, mapped segments and DLLs, and one stack per thread.
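    Purely illustrative C structures (hypothetical, not any real kernel's layout; the field names follow the slide) showing how process-wide state can live in a PCB while per-thread state lives in a TCB:

      #include <stdint.h>

      struct tcb;                          /* forward declaration */

      /* Process Control Block: one per process, shared by all of its threads. */
      struct pcb {
          int         pid;
          int         owner;
          void       *heap_ptr;            /* heap pointer                   */
          int         priority;
          struct tcb *active_thread;       /* currently running thread       */
          struct tcb *threads;             /* pointers to thread information */
      };

      enum thread_state { T_READY, T_RUNNING, T_WAITING, T_DONE };

      /* Thread Control Block: one per thread. */
      struct tcb {
          uintptr_t         pc;            /* program counter                */
          uintptr_t         sp;            /* stack pointer                  */
          uintptr_t         regs[16];      /* saved register values          */
          enum thread_state state;
          struct pcb       *process;       /* pointer back to the owning PCB */
          struct tcb       *next;          /* next thread of this process    */
      };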
  • Threads' Life Cycle
    - Threads (just like processes) go through a sequence of start, ready, running, waiting, and done states.
  • Benefits of Multi-Threading
    - Responsiveness: multithreading increases responsiveness. Because the process consists of more than one thread, if one thread blocks or is busy with a lengthy calculation, other threads of the process can still execute, so the user keeps getting responses from the running application.
    - Resource sharing: all threads belonging to one process share the memory and resources of that process, and this allows an application to have several different threads of activity within the same address space.
  • Benefits of Multi-Threading (continued)
    - Economy: allocating memory and resources for process creation is costly. Because all threads of a process share that process's resources, it is more economical to create and context-switch threads.
    - Utilization of multiprocessor architectures: an MP architecture allows parallel processing, the most efficient way of processing. A single-threaded process can run on only one CPU even if more processors are available. Multithreading on an MP system increases concurrency: if a process is divided into multiple threads, those threads can execute simultaneously on different processors.
  • Types of Threads
    - There are two types of threads: kernel threads and user threads.
  • User Threads
    - User-level threads are not seen by the operating system and are very fast: switching from one thread to another within a single process does not require a context switch, since the same process is still executing.
    - However, if the currently executing thread blocks, the rest of the process may also block: if the OS uses only one kernel thread for this process, the thread the kernel sees is the blocked one, so the kernel assumes the whole process is blocked.
    - Thread management is done by a user-level threads library.
    - Three primary thread libraries: POSIX Pthreads, Win32 threads, Java threads.
  • Kernel Threads
    - Kernel-supported threads are seen by the operating system and must be scheduled by the operating system. One multithreaded process may be backed by multiple kernel threads.
    - Examples: Windows XP/2000, Solaris, Linux, Tru64 UNIX, Mac OS X.
  • User-Level vs. Kernel Threads
    User-level:
    - Managed by the application; the kernel is not aware of the threads.
    - Context switching is cheap.
    - Create as many as needed.
    - Must be used with care.
    Kernel-level:
    - Managed by the kernel; consumes kernel resources.
    - Context switching is expensive.
    - Number limited by kernel resources.
    - Simpler to use.
    Key issue: kernel threads provide virtual processors to user-level threads, but if all of the kernel threads block, then all user-level threads block, even if the program logic would allow them to proceed.
  • Thread Libraries
    - A thread library provides the programmer with an API for creating and managing threads.
    - Two primary ways of implementing:
      - Entirely in user space with no kernel support: all code and data structures exist in user space, so invoking a function in the library results in a local function call in user space, not a system call.
      - A kernel-level library supported by the OS: code and data structures for the library exist in kernel space, so invoking a function in the library's API typically results in a system call to the kernel.
  • Thread Libraries
    - Three main thread libraries are in use today: POSIX Pthreads, Win32, and Java.
  • Multithreading Models
    - Many-to-One
    - One-to-One
    - Many-to-Many
  • Many-to-One
    - Many user-level threads are mapped to a single kernel thread.
    - Efficient because it is implemented in user space, but a process using this model blocks entirely if any thread makes a blocking system call.
    - Only one thread can access the kernel at a time, so threads cannot run in parallel on a multiprocessor.
    - Examples: Solaris Green Threads; GNU Portable Threads (GNU, "GNU's Not Unix", is an operating system composed of free software).
  • Many-to-One Model (figure)
  • One-to-One
    - Each user-level thread maps to a kernel thread.
    - Provides more concurrency: when one thread makes a blocking system call, another thread can still execute.
    - Facilitates parallelism on multiprocessor systems.
    - Each user thread requires a kernel thread, which may affect system performance; the number of threads that can be created is therefore restricted.
    - Examples: Windows NT/XP/2000, Linux, Solaris 9 and later.
  • One-to-One Model (figure)
  • Many-to-Many Model
    - Allows many user-level threads to be mapped to many kernel threads.
    - Allows the operating system to create a sufficient number of kernel threads; the number may be specific to a particular application or a particular machine.
    - The user can create any number of threads, and the corresponding kernel threads can run in parallel on a multiprocessor.
    - When a thread makes a blocking system call, the kernel can execute another thread.
    - Examples: Solaris prior to version 9; Windows NT/2000 with the ThreadFiber package.
  • Many-to-Many Model (figure)
  • Multithreading Models: Comparison
    - The many-to-one model lets the developer create as many user threads as desired, but true concurrency cannot be achieved because only one kernel thread can be scheduled at a time.
    - The one-to-one model allows more concurrency, but the developer has to be careful not to create too many threads within an application.
    - The many-to-many model has neither limitation: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
  • Two-level Model
    - Similar to many-to-many, except that it also allows a user thread to be bound to a kernel thread.
    - Examples: IRIX, HP-UX, Tru64 UNIX, Solaris 8 and earlier.
  • Two-level Model
    - A combination of the one-to-one and "strict" many-to-many models.
    - Supports both bound and unbound threads: bound threads are permanently mapped to a single, dedicated LWP; unbound threads may move among the LWPs in a set.
    - Thread creation, scheduling, and synchronization are done in user space.
    - A flexible approach, the "best of both worlds".
    - Used in the Solaris implementation of Pthreads and several other Unix implementations (IRIX, HP-UX).
  • Two-level Model (figure)
  • Pthreads
    - A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization.
    - The API specifies the behavior of the thread library; the implementation is up to the developers of the library (a small creation-and-synchronization sketch follows).
    - Common in UNIX operating systems (Solaris, Linux, Mac OS X).
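    A hedged sketch of Pthreads creation plus synchronization; the mutex-protected counter and the names increment and counter_lock are mine, not from the slides.

      #include <pthread.h>
      #include <stdio.h>

      #define NTHREADS 4

      static long counter = 0;                          /* shared by all threads  */
      static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

      static void *increment(void *arg) {
          (void)arg;
          for (int i = 0; i < 100000; i++) {
              pthread_mutex_lock(&counter_lock);        /* enter critical section */
              counter++;
              pthread_mutex_unlock(&counter_lock);      /* leave critical section */
          }
          return NULL;
      }

      int main(void) {
          pthread_t tid[NTHREADS];
          for (int i = 0; i < NTHREADS; i++)
              pthread_create(&tid[i], NULL, increment, NULL);
          for (int i = 0; i < NTHREADS; i++)
              pthread_join(tid[i], NULL);
          printf("counter = %ld\n", counter);           /* 400000 with the lock held */
          return 0;
      }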
  • Java Threads
    - Java threads are managed by the JVM.
    - Java threads may be created by extending the Thread class or implementing the Runnable interface.
  • Java Thread States (figure)
  • Threading Issues
    - Semantics of the fork() and exec() system calls
    - Thread cancellation
    - Signal handling
    - Thread pools
    - Thread-specific data
    - Scheduler activations
  • Semantics of fork() and exec()
    - The fork() system call is used to create a separate, duplicate process.
    - The semantics of the fork() and exec() system calls change in a multithreaded program.
    - If one thread in a program calls fork(), does the new process duplicate all the threads, or is the new process single-threaded?
  • Semantics of fork() and exec()
    - Does fork() duplicate only the calling thread or all threads? Some UNIX systems provide two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked fork().
    - If a thread invokes exec(), the program specified in the parameters to exec() replaces the entire process, including all threads.
    - If exec() is called immediately after forking, duplicating all threads is unnecessary, since the new program will replace the process anyway; in that case, duplicating only the calling thread is appropriate (a small sketch follows).
    - If the new process does not call exec() after forking, it should duplicate all threads.
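    A sketch of the common fork-then-exec pattern described above, assuming a POSIX system (the program /bin/ls is only an example): because exec() replaces the whole process image, duplicating only the calling thread in the child would have sufficed.

      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/wait.h>

      int main(void) {
          pid_t pid = fork();                      /* duplicate the process        */
          if (pid < 0) {
              perror("fork");
              exit(1);
          } else if (pid == 0) {
              /* Child: immediately replace the process image, so any threads the
                 parent had are irrelevant here. */
              execlp("/bin/ls", "ls", "-l", (char *)NULL);
              perror("execlp");                    /* reached only if exec fails   */
              exit(1);
          }
          waitpid(pid, NULL, 0);                   /* parent waits for the child   */
          return 0;
      }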
  • Thread Cancellation
    - Thread cancellation is the task of terminating a thread before it has finished. For example, if multiple threads are concurrently searching a database and one thread returns the result, the remaining threads might be cancelled.
    - The thread that is to be cancelled is referred to as the target thread.
    - Two general approaches: asynchronous cancellation terminates the target thread immediately; deferred cancellation allows the target thread to periodically check whether it should be cancelled, giving it an opportunity to terminate itself in an orderly fashion.
  • Problems in Thread Cancellation
    - Difficulty arises where resources have been allocated to a cancelled thread, or where a thread was cancelled in the midst of updating data it shares with other threads.
    - This is especially a problem with asynchronous cancellation: the operating system will often reclaim system resources from the cancelled thread, but not all of them, so cancelling a thread asynchronously may fail to free a needed system-wide resource (a deferred-cancellation sketch follows).
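    A Pthreads sketch of deferred cancellation with a cleanup handler (the names searcher and release_buffer are mine): cancellation is acted on only at cancellation points such as pthread_testcancel(), and the cleanup handler releases the allocated resource so it is not leaked.

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      static void release_buffer(void *p) {       /* runs if the thread is cancelled */
          printf("cleanup: freeing buffer\n");
          free(p);
      }

      static void *searcher(void *arg) {
          (void)arg;
          char *buf = malloc(4096);               /* resource that must not leak     */
          pthread_cleanup_push(release_buffer, buf);
          for (;;) {
              /* ... examine the next chunk of the database ... */
              pthread_testcancel();               /* deferred cancellation point     */
          }
          pthread_cleanup_pop(1);                 /* not reached; balances the push  */
          return NULL;
      }

      int main(void) {
          pthread_t target;
          pthread_create(&target, NULL, searcher, NULL);
          sleep(1);                               /* pretend another thread found it */
          pthread_cancel(target);                 /* request (deferred) cancellation */
          pthread_join(target, NULL);
          return 0;
      }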
  • Signal Handling
    - A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, but all signals follow the same pattern:
      1. A signal is generated by a particular event.
      2. The signal is delivered to a process.
      3. The signal is handled.
    - A signal handler is used to process signals.
  • Signal Handling
    - Examples of synchronous signals include illegal memory access and division by zero; if a running program performs either of these operations, a signal is generated. Synchronous signals are delivered to the same process that performed the operation that caused the signal.
    - When a signal is generated by an event external to a running process, that process receives the signal asynchronously. Examples include terminating a process with specific keystrokes (such as <control><C>) and having a timer expire. Typically, an asynchronous signal is sent to another process.
  • Signal Handling in UNIX
    - Every signal may be handled by one of two possible handlers: a default signal handler that is run by the kernel when handling that signal, or a user-defined signal handler that is called to handle the signal.
  • Signal Handling
    - Delivering a signal is more complicated in multithreaded programs, where a process may have several threads. The options are (a sketch of the last option follows):
      - Deliver the signal to the thread to which the signal applies.
      - Deliver the signal to every thread in the process.
      - Deliver the signal to certain threads in the process.
      - Assign a specific thread to receive all signals for the process.
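    A sketch of the last option, assuming a POSIX system: every thread blocks SIGINT, and one dedicated thread (signal_listener, a name of my choosing) receives it synchronously with sigwait().

      #include <pthread.h>
      #include <signal.h>
      #include <stdio.h>

      static sigset_t set;

      static void *signal_listener(void *arg) {
          (void)arg;
          int sig;
          for (;;) {
              sigwait(&set, &sig);       /* block until a signal in the set arrives */
              printf("dedicated thread handled signal %d\n", sig);
          }
          return NULL;
      }

      int main(void) {
          sigemptyset(&set);
          sigaddset(&set, SIGINT);
          /* Block SIGINT in the main thread; threads created afterwards inherit
             the mask, so only the listener (via sigwait) ever receives it. */
          pthread_sigmask(SIG_BLOCK, &set, NULL);

          pthread_t t;
          pthread_create(&t, NULL, signal_listener, NULL);
          pthread_join(t, NULL);         /* runs until the process is killed */
          return 0;
      }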
  • Thread Pools
    - The general idea is to create a number of threads at process startup and place them into a pool, where they sit and wait for work (a minimal sketch follows).
    - Advantages: it is usually slightly faster to service a request with an existing thread than to create a new thread, and the number of threads in the application(s) is bounded by the size of the pool.
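    A minimal, illustrative thread-pool sketch (not a production design; the names submit, worker, and QUEUE_CAP are mine): a fixed number of threads is created at startup, and each waits on a condition variable for work queued in a small circular buffer.

      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      #define POOL_SIZE 4
      #define QUEUE_CAP 16

      typedef struct { void (*fn)(void *); void *arg; } task_t;

      static task_t queue[QUEUE_CAP];
      static int head = 0, tail = 0, count = 0;
      static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;

      static void submit(void (*fn)(void *), void *arg) {
          pthread_mutex_lock(&qlock);
          if (count < QUEUE_CAP) {                       /* toy policy: drop if full */
              queue[tail] = (task_t){fn, arg};
              tail = (tail + 1) % QUEUE_CAP;
              count++;
              pthread_cond_signal(&notempty);            /* wake one sleeping worker */
          }
          pthread_mutex_unlock(&qlock);
      }

      static void *worker(void *arg) {
          (void)arg;
          for (;;) {
              pthread_mutex_lock(&qlock);
              while (count == 0)
                  pthread_cond_wait(&notempty, &qlock);  /* sleep until work arrives */
              task_t t = queue[head];
              head = (head + 1) % QUEUE_CAP;
              count--;
              pthread_mutex_unlock(&qlock);
              t.fn(t.arg);                               /* run the task unlocked    */
          }
          return NULL;
      }

      static void print_task(void *arg) { printf("request %ld serviced\n", (long)arg); }

      int main(void) {
          pthread_t pool[POOL_SIZE];
          for (int i = 0; i < POOL_SIZE; i++)            /* create threads at startup */
              pthread_create(&pool[i], NULL, worker, NULL);
          for (long i = 0; i < 8; i++)
              submit(print_task, (void *)i);
          sleep(1);                                      /* toy shutdown: let the pool drain */
          return 0;
      }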
  • Thread-Specific Data
    - Allows each thread to have its own copy of certain data.
    - Useful when you do not have control over the thread creation process (for example, when using a thread pool).
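    A sketch of POSIX thread-specific data (the key name tsd_key and the worker values are illustrative): both threads use the same key, but each gets back its own private value.

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>

      static pthread_key_t tsd_key;               /* one key; a separate value per thread */

      static void *worker(void *arg) {
          int *mine = malloc(sizeof *mine);
          *mine = (int)(long)arg * 100;
          pthread_setspecific(tsd_key, mine);     /* store this thread's private copy */
          /* ... possibly much later, deep inside other functions ... */
          int *back = pthread_getspecific(tsd_key);
          printf("thread %ld sees its own value %d\n", (long)arg, *back);
          return NULL;
      }

      int main(void) {
          pthread_key_create(&tsd_key, free);     /* free() runs per thread at exit */
          pthread_t t1, t2;
          pthread_create(&t1, NULL, worker, (void *)1L);
          pthread_create(&t2, NULL, worker, (void *)2L);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
      }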
  • Scheduler Activations
    - A final issue with multithreaded programs concerns communication between the kernel and the user-level thread library.
    - Both the many-to-many and two-level models require such communication to maintain the appropriate number of kernel threads allocated to the application.
    - Many systems implementing either model place an intermediate data structure between the user and kernel threads, typically known as a lightweight process (LWP).
  • Scheduler Activations
    - One scheme for communication between the user thread library and the kernel is known as scheduler activation. It works as follows: the kernel provides the application with a set of virtual processors (LWPs), and the application can schedule user threads onto any available virtual processor.
    - The kernel must inform the application about certain events. This mechanism is known as an upcall, a communication path from the kernel to the thread library.
    - Upcalls are handled by the thread library with an upcall handler, which must run on a virtual processor.
    - This communication allows the application to maintain the correct number of kernel threads.
  • Operating System Examples
    - Windows XP threads
    - Linux threads
  • Windows XP Threads
    - Implements the one-to-one mapping; threads are kernel-level.
    - Each thread contains a thread ID, a register set, separate user and kernel stacks, and a private data storage area.
    - The register set, stacks, and private storage area are known as the context of the thread.
    - The primary data structures of a thread are the ETHREAD (executive thread block), KTHREAD (kernel thread block), and TEB (thread environment block).
  • Windows XP Threads (figure)
  • Linux Threads
    - Linux refers to them as tasks rather than threads.
    - Thread creation is done through the clone() system call.
    - clone() allows a child task to share the address space of the parent task (process); a sketch follows.
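    A Linux-specific sketch of clone() (assumes glibc and _GNU_SOURCE; the flag set and names are mine): CLONE_VM makes the child task share the parent's address space, which is what makes the new task behave like a thread.

      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/wait.h>

      #define STACK_SIZE (1024 * 1024)

      static int shared = 0;           /* visible to the child task because of CLONE_VM */

      static int child_fn(void *arg) {
          (void)arg;
          shared = 42;                 /* writes directly into the parent's memory */
          return 0;
      }

      int main(void) {
          char *stack = malloc(STACK_SIZE);
          if (!stack) { perror("malloc"); exit(1); }

          /* Share the address space, filesystem info, open files, and signal
             handlers, i.e. behave much like a thread of the parent task. */
          int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
          pid_t pid = clone(child_fn, stack + STACK_SIZE, flags, NULL);
          if (pid == -1) { perror("clone"); exit(1); }

          waitpid(pid, NULL, 0);               /* wait for the child task to finish */
          printf("shared = %d\n", shared);     /* prints 42                         */
          free(stack);
          return 0;
      }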
  • Linux Threads (figure)
  • Thread Usage
    - Less time to create a new thread than a process: the newly created thread uses the current process's address space, and no new resources need to be attached to it.
    - Less time to terminate a thread than a process.
    - Less time to switch between two threads within the same process, again because they share the current process's address space.
    - Less communication overhead: threads share everything, the address space in particular, so data produced by one thread is immediately available to all the other threads.
    - Performance gain when a program does both substantial computing and substantial input/output.
    - Useful on systems with multiple processors.
  • End of Chapter 4