Operating System 4

Transcript

  • 1. Chapter 4: Threads
  • 2. What’s in a process?
    - A process consists of (at least):
      - an address space
      - the code for the running program
      - the data for the running program
      - an execution stack and stack pointer (SP)
        - traces the state of procedure calls made
      - the program counter (PC), indicating the next instruction
      - a set of general-purpose processor registers and their values
      - a set of OS resources
        - open files, network connections, sound channels, …
  • 3. Concurrency
    - Imagine a web server, which might like to handle multiple requests concurrently
      - While waiting for the credit card server to approve a purchase for one client, it could be retrieving the data requested by another client from disk, and assembling the response for a third client from cached information
    - Imagine a web client (browser), which might like to initiate multiple requests concurrently
      - The IT home page has 10 “src=…” HTML tags, each of which involves a lot of sitting around! Wouldn’t it be nice to launch these requests concurrently?
    - Imagine a parallel program running on a multiprocessor, which might like to exploit “physical concurrency”
      - For example, multiplying a large matrix – split the output matrix into k regions and compute the entries in each region concurrently using k processors
  • 4. What’s needed?
    - In each of these examples of concurrency (web server, web client, parallel program):
      - Everybody wants to run the same code
      - Everybody wants to access the same data
      - Everybody has the same privileges (most of the time)
      - Everybody uses the same resources (open files, network connections, etc.)
    - But you’d like to have multiple hardware execution states:
      - an execution stack and stack pointer (SP)
        - traces the state of procedure calls made
      - the program counter (PC), indicating the next instruction
      - a set of general-purpose processor registers and their values
  • 5. How could we achieve this?
    - Given the process abstraction as we know it:
      - fork several processes
      - cause each to map to the same physical memory to share data
    - This is really inefficient!!
      - space: PCB, page tables, etc.
      - time: creating OS structures, forking and copying the address space, etc.
    - So any support the OS can give for multi-threaded programming is a win
  • 6. Can we do better?
    - Key idea:
      - separate the concept of a process (address space, etc.)
      - … from that of a minimal “thread of control” (execution state: PC, etc.)
    - This execution state is usually called a thread, or sometimes a lightweight process
  • 7. Single-Threaded Example
    - Imagine the following C program:

        main() {
            ComputePI(“pi.txt”);
            PrintClassList(“clist.txt”);
        }

    - What is the behavior here?
      - The program would never print the class list
      - Why? ComputePI would never finish
  • 8. Use of Threads
    - Version of the program with threads:

        main() {
            CreateThread(ComputePI(“pi.txt”));
            CreateThread(PrintClassList(“clist.txt”));
        }

    - What does “CreateThread” do?
      - Starts an independent thread running the given procedure
    - What is the behavior here?
      - Now you would actually see the class list
      - This should behave as if there are two separate CPUs
    [Diagram: the two threads interleaved across CPU1 and CPU2 over time]
  • 9. Threads and processes
    - Most modern OSes (NT, modern UNIX, etc.) therefore support two entities:
      - the process, which defines the address space and general process attributes (such as open files)
      - the thread, which defines a sequential execution stream within a process
    - A thread is bound to a single process / address space
      - address spaces, however, can have multiple threads executing within them
      - sharing data between threads is cheap: all see the same address space
      - creating threads is cheap too!
  • 10. Single and Multithreaded Processes
  • 11. Benefits
    - Responsiveness – interactive applications can perform two tasks at the same time (rendering, spell checking)
    - Resource sharing – sharing resources between threads is easy
    - Economy – resource allocation between threads is fast (no protection issues)
    - Utilization of MP architectures – seamlessly assign multiple threads to multiple processors (if available); the future appears to be multi-core anyway
  • 12. Thread Design Space
    [Diagram: design space spanning threads per process vs. number of processes]
    - one thread/process, one process: MS/DOS
    - one thread/process, many processes: older UNIXes
    - many threads/process, one process: Java
    - many threads/process, many processes: Mach, NT, Chorus, Linux, …
  • 13. (old) Process address space
    [Diagram: address space from 0x00000000 to 0x7FFFFFFF – code (text segment), static data (data segment), heap (dynamically allocated memory) growing up, stack (dynamically allocated memory) growing down; one PC and one SP]
  • 14. (new) Address space with threads
    [Diagram: same address space layout, but with three per-thread stacks; each of threads T1, T2, T3 has its own SP and PC, while code, static data, and heap are shared]
  • 15. Types of Threads
  • 16. Thread types
    - User threads: thread management done by a user-level threads library; the kernel does not know about these threads
    - Kernel threads: supported directly by the kernel, and so have more overhead than user threads
      - Examples: Windows XP/2000, Solaris, Linux, Mac OS X
    - User threads map onto kernel threads
  • 17. Multithreading Models
    - Many-to-One: many user-level threads mapped to a single kernel thread
      - If a thread blocks inside the kernel, all the other threads cannot run
      - Examples: Solaris Green Threads, GNU Portable Threads
  • 18. Multithreading Models
    - One-to-One: each user-level thread maps to a kernel thread
  • 19. Multithreading Models
    - Many-to-Many: allows many user-level threads to be mapped to many kernel threads
      - Allows the operating system to create a sufficient number of kernel threads
  • 20. Two-level Model
    - Similar to M:M, except that it also allows a user thread to be bound to a kernel thread
    - Examples:
      - IRIX
      - HP-UX
      - Tru64 UNIX
      - Solaris 8 and earlier
  • 21. Threads Implementation
    - Two ways:
      - Provide a library entirely in user space with no kernel support
        - Invoking a function in the API → local function call
      - Kernel-level library supported by the OS
        - Invoking a function in the API → system call
    - Three primary thread libraries:
      - POSIX Pthreads (may be kernel-level or user-level)
      - Win32 threads (kernel-level)
      - Java threads (user-level)
  • 22. Threading Issues
  • 23. Threading Issues
    - Semantics of fork() and exec() system calls
    - Thread cancellation
    - Signal handling
    - Thread pools
    - Thread-specific data
    - Scheduler activations
  • 24. Semantics of fork() and exec()
    - Does fork() duplicate only the calling thread or all threads?
  • 25. Thread Cancellation
    - Terminating a thread before it has finished
    - Two general approaches:
      - Asynchronous cancellation terminates the target thread immediately
      - Deferred cancellation allows the target thread to periodically check whether it should be cancelled
  • 26. Signal Handling
    - Signals are used in UNIX systems to notify a process that a particular event has occurred
    - A signal handler is used to process signals
      - The signal is generated by a particular event
      - The signal is delivered to a process
      - The signal is handled
    - Every signal may be handled by either:
      - A default signal handler
      - A user-defined signal handler
  • 27. Signal Handling
    - In multi-threaded programs, we have the following options:
      - Deliver the signal to the thread to which the signal applies (e.g. synchronous signals)
      - Deliver the signal to every thread in the process (e.g. terminate a process)
      - Deliver the signal to certain threads in the process
      - Assign a specific thread to receive all signals for the process
    - In *nix: kill -signal pid (for processes), pthread_kill(tid, signal) (for threads)
  • 28. Thread Pools
    - Do you remember the multithreading scenario in a web server? It has two problems:
      - The time required to create each thread
      - No bound on the number of threads
  • 29. Thread Pools
    - Create a number of threads in a pool, where they await work
    - Advantages:
      - Usually slightly faster to service a request with an existing thread than to create a new thread
      - Allows the number of threads in the application(s) to be bound by the size of the pool
  • 30. Thread-Specific Data
    - Allows each thread to have its own copy of data
    - Useful when you do not have control over the thread creation process (i.e., when using a thread pool)
  • 31. Scheduler Activations
    - Both the M:M and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application
    - Use an intermediate data structure called an LWP (lightweight process)
      - CPU-bound → one LWP
      - I/O-bound → multiple LWPs
  • 32. Scheduler Activations
    - Scheduler activations provide upcalls – a communication mechanism from the kernel to the thread library
    - This communication allows an application to maintain the correct number of kernel threads
  • 33. OS Examples
  • 34. Windows XP Threads
    - Implements the one-to-one mapping
    - Each thread contains:
      - A thread id
      - A register set
      - Separate user and kernel stacks
      - A private data storage area
  • 35. Linux Threads
    - Linux refers to them as tasks rather than threads
    - Thread creation is done through the clone() system call
    - clone() allows a child task to share the address space of the parent task (process)
  • 36. Conclusion
    - Kernel threads:
      - More robust than user-level threads
      - Allow impersonation
      - Make it easier to tune the OS CPU scheduler to handle multiple threads in a process
      - A thread waiting on a kernel resource (like I/O) does not stop the process from running
    - User-level threads:
      - A lot faster if programmed correctly
      - Can be better tuned for the exact application
    - Note that user-level threads can be done on any OS
  • 37. Conclusion
    - Each thread shares everything with all the other threads in the process
      - They can read/write the exact same variables, so they need to synchronize themselves
      - They can access each other’s runtime stacks, so be very careful if you communicate using runtime-stack variables
      - Each thread should be able to retrieve a unique thread id that it can use to access thread-local storage
    - Multi-threading is great, but use it wisely
