This chapter discusses threads and multithreaded programming. It covers thread models like many-to-one, one-to-one and many-to-many. Common thread libraries like Pthreads, Windows and Java threads are explained. Implicit threading techniques such as thread pools and OpenMP are introduced. Issues with multithreaded programming like signal handling and thread cancellation are examined. Finally, threading implementations in Windows and Linux are overviewed.
The objectives of Multithreaded Programming in Operating Systems are:
- To introduce the notion of a thread—a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems.
- To discuss the APIs for the Pthreads, Windows, and Java thread libraries.
- To explore several strategies that provide implicit threading.
- To examine issues related to multithreaded programming.
- To cover operating system support for threads in Windows and Linux.
1. The document discusses threads and multithreaded programming. It covers thread models like many-to-one, one-to-one, and many-to-many. It also discusses common thread libraries like Pthreads, Windows threads, and Java threads.
2. Issues with multithreaded programming like signal handling, thread cancellation, and thread-local storage are examined. Examples of thread implementations on Windows and Linux are provided.
3. The benefits of multithreading include increased responsiveness, resource sharing, and allowing applications to take advantage of multicore processors. Implicit threading techniques like thread pools, OpenMP, and Grand Central Dispatch are also covered.
Operating Systems - "Chapter 4: Multithreaded Programming" by Ra'Fat Al-Msie'deen
This chapter discusses multithreaded programming and threads. It defines a thread as the basic unit of CPU utilization that allows multiple tasks to run concurrently within a process by sharing the process's resources. Different threading models like many-to-one, one-to-one, and many-to-many are described based on how user threads map to kernel threads. Common thread libraries for POSIX, Windows, and Java are also covered. The chapter examines issues in multithreaded programming and provides examples of how threads are implemented in Windows and Linux.
Threads allow a process to concurrently perform multiple tasks. A thread is the basic unit of CPU utilization and shares resources with other threads in the same process. Multithreading improves responsiveness, allows resource sharing, and enables better utilization of multiprocessor systems. There are different models for mapping user threads to kernel threads, including many-to-one, one-to-one, and many-to-many. Popular thread libraries include Pthreads and Windows threads which provide APIs for thread management. Issues with multithreading include how to handle signals, cancellation, and the fork and exec system calls across threads.
Multi-threaded programming allows a process to have multiple threads of control that can perform tasks concurrently. Threads share resources like memory and files within a process. This allows a process to perform multiple tasks simultaneously like updating a display, fetching data, or answering network requests. Creating threads is lighter weight than creating entire processes. Common thread models include one-to-one, many-to-one, and many-to-many mappings of threads to kernels. Popular thread libraries are Pthreads, Win32 threads, and Java threads. Scheduling, synchronization, and other concurrency issues must be addressed for multi-threaded programming.
A thread is a lightweight process that shares resources like memory with other threads in the same process. Using threads allows a process to perform multiple tasks simultaneously. Threads can improve responsiveness when one part of a program is blocked and allow for resource sharing. Creating threads is more efficient than processes. There are different models for how user-level threads map to kernel threads, including one-to-one, many-to-one, and many-to-many mappings. Popular thread APIs include POSIX threads, Windows threads, and Java threads.
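The core operations every thread library above provides are creating a thread and joining (waiting for) it. A minimal sketch using the Java thread API mentioned in these summaries (the class name HelloThreads and the message string are illustrative):

```java
// Create a thread, run it concurrently, and wait for it to finish.
public class HelloThreads {
    static volatile String result = "";

    public static void runDemo() throws InterruptedException {
        // The lambda is the code the new thread executes.
        Thread t = new Thread(() -> { result = "hello from thread"; });
        t.start();  // begin concurrent execution
        t.join();   // block the caller until t terminates
    }

    public static void main(String[] args) throws InterruptedException {
        runDemo();
        System.out.println(result);
    }
}
```

The same create/join pattern appears in Pthreads as pthread_create and pthread_join, and in the Windows API as CreateThread and WaitForSingleObject.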
The document discusses threads and processes in operating systems. It begins by distinguishing threads from processes, noting that threads are lightweight processes that share resources like memory within a process but have their own program counters and stacks. It then covers different threading models like user-level threads managed by a library versus kernel threads directly supported by the OS kernel. The rest of the document discusses threading issues, common threading APIs like POSIX threads, and how specific operating systems like Linux and Windows implement threading.
This document discusses the concept of threads. It defines a thread as the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler. Threads allow for parallel computing and resource sharing. The benefits of threads include maintaining responsiveness, prioritizing tasks, and performing long operations without stopping other tasks. Threads can be user threads scheduled in user space or kernel threads scheduled across CPUs. Common threading models include user threads (N:1), kernel threads (1:1), and hybrid models (M:N). The document also discusses threading in C# and strategies for safely sharing data between threads to avoid issues like race conditions.
Threads are lightweight processes that can run concurrently within a single process. They share the process's resources like memory but have their own program counters, registers, and stacks. Using threads provides benefits like improved responsiveness, easier resource sharing between tasks, reduced overhead compared to processes, and ability to utilize multiple CPU cores. Common thread libraries are POSIX pthreads, Win32 threads, and Java threads which allow creating and managing threads via APIs. Multithreading can be implemented using different models mapping user threads to kernel threads.
Operating System Chapter 4: Multithreaded Programming - guesta40f80
A thread is the basic unit of CPU utilization within a process. Multithreaded processes provide benefits like increased responsiveness, resource sharing, and ability to utilize multiple processors. There are different models for how user threads created by libraries map to kernel threads managed by the OS kernel, including many-to-one, one-to-one, and many-to-many mappings. Popular thread libraries include Pthreads, Win32 threads, and Java threads which provide APIs for creating and managing threads.
Threads are lightweight processes that improve application performance through parallelism. Each thread has its own program counter and stack but shares other resources like memory with other threads in a process. Using threads provides advantages like lower overhead context switching compared to processes and allows parallel execution on multi-core systems. There are two types of threads - user level threads managed by libraries and kernel level threads supported by the OS kernel. Threads have a life cycle that involves states like new, ready, running, blocked, and terminated.
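The life-cycle states mentioned above (new, ready/runnable, running, blocked, terminated) can be observed directly in Java, whose Thread.State enum maps closely onto the textbook states. A small sketch (the observe helper is illustrative):

```java
// Observe a thread's state before it starts and after it terminates.
public class LifeCycle {
    public static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> {});     // trivial task
        Thread.State before = t.getState();  // NEW: created but not yet started
        t.start();                           // becomes RUNNABLE once scheduled
        t.join();                            // wait until it finishes
        Thread.State after = t.getState();   // TERMINATED
        return new Thread.State[] { before, after };
    }
}
```

Java's BLOCKED, WAITING, and TIMED_WAITING states correspond to the "blocked" state in the generic life cycle.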
This document introduces threads as a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems. It examines issues related to multithreaded programming and compares single and multithreaded processes. Threads within a process share code, data and heap sections but have their own stack segments. Context switching is cheaper between threads of the same process than between processes. Threads allow overlapping computation and I/O to improve performance.
The document discusses threads in OCaml and their implementations. There are two implementations of threads available in OCaml - system threads which are built on operating system threads, and VM-level threads which perform time-sharing at the virtual machine level without using OS threads. While VM-level threads do not take advantage of multiple processors, they can be easier to program with than system threads for some applications.
Threads provide concurrency within a process by allowing parallel execution. A thread is a flow of execution that has its own program counter, registers, and stack. Threads share code and data segments with other threads in the same process. There are two types: user threads managed by a library and kernel threads managed by the operating system kernel. Kernel threads allow true parallelism but have more overhead than user threads. Multithreading models include many-to-one, one-to-one, and many-to-many depending on how user threads map to kernel threads. Threads improve performance over single-threaded processes and allow for scalability across multiple CPUs.
This document discusses CPU scheduling and multithreaded programming. It covers key concepts in CPU scheduling like multiprogramming, CPU-I/O burst cycles, and scheduling criteria. It also discusses dispatcher role, multilevel queue scheduling, and multiple processor scheduling challenges. For multithreaded programming, it defines threads and their benefits. It compares concurrency and parallelism and discusses multithreading models, thread libraries, and threading issues.
This document discusses threads and multithreading. It begins by defining a thread as a lightweight process that shares code, data, and resources with other threads belonging to the same process. It then discusses the benefits of multithreading such as responsiveness, resource sharing, and utilizing multiprocessor architectures. Finally, it covers different multithreading models including many-to-one, one-to-one, and many-to-many mappings of user threads to kernel threads.
This document describes an operating systems course titled "Operating Systems 17CS64" at Canara Engineering College. The course is taught in the 6th semester and covers 10 hours of content over 3 modules - multithreaded programming, process scheduling, and process synchronization. Module 2 focuses on multithreaded programming concepts like threading models, thread libraries, and threading issues. It provides details on multithreading benefits and challenges.
1) A thread is a flow of execution within a process that has its own program counter, registers, and stack. Threads allow for parallelism and improved performance over single-threaded processes.
2) Multithreaded processes allow multiple parts of a program to execute concurrently using multiple threads, whereas single-threaded processes execute instructions in a single sequence.
3) There are two main types of threads: user-level threads managed by a user-space thread library, and kernel-level threads directly supported by the operating system kernel. Kernel threads can take advantage of multiprocessors but have more overhead than user-level threads.
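The point made above, that each thread has its own registers and stack while sharing the process's memory, can be demonstrated concretely. In this Java sketch (class and variable names are illustrative), each thread counts in a local stack variable and then publishes its total into a shared field under a lock:

```java
// Threads share static (heap) data but each has its own stack variables.
public class SharedVsLocal {
    static int shared = 0;
    static final Object lock = new Object();

    public static int run(int nThreads, int perThread) throws InterruptedException {
        shared = 0;
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> {
                int local = 0;                            // lives on this thread's stack
                for (int j = 0; j < perThread; j++) local++;
                synchronized (lock) { shared += local; }  // publish once, safely
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return shared;
    }
}
```

Updating `shared` directly inside the loop without the lock would be a race condition; keeping the count in a per-thread stack variable and merging once under synchronization avoids it.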
This document discusses parallel and distributed computing concepts like multithreading, multitasking, and multiprocessing. It defines processes and threads, with processes being heavier weight and using more resources than threads. While processes are isolated and don't share data, threads within the same process can share data and resources. Multitasking allows running multiple processes concurrently, while multithreading allows a single process to perform multiple tasks simultaneously. The benefits of multithreading include improved responsiveness, faster context switching, and better utilization of multiprocessor systems.
Thread vs Process
scheduling
synchronization
The thread begins execution with the C/C++ run-time library startup code.
The startup code calls your main or WinMain function, and execution continues until the main function returns and the C/C++ library code calls ExitProcess.
Reflection is the ability of managed code to read its own metadata for the purpose of finding assemblies, modules, and type information at runtime. The classes that give access to the metadata of a running program are in System.Reflection.
The System.Reflection namespace defines the following types for analyzing an assembly's module metadata:
Assembly, Module, Enum, ParameterInfo, MemberInfo, Type, MethodInfo, ConstructorInfo, FieldInfo, EventInfo, and PropertyInfo
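Java offers an analogous runtime-metadata API in java.lang.reflect, where Class plays the role of .NET's Type and Method the role of MethodInfo. A minimal sketch of loading a type by name and inspecting one of its methods (the describe helper is illustrative, and it assumes the class has a public length() method):

```java
import java.lang.reflect.Method;

// Load type metadata by name at runtime and inspect a method's signature,
// much as System.Reflection's Type and MethodInfo allow in .NET.
public class ReflectDemo {
    public static String describe(String className) throws Exception {
        Class<?> c = Class.forName(className);  // Type analogue
        Method m = c.getMethod("length");       // MethodInfo analogue
        return c.getSimpleName() + "." + m.getName()
               + " returns " + m.getReturnType().getSimpleName();
    }
}
```

For example, describe("java.lang.String") reports that String.length returns int, all discovered at runtime rather than compile time.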
This document discusses processes and threads in distributed systems. It begins by defining key terms like process, thread, and context. It then explains that threads allow blocking calls without blocking the entire process, making them attractive for distributed systems. The document provides examples of how multithreading can improve client and server performance by hiding network latency and enabling simple scaling to multiprocessors. Overall, multithreading is popular for distributed systems because it facilitates organization and parallelism while allowing the use of blocking calls.
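The claim above, that threads let one part of a process issue a blocking call while the rest keeps running, can be sketched in a few lines. Here a sleep stands in for a blocking network call (class and method names are illustrative):

```java
// While one thread blocks (here, a sleep standing in for a blocking RPC),
// other threads in the same process keep making progress.
public class BlockingDemo {
    public static long progressWhileBlocked() throws InterruptedException {
        Thread blocker = new Thread(() -> {
            try { Thread.sleep(50); }  // simulated blocking network call
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        blocker.start();
        long work = 0;
        while (blocker.isAlive()) work++;  // this thread keeps computing
        blocker.join();
        return work;                       // > 0: progress was made during the block
    }
}
```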
Many-to-one model:
- Many user-level threads mapped to a single kernel thread.
- One thread blocking causes all to block.
- Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time.
- Few systems currently use this model.
- Examples: Solaris Green Threads, GNU Portable Threads.
Processes provide resource ownership and protection, while threads are the unit of scheduling and execution. There are two main types of threads: user-level threads managed by applications and kernel-level threads managed by the operating system kernel. Both approaches have advantages and disadvantages related to scheduling efficiency and blocking system calls. Modern operating systems use a combined approach for maximum flexibility. Processes contain threads and manage resources, while the kernel schedules threads and handles synchronization between threads of the same process.
3. Objectives
● To introduce the notion of a thread—a fundamental unit of
CPU utilization that forms the basis of multithreaded computer
systems
● To discuss the APIs for the Pthreads, Windows, and Java
thread libraries
● To explore several strategies that provide implicit threading
● To examine issues related to multithreaded programming
● To cover operating system support for threads in Windows and
Linux
4. Motivation
● Most modern applications are multithreaded
● Threads run within application
● Multiple tasks within the application can be implemented by
separate threads
● Update display
● Fetch data
● Spell checking
● Answer a network request
● Process creation is heavy-weight while thread creation is
light-weight
● Can simplify code, increase efficiency
● Kernels are generally multithreaded
6. Benefits
● Responsiveness –may allow continued execution if part of
process is blocked, especially important for user interfaces
● Resource Sharing –threads share resources of process, easier
than shared memory or message passing
● Economy – cheaper than process creation, thread switching
lower overhead than context switching
● Scalability – process can take advantage of multiprocessor
architectures
7. Multicore Programming
● Multicore or multiprocessor systems putting pressure on
programmers, challenges include:
● Dividing activities
● Balance
● Data splitting
● Data dependency
● Testing and debugging
● Parallelism implies a system can perform more than one task
simultaneously
● Concurrency supports more than one task making progress
● Single processor / core, scheduler providing concurrency
8. Multicore Programming (Cont.)
● Types of parallelism
● Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each
● Task parallelism – distributing threads across cores, each
thread performing unique operation
● As the number of threads grows, so does architectural support for
threading
● CPUs have cores as well as hardware threads
● Consider Oracle SPARC T4 with 8 cores, and 8 hardware
threads per core
11. Amdahl’s Law
● Identifies performance gains from adding additional cores to an
application that has both serial and parallel components
● If S is the serial portion and N the number of processing cores,
then speedup ≤ 1 / (S + (1 − S) / N)
● That is, if application is 75% parallel / 25% serial, moving from 1 to 2
cores results in speedup of 1.6 times
● As N approaches infinity, speedup approaches 1 / S
Serial portion of an application has disproportionate effect on
performance gained by adding additional cores
● But does the law take into account contemporary multicore systems?
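As a quick check of the formula, a tiny sketch (illustrative, not part of the original slides) computes the bound and reproduces the 75%-parallel / 25%-serial example:

```c
/* Amdahl's law: upper bound on speedup given the serial fraction s
 * of an application and n processing cores. */
double amdahl_speedup(double s, int n) {
    return 1.0 / (s + (1.0 - s) / n);
}
```

With s = 0.25 and n = 2 this evaluates to 1 / (0.25 + 0.375) = 1.6, matching the example above; letting n grow without bound drives the second term to zero, giving the 1 / S limit.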
12. User Threads and Kernel Threads
● User threads - management done by user-level threads library
● Three primary thread libraries:
● POSIX Pthreads
● Windows threads
● Java threads
● Kernel threads - Supported by the Kernel
● Examples – virtually all general purpose operating systems, including:
● Windows
● Solaris
● Linux
● Tru64 UNIX
● Mac OS X
14. Many-to-One
● Many user-level threads mapped to
single kernel thread
● One thread blocking causes all to block
● Multiple threads may not run in parallel
on a multicore system because only one
may be in the kernel at a time
● Few systems currently use this model
● Examples:
● Solaris Green Threads
● GNU Portable Threads
15. One-to-One
● Each user-level thread maps to kernel thread
● Creating a user-level thread creates a kernel thread
● More concurrency than many-to-one
● Number of threads per process sometimes
restricted due to overhead
● Examples
● Windows
● Linux
● Solaris 9 and later
16. Many-to-Many Model
● Allows many user level threads to be
mapped to many kernel threads
● Allows the operating system to create
a sufficient number of kernel threads
● Solaris prior to version 9
● Windows with the ThreadFiber
package
17. Two-level Model
● Similar to M:M, except that it allows a user thread to be
bound to kernel thread
● Examples
● IRIX
● HP-UX
● Tru64 UNIX
● Solaris 8 and earlier
18. Thread Libraries
● Thread library provides programmer with API for creating
and managing threads
● Two primary ways of implementing
● Library entirely in user space
● Kernel-level library supported by the OS
19. Pthreads
● May be provided either as user-level or kernel-level
● A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
● Specification, not implementation
● API specifies behavior of the thread library; implementation is
up to the developers of the library
● Common in UNIX operating systems (Solaris, Linux, Mac OS X)
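As a minimal sketch of the Pthreads API (illustrative, not taken from the slides), a thread is created with pthread_create() and waited on with pthread_join():

```c
#include <pthread.h>

/* Worker: doubles the integer passed through the void* argument.
 * This code runs in the newly created thread. */
static void *worker(void *arg) {
    int *value = (int *)arg;
    *value *= 2;
    return NULL;
}

/* Create one thread, wait for it to finish, and return the result. */
int run_worker(int input) {
    pthread_t tid;
    int value = input;
    if (pthread_create(&tid, NULL, worker, &value) != 0)
        return -1;                /* creation failed */
    pthread_join(tid, NULL);      /* block until the worker exits */
    return value;
}
```

Passing the address of a stack variable is safe here only because pthread_join() guarantees the worker has finished before the variable goes out of scope.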
25. Java Threads
● Java threads are managed by the JVM
● Typically implemented using the threads model provided by
underlying OS
● Java threads may be created by:
● Extending Thread class
● Implementing the Runnable interface
28. Implicit Threading
● Growing in popularity as the number of threads increases;
program correctness is more difficult with explicit threads
● Creation and management of threads done by compilers and
run-time libraries rather than programmers
● Three methods explored
● Thread Pools
● OpenMP
● Grand Central Dispatch
● Other methods include Intel Threading Building Blocks
(TBB), java.util.concurrent package
29. Thread Pools
● Create a number of threads in a pool where they await work
● Advantages:
● Usually slightly faster to service a request with an existing
thread than create a new thread
● Allows the number of threads in the application(s) to be
bound to the size of the pool
● Separating the task to be performed from the mechanics of
creating the task allows different strategies for running the task
● e.g., tasks could be scheduled to run periodically
● Windows API supports thread pools
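The Windows pool API itself is not reproduced here; as an illustrative sketch (names such as pool_t, pool_submit, and run_pool_demo are invented for this example, not any real Windows or POSIX API), a fixed-size pool built on Pthreads can look like:

```c
#include <pthread.h>

#define QUEUE_CAP   64
#define NUM_WORKERS 4

typedef struct { void (*fn)(void *); void *arg; } task_t;

typedef struct {
    task_t          queue[QUEUE_CAP];   /* circular task queue */
    int             head, tail, count, shutdown;
    pthread_mutex_t lock;
    pthread_cond_t  notify;
    pthread_t       workers[NUM_WORKERS];
} pool_t;

/* Worker loop: sleep until a task arrives, run it, repeat. */
static void *pool_worker(void *p) {
    pool_t *pool = p;
    for (;;) {
        pthread_mutex_lock(&pool->lock);
        while (pool->count == 0 && !pool->shutdown)
            pthread_cond_wait(&pool->notify, &pool->lock);
        if (pool->count == 0) {         /* shutting down, queue drained */
            pthread_mutex_unlock(&pool->lock);
            return NULL;
        }
        task_t t = pool->queue[pool->head];
        pool->head = (pool->head + 1) % QUEUE_CAP;
        pool->count--;
        pthread_mutex_unlock(&pool->lock);
        t.fn(t.arg);                    /* run the task outside the lock */
    }
}

void pool_init(pool_t *pool) {
    pool->head = pool->tail = pool->count = pool->shutdown = 0;
    pthread_mutex_init(&pool->lock, NULL);
    pthread_cond_init(&pool->notify, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&pool->workers[i], NULL, pool_worker, pool);
}

int pool_submit(pool_t *pool, void (*fn)(void *), void *arg) {
    pthread_mutex_lock(&pool->lock);
    if (pool->count == QUEUE_CAP) {     /* queue full: reject the task */
        pthread_mutex_unlock(&pool->lock);
        return -1;
    }
    pool->queue[pool->tail] = (task_t){ fn, arg };
    pool->tail = (pool->tail + 1) % QUEUE_CAP;
    pool->count++;
    pthread_cond_signal(&pool->notify);
    pthread_mutex_unlock(&pool->lock);
    return 0;
}

/* Drain remaining tasks, then join all workers. */
void pool_shutdown(pool_t *pool) {
    pthread_mutex_lock(&pool->lock);
    pool->shutdown = 1;
    pthread_cond_broadcast(&pool->notify);
    pthread_mutex_unlock(&pool->lock);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(pool->workers[i], NULL);
}

/* Demo: submit n counting tasks and report how many completed. */
static int completed;
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;

static void count_task(void *arg) {
    (void)arg;
    pthread_mutex_lock(&done_lock);
    completed++;
    pthread_mutex_unlock(&done_lock);
}

int run_pool_demo(int n) {
    pool_t pool;
    completed = 0;
    pool_init(&pool);
    for (int i = 0; i < n; i++)
        pool_submit(&pool, count_task, NULL);
    pool_shutdown(&pool);
    return completed;
}
```

Note how the pool bounds the number of threads to NUM_WORKERS regardless of how many tasks are submitted, which is exactly the advantage the slide describes.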
30. OpenMP
● Set of compiler directives and an
API for C, C++, FORTRAN
● Provides support for parallel
programming in shared-memory
environments
● Identifies parallel regions –
blocks of code that can run in
parallel
#pragma omp parallel
– create as many threads as there are cores
#pragma omp parallel for
for (i = 0; i < N; i++) {
    c[i] = a[i] + b[i];
}
– run the for loop in parallel
31. Grand Central Dispatch
● Apple technology for Mac OS X and iOS operating systems
● Extensions to C, C++ languages, API, and run-time library
● Allows identification of parallel sections
● Manages most of the details of threading
● A block is code enclosed in “^{ }” – e.g. ^{ printf("I am a block"); }
● Blocks placed in dispatch queue
● Assigned to available thread in thread pool when removed
from queue
32. Grand Central Dispatch
● Two types of dispatch queues:
● serial – blocks removed in FIFO order, queue is per process,
called main queue
● Programmers can create additional serial queues within
program
● concurrent – removed in FIFO order but several may be
removed at a time
● Three system-wide queues with priorities low, default, high
33. Threading Issues
● Semantics of fork() and exec() system calls
● Signal handling
● Synchronous and asynchronous
● Thread cancellation of target thread
● Asynchronous or deferred
● Thread-local storage
● Scheduler Activations
34. Semantics of fork() and exec()
● Does fork() duplicate only the calling thread or all
threads?
● Some UNIXes have two versions of fork
● exec() usually works as normal – replace the running
process including all threads
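As a single-process baseline sketch (illustrative, not from the slides) of the fork()/wait pattern these semantics build on:

```c
#include <sys/wait.h>
#include <unistd.h>

/* fork() a child that exits with a known status code. In a
 * multithreaded parent, some UNIX systems would duplicate only the
 * calling thread into this child. */
int fork_and_wait(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                /* fork failed */
    if (pid == 0)
        _exit(7);                 /* child: exit immediately with code 7 */
    int status;
    waitpid(pid, &status, 0);     /* parent: reap the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

If the child instead called exec(), the new program image would replace the entire child process, threads and all, as the slide notes.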
35. Signal Handling
● Signals are used in UNIX systems to notify a process that a
particular event has occurred.
● A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
1. default
2. user-defined
● Every signal has default handler that kernel runs when
handling signal
● User-defined signal handler can override default
● For single-threaded programs, a signal is delivered to the process
36. Signal Handling (Cont.)
● Where should a signal be delivered in a multithreaded process?
● Deliver the signal to the thread to which the signal
applies
● Deliver the signal to every thread in the process
● Deliver the signal to certain threads in the process
● Assign a specific thread to receive all signals for the
process
37. Thread Cancellation
● Terminating a thread before it has finished
● Thread to be canceled is target thread
● Two general approaches:
● Asynchronous cancellation terminates the target thread
immediately
● Deferred cancellation allows the target thread to periodically
check if it should be cancelled
● Pthread code to create and cancel a thread:
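The slide's code figure is not reproduced here; a minimal sketch (deferred cancellation, invented function names) of creating and cancelling a target thread might look like:

```c
#include <pthread.h>

/* Target thread: spins, checking for a pending cancellation request
 * at the cancellation point pthread_testcancel(). */
static void *spin_worker(void *arg) {
    (void)arg;
    for (;;)
        pthread_testcancel();        /* deferred cancellation point */
    return NULL;                     /* never reached */
}

/* Create a target thread, request its cancellation, and confirm it. */
int create_and_cancel(void) {
    pthread_t tid;
    void *result;
    if (pthread_create(&tid, NULL, spin_worker, NULL) != 0)
        return -1;
    pthread_cancel(tid);             /* a request, not immediate termination */
    pthread_join(tid, &result);
    return result == PTHREAD_CANCELED ? 0 : 1;
}
```

pthread_join() returning PTHREAD_CANCELED confirms the target actually terminated at a cancellation point rather than exiting normally.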
38. Thread Cancellation (Cont.)
● Invoking thread cancellation requests cancellation, but actual
cancellation depends on thread state
● If thread has cancellation disabled, cancellation remains pending
until thread enables it
● Default type is deferred
● Cancellation only occurs when thread reaches cancellation
point
● e.g., pthread_testcancel()
● Then the cleanup handler is invoked
● On Linux systems, thread cancellation is handled through signals
39. Thread-Local Storage
● Thread-local storage (TLS) allows each thread to have its
own copy of data
● Useful when you do not have control over the thread creation
process (e.g., when using a thread pool)
● Different from local variables
● Local variables visible only during single function
invocation
● TLS visible across function invocations
● Similar to static data
● TLS is unique to each thread
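A sketch using C11's _Thread_local keyword (one possible TLS mechanism; pthread_key_create() is another) shows each thread getting its own copy:

```c
#include <pthread.h>

/* Each thread gets a private copy of this variable (C11 _Thread_local). */
static _Thread_local int tls_counter = 0;

static void *bump(void *arg) {
    (void)arg;
    tls_counter += 100;              /* touches only this thread's copy */
    return NULL;
}

/* Set the main thread's copy, run a worker, and show it is unchanged. */
int tls_demo(void) {
    pthread_t tid;
    tls_counter = 1;                 /* main thread's copy */
    pthread_create(&tid, NULL, bump, NULL);
    pthread_join(tid, NULL);
    return tls_counter;              /* still 1: the worker's copy changed */
}
```

Had tls_counter been an ordinary static variable, the worker's increment would have been visible in the main thread and tls_demo() would return 101.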
40. Scheduler Activations
● Both M:M and Two-level models require
communication to maintain the appropriate
number of kernel threads allocated to the
application
● Typically use an intermediate data structure
between user and kernel threads – lightweight
process (LWP)
● Appears to be a virtual processor on which
process can schedule user thread to run
● Each LWP attached to kernel thread
● How many LWPs to create?
● Scheduler activations provide upcalls - a
communication mechanism from the kernel to
the upcall handler in the thread library
● This communication allows an application to
maintain the correct number of kernel threads
42. Windows Threads
● Windows implements the Windows API – primary API for Win
98, Win NT, Win 2000, Win XP, and Win 7
● Implements the one-to-one mapping, kernel-level
● Each thread contains
● A thread id
● Register set representing state of processor
● Separate user and kernel stacks for when thread runs in
user mode or kernel mode
● Private data storage area used by run-time libraries and
dynamic link libraries (DLLs)
● The register set, stacks, and private storage area are known as
the context of the thread
43. Windows Threads (Cont.)
● The primary data structures of a thread include:
● ETHREAD (executive thread block) – includes pointer to
process to which thread belongs and to KTHREAD, in
kernel space
● KTHREAD (kernel thread block) – scheduling and
synchronization info, kernel-mode stack, pointer to TEB,
in kernel space
● TEB (thread environment block) – thread id, user-mode
stack, thread-local storage, in user space
45. Linux Threads
● Linux refers to threads as tasks
● Thread creation is done through clone() system call
● clone() allows a child task to share the address space of the
parent task (process)
● Flags control behavior
● struct task_struct points to process data structures
(shared or unique)
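A Linux-specific sketch (illustrative; uses the glibc clone() wrapper, and the stack size and function names are arbitrary) shows how the CLONE_VM flag lets the child task share the parent's address space:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>

static int shared_value = 0;

/* Child task: writes to a variable in the (shared) address space. */
static int child_fn(void *arg) {
    (void)arg;
    shared_value = 42;
    return 0;
}

/* clone() with CLONE_VM: parent and child share memory, so the
 * parent observes the child's write after waitpid(). */
int clone_demo(void) {
    const long STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);
    if (!stack)
        return -1;
    pid_t pid = clone(child_fn, stack + STACK_SIZE,  /* stack grows down */
                      CLONE_VM | SIGCHLD, NULL);
    if (pid == -1) {
        free(stack);
        return -1;
    }
    waitpid(pid, NULL, 0);           /* SIGCHLD flag lets us reap the child */
    free(stack);
    return shared_value;
}
```

Without CLONE_VM the child would get a copy-on-write copy of the address space, as with fork(), and the parent would still see shared_value == 0. Adding flags such as CLONE_FILES or CLONE_FS extends the sharing, which is how Pthreads-style threads are built on top of clone().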