The document discusses using threads instead of processes to handle concurrent connections in a server. It introduces POSIX threads (Pthreads) and describes five basic thread functions: pthread_create to create new threads; pthread_join to wait for a thread to terminate; pthread_self to get the calling thread ID; pthread_detach to change a thread from joinable to detached; and pthread_exit for a thread to terminate. It then shows how to recode a client and server example using threads instead of fork, including properly passing arguments to new threads.
This document provides an introduction to POSIX threads (Pthreads) programming. It discusses what threads are, how they differ from processes, and how Pthreads provide a standardized threading interface for UNIX systems. The key benefits of Pthreads for parallel programming are improved performance from overlapping CPU and I/O work and priority-based scheduling. Pthreads are well-suited for applications that can break work into independent tasks or respond to asynchronous events. The document outlines common threading models and emphasizes that programmers are responsible for synchronizing access to shared memory in multithreaded programs.
This lecture covers process and thread concepts in operating systems including scheduling criteria and algorithms. It discusses key process concepts like process state, process control block and CPU scheduling. Common scheduling algorithms like FCFS, SJF, priority and round robin are explained. Process scheduling queues and the producer-consumer problem are also summarized. Evaluation methods for scheduling algorithms like deterministic modeling, queueing models and simulation are briefly covered.
Fork() is a system call that creates a new process by duplicating the calling process. The child process created by fork() is a near-exact duplicate of the parent, receiving a copy of the parent's address space and the same open file descriptors. Exec() is another system call; it replaces the current process image with a new executable while keeping the same process ID. It is commonly used together with fork() to launch new programs from within a process. Fork() alone does not replace the process image - it creates a new process, while exec() replaces the current process image with a new program.
GDB can debug programs by running them under its control. It allows inspecting and modifying program state through breakpoints, watchpoints, and examining variables and memory. GDB supports debugging optimized code, multi-threaded programs, and performing tasks like stepping, continuing, and backtracing through the call stack. It can also automate debugging through commands, scripts, and breakpoint actions.
PSConfEU - Offensive Active Directory (With PowerShell!) - Will Schroeder
This talk covers PowerShell for offensive Active Directory operations with PowerView. It was given on April 21, 2016 at the PowerShell Conference EU 2016.
Linux offers an extensive selection of programmable and configurable networking components, from traditional bridges and encryption to container-optimized layer 2/3 devices, link aggregation, tunneling, several classification and filtering languages, all the way up to full SDN components. This talk provides an overview of many Linux networking components, covering the Linux bridge, IPVLAN, MACVLAN, MACVTAP, Bonding/Team, OVS, classification & queueing, tunnel types, hidden routing tricks, IPSec, VTI, VRF and many others.
Integrated Register Allocation (IRA) performs register allocation by modeling it as a graph coloring problem. It uses a graph coloring algorithm based on Chaitin-Briggs to assign pseudo-registers to hardware registers. IRA performs register coalescing, live range splitting, and register assignment in an integrated way while traversing program regions from outer to inner. It aims to minimize register costs by updating costs between regions.
This document discusses threads and multithreading in operating systems. A thread is a flow of execution through a process with its own program counter, registers, and stack. Multithreading allows multiple threads within a process to run concurrently on multiple processors. There are three relationship models between user threads and kernel threads: many-to-many, many-to-one, and one-to-one. User threads are managed in userspace while kernel threads are managed by the operating system kernel. Both have advantages and disadvantages related to performance, concurrency, and complexity.
I'm in your cloud... reading everyone's email. Hacking Azure AD via Active Di... - DirkjanMollema
Azure AD is everything but a domain controller in the cloud. This talk will cover what Azure AD is, how it is commonly integrated with Active Directory and how security boundaries extend into the cloud, covering sync account password recovery, privilege escalations in Azure AD and full admin account takeovers using limited on-premise privileges.
While Active Directory has been researched for years and the security boundaries and risks are generally well documented, more and more organizations are extending their network into the cloud. A prime example of this is Office 365, which Microsoft offers through their Azure cloud. Connecting the on-premise Active Directory with the cloud introduces new attack surface both for the cloud and the on-premise directory.
This talk looks at the way the trust between Active Directory and Azure is set up and can be abused through the Azure AD Connect tool. We will take a dive into how the synchronization is set up, how the high-privilege credentials for both the cloud and Active Directory are protected (and can be obtained) and what permissions are associated with these accounts.
The talk will outline how a zero day in common setups was discovered through which on-premise users with limited privileges could take over the highest administration account in Azure and potentially compromise all cloud assets.
We will also take a look at the Azure AD architecture and common roles, and how attackers could backdoor or escalate privileges in cloud setups.
Lastly we will look at how to protect against these kinds of attacks and why your AD Connect server is perhaps one of the most critical assets in the on-premise infrastructure.
Anatomy of the loadable kernel module (lkm) - Adrian Huang
A talk about how the Linux kernel invokes your module's init function.
Note: When you view the slide deck in a web browser, the screenshots may be blurred. You can download the deck and view them offline (the screenshots are clear).
Walking the Bifrost: An Operator's Guide to Heimdal & Kerberos on macOS - Cody Thomas
Kerberos on macOS with and without Active Directory (AD). Where are attacks possible in Kerberos and how does the LKDC (Local Key Distribution Center) come into play.
Presented at Objective By The Sea (OBTS) 3.0 in Maui, Hawaii March 2020
This document discusses threads and multithreading. It begins with an introduction to threads and their models, including user-level and kernel-level threads. It then covers multithreading approaches like thread-level parallelism and data-level parallelism. The document discusses context switching on single-core versus multicore systems. It also provides an example of implementing matrix multiplication using threads. Finally, it summarizes a case study on using threads in interactive systems.
Nmap is a network exploration tool that collects information about target hosts including open ports, services, OS detection, and running scripts. It offers various host discovery techniques like ICMP ping, TCP and UDP ping to find active systems on the network. Once hosts are identified, nmap performs port scanning using TCP SYN, ACK, and UDP scans to determine open and closed ports. It can also detect services, versions, and OS on each host. Nmap scripts provide additional information gathering capabilities for vulnerabilities and exploits.
The document summarizes how to write a character device driver in Linux. It covers the anatomy of a device driver including the user interface via device files, and kernel interfaces via file operations and major/minor numbers. It describes registering a character driver by defining file operations, reserving major/minor numbers, and associating them. Open and release functions handle initialization and cleanup. Read/write functions transfer data between userspace and hardware. Ioctl allows extending functionality.
This presentation takes a case-study based approach to design patterns. A purposefully simplified example of expression trees is used to explain how different design patterns can be used in practice. Examples are in C#.
The document provides an introduction to shell scripting basics in UNIX/Linux systems. It discusses what a shell and shell script are, introduces common shells like bash, and covers basic shell scripting concepts like running commands, variables, conditionals, loops, and calling external programs. Examples are provided for many common shell scripting tasks like file manipulation, text processing, scheduling jobs, and more.
LINUX: Control statements in shell programming - bhatvijetha
The document discusses control statements in shell programming including if/then/else statements, test commands, and case statements. If statements allow running commands conditionally based on the success or failure of a conditional command. Test commands return exit statuses to check conditions like string equality, numerical comparisons, and empty/non-empty strings. Case statements provide multi-branch functionality to run different command blocks based on pattern matches.
The document provides guidance on troubleshooting Linux systems. It discusses preparing for troubleshooting by backing up data and documentation. When issues arise, it recommends gathering information from logs, researching if the problem is widespread, and considering likely causes such as user error, software/hardware issues, or network problems. It then offers solutions such as software and hardware remedies, and provides tips for troubleshooting specific components like applications, networks, disks, and packages.
1) A thread is a basic unit of CPU execution that shares code, data, and resources with other threads in the same process. It has its own program counter, register set, and stack.
2) In a single-threaded process there is one thread of control, while a multithreaded process can have multiple threads running concurrently.
3) Benefits of using threads include improved responsiveness, resource sharing, reduced overhead, and better utilization of multiprocessor systems.
This document discusses various techniques for Linux privilege escalation. It begins by defining privilege escalation and explaining the process. It then covers common escalation methods like exploiting kernel vulnerabilities, accessing programs running as root, using weak or default passwords, leveraging insider services, abusing SUID configurations, taking advantage of sudo rights, manipulating world writable files run as root, path manipulation, cron jobs, and keylogging. It provides examples and commands to help identify these opportunities on a target system.
This document provides an overview of attacking Active Directory Federation Services (AD FS). It begins with an introduction to AD FS and how it works, allowing single sign-on for applications outside of Active Directory. It then discusses how to find AD FS servers on a network and potential vulnerabilities in default configurations. The document focuses on attacking identity provider adapters and the signing certificate as ways to impersonate the AD FS server and issue unauthorized tokens. It provides technical details on decrypting the signing certificate stored in the Windows Internal Database using the Distributed Key Manager.
The document discusses Linux shell scripting, including that a shell is a user program that provides an environment for user interaction by reading commands from standard input and executing them. It mentions common shell types like BASH, CSH, and KSH, and that shell scripts allow storing sequences of commands in a file to execute them instead of entering each command individually. The document provides basic information on writing, executing, and using variables and input/output redirection in shell scripts.
SSH is a secure network protocol that encrypts data in transit. It uses public-key cryptography to authenticate servers and establish encrypted connections. SSH clients connect to SSH servers to securely execute commands, transfer files, and access services over unsecured networks like the Internet. Common uses of SSH include secure remote login, file transfer, port forwarding, and tunneling other protocols through an encrypted SSH connection.
This document discusses key concepts about threads in Java, including: how threads are created by extending the Thread class or implementing the Runnable interface; how context switching occurs voluntarily or through preemption; how threads have priorities from 1 to 10; how synchronization is used with shared resources and monitors; and how threads communicate using notify() and wait() methods.
Pthreads are a standardized programming interface that allows programs written in C to take advantage of multiple processors or cores by using threads. The pthreads API defines routines for thread management, mutexes for synchronization, condition variables for communication between threads, and other synchronization primitives. A pthreads program creates threads using these routines and manages shared resources and thread communication to divide a problem across multiple threads and improve performance on multi-processor systems.
The document describes the thread states of various threads used in a typical web service. It includes threads for accepting connections, asynchronous processing, session dispatching, and JMX metrics. Most threads are in a WAITING state as they await work from other threads or resources to become available.
Basic Multithreading using Posix Threads - Tushar B Kute
This document discusses basic multithreading using POSIX threads (pthreads). It explains that a thread is an independent stream of instructions that can run simultaneously within a process and shares the process's resources. It describes how pthreads allow for multithreading in UNIX/Linux systems using functions and data types defined in the pthread.h header file. Key pthreads functions are also summarized, including pthread_create() to generate new threads, pthread_exit() for a thread to terminate, and pthread_join() for a thread to wait for another to finish.
The document discusses different threading models used in operating systems including many-to-one, one-to-one, and many-to-many. It also covers POSIX threads (pthreads) which provide a standard API for multithreaded programming, and describes functions for thread creation, management, synchronization and more.
Learning Java 3 – Threads and Synchronization - caswenson
This document provides an overview of threads and synchronization in Java. It discusses how to create threads by implementing Runnable or extending Thread, how to start and stop threads, and useful thread methods like sleep and yield. It explains the need for synchronization when multiple threads access shared resources and can step on each other. Synchronization is achieved using locks on objects, synchronized methods, and wait/notify calls on objects. More advanced synchronization mechanisms introduced in Java 5 like ReentrantLock and Condition variables are also briefly mentioned.
POSIX threads (pthreads) provide a standardized programming interface for threads across UNIX systems. Pthreads were created to address the inconsistent threading implementations across different hardware vendors. They allow for portable threaded applications and more efficient use of system resources compared to processes. Common threading models include pipeline, master-slave, and equal-worker. The kernel manages thread state and implementation details while providing data structures like ETHREAD, KTHREAD, and TEB to support threads.
The heavyweight "process model", historically used by Unix systems, including Linux, to split a large system into smaller, more tractable pieces, doesn't always lend itself to embedded environments owing to its substantial computational overhead. POSIX threads, also known as Pthreads, is a multithreading API that looks more like what embedded programmers are used to but runs in a Unix/Linux environment. This presentation introduces POSIX threads and shows how to use threads to create more efficient, more responsive programs.
- Java threads allow for multithreaded and parallel execution where different parts of a program can run simultaneously.
- There are two main ways to create a Java thread: extending the Thread class or implementing the Runnable interface.
- To start a thread, its start() method must be called rather than run(). Calling run() results in serial execution rather than parallel execution of threads.
- Synchronized methods use intrinsic locks on objects to allow only one thread to execute synchronized code at a time, preventing race conditions when accessing shared resources.
Overview of topics: Multithreading Models, Threading Issues, Pthreads, Solaris 2 Threads, Windows 2000 Threads, Linux Threads, and Java Threads.
User-level threads (ULTs) are managed by a user-level threads library and do not require a kernel context switch when switching between threads. However, if one ULT blocks, the entire process is blocked. Kernel-level threads (KLTs) are managed by the kernel, allowing true parallelism within a process on multiprocessors. A mode switch is required to switch KLTs but blocking a single KLT does not block the entire process. Threads provide benefits over processes like lower overhead for creation, termination, and context switching.
The document provides an overview of key Java concepts including classes, objects, variables, methods, encapsulation, inheritance, polymorphism, constructors, memory management, exceptions, I/O streams, threads, collections, serialization and more. It also includes examples of practical applications and code snippets to demonstrate various Java features.
Threads allow a process to divide work into multiple simultaneous tasks. On a single processor system, multithreading uses fast context switching to give the appearance of simultaneity, while on multi-processor systems the threads can truly run simultaneously. There are benefits to multithreading like improved responsiveness and resource sharing.
MULTI-THREADING in python appalication.pptxSaiDhanushM
Multithreading allows a processor to execute multiple threads concurrently. It is achieved through frequent context switching between threads, where the state of one thread is saved and another is loaded on an interrupt. This gives the appearance that all threads are running in parallel through multitasking. In Python, a thread is a sequence of instructions that can run independently within a process. Multiple threads can exist within a Python process and share global variables and code, but each has its own register set and local variables. The threading module is used to create and manage threads in Python.
The document discusses parallel program design and parallel programming techniques. It introduces parallel algorithm design based on four steps: partitioning, communication, agglomeration, and mapping. It also covers parallel programming tools including pthreads, OpenMP, and MPI. Common parallel constructs like private, shared, barrier, and reduction are explained. Examples of parallel programs using pthreads and OpenMP are provided.
This document provides an overview of programming shared address space platforms and parallel programming models. It discusses thread basics such as creation and termination in Pthreads. It also covers synchronization primitives like mutex locks and condition variables. Higher-level constructs like read-write locks and barriers are built using these basic synchronization methods. The document aims to explain key concepts for expressing concurrency and parallelism in shared memory systems.
Threads are lightweight processes that exist within a process and share most of the process resources like memory and file descriptors. Threads have their own independent flow of control and maintain their own stack pointer, registers, and scheduling properties. The pthreads API provides functions for creating and managing threads in UNIX/Linux environments. Pthreads allow creating threads that execute concurrently and share resources.
This document discusses multithreading and concurrency in .NET. It covers key concepts like processes and threads, and how they relate on an operating system level. It also discusses the Thread Pool, Task Parallel Library (TPL), Tasks, Parallel LINQ (PLINQ), and asynchronous programming patterns in .NET like async/await. Examples are provided for common threading techniques like producer/consumer and using the Timer class. Overall it serves as a comprehensive overview of multithreading and concurrency primitives available in the .NET framework.
The document discusses shared memory programming with Pthreads. It introduces threads and how they allow sharing data through a shared address space. It covers the Pthread API functions for creating and joining threads. It provides examples of using Pthreads for a simple "Hello World" program and for matrix-vector multiplication. It discusses synchronization issues like race conditions and uses mutex locks to prevent them. It also covers other synchronization techniques like semaphores and how they can be used for producer-consumer problems.
This document discusses multithreading and threading concepts in .NET. It defines threading as running multiple parts of a program concurrently using threads. Each thread defines a separate execution path. The .NET Framework supports both process-based and thread-based multitasking. Threads are represented by the Thread class and can be created and managed. Methods like Start(), Join(), and Sleep() are used to control thread execution. Thread priorities can be set to influence thread scheduling order.
Multithreaded fundamentals
The thread class and runnable interface
Creating a thread
Creating multiple threads
Determining when a thread ends
Thread priorities
Synchronization
Using synchronized methods
The synchronized statement
Thread communication using notify(), wait() and notifyall()
Suspending , resuming and stopping threads
Multithreading allows programs to execute multiple tasks simultaneously by breaking tasks into independent threads that can run in parallel. In Python, the threading and _thread modules provide tools for multithreading. Threads share common memory and context switching between threads is faster than processes. The global interpreter lock in Python means only one thread can execute at a time, limiting true parallelism.
Understanding Threads in operating systemHarrytoye2
Threads provide a way to improve application performance through parallelism. A thread is a flow of execution through the process code, with its own program counter, system registers and stack. Threads are lighter weight than processes, allowing for concurrent execution on a single-core system and parallelism on multi-core systems. Threads within the same process can share resources like files and memory, while processes operate independently. The pthread functions create, join, and attribute threads, while semaphores provide mutual exclusion and synchronization between threads.
Medical Image Processing Strategies for multi-core CPUsDaniel Blezek
This document discusses various strategies for parallelizing medical image processing tasks across multi-core CPUs. It begins with a poll asking about readers' computer hardware and parallel programming experience. It then covers degrees of parallelism from serial to large-scale parallelism. The document presents pragmatic approaches using C/C++ with "bolted on" parallel concepts. It provides a brief introduction to SIMD and focuses on SMP concepts using threads, concurrency, and parallel programming models like OpenMP, TBB, and ITK. It discusses example problems like thresholding images and common errors. It concludes with next steps in parallel computing.
This document summarizes an operating system lab presentation. It discusses the following key points:
- The presentation is about parallel processing, multithreading, concurrency, and distributed systems. Prerequisites include knowledge of operating systems and Java.
- Evaluation will include tests, projects, homework, classwork, and attendance.
- It covers Java threads including the Thread and Runnable interfaces, creating Runnable objects, starting threads, and getting/setting thread properties like name, priority, and daemon status.
- Synchronization techniques like Peterson's algorithm and semaphores are presented for solving the critical section problem in multithreaded programs. Sample code demonstrates implementing these techniques.
This third part of Linux internals talks about Thread programming and using various synchronization mechanisms like mutex and semaphores. These constructs helps users to write efficient programs in Linux environment
The document discusses processes and threads in operating systems. It describes how processes contain multiple threads that can run concurrently on multicore systems. Each thread has its own execution state and context stored in a thread control block. When a thread is not actively running, its context is saved so it can resume execution later. The document provides examples of how threads are implemented in Windows and Linux operating systems.
REs and Python: Regular Expressions, Sequence Characters in Regular Expressions, Quantifiers in Regular Expressions, Special Characters in Regular Expressions, Using Regular Expressions on Files, Retrieving Information from a HTML File
Threading : Concurrent Programming and GIL, Uses of Threads, Creating Threads in Python, Thread Class Methods, Single Tasking using a Thread, Multitasking using Multiple Threads, Thread Synchronization Deadlock of Threads, Avoiding Deadlocks in a Program, Communication between Threads, Thread Communication using notify() and wait() Methods, Thread Communication using a Queue, Daemon Threads
The document discusses asynchronous programming in C# and .NET. It covers several approaches for asynchronous programming including threads, tasks, and the async/await keywords. It provides examples of creating and running threads, tasks that return values, waiting for tasks to complete, and chaining multiple tasks. It also covers thread pools, foreground and background threads, thread priorities, thread safety, and using locks to synchronize access to shared resources.
This document provides an overview of threading concepts in .NET, including:
1) Threads allow concurrent execution within a process and each thread has its own call stack. The CLR uses thread pools to improve efficiency of asynchronous operations.
2) Thread synchronization is required when threads access shared resources to prevent race conditions and deadlocks. The .NET framework provides classes like Monitor, Lock, and Interlocked for thread synchronization.
3) Limiting threads improves performance on single-CPU systems due to reduced context switching overhead.
This document discusses multithreading, generators, and decorators in Python. It begins by explaining multithreading and how it allows performing multiple tasks simultaneously through processes and threads. It then provides details on threads in Python, including how to create and use threads. Next, it covers generators in Python by explaining iterators, generator functions, and generator objects. It provides examples of creating generators using both generator functions and generator expressions. Finally, it briefly introduces decorators in Python but does not provide any details.
This document provides an overview of parallel programming with OpenMP. It discusses how OpenMP allows users to incrementally parallelize serial C/C++ and Fortran programs by adding compiler directives and library functions. OpenMP is based on the fork-join model where all programs start as a single thread and additional threads are created for parallel regions. Core OpenMP elements include parallel regions, work-sharing constructs like #pragma omp for to parallelize loops, and clauses to control data scoping. The document provides examples of using OpenMP for tasks like matrix-vector multiplication and numerical integration. It also covers scheduling, handling race conditions, and other runtime functions.
Most important New features of Oracle 23c for DBAs and Developers. You can get more idea from my youtube channel video from https://youtu.be/XvL5WtaC20A
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Project Management: The Role of Project Dashboards.pdfKarya Keeper
Project management is a crucial aspect of any organization, ensuring that projects are completed efficiently and effectively. One of the key tools used in project management is the project dashboard, which provides a comprehensive view of project progress and performance. In this article, we will explore the role of project dashboards in project management, highlighting their key features and benefits.
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
Unveiling the Advantages of Agile Software Development.pdfbrainerhub1
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdfVALiNTRY360
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
UI5con 2024 - Bring Your Own Design SystemPeter Muessig
How do you combine the OpenUI5/SAPUI5 programming model with a design system that makes its controls available as Web Components? Since OpenUI5/SAPUI5 1.120, the framework supports the integration of any Web Components. This makes it possible, for example, to natively embed own Web Components of your design system which are created with Stencil. The integration embeds the Web Components in a way that they can be used naturally in XMLViews, like with standard UI5 controls, and can be bound with data binding. Learn how you can also make use of the Web Components base class in OpenUI5/SAPUI5 to also integrate your Web Components and get inspired by the solution to generate a custom UI5 library providing the Web Components control wrappers for the native ones.
WWDC 2024 Keynote Review: For CocoaCoders AustinPatrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device app controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
2. INTRODUCTION
• In the traditional Unix model, when a process
needs something performed by another
entity, it forks a child process and lets the child
perform the processing.
• Most network servers under Unix are written
this way, as we have seen in our concurrent
server examples: The parent accepts the
connection, forks a child, and the child
handles the client.
3. INTRODUCTION
• While this paradigm has served well for many
years, there are problems with fork:
Fork is expensive.
• Memory is copied from the parent to the child, all
descriptors are duplicated in the child, and so on.
• Current implementations use a technique called
copy-on-write, which avoids a copy of the
parent's data space to the child until the child
needs its own copy. But, regardless of this
optimization, fork is expensive.
4. INTRODUCTION
• IPC is required to pass information between
the parent and child after the fork.
• Passing information from the parent to the
child before the fork is easy, since the child
starts with a copy of the parent's data space
and with a copy of all the parent's descriptors.
But, returning information from the child to
the parent takes more work.
5. INTRODUCTION
Threads help with both problems.
• Threads are sometimes called lightweight
processes since a thread is "lighter weight" than a
process.
• That is, thread creation can be 10–100 times
faster than process creation.
• All threads within a process share the same
global memory. This makes the sharing of
information easy between the threads, but along
with this simplicity comes the problem of
synchronization.
6. INTRODUCTION
• More than just the global variables are shared.
All threads within a process share the
following:
– Process instructions
– Most data
– Open files (e.g., descriptors)
– Signal handlers and signal dispositions
– Current working directory
– User and group IDs
7. INTRODUCTION
But each thread has its own
– Thread ID
– Set of registers, including program counter and
stack pointer
– Stack (for local variables and return addresses)
– errno
– Signal mask
– Priority
• In this text, we cover POSIX threads, also
called Pthreads.
8. Basic Thread Functions: Creation and
Termination
• In this section, we will cover five basic thread
functions and then use these in the next two
sections to recode our TCP client/server using
threads instead of fork.
• pthread_create Function
• When a program is started by exec, a single
thread is created, called the initial thread or main
thread.
• Additional threads are created by
pthread_create.
9. Basic Thread Functions: Creation and
Termination
#include <pthread.h>
int pthread_create(pthread_t *tid, const pthread_attr_t *attr,
void *(*func) (void *), void *arg);
Returns: 0 if OK, positive Exxx value on error
• Each thread within a process is identified by a
thread ID, whose datatype is pthread_t (often an
unsigned int).
• On successful creation of a new thread, its ID is
returned through the pointer tid.
10. Basic Thread Functions: Creation and
Termination
• Each thread has numerous attributes: its
priority, its initial stack size, whether it should
be a daemon thread or not, and so on.
• When a thread is created, we can specify
these attributes by initializing a
pthread_attr_t variable that overrides the
default.
• We normally take the default, in which case,
we specify the attr argument as a null pointer.
11. Basic Thread Functions: Creation and
Termination
• Finally, when we create a thread, we specify a
function for it to execute.
• The thread starts by calling this function and
then terminates either explicitly (by calling
pthread_exit) or implicitly (by letting the
function return).
• The address of the function is specified as the
func argument, and this function is called with
a single pointer argument, arg.
12. Basic Thread Functions: Creation and
Termination
• If we need multiple arguments to the function,
we must package them into a structure and then
pass the address of this structure as the single
argument to the start function.
• Notice the declarations of func and arg. The
function takes one argument, a generic pointer
(void *), and returns a generic pointer (void *).
This lets us pass one pointer (to anything we
want) to the thread, and lets the thread return
one pointer (again, to anything we want).
13. Basic Thread Functions: Creation and
Termination
• The return value from the Pthread functions is
normally 0 if successful or nonzero on an
error.
• But unlike the socket functions, and most
system calls, which return –1 on an error and
set errno to a positive value, the Pthread
functions return the positive error indication
as the function's return value.
14. Basic Thread Functions: Creation and
Termination
• For example, if pthread_create cannot create a
new thread because of exceeding some system
limit on the number of threads, the function
return value is EAGAIN.
• The Pthread functions do not set errno. The
convention of 0 for success or nonzero for an
error is fine since all the Exxx values in
<sys/errno.h> are positive.
• A value of 0 is never assigned to one of the Exxx
names.
15. Basic Thread Functions: Creation and
Termination
pthread_join Function
• We can wait for a given thread to terminate by
calling pthread_join.
• Comparing threads to Unix processes,
pthread_create is similar to fork, and
pthread_join is similar to waitpid.
#include <pthread.h>
int pthread_join (pthread_t tid, void ** status);
Returns: 0 if OK, positive Exxx value on error
16. Basic Thread Functions: Creation and
Termination
• We must specify the tid of the thread that we
want to wait for.
• Unfortunately, there is no way to wait for any
of our threads (similar to waitpid with a
process ID argument of –1).
• If the status pointer is non-null, the return
value from the thread (a pointer to some
object) is stored in the location pointed to by
status.
17. Basic Thread Functions: Creation and
Termination
pthread_self Function
• Each thread has an ID that identifies it within
a given process.
• The thread ID is returned by pthread_create
and we saw it was used by pthread_join.
• A thread fetches this value for itself using
pthread_self.
18. Basic Thread Functions: Creation and
Termination
#include <pthread.h>
pthread_t pthread_self (void);
Returns: thread ID of calling thread
Comparing threads to Unix processes,
pthread_self is similar to getpid.
19. Basic Thread Functions: Creation and
Termination
pthread_detach Function
• A thread is either joinable (the default) or
detached.
• When a joinable thread terminates, its thread ID
and exit status are retained until another thread
calls pthread_join.
• But a detached thread is like a daemon process:
When it terminates, all its resources are released
and we cannot wait for it to terminate.
• If one thread needs to know when another
thread terminates, it is best to leave the thread as
joinable.
20. Basic Thread Functions: Creation and
Termination
• The pthread_detach function changes the
specified thread so that it is detached.
#include <pthread.h>
int pthread_detach (pthread_t tid);
Returns: 0 if OK, positive Exxx value on error
• This function is commonly called by the thread
that wants to detach itself, as in
• pthread_detach (pthread_self());
21. Basic Thread Functions: Creation and
Termination
pthread_exit Function
• One way for a thread to terminate is to call
pthread_exit.
#include <pthread.h>
void pthread_exit (void *status);
Does not return to caller
• If the thread is not detached, its thread ID and
exit status are retained for a later pthread_join by
some other thread in the calling process.
22. Basic Thread Functions: Creation and
Termination
• The pointer status must not point to an object
that is local to the calling thread since that
object disappears when the thread
terminates.
23. Basic Thread Functions: Creation and
Termination
• There are two other ways for a thread to
terminate:
– The function that started the thread (the third
argument to pthread_create) can return. Since
this function must be declared as returning a void
pointer, that return value is the exit status of the
thread.
– If the main function of the process returns or if
any thread calls exit, the process terminates,
including any threads.
24. str_cli Function Using Threads
• Our first example using threads is to recode
the str_cli function from Figure 16.10, which
uses fork, to use threads.
• Figure 26.1 shows the design of our threads
version.
26. str_cli Function Using Threads
void *copyto (void *);
static int sockfd; /* global for both threads to access */
static FILE *fp;
void str_cli(FILE *fp_arg, int sockfd_arg)
{
char recvline[MAXLINE];
pthread_t tid;
sockfd = sockfd_arg; /* copy arguments to externals */
fp = fp_arg;
27. str_cli Function Using Threads
Pthread_create(&tid, NULL, copyto, NULL);
while (Readline(sockfd, recvline, MAXLINE) > 0)
Fputs(recvline, stdout);
}
void * copyto(void *arg)
{
char sendline[MAXLINE];
28. str_cli Function Using Threads
while (Fgets(sendline, MAXLINE, fp) != NULL)
Writen(sockfd, sendline, strlen(sendline));
Shutdown(sockfd, SHUT_WR); /* EOF on stdin, send FIN */
return (NULL); /* return (i.e., thread terminates) when EOF on stdin */
}
29. str_cli Function Using Threads
Save arguments in externals
• 10–11 The thread that we are about to create
needs the values of the two arguments to str_cli:
fp, the standard I/O FILE pointer for the input file,
and sockfd, the TCP socket connected to the
server.
• For simplicity, we store these two values in
external variables.
• An alternative technique is to put the two values
into a structure and then pass a pointer to the
structure as the argument to the thread we are
about to create.
30. str_cli Function Using Threads
Create new thread
• 12:The thread is created and the new thread
ID is saved in tid.
• The function executed by the new thread is
copyto.
• No arguments are passed to the thread.
31. str_cli Function Using Threads
Main thread loop: copy socket to standard
output
• 13–14 The main thread calls readline and
fputs, copying from the socket to the standard
output.
Terminate
• 15 When the str_cli function returns, the main
function terminates by calling exit.
32. str_cli Function Using Threads
• When this happens, all threads in the process
are terminated.
• Normally, the copyto thread will have already
terminated by the time the client's main
function completes.
• But in the case where the server terminates
prematurely, calling exit when the client's
main function completes will terminate the
copyto thread, which is what we want.
33. str_cli Function Using Threads
copyto thread
• 16–25 This thread just copies from standard
input to the socket. When it reads an EOF on
standard input, a FIN is sent across the socket
by shutdown and the thread returns.
• The return from this function (which started
the thread) terminates the thread.
34. TCP Echo Server Using Threads
• We now redo our TCP echo server using one
thread per client instead of one child process
per client.
static void *doit(void *); /* each thread executes this function */
int main(int argc, char **argv)
{
int listenfd, connfd;
35. TCP Echo Server Using Threads
pthread_t tid;
socklen_t addrlen, len;
struct sockaddr *cliaddr;
if (argc == 2)
listenfd = Tcp_listen(NULL, argv[1], &addrlen);
else if (argc == 3)
listenfd = Tcp_listen(argv[1], argv[2], &addrlen);
else
err_quit("usage: tcpserv01 [ <host> ] <service or port>");
36. TCP Echo Server Using Threads
cliaddr = Malloc(addrlen);
for (; ; )
{
len = addrlen;
connfd = Accept(listenfd, cliaddr, &len);
Pthread_create(&tid, NULL, &doit, (void *) connfd);
}
}
37. TCP Echo Server Using Threads
static void * doit(void *arg)
{
Pthread_detach(pthread_self());
str_echo((int) arg); /* same function as before */
Close((int) arg); /* done with connected socket */
return (NULL);
}
38. TCP Echo Server Using Threads
Create thread
• 17–21 When accept returns, we call
pthread_create instead of fork. The single
argument that we pass to the doit function is the
connected socket descriptor, connfd.
Thread function
• 23–30 doit is the function executed by the
thread.
• The thread detaches itself since there is no
reason for the main thread to wait for each
thread it creates. The function str_echo does not
change from Figure 5.3.
39. TCP Echo Server Using Threads
• When this function returns, we must close the
connected socket since the thread shares all
descriptors with the main thread.
• With fork, the child did not need to close the
connected socket because when the child
terminated, all open descriptors were closed
on process termination.
40. TCP Echo Server Using Threads
• Also notice that the main thread does not close
the connected socket, which we always did with a
concurrent server that calls fork.
• This is because all threads within a process share
the descriptors, so if the main thread called close,
it would terminate the connection.
• Creating a new thread does not affect the
reference counts for open descriptors, which is
different from fork.
41. TCP Echo Server Using Threads
Passing Arguments to New Threads
• We mentioned that in Figure 26.3, we cast the
integer variable connfd to be a void pointer, but
this is not guaranteed to work on all systems.
• To handle this correctly requires additional work.
• First, notice that we cannot just pass the address
of connfd to the new thread.
• That is, the following does not work:
42. TCP Echo Server Using Threads
int main(int argc, char **argv)
{
int listenfd, connfd;
...
for ( ; ; )
{
len = addrlen;
connfd = Accept(listenfd, cliaddr, &len);
Pthread_create(&tid, NULL, &doit, &connfd);
}
}
43. TCP Echo Server Using Threads
static void * doit(void *arg)
{
int connfd;
connfd = *((int *) arg);
44. TCP Echo Server Using Threads
pthread_detach (pthread_self());
str_echo (connfd); /* same function as before */
Close (connfd); /* done with connected socket */
return (NULL);
}
45. TCP Echo Server Using Threads
• From an ANSI C perspective this is acceptable:
We are guaranteed that we can cast the
integer pointer to be a void * and then cast
this pointer back to an integer pointer.
• The problem is what this pointer points to.
• There is one integer variable, connfd in the
main thread, and each call to accept
overwrites this variable with a new value (the
connected descriptor).
46. TCP Echo Server Using Threads
The following scenario can occur:
• accept returns, connfd is stored into (say the new
descriptor is 5), and the main thread calls
pthread_create. The pointer to connfd (not its
contents) is the final argument to pthread_create.
• A thread is created and the doit function is scheduled
to start executing.
• Another connection is ready and the main thread runs
again (before the newly created thread). accept
returns, connfd is stored into (say the new descriptor is
now 6), and the main thread calls pthread_create.
47. TCP Echo Server Using Threads
• Even though two threads are created, both
will operate on the final value stored in
connfd, which we assume is 6.
• The problem is that multiple threads are
accessing a shared variable (the integer value
in connfd) with no synchronization.
48. TCP Echo Server Using Threads
• In Figure 26.3, we solved this problem by
passing the value of connfd to pthread_create
instead of a pointer to the value.
• Figure 26.4 TCP echo server using threads
with more portable argument passing.
49. TCP Echo Server Using Threads
static void *doit(void *); /* each thread executes this function */
int main(int argc, char **argv)
{
int listenfd, *iptr;
pthread_t tid;
socklen_t addrlen, len;
struct sockaddr *cliaddr;
50. TCP Echo Server Using Threads
if (argc == 2)
listenfd = Tcp_listen(NULL, argv[1], &addrlen);
else if (argc == 3)
listenfd = Tcp_listen(argv[1], argv[2], &addrlen);
else
err_quit("usage: tcpserv01 [ <host> ] <service or port>");
51. TCP Echo Server Using Threads
cliaddr = Malloc(addrlen);
for ( ; ; )
{
len = addrlen;
iptr = Malloc(sizeof(int));
*iptr = Accept(listenfd, cliaddr, &len);
Pthread_create(&tid, NULL, &doit, iptr);
}
}
52. TCP Echo Server Using Threads
static void *doit(void *arg)
{
    int connfd;

    connfd = *((int *) arg);
    free(arg);
    Pthread_detach(pthread_self());
    str_echo(connfd); /* same function as before */
    Close(connfd);    /* done with connected socket */
    return (NULL);
}
53. TCP Echo Server Using Threads
• 17–22 Each time we call accept, we first call
malloc and allocate space for an integer
variable, the connected descriptor.
• This gives each thread its own copy of the
connected descriptor.
• 28–29 The thread fetches the value of the
connected descriptor and then calls free to
release the memory.
54. TCP Echo Server Using Threads
• Historically, the malloc and free functions
have been non-re-entrant.
• That is, calling either function from a signal
handler while the main thread is in the middle
of one of these two functions has been a
recipe for disaster, because of static data
structures that are manipulated by these two
functions.
55. TCP Echo Server Using Threads
• POSIX requires that these two functions, along
with many others, be thread-safe.
• This is normally done by some form of
synchronization performed within the library
functions that is transparent to us.
56. TCP Echo Server Using Threads
Thread-Safe Functions
• POSIX.1 requires that all the functions defined
by POSIX.1 and by the ANSI C standard be
thread-safe, with the exceptions listed in
Figure 26.5.
58. TCP Echo Server Using Threads
• Unfortunately, POSIX says nothing about thread
safety with regard to the networking API
functions.
• The last five lines in this table are from Unix 98.
• We see from Figure 26.5 that the common
technique for making a function thread-safe is to
define a new function whose name ends in _r.
• Two of the functions are thread-safe only if the
caller allocates space for the result and passes
that pointer as the argument to the function.
59. Thread-Specific Data
• When converting existing functions to run in a
threads environment, a common problem
encountered is due to static variables.
• A function that keeps state in a private buffer,
or one that returns a result in the form of a
pointer to a static buffer, is not thread-safe
because multiple threads cannot use the
buffer to hold different things at the same
time.
60. Thread-Specific Data
• When faced with this problem, there are
various solutions:
• Use thread-specific data. This is nontrivial and
then converts the function into one that works
only on systems with threads support.
• The advantage to this approach is that the
calling sequence does not change and all the
changes go into the library function and not
the applications that call the function.
61. Thread-Specific Data
• Change the calling sequence so that the caller
packages all the arguments into a structure,
and also store in that structure the static
variables from Figure 3.18.
• Figure 26.6 shows the new structure and new
function prototypes.
• Figure 26.6 Data structure and function
prototype for re-entrant version of readline.
63. Thread-Specific Data
• These new functions can be used on threaded
and nonthreaded systems, but all applications
that call readline must change.
• Restructure the interface to avoid any static
variables so that the function is thread-safe.
• For the readline example, this would be the
equivalent of ignoring the speedups introduced in
Figure 3.18 and going back to the older version in
Figure 3.17.
• Since we said the older version was "painfully
slow," taking this option is not always viable.
64. Thread-Specific Data
• Thread-specific data is a common technique
for making an existing function thread-safe.
• Before describing the Pthread functions that
work with thread-specific data, we describe
the concept and a possible implementation,
because the functions appear more
complicated than they really are.
65. Thread-Specific Data
• Each system supports a limited number of
thread-specific data items.
• POSIX requires this limit be no less than 128
(per process), and we assume this limit in the
following example.
• The system (probably the threads library)
maintains one array of structures per process,
which we call key structures, as we show in
Figure 26.7.
67. Thread-Specific Data
• The flag in the Key structure indicates whether
this array element is currently in use, and all the
flags are initialized to be "not in use."
• When a thread calls pthread_key_create to
create a new thread-specific data item, the
system searches through its array of Key
structures and finds the first one not in use.
• Its index, 0 through 127, is called the key, and this
index is returned to the calling thread.
• We will talk about the "destructor pointer," the
other member of the Key structure, shortly.
68. Thread-Specific Data
• In addition to the process-wide array of Key
structures, the system maintains numerous
pieces of information about each thread
within a process.
• We call this a Pthread structure and part of
this information is a 128-element array of
pointers, which we call the pkey array. We
show this in Figure 26.8.
70. Thread-Specific Data
• All entries in the pkey array are initialized to
null pointers. These 128 pointers are the
"values" associated with each of the possible
128 "keys" in the process.
• When we create a key with
pthread_key_create, the system tells us its key
(index).
• Each thread can then store a value (pointer)
for the key, and each thread normally obtains
the pointer by calling malloc.
71. Thread-Specific Data
• We now go through an example of how
thread-specific data is used, assuming that our
readline function uses thread-specific data to
maintain the per-thread state across
successive calls to the function.
• Shortly we will show the code for this,
modifying our readline function to follow
these steps:
72. Thread-Specific Data
1.A process is started and multiple threads are
created
2.One of the threads will be the first to call readline,
and it in turn calls pthread_key_create. The
system finds the first unused Key structure in
Figure 26.7 and returns its index (0–127) to the
caller. We assume an index of 1 in this example.
• We will use the pthread_once function to
guarantee that pthread_key_create is called only
by the first thread to call readline.
73. Thread-Specific Data
3.readline calls pthread_getspecific to get the
pkey[1] value (the "pointer" in Figure 26.8 for
this key of 1) for this thread, and the returned
value is a null pointer.
Therefore, readline calls malloc to allocate the
memory that it needs to keep the per-thread
information across successive calls to readline
for this thread.
74. Thread-Specific Data
• readline initializes this memory as needed
and calls pthread_setspecific to set the
thread-specific data pointer (pkey[1]) for this
key to point to the memory it just allocated.
We show this in Figure 26.9, assuming that
the calling thread is thread 0 in the process.
76. Thread-Specific Data
• In this figure, we note that the Pthread structure
is maintained by the system (probably the thread
library), but the actual thread-specific data that
we malloc is maintained by our function
(readline, in this case).
• All that pthread_setspecific does is set the
pointer for this key in the Pthread structure to
point to our allocated memory.
• Similarly, all that pthread_getspecific does is
return that pointer to us.
77. Thread-Specific Data
4. Another thread, say thread n, calls readline,
perhaps while thread 0 is still executing within
readline.
readline calls pthread_once to initialize the key
for this thread-specific data item, but since it
has already been called, it is not called again.
78. Thread-Specific Data
5.readline calls pthread_getspecific to fetch the
pkey [1] pointer for this thread, and a null
pointer is returned. This thread then calls
malloc, followed by pthread_setspecific, just
like thread 0, initializing its thread-specific
data for this key (1).
We show this in Figure 26.10.
80. Thread-Specific Data
6.Thread n continues executing in readline, using
and modifying its own thread-specific data.
• One item we have not addressed is: What
happens when a thread terminates? If the
thread has called our readline function, that
function has allocated a region of memory
that needs to be freed.
• This is where the "destructor pointer" from
Figure 26.7 is used.
81. Thread-Specific Data
• When the thread that creates the thread-
specific data item calls pthread_key_create,
one argument to this function is a pointer to a
destructor function.
• When a thread terminates, the system goes
through that thread's pkey array, calling the
corresponding destructor function for each
non-null pkey pointer.
82. Thread-Specific Data
• What we mean by "corresponding destructor"
is the function pointer stored in the Key array
in Figure 26.7.
• This is how the thread-specific data is freed
when a thread terminates.
• The first two functions that are normally
called when dealing with thread-specific data
are pthread_once and pthread_key_create.
83. Thread-Specific Data
#include <pthread.h>
int pthread_once(pthread_once_t *onceptr,
void (*init) (void));
int pthread_key_create(pthread_key_t *keyptr,
void (*destructor) (void *value));
Both return: 0 if OK, positive Exxx value on error
84. Thread-Specific Data
• pthread_once is normally called every time a
function that uses thread-specific data is
called, but pthread_once uses the value in the
variable pointed to by onceptr to guarantee
that the init function is called only one time
per process.
85. Thread-Specific Data
• pthread_key_create must be called only one
time for a given key within a process.
• The key is returned through the keyptr
pointer, and the destructor function, if the
argument is a non-null pointer, will be called
by each thread on termination if that thread
has stored a value for this key.
86. Thread-Specific Data
• Typical usage of these two functions (ignoring
error returns) is as follows:
pthread_key_t rl_key;
pthread_once_t rl_once = PTHREAD_ONCE_INIT;
void readline_destructor(void *ptr)
{
free(ptr);
}
88. Thread-Specific Data
    if ( (ptr = pthread_getspecific(rl_key)) == NULL)
    {
        ptr = Malloc( ... );
        pthread_setspecific(rl_key, ptr);
        /* initialize memory pointed to by ptr */
    }
    ...
    /* use values pointed to by ptr */
}
89. Thread-Specific Data
• Every time readline is called, it calls
pthread_once.
• This function uses the value pointed to by its
onceptr argument (the contents of the variable
rl_once) to make certain that its init function is
called only one time.
• This initialization function, readline_once, creates
the thread-specific data key that is stored in
rl_key, and which readline then uses in calls to
pthread_getspecific and pthread_setspecific.
90. Thread-Specific Data
• The pthread_getspecific and
pthread_setspecific functions are used to
fetch and store the value associated with a
key.
• This value is what we called the "pointer" in
Figure 26.8.
• What this pointer points to is up to the
application, but normally, it points to
dynamically allocated memory.
91. Thread-Specific Data
#include <pthread.h>
void *pthread_getspecific(pthread_key_t key);
Returns: pointer to thread-specific data (possibly
a null pointer)
int pthread_setspecific(pthread_key_t key, const
void *value);
Returns: 0 if OK, positive Exxx value on error
92. Thread-Specific Data
• Notice that the argument to
pthread_key_create is a pointer to the key
(because this function stores the value
assigned to the key), while the arguments to
the get and set functions are the key itself
(probably a small integer index as discussed
earlier).
93. Thread-Specific Data
Example: readline Function Using Thread-
Specific Data
• We now show a complete example of thread-
specific data by converting the optimized
version of our readline function from Figure
3.18 to be thread-safe, without changing the
calling sequence.
94. Thread-Specific Data
• Figure 26.11 shows the first part of the
function: the pthread_key_t variable, the
pthread_once_t variable, the
readline_destructor function, the
readline_once function, and our Rline
structure that contains all the information we
must maintain on a per-thread basis.
95. Thread-Specific Data
Figure 26.11 First part of thread-safe readline
function.
static pthread_key_t rl_key;
static pthread_once_t rl_once =
PTHREAD_ONCE_INIT;
static void readline_destructor(void *ptr)
{
free(ptr);
}
96. Thread-Specific Data
static void readline_once(void)
{
    Pthread_key_create(&rl_key, readline_destructor);
}
typedef struct {
    int rl_cnt; /* initialize to 0 */
98. Thread-Specific Data
Destructor
4–8 Our destructor function just frees the
memory that was allocated for this thread.
One-time function
9–13 We will see that our one-time function is
called one time by pthread_once, and it just
creates the key that is used by readline.
99. Thread-Specific Data
Rline structure
14–18 Our Rline structure contains the three
variables that caused the problem by being
declared static in Figure 3.18.
One of these structures will be dynamically
allocated per thread and then released by our
destructor function.
• Figure 26.12 shows the actual readline function,
plus the function my_read it calls. This figure is a
modification of Figure 3.18.
100. Thread-Specific Data
my_read function
19–35 The first argument to the function is now a
pointer to the Rline structure that was allocated
for this thread (the actual thread-specific data).
Allocate thread-specific data
42 We first call pthread_once so that the first
thread that calls readline in this process calls
readline_once to create the thread-specific data
key.
101. Thread-Specific Data
• Fetch thread-specific data pointer
• 43–46 pthread_getspecific returns the pointer
to the Rline structure for this thread.
• But if this is the first time this thread has
called readline, the return value is a null
pointer.
• In this case, we allocate space for an Rline
structure and the rl_cnt member is initialized
to 0 by calloc.
102. Thread-Specific Data
• We then store the pointer for this thread by
calling pthread_setspecific.
• The next time this thread calls readline,
pthread_getspecific will return this pointer
that was just stored.
103. Thread-Specific Data
Figure 26.12 Second part of thread-safe readline
function.
static ssize_t my_read(Rline *tsd, int fd, char *ptr)
{
    if (tsd->rl_cnt <= 0)
    {
again:
        if ( (tsd->rl_cnt = read(fd, tsd->rl_buf, MAXLINE)) < 0)
        {
            if (errno == EINTR)
                goto again;
110. Nonblocking connect: Web Client
• A real-world example of nonblocking connects
started with the Netscape Web client.
• The client establishes an HTTP connection
with a Web server and fetches a home page.
• Often, that page will have numerous
references to other Web pages. Instead of
fetching these other pages serially, one at a
time, the client can fetch more than one at
the same time using nonblocking connects.
111. Nonblocking connect: Web Client
• Figure 16.12 shows an example of establishing
multiple connections in parallel.
• The leftmost scenario shows all three
connections performed serially.
• We assume that the first connection takes 10
units of time, the second 15, and the third 4,
for a total of 29 units of time.
113. Nonblocking connect: Web Client
• In the middle scenario, we perform two
connections in parallel.
• At time 0, the first two connections are
started, and when the first of these finishes,
we start the third.
• The total time is almost halved, from 29 to 15,
but realize that this is the ideal case.
114. Nonblocking connect: Web Client
• If the parallel connections are sharing a
common link, each can compete against each
other for the limited resources and all the
individual connection times might get longer.
• For example, the time of 10 might be 15, the
time of 15 might be 20, and the time of 4
might be 6.
• Nevertheless, the total time would be 21, still
shorter than the serial scenario.
115. Nonblocking connect: Web Client
• In the third scenario, we perform three
connections in parallel, and we again assume
there is no interference between the three
connections (the ideal case).
• But, the total time is the same (15 units) as
the second scenario given the example times
that we choose.
116. Nonblocking connect: Web Client
• When dealing with Web clients, the first
connection is done by itself, followed by
multiple connections for the references found
in the data from that first connection.
• We show this in Figure 16.13.
118. Nonblocking connect: Web Client
• To further optimize this sequence, the client can
start parsing the data that is returned for the first
connection before the first connection completes
and initiate additional connections as soon as it
knows that additional connections are needed.
• Program will read up to 20 files from a Web
server.
• We specify as command-line arguments the
maximum number of parallel connections, the
server's hostname, and each of the filenames to
fetch from the server.
119. Nonblocking connect: Web Client
solaris % web 3 www.foobar.com / image1.gif image2.gif image3.gif \
    image4.gif image5.gif image6.gif image7.gif
• The command-line arguments specify three
simultaneous connections, the server's
hostname, the filename for the home page (/,
the server's root page), and seven files to then
read (which in this example are all GIF
images).
120. Nonblocking connect: Web Client
• These seven files would normally be
referenced on the home page, and a Web
client would read the home page and parse
the HTML to obtain these filenames.
121. Web Client and Simultaneous
Connections (Continued)
• We now revisit the Web client example from
Section 16.5 and recode it using threads
instead of nonblocking connects.
• With threads, we can leave the sockets in their
default blocking mode and create one thread
per connection.
• Each thread can block in its call to connect, as
the kernel will just run some other thread that
is ready.
122. Web Client and Simultaneous
Connections (Continued)
• Figure 26.13 shows the first part of the
program, the globals, and the start of the
main function.
Globals
• 1–16 #include <thread.h>, in addition to the
normal <pthread.h>, because we need to use
Solaris threads in addition to Pthreads.
123. Web Client and Simultaneous
Connections (Continued)
• 10 We have added one member to the file
structure: f_tid, the thread ID.
• The remainder of this code is similar to Figure
16.15.
• With this threads version, we do not use
select and therefore do not need any
descriptor sets or the variable maxfd.
• 36 The home_page function that is called is
unchanged from Figure 16.16.
129. Web Client and Simultaneous
Connections (Continued)
If possible, create another thread
• 40–52 If we are allowed to create another
thread (nconn is less than maxnconn), we do
so. The function that each new thread
executes is do_get_read and the argument is
the pointer to the file structure.
130. Web Client and Simultaneous
Connections (Continued)
Wait for any thread to terminate
• 53–54 We call the Solaris thread function
thr_join with a first argument of 0 to wait for
any one of our threads to terminate.
Unfortunately, Pthreads does not provide a
way to wait for any one of our threads to
terminate; the pthread_join function makes us
specify exactly which thread we want to wait
for.
131. Web Client and Simultaneous
Connections (Continued)
Wait for any thread to terminate
• We will see in Section 26.9 that the Pthreads
solution for this problem is more complicated,
requiring us to use a condition variable for the
terminating thread to notify the main thread
when it is done.
132. Web Client and Simultaneous
Connections (Continued)
• Figure 26.15 shows the do_get_read function,
which is executed by each thread. This
function establishes the TCP connection,
sends an HTTP GET command to the server,
and reads the server's reply.
135. Web Client and Simultaneous
Connections (Continued)
Create TCP socket, establish connection
• 68–71 A TCP socket is created and a
connection is established by our tcp_connect
function. The socket is a normal blocking
socket, so the thread will block in the call to
connect until the connection is established.
136. Web Client and Simultaneous
Connections (Continued)
Write request to server
• 72 write_get_cmd builds the HTTP GET
command and sends it to the server. We do
not show this function again as the only
difference from Figure 16.18 is that the
threads version does not call FD_SET and does
not use maxfd.
137. Web Client and Simultaneous
Connections (Continued)
Read server's reply
• 73–82 The server's reply is then read. When
the connection is closed by the server, the
F_DONE flag is set and the function returns,
terminating the thread.
138. Mutexes: Mutual Exclusion
Figure 26.17 Two threads that increment a
global variable incorrectly.
#include "unpthread.h"
#define NLOOP 5000
int counter; /* incremented by threads */
void *doit(void *);
int main(int argc, char **argv)
{
pthread_t tidA, tidB;
139. Mutexes: Mutual Exclusion
Pthread_create(&tidA, NULL, &doit, NULL);
Pthread_create(&tidB, NULL, &doit, NULL);
/* wait for both threads to terminate */
Pthread_join(tidA, NULL);
Pthread_join(tidB, NULL);
exit(0);
}
140. Mutexes: Mutual Exclusion
void * doit(void *vptr)
{
int i, val;
/* Each thread fetches, prints, and increments the counter NLOOP
 * times. The value of the counter should increase monotonically. */
141. Mutexes: Mutual Exclusion
for (i = 0; i < NLOOP; i++)
{
    val = counter;
    printf("%ld: %d\n", (long) pthread_self(), val + 1);
    counter = val + 1;
}
return (NULL);
}
142. Mutexes: Mutual Exclusion
• Notice the error the first time the system
switches from thread 4 to thread 5: the value
518 is stored by both threads.
Figure 26.16 Output from program in Figure
26.17.
144. Mutexes: Mutual Exclusion
• The problem we just discussed, multiple
threads updating a shared variable, is the
simplest problem.
• The solution is to protect the shared variable
with a mutex (which stands for "mutual
exclusion") and access the variable only when
we hold the mutex.
• In terms of Pthreads, a mutex is a variable of
type pthread_mutex_t.
145. Mutexes: Mutual Exclusion
• We lock and unlock a mutex using the
following two functions:
#include <pthread.h>
int pthread_mutex_lock(pthread_mutex_t *
mptr);
int pthread_mutex_unlock(pthread_mutex_t *
mptr);
Both return: 0 if OK, positive Exxx value on error
146. Mutexes: Mutual Exclusion
Figure 26.18 Corrected version of Figure 26.17
using a mutex to protect the shared variable.
#include "unpthread.h"
#define NLOOP 5000
int counter; /* incremented by threads */
pthread_mutex_t counter_mutex =
PTHREAD_MUTEX_INITIALIZER;
void *doit(void *);
147. Mutexes: Mutual Exclusion
int main(int argc, char **argv)
{
pthread_t tidA, tidB;
Pthread_create(&tidA, NULL, &doit, NULL);
Pthread_create(&tidB, NULL, &doit, NULL);
/* wait for both threads to terminate */
Pthread_join(tidA, NULL);
Pthread_join(tidB, NULL);
148. Mutexes: Mutual Exclusion
exit(0);
}
void * doit(void *vptr)
{
int i, val;
/* Each thread fetches, prints, and increments
the counter NLOOP times. The value of the
counter should increase monotonically. */
149. Mutexes: Mutual Exclusion
for (i = 0; i < NLOOP; i++)
{
    Pthread_mutex_lock(&counter_mutex);
    val = counter;
    printf("%ld: %d\n", (long) pthread_self(), val + 1);
    counter = val + 1;
    Pthread_mutex_unlock(&counter_mutex);
}
return (NULL);
}
150. Condition Variables
• A mutex is fine to prevent simultaneous access to
a shared variable, but we need something else to
let us go to sleep waiting for some condition to
occur.
• Let's demonstrate this with an example. We
return to our Web client in Section 26.6 and
replace the Solaris thr_join with pthread_join.
• But, we cannot call the Pthread function until we
know that a thread has terminated.
• We first declare a global variable that counts the
number of terminated threads and protect it with
a mutex.
151. Condition Variables
int ndone; /* number of terminated threads */
pthread_mutex_t ndone_mutex = PTHREAD_MUTEX_INITIALIZER;
• We then require that each thread increment
this counter when it terminates, being careful
to use the associated mutex.
153. Condition Variables
• This is fine, but how do we code the main
loop? It needs to lock the mutex continually
and check if any threads have terminated.
while (nlefttoread > 0)
{
while (nconn < maxnconn && nlefttoconn > 0)
{ /* find a file to read */ ... } /* See if one of
the threads is done */
Pthread_mutex_lock(&ndone_mutex);
154. Condition Variables
if (ndone > 0)
{
for (i = 0; i < nfiles; i++)
{
if (file[i].f_flags & F_DONE)
{
Pthread_join(file[i].f_tid, (void **) &fptr); /*
update file[i] for terminated thread */
...
}
156. Condition Variables
• While this is okay, it means the main loop
never goes to sleep; it just loops, checking
ndone every time around the loop.
• This is called polling and is considered a waste
of CPU time.
• We want a method for the main loop to go to
sleep until one of its threads notifies it that
something is ready.
157. Condition Variables
• A condition variable, in conjunction with a
mutex, provides this facility.
• The mutex provides mutual exclusion and the
condition variable provides a signaling
mechanism.
• In terms of Pthreads, a condition variable is a
variable of type pthread_cond_t.
158. Condition Variables
• They are used with the following two
functions:
#include <pthread.h>
int pthread_cond_wait(pthread_cond_t *cptr,
pthread_mutex_t *mptr);
int pthread_cond_signal(pthread_cond_t *cptr);
Both return: 0 if OK, positive Exxx value on error
159. Condition Variables
• A thread notifies the main loop that it is
terminating by incrementing the counter
while its mutex lock is held and by signaling
the condition variable.
Pthread_mutex_lock(&ndone_mutex);
ndone++;
Pthread_cond_signal(&ndone_cond);
Pthread_mutex_unlock(&ndone_mutex);
160. Condition Variables
• The main loop then blocks in a call to
pthread_cond_wait, waiting to be signaled by
a terminating thread.
161. Condition Variables
while (nlefttoread > 0)
{
while (nconn < maxnconn && nlefttoconn > 0)
{
/* find file to read */
...
} /* Wait for thread to terminate */
Pthread_mutex_lock(&ndone_mutex);
while (ndone == 0)
Pthread_cond_wait (&ndone_cond,
&ndone_mutex);
162. Condition Variables
for (i = 0; i < nfiles; i++)
{
if (file[i].f_flags & F_DONE)
{
Pthread_join(file[i].f_tid, (void **) &fptr);
/* update file[i] for terminated thread */
...
}
}
Pthread_mutex_unlock (&ndone_mutex); }
163. Condition Variables
• Notice that the variable ndone is still checked
only while the mutex is held.
• Then, if there is nothing to do,
pthread_cond_wait is called.
• This puts the calling thread to sleep and
releases the mutex lock it holds.
• Furthermore, when the thread returns from
pthread_cond_wait (after some other thread
has signaled it), the thread again holds the
mutex.
164. Condition Variables
• There are instances when a thread knows that
multiple threads should be awakened, in which
case, pthread_cond_broadcast will wake up all
threads that are blocked on the condition
variable.
• pthread_cond_timedwait lets a thread place a
limit on how long it will block. abstime is a
timespec structure that specifies the absolute
system time when the function must return,
even if the condition variable has not been
signaled yet. If this timeout occurs,
ETIMEDOUT is returned.
165. Condition Variables
#include <pthread.h>
int pthread_cond_broadcast (pthread_cond_t *
cptr);
int pthread_cond_timedwait (pthread_cond_t *
cptr, pthread_mutex_t *mptr, const struct
timespec *abstime);
Both return: 0 if OK, positive Exxx value on error
166. Web Client and Simultaneous
Connections (Continued)
• Figure 26.19 Main processing loop of main
function.
while (nlefttoread > 0)
{
while (nconn < maxnconn && nlefttoconn > 0)
{
/* find a file to read */
167. Web Client and Simultaneous
Connections (Continued)
for (i = 0; i < nfiles; i++)
if (file[i].f_flags == 0)
break;
if (i == nfiles)
err_quit("nlefttoconn = %d but nothing
found", nlefttoconn);
168. Web Client and Simultaneous
Connections (Continued)
file[i].f_flags = F_CONNECTING;
Pthread_create(&tid, NULL, &do_get_read,
&file[i]);
file[i].f_tid = tid;
nconn++;
nlefttoconn--;
}
169. Web Client and Simultaneous
Connections (Continued)
/* Wait for thread to terminate */
Pthread_mutex_lock(&ndone_mutex);
while (ndone == 0)
Pthread_cond_wait(&ndone_cond,
&ndone_mutex);
for (i = 0; i < nfiles; i++)
{
170. Web Client and Simultaneous
Connections (Continued)
if (file[i].f_flags & F_DONE)
{
Pthread_join(file[i].f_tid, (void **) &fptr);
if (&file[i] != fptr)
err_quit("file[i]!= fptr");
fptr->f_flags = F_JOINED; /* clears F_DONE */
ndone--;
nconn--;
171. Web Client and Simultaneous
Connections (Continued)
nlefttoread--;
printf("thread %d for %s done\n", fptr->f_tid, fptr->f_name);
}
}
Pthread_mutex_unlock(&ndone_mutex);
}
exit(0);
}
172. Web Client and Simultaneous
Connections (Continued)
If possible, create another thread
• 44–56 This code has not changed.
Wait for thread to terminate
• 57–60 To wait for one of the threads to
terminate, we wait for ndone to be nonzero.
As discussed in Section 26.8, the test must be
done while the mutex is locked.
• The sleep is performed by
pthread_cond_wait.
173. Web Client and Simultaneous
Connections (Continued)
Handle terminated thread
• 61–73 When a thread has terminated, we go
through all the file structures to find the
appropriate thread, call pthread_join, and
then set the new F_JOINED flag.