Azerbaijan State Oil
and Industry University
Subject: Parallel Computing
Teacher: Samir Quliyev
Group: 606.20E
Student: Shams Rasulzade
Synchronizing
Concurrent
Operations in C++
With threading in C++ there are mainly two tasks for the developer:
1. Protecting shared data from concurrent access
2. Synchronizing concurrent operations
Protecting shared data is achieved by mutual exclusion: by preventing multiple
threads from accessing the same data at the same time, race conditions are avoided. Two
concurrent read operations are safe, but two write operations, or one write and one read,
lead to race conditions.
This presentation is about the second issue: synchronizing concurrent operations in
C++, which means coordinating the order of events between multiple threads.
Synchronizing concurrent operations in C++ is needed when
one thread must wait for another thread to complete a
task before it can continue its own. For example, one
thread indexes data in the background and a second
thread needs to use the indexed data. The first thread
may not be finished indexing yet, so the second thread
must wait for the result of the first thread.
Most synchronization problems can be reduced to the
producer-consumer pattern: a consumer thread waits for a condition
to become true, and a producer thread sets that condition.
How can you wait on a condition in C++? And how can
you pass the results of tasks between different threads?
We'll talk about two different methods:
1. Periodically checking the condition (the naive, worst approach)
2. Condition variables
Suppose you’re traveling on an overnight train. One way to ensure you get off at the right station
would be to stay awake all night and pay attention to where the train stops. You wouldn’t miss your
station, but you’d be tired when you got there. Alternatively, you could look at the timetable to see
when the train is supposed to arrive, set your alarm a bit before, and go to sleep. That’d be OK; you
wouldn’t miss your stop, but if the train got delayed, you’d wake up too early. There is also a
possibility that your alarm clock’s batteries might die, and you oversleep and miss your station. What
would be ideal is if you could go to sleep and have somebody or something wake you up when the
train gets to your station, whenever that happens.
How does that relate to threads? Well, if one thread is waiting for a second thread to complete a task,
it has several options. First, it could keep checking a flag in shared data (protected by a mutex) and
have the second thread set the flag when it completes the task. This is wasteful on two counts: the
thread consumes valuable processing time by repeatedly checking the flag, and when the mutex is
locked by the waiting thread, it can’t be locked by any other thread. Both work against the thread
doing the waiting, because they limit the resources available to the thread being waited for, and may even
prevent it from setting the flag when it's done. This is akin to staying awake all night talking to the
train driver: he has to drive the train more slowly because you keep distracting him, and it takes
longer to get there. Similarly, the waiting thread is consuming resources that could be used by other
threads in the system and may end up waiting longer than necessary.
A second option is to have the waiting thread sleep for small periods between the checks using the
std::this_thread::sleep_for() function:
bool flag = false;
std::mutex m;

void wait_for_flag()
{
    std::unique_lock<std::mutex> lk(m);
    while(!flag)
    {
        lk.unlock();  ❶
        std::this_thread::sleep_for(std::chrono::milliseconds(100));  ❷
        lk.lock();  ❸
    }
}
❶ Unlock the mutex
❷ Sleep for 100 ms
❸ Relock the mutex
In the loop, the function unlocks the mutex ❶ before the sleep ❷ and locks it
again afterward ❸ , and another thread gets a chance to acquire it and set the
flag.
This is an improvement, because the thread doesn’t waste processing time while
it’s sleeping, but it’s hard to get the sleep period right. Too short a sleep between
checks causes the thread to waste processing time checking; too long a sleep and
the thread keeps on sleeping even when the task it’s waiting for is complete,
introducing a delay. It’s rare that this oversleeping has a direct impact on the
operation of the program, but it could mean dropped frames in a fast-paced game
or overrunning a time slice in a real-time application.
The third, and preferred, option is to use the facilities from the C++ Standard
Library to wait for the event itself. The most basic mechanism for waiting for an
event to be triggered by another thread (such as the presence of additional work in
the pipeline mentioned previously) is the condition variable. Conceptually, a
condition variable is associated with some event or other condition, and one or
more threads can wait for that condition to be satisfied. When a thread determines
that the condition is satisfied, it notifies one or more of the threads waiting on
the condition variable to wake them up and allow them to continue processing.
The Standard C++ Library provides not one, but two implementations of a condition variable:
std::condition_variable and std::condition_variable_any. Both are declared in the <condition_variable>
library header. In both cases, they need to work with a mutex in order to provide appropriate
synchronization; the former is limited to working with std::mutex, whereas the latter can work with
anything that meets some minimal criteria for being mutex-like, hence the _any suffix. Because
std::condition_variable_any is more general, there's the potential for additional costs in terms of size,
performance, or operating system resources, so std::condition_variable should be preferred unless the
additional flexibility is required.
How do you use a std::condition_variable to handle the example in the introduction—how do you let the thread
waiting for work sleep until there’s data to process? The following listing shows one way you could do this with a
condition variable.
Listing 1 Waiting for data to process with a std::condition_variable
std::mutex mut;
std::queue<data_chunk> data_queue;  ❶
std::condition_variable data_cond;

void data_preparation_thread()
{
    while(more_data_to_prepare())
    {
        data_chunk const data=prepare_data();
        std::lock_guard<std::mutex> lk(mut);
        data_queue.push(data);  ❷
        data_cond.notify_one();  ❸
    }
}

void data_processing_thread()
{
    while(true)
    {
        std::unique_lock<std::mutex> lk(mut);  ❹
        data_cond.wait(
            lk,[]{return !data_queue.empty();});  ❺
        data_chunk data=data_queue.front();
        data_queue.pop();
        lk.unlock();  ❻
        process(data);
        if(is_last_chunk(data))
            break;
    }
}
First off, you have a queue ❶ , which is used to pass the data between the two threads. When the data is ready, the
thread preparing the data locks the mutex protecting the queue using a std::lock_guard and pushes the data onto the
queue ❷ . It then calls the notify_one() member function on the std::condition_variable instance to notify the
waiting thread (if one exists) in ❸ .
On the other side of the fence, you have the processing thread. This thread first locks the mutex, but this time with a
std::unique_lock rather than a std::lock_guard ❹ (you'll see why in a minute). The thread then calls wait() on the
std::condition_variable, passing in the lock object and a lambda function that expresses the condition being waited
for in ❺ . Lambda functions are a new feature in C++11 that allow you to write an anonymous function as part of
another expression, and they're ideally suited for specifying predicates for standard library functions such as wait(). In
this case, the simple lambda function []{return !data_queue.empty();} checks whether data_queue is non-empty, that is,
whether there is some data in the queue ready for processing.
The implementation of wait() checks the condition (by calling the supplied lambda function) and returns if it's satisfied
(the lambda function returns true). If the condition isn't satisfied (the lambda function returns false), wait() unlocks
the mutex and puts the thread in a blocked or waiting state. When the condition variable is notified by a call to
notify_one() from the data-preparation thread, the thread wakes from its slumber (unblocks), reacquires the lock on
the mutex, and checks the condition again, returning from wait() with the mutex still locked if the condition is
satisfied. If the condition isn't satisfied, the thread unlocks the mutex and resumes waiting. This is why you need the
std::unique_lock rather than the std::lock_guard: the waiting thread must unlock the mutex while it's waiting and lock
it again afterward, and std::lock_guard doesn't provide that flexibility. If the mutex remained locked while the thread
was sleeping, the data-preparation thread wouldn't be able to lock the mutex to add an item to the queue, and the
waiting thread would never see its condition satisfied.
Listing 1 uses a simple lambda function for the wait in ❺ , which checks to see if the queue isn’t empty, but any
function or callable object could be passed. If you already have a function to check the condition (perhaps because it’s
more complicated than a simple test like this), then this function can be passed directly; there’s no need to wrap it in
a lambda. During a call to wait(), a condition variable may check the supplied condition any number of times; but it
always does this with the mutex locked and returns immediately if (and only if) the function provided to test the
condition returns true. When the waiting thread reacquires the mutex and checks the condition, if it isn’t in direct
response to a notification from another thread, it’s called a spurious wake. Because the number and frequency of any
such spurious wakes are indeterminate, it isn’t advisable to use a function with side effects for the condition check. If
you do, you must be prepared for the side effects to occur multiple times.
The flexibility to unlock a std::unique_lock isn’t only used for the call to wait(); it’s also used once you have the data
to process but before processing it ❻ . Processing data can potentially be a time-consuming operation, and it’s a
bad idea to hold a lock on a mutex for longer than necessary.
Using a queue to transfer data between threads, as in listing 1, is a common scenario. Done well, the synchronization
can be limited to the queue itself, which greatly reduces the possible number of synchronization problems and race
conditions. In view of this, let’s now work on extracting a generic thread-safe queue from listing 1.
RESOURCES
https://chrizog.com/cpp-thread-synchronization
https://freecontent.manning.com/synchronizing-concurrent-operations-in-c/
THANKS!

More Related Content

What's hot

computer Graphics
computer Graphics computer Graphics
computer Graphics Rozi khan
 
Stacks, Queues, Binary Search Trees - Lecture 1 - Advanced Data Structures
Stacks, Queues, Binary Search Trees -  Lecture 1 - Advanced Data StructuresStacks, Queues, Binary Search Trees -  Lecture 1 - Advanced Data Structures
Stacks, Queues, Binary Search Trees - Lecture 1 - Advanced Data StructuresAmrinder Arora
 
Solution of N Queens Problem genetic algorithm
Solution of N Queens Problem genetic algorithm  Solution of N Queens Problem genetic algorithm
Solution of N Queens Problem genetic algorithm MohammedAlKazal
 
Output primitives computer graphics c version
Output primitives   computer graphics c versionOutput primitives   computer graphics c version
Output primitives computer graphics c versionMarwa Al-Rikaby
 
Volumes of solids of revolution dfs
Volumes of solids of revolution dfsVolumes of solids of revolution dfs
Volumes of solids of revolution dfsFarhana Shaheen
 
Lecture 2d point,curve,text,line clipping
Lecture   2d point,curve,text,line clippingLecture   2d point,curve,text,line clipping
Lecture 2d point,curve,text,line clippingavelraj
 
All pairs shortest path algorithm
All pairs shortest path algorithmAll pairs shortest path algorithm
All pairs shortest path algorithmSrikrishnan Suresh
 
Hidden surface removal
Hidden surface removalHidden surface removal
Hidden surface removalAnkit Garg
 
Advanced procedures in assembly language Full chapter ppt
Advanced procedures in assembly language Full chapter pptAdvanced procedures in assembly language Full chapter ppt
Advanced procedures in assembly language Full chapter pptMuhammad Sikandar Mustafa
 
Functional Programming 101 with Scala and ZIO @FunctionalWorld
Functional Programming 101 with Scala and ZIO @FunctionalWorldFunctional Programming 101 with Scala and ZIO @FunctionalWorld
Functional Programming 101 with Scala and ZIO @FunctionalWorldJorge Vásquez
 

What's hot (17)

computer Graphics
computer Graphics computer Graphics
computer Graphics
 
Stacks, Queues, Binary Search Trees - Lecture 1 - Advanced Data Structures
Stacks, Queues, Binary Search Trees -  Lecture 1 - Advanced Data StructuresStacks, Queues, Binary Search Trees -  Lecture 1 - Advanced Data Structures
Stacks, Queues, Binary Search Trees - Lecture 1 - Advanced Data Structures
 
Solution of N Queens Problem genetic algorithm
Solution of N Queens Problem genetic algorithm  Solution of N Queens Problem genetic algorithm
Solution of N Queens Problem genetic algorithm
 
EGL 1.4 Reference Card
EGL 1.4 Reference CardEGL 1.4 Reference Card
EGL 1.4 Reference Card
 
Function of DM
Function of DM Function of DM
Function of DM
 
Output primitives computer graphics c version
Output primitives   computer graphics c versionOutput primitives   computer graphics c version
Output primitives computer graphics c version
 
Volumes of solids of revolution dfs
Volumes of solids of revolution dfsVolumes of solids of revolution dfs
Volumes of solids of revolution dfs
 
Top down parsing
Top down parsingTop down parsing
Top down parsing
 
Contrato Empreitada De Servico
Contrato Empreitada De ServicoContrato Empreitada De Servico
Contrato Empreitada De Servico
 
Lecture 2d point,curve,text,line clipping
Lecture   2d point,curve,text,line clippingLecture   2d point,curve,text,line clipping
Lecture 2d point,curve,text,line clipping
 
Bezier curve computer graphics
Bezier curve computer graphics Bezier curve computer graphics
Bezier curve computer graphics
 
All pairs shortest path algorithm
All pairs shortest path algorithmAll pairs shortest path algorithm
All pairs shortest path algorithm
 
Hidden surface removal
Hidden surface removalHidden surface removal
Hidden surface removal
 
Введение в SQL
Введение в SQLВведение в SQL
Введение в SQL
 
Advanced procedures in assembly language Full chapter ppt
Advanced procedures in assembly language Full chapter pptAdvanced procedures in assembly language Full chapter ppt
Advanced procedures in assembly language Full chapter ppt
 
Functional Programming 101 with Scala and ZIO @FunctionalWorld
Functional Programming 101 with Scala and ZIO @FunctionalWorldFunctional Programming 101 with Scala and ZIO @FunctionalWorld
Functional Programming 101 with Scala and ZIO @FunctionalWorld
 
Unit iv
Unit ivUnit iv
Unit iv
 

Similar to Synchronizing Concurrent Operations in C++.pptx

Java And Multithreading
Java And MultithreadingJava And Multithreading
Java And MultithreadingShraddha
 
CS844 U1 Individual Project
CS844 U1 Individual ProjectCS844 U1 Individual Project
CS844 U1 Individual ProjectThienSi Le
 
Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...
Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...
Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...Subhajit Sahu
 
seminarembedded-150504150805-conversion-gate02.pdf
seminarembedded-150504150805-conversion-gate02.pdfseminarembedded-150504150805-conversion-gate02.pdf
seminarembedded-150504150805-conversion-gate02.pdfkarunyamittapally
 
Multi threading
Multi threadingMulti threading
Multi threadinggndu
 
Multithreading
MultithreadingMultithreading
Multithreadingsagsharma
 
Describe synchronization techniques used by programmers who develop .pdf
Describe synchronization techniques used by programmers who develop .pdfDescribe synchronization techniques used by programmers who develop .pdf
Describe synchronization techniques used by programmers who develop .pdfexcellentmobiles
 
Java Concurrency Starter Kit
Java Concurrency Starter KitJava Concurrency Starter Kit
Java Concurrency Starter KitMark Papis
 
[Java concurrency]02.basic thread synchronization
[Java concurrency]02.basic thread synchronization[Java concurrency]02.basic thread synchronization
[Java concurrency]02.basic thread synchronizationxuehan zhu
 
Thread syncronization
Thread syncronizationThread syncronization
Thread syncronizationpriyabogra1
 
Multithreading 101
Multithreading 101Multithreading 101
Multithreading 101Tim Penhey
 
Multithreading and concurrency in android
Multithreading and concurrency in androidMultithreading and concurrency in android
Multithreading and concurrency in androidRakesh Jha
 
Clean Code�Chapter 3. (1)
Clean Code�Chapter 3. (1)Clean Code�Chapter 3. (1)
Clean Code�Chapter 3. (1)Felix Chen
 
Multithreading Introduction and Lifecyle of thread
Multithreading Introduction and Lifecyle of threadMultithreading Introduction and Lifecyle of thread
Multithreading Introduction and Lifecyle of threadKartik Dube
 

Similar to Synchronizing Concurrent Operations in C++.pptx (20)

Java And Multithreading
Java And MultithreadingJava And Multithreading
Java And Multithreading
 
CS844 U1 Individual Project
CS844 U1 Individual ProjectCS844 U1 Individual Project
CS844 U1 Individual Project
 
Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...
Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...
Monitors and Blocking Synchronization : The Art of Multiprocessor Programming...
 
Real Time Operating Systems
Real Time Operating SystemsReal Time Operating Systems
Real Time Operating Systems
 
seminarembedded-150504150805-conversion-gate02.pdf
seminarembedded-150504150805-conversion-gate02.pdfseminarembedded-150504150805-conversion-gate02.pdf
seminarembedded-150504150805-conversion-gate02.pdf
 
Multi threading
Multi threadingMulti threading
Multi threading
 
Slide 7 Thread-1.pptx
Slide 7 Thread-1.pptxSlide 7 Thread-1.pptx
Slide 7 Thread-1.pptx
 
Multithreading
MultithreadingMultithreading
Multithreading
 
Describe synchronization techniques used by programmers who develop .pdf
Describe synchronization techniques used by programmers who develop .pdfDescribe synchronization techniques used by programmers who develop .pdf
Describe synchronization techniques used by programmers who develop .pdf
 
Md09 multithreading
Md09 multithreadingMd09 multithreading
Md09 multithreading
 
Java Concurrency Starter Kit
Java Concurrency Starter KitJava Concurrency Starter Kit
Java Concurrency Starter Kit
 
[Java concurrency]02.basic thread synchronization
[Java concurrency]02.basic thread synchronization[Java concurrency]02.basic thread synchronization
[Java concurrency]02.basic thread synchronization
 
Concurrency
ConcurrencyConcurrency
Concurrency
 
Thread syncronization
Thread syncronizationThread syncronization
Thread syncronization
 
Process coordination
Process coordinationProcess coordination
Process coordination
 
P4
P4P4
P4
 
Multithreading 101
Multithreading 101Multithreading 101
Multithreading 101
 
Multithreading and concurrency in android
Multithreading and concurrency in androidMultithreading and concurrency in android
Multithreading and concurrency in android
 
Clean Code�Chapter 3. (1)
Clean Code�Chapter 3. (1)Clean Code�Chapter 3. (1)
Clean Code�Chapter 3. (1)
 
Multithreading Introduction and Lifecyle of thread
Multithreading Introduction and Lifecyle of threadMultithreading Introduction and Lifecyle of thread
Multithreading Introduction and Lifecyle of thread
 

More from emsResulzade1

Introduction to C++, Standard Library, Class Template vector.pptx
Introduction to C++, Standard  Library, Class Template  vector.pptxIntroduction to C++, Standard  Library, Class Template  vector.pptx
Introduction to C++, Standard Library, Class Template vector.pptxemsResulzade1
 
Digital Coding of Images.pptx
Digital Coding of Images.pptxDigital Coding of Images.pptx
Digital Coding of Images.pptxemsResulzade1
 
Computer Networks Slide.pptx
Computer Networks Slide.pptxComputer Networks Slide.pptx
Computer Networks Slide.pptxemsResulzade1
 
Musical Instruments.ppt
Musical Instruments.pptMusical Instruments.ppt
Musical Instruments.pptemsResulzade1
 
How to Detect a Lie.pptx
How to Detect a Lie.pptxHow to Detect a Lie.pptx
How to Detect a Lie.pptxemsResulzade1
 
The MS-DOS File System.pptx
The MS-DOS File System.pptxThe MS-DOS File System.pptx
The MS-DOS File System.pptxemsResulzade1
 

More from emsResulzade1 (6)

Introduction to C++, Standard Library, Class Template vector.pptx
Introduction to C++, Standard  Library, Class Template  vector.pptxIntroduction to C++, Standard  Library, Class Template  vector.pptx
Introduction to C++, Standard Library, Class Template vector.pptx
 
Digital Coding of Images.pptx
Digital Coding of Images.pptxDigital Coding of Images.pptx
Digital Coding of Images.pptx
 
Computer Networks Slide.pptx
Computer Networks Slide.pptxComputer Networks Slide.pptx
Computer Networks Slide.pptx
 
Musical Instruments.ppt
Musical Instruments.pptMusical Instruments.ppt
Musical Instruments.ppt
 
How to Detect a Lie.pptx
How to Detect a Lie.pptxHow to Detect a Lie.pptx
How to Detect a Lie.pptx
 
The MS-DOS File System.pptx
The MS-DOS File System.pptxThe MS-DOS File System.pptx
The MS-DOS File System.pptx
 

Recently uploaded

Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 

Recently uploaded (20)

Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 

Synchronizing Concurrent Operations in C++.pptx

  • 1. Azerbaijan State Oil and Industry University Subject: Parallel Computing Teacher: Samir Quliyev Group: 606.20E Student: Shams Rasulzade
  • 3. With threading in C++ there are mainly two tasks for the developer: 1. Protecting shared data from concurrent access 2. Synchronizing concurrent operations Protecting shared data is achieved by mutual exclusion. By avoiding that multiple threads access the same data at the same time race conditions are prevent. Two read operations are permitted. Two write operations or one write and one read operation lead to race conditions. This presentation is about the second issue: Synchronizing concurrent operations in C++ which means coordinating the order of events between multiple threads.
  • 4. Synchronizing concurrent operations in C++ happens when one thread must wait on another thread to complete a task before it can continue its own task. For example one thread indexes data in the background and the second thread needs to use the indexed data. The first thread is maybe not ready yet with indexing and the second thread shall wait on the result of the first thread.
  • 5. Most of the synchronization problems can be broken down to the producer-consumer pattern: A consumer thread is waiting for a condition to be true. The producer thread is setting the condition. How can you achieve the waiting on a condition in C++? And how can you pass the results of tasks between different threads? We’ll talk about two different methods: 1. Periodically checking the condition (naive and worst approach) 2. Condition variables
  • 6. Suppose you’re traveling on an overnight train. One way to ensure you get off at the right station would be to stay awake all night and pay attention to where the train stops. You wouldn’t miss your station, but you’d be tired when you got there. Alternatively, you could look at the timetable to see when the train is supposed to arrive, set your alarm a bit before, and go to sleep. That’d be OK; you wouldn’t miss your stop, but if the train got delayed, you’d wake up too early. There is also a possibility that your alarm clock’s batteries might die, and you oversleep and miss your station. What would be ideal is if you could go to sleep and have somebody or something wake you up when the train gets to your station, whenever that happens. 1.
  • 7. How does that relate to threads? Well, if one thread is waiting for a second thread to complete a task, it has several options. First, it could keep checking a flag in shared data (protected by a mutex) and have the second thread set the flag when it completes the task. This is wasteful on two counts: the thread consumes valuable processing time by repeatedly checking the flag, and when the mutex is locked by the waiting thread, it can’t be locked by any other thread. Both work against the thread doing the waiting, because they limit the resources available to the thread being waited for, and even prevent it from setting the flag when it’s done. This is akin to staying awake all night talking to the train driver: he has to drive the train more slowly because you keep distracting him, and it takes longer to get there. Similarly, the waiting thread is consuming resources that could be used by other threads in the system and may end up waiting longer than necessary.
  • 8. A second option is to have the waiting thread sleep for small periods between the checks using the std::this_thread::sleep_for() function:

bool flag;
std::mutex m;

void wait_for_flag()
{
  std::unique_lock<std::mutex> lk(m);
  while(!flag)
  {
    lk.unlock(); ❶
    std::this_thread::sleep_for(std::chrono::milliseconds(100)); ❷
    lk.lock(); ❸
  }
}
  • 9. ❶ Unlock the mutex ❷ Sleep for 100 ms ❸ Relock the mutex In the loop, the function unlocks the mutex ❶ before the sleep ❷ and locks it again afterward ❸ , and another thread gets a chance to acquire it and set the flag. This is an improvement, because the thread doesn’t waste processing time while it’s sleeping, but it’s hard to get the sleep period right. Too short a sleep between checks causes the thread to waste processing time checking; too long a sleep and the thread keeps on sleeping even when the task it’s waiting for is complete, introducing a delay. It’s rare that this oversleeping has a direct impact on the operation of the program, but it could mean dropped frames in a fast-paced game or overrunning a time slice in a real-time application.
  • 10. The third, and preferred, option is to use the facilities from the C++ Standard Library to wait for the event itself. The most basic mechanism for waiting for an event to be triggered by another thread (such as the presence of additional work in the pipeline mentioned previously) is the condition variable. Conceptually, a condition variable is associated with some event or other condition, and one or more threads can wait for that condition to be satisfied. When a thread determines that the condition is satisfied, it notifies one or more of the threads waiting on the condition variable to wake them up and allow them to continue processing.
  • 11. The Standard C++ Library provides not one, but two implementations of a condition variable: std::condition_variable and std::condition_variable_any. Both are declared in the <condition_variable> library header. In both cases, they need to work with a mutex in order to provide appropriate synchronization; the former is limited to working with std::mutex, whereas the latter can work with anything that meets some minimal criteria for being mutex-like, hence the _any suffix. Because std::condition_variable_any is more general, there’s the potential for additional costs in terms of size, performance, or operating system resources, and std::condition_variable should be preferred unless the additional flexibility is required.
  • 12. How do you use a std::condition_variable to handle the example in the introduction—how do you let the thread waiting for work sleep until there’s data to process? The following listing shows one way you could do this with a condition variable.

Listing 1 Waiting for data to process with a std::condition_variable

std::mutex mut;
std::queue<data_chunk> data_queue; ❶
std::condition_variable data_cond;

void data_preparation_thread()
{
  while(more_data_to_prepare())
  {
    data_chunk const data=prepare_data();
    std::lock_guard<std::mutex> lk(mut);
    data_queue.push(data); ❷
    data_cond.notify_one(); ❸
  }
}
  • 13.
void data_processing_thread()
{
  while(true)
  {
    std::unique_lock<std::mutex> lk(mut); ❹
    data_cond.wait(lk, []{return !data_queue.empty();}); ❺
    data_chunk data=data_queue.front();
    data_queue.pop();
    lk.unlock(); ❻
    process(data);
    if(is_last_chunk(data))
      break;
  }
}
  • 14. First off, you have a queue ❶ , which is used to pass the data between the two threads. When the data is ready, the thread preparing the data locks the mutex protecting the queue using a std::lock_guard and pushes the data onto the queue ❷ . It then calls the notify_one() member function on the std::condition_variable instance to notify the waiting thread (if one exists) in ❸ . On the other side of the fence, you have the processing thread. This thread first locks the mutex, but this time with a std::unique_lock rather than a std::lock_guard ❹ —you’ll see why in a minute. The thread then calls wait() on the std::condition_variable, passing in the lock object and a lambda function that expresses the condition being waited for in ❺ . Lambda functions are a new feature in C++11 that allow you to write an anonymous function as part of another expression, and they’re ideally suited for specifying predicates for standard library functions such as wait(). In this case, the simple lambda function []{return !data_queue.empty();} checks to see if data_queue isn’t empty()—that is, if there is some data in the queue ready for processing.
  • 15. The implementation of wait() checks the condition (by calling the supplied lambda function) and returns if it’s satisfied (the lambda function returns true). If the condition isn’t satisfied (the lambda function returns false), wait() unlocks the mutex and puts the thread in a blocked or waiting state. When the condition variable is notified by a call to notify_one() from the data-preparation thread, the thread wakes from its slumber (unblocks), reacquires the lock on the mutex, and checks the condition again, returning from wait() with the mutex still locked if the condition is satisfied. If the condition isn’t satisfied, the thread unlocks the mutex and resumes waiting. This is why you need the std::unique_lock rather than the std::lock_guard—the waiting thread must unlock the mutex while it’s waiting and lock it again afterward, and std::lock_guard doesn’t provide that flexibility. If the mutex remained locked while the thread was sleeping, the data-preparation thread wouldn’t be able to lock the mutex to add an item to the queue, and the waiting thread would never be able to see its condition satisfied.
  • 16. Listing 1 uses a simple lambda function for the wait in ❺ , which checks to see if the queue isn’t empty, but any function or callable object could be passed. If you already have a function to check the condition (perhaps because it’s more complicated than a simple test like this), then this function can be passed directly; there’s no need to wrap it in a lambda. During a call to wait(), a condition variable may check the supplied condition any number of times; but it always does this with the mutex locked and returns immediately if (and only if) the function provided to test the condition returns true. When the waiting thread reacquires the mutex and checks the condition, if it isn’t in direct response to a notification from another thread, it’s called a spurious wake. Because the number and frequency of any such spurious wakes are indeterminate, it isn’t advisable to use a function with side effects for the condition check. If you do, you must be prepared for the side effects to occur multiple times. The flexibility to unlock a std::unique_lock isn’t only used for the call to wait(); it’s also used once you have the data to process but before processing it ❻ . Processing data can potentially be a time-consuming operation, and it’s a bad idea to hold a lock on a mutex for longer than necessary. Using a queue to transfer data between threads, as in listing 1, is a common scenario. Done well, the synchronization can be limited to the queue itself, which greatly reduces the possible number of synchronization problems and race conditions. In view of this, let’s now work on extracting a generic thread-safe queue from listing 1.