- The document discusses threads and shared memory in multithreaded programs. It describes how the threads within a process share a single address space and how this sharing can cause problems when variables are accessed incorrectly.
- It shows several ways to pass arguments to thread functions, noting that pointers to local stack variables should not be shared between threads, since doing so invites data races; global variables and dynamically allocated memory can be shared safely, provided access to them is synchronized.
- The key points: threads share all memory by default; local variables are private to each thread, but pointers or references to them may be shared unintentionally; and this sharing must be managed correctly to avoid data races in multithreaded programs.
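The safe pattern the summary describes can be sketched with POSIX threads; the function and variable names here are illustrative, not taken from the document. Each thread receives its own heap-allocated argument, so no thread ever dereferences a slot on another thread's stack:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical worker: operates only on memory this thread owns. */
static void *worker(void *arg) {
    int *p = arg;          /* heap memory handed to this thread alone */
    *p = *p * 2;           /* no race: no other thread touches *p     */
    return p;
}

/* Spawn n workers, each with a private heap-allocated argument.
 * Returns 1 if every thread saw and doubled its own value. */
int run_workers(int n) {
    pthread_t tid[n];
    int ok = 1;
    for (int i = 0; i < n; i++) {
        int *arg = malloc(sizeof *arg);   /* NOT &i: &i would be shared */
        *arg = i;
        pthread_create(&tid[i], NULL, worker, arg);
    }
    for (int i = 0; i < n; i++) {
        void *res;
        pthread_join(tid[i], &res);
        int *p = res;
        if (*p != i * 2) ok = 0;
        free(p);
    }
    return ok;
}
```

Passing `&i` from the creating loop instead would hand every thread a pointer to the same stack slot, whose value changes (and whose lifetime ends) underneath them — exactly the data race the summary warns about.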
24-sync-basic.pptx
Slide 1: Synchronization: Basics
Carnegie Mellon
Bryant and O'Hallaron, Computer Systems: A Programmer's Perspective, Third Edition
15-213/14-513/15-513: Introduction to Computer Systems
24th Lecture, July 28, 2022
Instructor: Abi Kim
Slide 2: Today
- Threads
- Sharing
- Mutual exclusion
Slide 3: Traditional View of a Process
- Process = process context + code, data, and stack
- Process context
  - Program context: data registers, condition codes, stack pointer (SP), program counter (PC)
  - Kernel context: VM structures, descriptor table, brk pointer
- Code, data, and stack
  [Figure: the address space from 0 upward: read-only code/data, read/write data, run-time heap (ending at brk), shared libraries, and the stack (topped by SP); the PC points into the code region]
Slide 4: Alternate View of a Process
- Process = thread + (code, data, and kernel context)
- Thread (main thread)
  - Stack (topped by SP)
  - Thread context: data registers, condition codes, stack pointer (SP), program counter (PC)
- Code, data, and kernel context
  [Figure: read-only code/data, read/write data, run-time heap (ending at brk), shared libraries]
  - Kernel context: VM structures, descriptor table, brk pointer
Slide 5: A Process With Multiple Threads
- Multiple threads can be associated with a process
  - Each thread has its own logical control flow
  - Each thread shares the same code, data, and kernel context
  - Each thread has its own stack for local variables
    - but not protected from other threads
  - Each thread has its own thread id (TID)
  [Figure: Thread 1 (main thread) with stack 1 and its own context (data registers, condition codes, SP1, PC1); Thread 2 (peer thread) with stack 2 and its own context (data registers, condition codes, SP2, PC2); both share code and data (read-only code/data, read/write data, run-time heap, shared libraries) and the kernel context (VM structures, descriptor table, brk pointer)]
Slide 6: Memory is shared between all threads
- Don't let the picture confuse you!
  [Figure: the same layout as the previous slide — per-thread contexts and stacks for Thread 1 (main) and Thread 2 (peer), plus shared code, data, and kernel context — but the per-thread stacks, too, live in the one shared address space]
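A hedged illustration of this point (function names are made up for the sketch): because the stacks sit in the shared address space, a peer thread handed a pointer can write straight into the main thread's stack frame:

```c
#include <pthread.h>

/* Peer thread: writes through a pointer into another thread's stack. */
static void *poke(void *arg) {
    *(int *)arg = 42;          /* this int lives on the caller's stack */
    return NULL;
}

/* Returns the value the peer thread deposited on our stack. */
int peer_writes_my_stack(void) {
    int on_my_stack = 0;       /* automatic variable on this thread's stack */
    pthread_t tid;
    pthread_create(&tid, NULL, poke, &on_my_stack);
    pthread_join(tid, NULL);
    return on_my_stack;        /* 42: the peer reached our stack frame */
}
```

Nothing in the memory system stops this; whether it is a bug or a feature depends entirely on whether the access was synchronized and intended.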
7. Carnegie Mellon
7
Bryant and O’Hallaron, Computer Systems: A Programmer’s Perspective, Third Edition
Benefits of Threads
Threads have lighter overhead
Easier to share memory in concurrent programs using threads
Threads are faster due to multi-core CPUs allowing multiple threads to execute at once
Today
Threads review
Sharing
Mutual exclusion
Semaphores
Shared Variables in Threaded C Programs
Question: Which variables in a threaded C program are shared?
The answer is not as simple as “global variables are shared” and “stack variables are private”
Def: A variable x is shared if and only if multiple threads reference some instance of x.
Requires answers to the following questions:
What is the memory model for threads?
How are instances of variables mapped to memory?
How many threads might reference each of these instances?
Threads Memory Model: Conceptual
Multiple threads run within the context of a single process
Each thread has its own separate thread context
Thread ID, stack, stack pointer, PC, condition codes, and GP registers
All threads share the remaining process context
Code, data, heap, and shared library segments of the process virtual address space
Open files and installed handlers
[Figure: conceptual memory model — Thread 1 and Thread 2 each with a private stack and thread context (data registers, condition codes, SP, PC), alongside the shared code and data (read-only code/data, read/write data, run-time heap, shared libraries)]
Threads Memory Model: Actual
Separation of data is not strictly enforced:
Register values are truly separate and protected, but…
Any thread can read and write the stack of any other thread
The mismatch between the conceptual and operational model is a source of confusion and errors
[Figure: actual memory model — the same per-thread stacks and thread contexts drawn inside one virtual address space, next to the shared code and data; nothing separates one thread’s stack from another’s]
Passing an argument to a thread - Pedantic

int hist[N] = {0};

int main(int argc, char *argv[]) {
    long i;
    pthread_t tids[N];
    for (i = 0; i < N; i++) {
        long *p = Malloc(sizeof(long));
        *p = i;
        Pthread_create(&tids[i], NULL, thread, (void *)p);
    }
    for (i = 0; i < N; i++)
        Pthread_join(tids[i], NULL);
    check();
}

void *thread(void *vargp)
{
    hist[*(long *)vargp] += 1;
    Free(vargp);
    return NULL;
}

void check(void) {
    for (int i = 0; i < N; i++) {
        if (hist[i] != 1) {
            printf("Failed at %d\n", i);
            exit(-1);
        }
    }
    printf("OK\n");
}
• Use malloc to create a per-thread, heap-allocated place in memory for the argument
• Remember to free it in the thread!
• Producer-consumer pattern
Passing an argument to a thread – Also OK!

int hist[N] = {0};

int main(int argc, char *argv[]) {
    long i;
    pthread_t tids[N];
    for (i = 0; i < N; i++)
        Pthread_create(&tids[i], NULL, thread, (void *)i);
    for (i = 0; i < N; i++)
        Pthread_join(tids[i], NULL);
    check();
}

void *thread(void *vargp)
{
    hist[(long)vargp] += 1;
    return NULL;
}

• OK to use a cast since sizeof(long) <= sizeof(void *)
• The cast does NOT change the bits
Passing an argument to a thread – WRONG!

int hist[N] = {0};

int main(int argc, char *argv[]) {
    long i;
    pthread_t tids[N];
    for (i = 0; i < N; i++)
        Pthread_create(&tids[i], NULL, thread, (void *)&i);
    for (i = 0; i < N; i++)
        Pthread_join(tids[i], NULL);
    check();
}

void *thread(void *vargp)
{
    hist[*(long *)vargp] += 1;
    return NULL;
}

• &i points to the same location for all threads!
• Creates a data race!
Three Ways to Pass Thread Arg
Malloc/free
  Producer mallocs space, passes the pointer to pthread_create
  Consumer dereferences the pointer
Ptr to stack slot
  Producer passes an address on the producer's stack to pthread_create
  Consumer dereferences the pointer
Cast of int
  Producer casts an int/long to an address in pthread_create
  Consumer casts the void * argument back to an int/long
Example Program to Illustrate Sharing

char **ptr; /* global var */

int main(int argc, char *argv[])
{
    long i;
    pthread_t tid;
    char *msgs[2] = {
        "Hello from foo",
        "Hello from bar"
    };
    ptr = msgs;
    for (i = 0; i < 2; i++)
        Pthread_create(&tid, NULL, thread, (void *)i);
    Pthread_exit(NULL);
}

void *thread(void *vargp)
{
    long myid = (long)vargp;
    static int cnt = 0;
    printf("[%ld]: %s (cnt=%d)\n", myid, ptr[myid], ++cnt);
    return NULL;
}

Peer threads reference the main thread’s stack indirectly through the global ptr variable (sharing.c)
Casting i is a common way to pass a single argument to a thread routine
Mapping Variable Instances to Memory
Global variables
  Def: Variable declared outside of a function
  Virtual memory contains exactly one instance of any global variable
Local variables
  Def: Variable declared inside a function without the static attribute
  Each thread stack contains one instance of each local variable
Local static variables
  Def: Variable declared inside a function with the static attribute
  Virtual memory contains exactly one instance of any local static variable.
Mapping Variable Instances to Memory

char **ptr; /* global var */

int main(int argc, char *argv[])
{
    long i;
    pthread_t tid;
    char *msgs[2] = {
        "Hello from foo",
        "Hello from bar"
    };
    ptr = msgs;
    for (i = 0; i < 2; i++)
        Pthread_create(&tid, NULL, thread, (void *)i);
    Pthread_exit(NULL);
}

void *thread(void *vargp)
{
    long myid = (long)vargp;
    static int cnt = 0;
    printf("[%ld]: %s (cnt=%d)\n", myid, ptr[myid], ++cnt);
    return NULL;
}

Global var: 1 instance (ptr [data])
Local static var: 1 instance (cnt [data])
Local vars: 1 instance each (i.m, msgs.m, tid.m [main thread's stack])
Local var: 2 instances (myid.p0 [peer thread 0's stack], myid.p1 [peer thread 1's stack])
sharing.c
Shared Variable Analysis
Which variables are shared?
Answer: A variable x is shared iff multiple threads reference at least one instance of x. Thus:
  ptr, cnt, and msgs are shared
  i and myid are not shared

Variable    Referenced by    Referenced by     Referenced by
instance    main thread?     peer thread 0?    peer thread 1?
ptr         yes              yes               yes
cnt         no               yes               yes
i.m         yes              no                no
msgs.m      yes              yes               yes
myid.p0     no               yes               no
myid.p1     no               no                yes
Synchronizing Threads
Shared variables are handy...
…but introduce the possibility of nasty synchronization errors.

badcnt.c: Improper Synchronization

/* Global shared variable */
volatile long cnt = 0; /* Counter */

int main(int argc, char **argv)
{
    long niters;
    pthread_t tid1, tid2;
    niters = atoi(argv[1]);
    Pthread_create(&tid1, NULL, thread, &niters);
    Pthread_create(&tid2, NULL, thread, &niters);
    Pthread_join(tid1, NULL);
    Pthread_join(tid2, NULL);
    /* Check result */
    if (cnt != (2 * niters))
        printf("BOOM! cnt=%ld\n", cnt);
    else
        printf("OK cnt=%ld\n", cnt);
    exit(0);
}

/* Thread routine */
void *thread(void *vargp)
{
    long i, niters = *((long *)vargp);
    for (i = 0; i < niters; i++)
        cnt++;
    return NULL;
}

linux> ./badcnt 10000
OK cnt=20000
linux> ./badcnt 10000
BOOM! cnt=13051
linux>

cnt should equal 20,000. What went wrong? (badcnt.c)
Assembly Code for Counter Loop

C code for counter loop in thread i:
for (i = 0; i < niters; i++)
    cnt++;

Asm code for thread i:
    movq (%rdi), %rcx      # Hi: Head
    testq %rcx, %rcx       # Hi
    jle .L2                # Hi
    movl $0, %eax          # Hi
.L3:
    movq cnt(%rip), %rdx   # Li: Load cnt
    addq $1, %rdx          # Ui: Update cnt
    movq %rdx, cnt(%rip)   # Si: Store cnt
    addq $1, %rax          # Ti: Tail
    cmpq %rcx, %rax        # Ti
    jne .L3                # Ti
.L2:
Concurrent Execution
Key idea: In general, any sequentially consistent* interleaving is possible, but some give an unexpected result!
Ii denotes that thread i executes instruction I
%rdxi is the content of %rdx in thread i’s context

i (thread)   instr_i   %rdx1   %rdx2   cnt
1            H1        -       -       0
1            L1        0       -       0
1            U1        1       -       0
1            S1        1       -       1
2            H2        -       -       1
2            L2        -       1       1
2            U2        -       2       1
2            S2        -       2       2
2            T2        -       2       2
1            T1        1       -       2

OK: cnt = 2

*For now. In reality, on x86 even non-sequentially consistent interleavings are possible
Concurrent Execution (cont)
Incorrect ordering: two threads increment the counter, but the result is 1 instead of 2

i (thread)   instr_i   %rdx1   %rdx2   cnt
1            H1        -       -       0
1            L1        0       -       0
1            U1        1       -       0
2            H2        -       -       0
2            L2        -       0       0
1            S1        1       -       1
1            T1        1       -       1
2            U2        -       1       1
2            S2        -       1       1
2            T2        -       1       1

Oops! cnt = 1
Concurrent Execution (cont)
How about this ordering?
We can analyze the behavior using a progress graph

i (thread)   instr_i   %rdx1   %rdx2   cnt
1            H1        -       -       0
1            L1        0       -       0
2            H2        -       -       0
2            L2        -       0       0
2            U2        -       1       0
2            S2        -       1       1
1            U1        1       -       1
1            S1        1       -       1
1            T1        1       -       1
2            T2        -       1       1

Oops! cnt = 1
Progress Graphs
A progress graph depicts the discrete execution state space of concurrent threads.
Each axis corresponds to the sequential order of instructions in a thread.
Each point corresponds to a possible execution state (Inst1, Inst2).
E.g., (L1, S2) denotes the state where thread 1 has completed L1 and thread 2 has completed S2.

[Figure: progress graph — Thread 1’s instructions H1 L1 U1 S1 T1 on the horizontal axis, Thread 2’s instructions H2 L2 U2 S2 T2 on the vertical axis, with the state (L1, S2) marked]
Trajectories in Progress Graphs
A trajectory is a sequence of legal state transitions that describes one possible concurrent execution of the threads.
Example:
H1, L1, U1, H2, L2, S1, T1, U2, S2, T2

[Figure: the example trajectory drawn as a path through the progress graph]
Critical Sections and Unsafe Regions
L, U, and S form a critical section with respect to the shared variable cnt
Instructions in critical sections (wrt some shared variable) should not be interleaved
Sets of states where such interleaving occurs form unsafe regions

[Figure: progress graph with each thread’s critical section wrt cnt (L, U, S) marked along its axis; their intersection forms the unsafe region]
Critical Sections and Unsafe Regions
Def: A trajectory is safe iff it does not enter any unsafe region
Claim: A trajectory is correct (wrt cnt) iff it is safe

[Figure: the same progress graph showing a safe trajectory that skirts the unsafe region and an unsafe trajectory that enters it]
badcnt.c: Improper Synchronization (badcnt.c)

Variable    main   thread1   thread2
cnt         yes*   yes       yes
niters.m    yes    no        no
tid1.m      yes    no        no
i.1         no     yes       no
i.2         no     no        yes
niters.1    no     yes       no
niters.2    no     no        yes
Today
Threads review
Sharing
Mutual exclusion
Enforcing Mutual Exclusion
Question: How can we guarantee a safe trajectory?
Answer: We must synchronize the execution of the threads so that they can never have an unsafe trajectory.
i.e., we need to guarantee mutually exclusive access for each critical section.
Classic solutions:
  Mutex (pthreads)
  Semaphores (Edsger Dijkstra)
Other approaches (out of our scope):
  Condition variables (pthreads)
  Monitors (Java)
Semaphores
Semaphore: non-negative global integer synchronization variable. Manipulated by P and V operations.
P(s)
  If s is nonzero, then decrement s by 1 and return immediately.
    Test and decrement operations occur atomically (indivisibly)
  If s is zero, then suspend the thread until s becomes nonzero and the thread is restarted by a V operation.
  After restarting, the P operation decrements s and returns control to the caller.
V(s)
  Increment s by 1.
    Increment operation occurs atomically
  If there are any threads blocked in a P operation waiting for s to become nonzero, then restart exactly one of those threads, which then completes its P operation by decrementing s.
Semaphore invariant: (s >= 0)
Semaphores
Semaphore: non-negative global integer synchronization variable
Manipulated by P and V operations:
  P(s): [ while (s == 0) wait(); s--; ]
    Dutch for "Proberen" (test)
  V(s): [ s++; ]
    Dutch for "Verhogen" (increment)
The OS kernel guarantees that operations between brackets [ ] are executed indivisibly
  Only one P or V operation at a time can modify s.
  When the while loop in P terminates, only that P can decrement s
Semaphore invariant: (s >= 0)
C Semaphore Operations
Pthreads functions:

#include <semaphore.h>
int sem_init(sem_t *s, int pshared, unsigned int val); /* s = val; pass pshared = 0 for threads */
int sem_wait(sem_t *s); /* P(s) */
int sem_post(sem_t *s); /* V(s) */

CS:APP wrapper functions:

#include "csapp.h"
void P(sem_t *s); /* Wrapper function for sem_wait */
void V(sem_t *s); /* Wrapper function for sem_post */
badcnt.c: Improper Synchronization (badcnt.c)
How can we fix this using synchronization?
MUTual EXclusion (mutex)
Mutex: boolean synchronization variable
enum {locked = 0, unlocked = 1}
lock(m)
If the mutex is currently not locked, lock it and return
Otherwise, wait (spinning, yielding, etc) and retry
unlock(m)
Update the mutex state to unlocked
Using Semaphores for Mutual Exclusion
⬛ Basic idea:
▪ Associate a unique semaphore mutex, initially 1, with each shared variable (or related set of shared variables).
▪ Surround corresponding critical sections with P(mutex) and V(mutex) operations.
⬛ Terminology:
▪ Binary semaphore: semaphore whose value is always 0 or 1
▪ Mutex: binary semaphore used for mutual exclusion
▪ P operation: “locking” the mutex
▪ V operation: “unlocking” or “releasing” the mutex
▪ “Holding” a mutex: locked and not yet unlocked.
▪ Counting semaphore: used as a counter for a set of available resources.
goodcnt.c: Proper Synchronization
⬛ Define and initialize a mutex for the shared variable cnt:

volatile long cnt = 0; /* Counter */
sem_t mutex; /* Semaphore that protects cnt */
sem_init(&mutex, 0, 1); /* mutex = 1 */

⬛ Surround critical section with P and V:

for (i = 0; i < niters; i++) {
    P(&mutex);
    cnt++;
    V(&mutex);
}

linux> ./goodcnt 10000
OK cnt=20000
linux> ./goodcnt 10000
OK cnt=20000
linux>

Warning: It’s orders of magnitude slower than badcnt.c. (goodcnt.c)
         goodcnt: OK cnt=2000000   badcnt: BOOM! cnt=1036525   Slowdown
real     0m0.138s                  0m0.007s                    20X
user     0m0.120s                  0m0.008s                    15X
sys      0m0.108s                  0m0.000s                    NaN

And slower means much slower!
Why Mutexes Work
Provide mutually exclusive access to a shared variable by surrounding the critical section with lock and unlock operations

[Figure: progress graph with each thread’s sequence H, lo(m), L, U, S, un(m), T on its axis; the unsafe region is shown, annotated with the value of m (initially 1) at surrounding states]
Why Mutexes Work
Provide mutually exclusive access to a shared variable by surrounding the critical section with lock and unlock operations
Mutex invariant creates a forbidden region that encloses the unsafe region and that cannot be entered by any trajectory.

[Figure: the progress graph annotated with the value of m (initially 1) at every state; the states where m would become -1 form the forbidden region, which encloses the unsafe region, so no trajectory can pass through it]
Summary
Programmers need a clear model of how variables are shared by threads.
Variables shared by multiple threads must be protected to ensure mutually exclusive access.
Semaphores are a fundamental mechanism for enforcing mutual exclusion.
Appendix: Binary Semaphores vs. Mutexes (not test material, just FYI!)
Binary semaphore: semaphore initialized with a value of 1
Both binary semaphores and mutexes can be used to guarantee mutual exclusion
The main difference is ownership
  Mutexes must be unlocked by the thread that locked them
  Binary semaphores can be signaled/incremented (V) by a thread that did not decrement (P) them
As long as you use binary semaphores in the following way in all threads, they can be used as a mutex:

P(&sem);
// critical section
V(&sem);

They are also implemented differently, but that’s out of scope for this class… (covered in 15-410)