This document presents a lecture on multi-core architectures. A multi-core CPU places several processor cores on a single die, sharing one socket and one main memory; each core runs threads independently and in parallel, offering far more parallelism than a single-core CPU. Multi-core designs address the physical limits on raising single-core clock speeds, and many modern applications are multi-threaded and therefore map efficiently onto them.
4. Multi-core architectures
• This lecture is about a new trend in computer architecture: replicating multiple processor cores on a single die.
[Diagram: a multi-core CPU chip containing Core 1, Core 2, Core 3, and Core 4]
5. Multi-core CPU chip
• The cores fit on a single processor socket
• Also called CMP (Chip Multi-Processor)
[Diagram: one chip with cores 1–4 side by side]
6. The cores run in parallel
[Diagram: threads 1–4 each running on one of cores 1–4]
7. Within each core, threads are time-sliced (just like on a uniprocessor)
[Diagram: several threads time-sliced on each of cores 1–4]
8. Interaction with the Operating System
• OS perceives each core as a separate processor
• OS scheduler maps threads/processes to different cores
• Most major OSes support multi-core today: Windows, Linux, Mac OS X, …
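As a small illustration of this (not from the slides), a program can ask the OS how many logical processors it sees. The sysconf flag _SC_NPROCESSORS_ONLN is a common extension on Linux and most Unix-like systems:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Number of processors the OS currently has online;
       each core (or SMT hardware thread) appears as one processor. */
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);
    printf("OS sees %ld logical processors\n", ncores);
    return 0;
}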
9. Why multi-core?
• Difficult to make single-core clock frequencies even higher
• Deeply pipelined circuits:
– heat problems
– speed-of-light problems (signal-propagation delay across the chip)
– difficult design and verification
– large design teams necessary
– server farms need expensive air-conditioning
• Many new applications are multithreaded
• General trend in computer architecture (shift towards more parallelism)
10. Instruction-level parallelism
• Parallelism at the machine-instruction level
• The processor can re-order and pipeline instructions, split them into micro-instructions, do aggressive branch prediction, etc.
• Instruction-level parallelism enabled rapid increases in processor speeds over the last 15 years
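To make the idea concrete, here is a small illustrative C program (not from the slides): the first two additions are independent of each other, so a superscalar core can issue them in the same cycle, while the chain that follows is serialized because each instruction needs the previous result.

#include <stdio.h>

int main(void) {
    int x = 1, y = 2, p = 3, q = 4;

    /* Independent: no data dependence between a and b,
       so a superscalar core may execute both adds in parallel. */
    int a = x + y;
    int b = p + q;

    /* Dependent chain: each add consumes the previous result,
       so the hardware must execute them one after another. */
    int c = a + b;
    int d = c + p;
    int e = d + q;

    printf("%d %d %d\n", a, b, e);
    return 0;
}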
11. Thread-level parallelism (TLP)
• This is parallelism on a coarser scale
• A server can serve each client in a separate thread (Web server, database server)
• A computer game can do AI, graphics, and physics in three separate threads
• Single-core superscalar processors cannot fully exploit TLP
• Multi-core architectures are the next step in processor evolution: explicitly exploiting TLP
12. General context: Multiprocessors
• A multiprocessor is any computer with several processors
• SIMD
– Single instruction, multiple data
– Modern graphics cards
• MIMD
– Multiple instructions, multiple data
[Photo: Lemieux cluster, Pittsburgh Supercomputing Center]
13. Multiprocessor memory types
• Shared memory: in this model, there is one (large) common shared memory for all processors
• Distributed memory: in this model, each processor has its own (small) local memory, and its content is not replicated anywhere else
14. Multi-core processor is a special kind of multiprocessor: all processors are on the same chip
• Multi-core processors are MIMD: different cores execute different threads (Multiple Instructions), operating on different parts of memory (Multiple Data)
• Multi-core is a shared-memory multiprocessor: all cores share the same memory
15. What applications benefit from multi-core?
• Database servers
• Web servers (Web commerce)
• Compilers
• Multimedia applications
• Scientific applications, CAD/CAM
• In general, applications with thread-level parallelism (as opposed to instruction-level parallelism)
Each of these can run on its own core.
16. More examples
• Editing a photo while recording a TV show through a digital video recorder
• Downloading software while running an anti-virus program
• “Anything that can be threaded today will map efficiently to multi-core”
• BUT: some applications are difficult to parallelize
17. A technique complementary to multi-core: Simultaneous multithreading
• Problem addressed: the processor pipeline can get stalled:
– waiting for the result of a long floating-point (or integer) operation
– waiting for data to arrive from memory
• While one thread stalls, the other execution units wait unused
[Diagram: processor pipeline, showing L1 D-Cache and D-TLB, integer and floating-point units, L2 cache and control, schedulers, uop queues, rename/alloc, BTB, trace cache, uCode ROM, decoder, bus, and BTB and I-TLB. Source: Intel]
18. Simultaneous multithreading (SMT)
• Permits multiple independent threads to execute SIMULTANEOUSLY on the SAME core
• Weaving together multiple “threads” on the same core
• Example: if one thread is waiting for a floating-point operation to complete, another thread can use the integer units
19. Without SMT, only a single thread can run at any given time
[Diagram: the pipeline occupied only by Thread 1 (floating point); the other units sit idle]
20. Without SMT, only a single thread can run at any given time
[Diagram: the pipeline occupied only by Thread 2 (integer operation); the other units sit idle]
21. SMT processor: both threads can run concurrently
[Diagram: Thread 1 (floating point) and Thread 2 (integer operation) occupying the pipeline at the same time]
22. But: can’t simultaneously use the same functional unit
[Diagram: Thread 1 and Thread 2 both targeting the integer unit, marked IMPOSSIBLE. This scenario is impossible with SMT on a single core (assuming a single integer unit).]
23. SMT is not a “true” parallel processor
• Enables better threading (e.g. up to 30%)
• OS and applications perceive each simultaneous thread as a separate “virtual processor”
• The chip has only a single copy of each resource
• Compare to multi-core: each core has its own copy of resources
24. Multi-core: threads can run on separate cores
[Diagram: two complete pipelines, one per core; Thread 1 runs on core 1 and Thread 2 on core 2]
25. Multi-core: threads can run on separate cores
[Diagram: Thread 3 runs on core 1 and Thread 4 on core 2]
26. Combining multi-core and SMT
• Cores can be SMT-enabled (or not)
• The different combinations:
– Single-core, non-SMT: standard uniprocessor
– Single-core, with SMT
– Multi-core, non-SMT
– Multi-core, with SMT: our fish machines
• The number of SMT threads: 2, 4, or sometimes 8 simultaneous threads
• Intel calls them “hyper-threads”
27. SMT dual-core: all four threads can run concurrently
[Diagram: two SMT pipelines; each core runs two of the four threads simultaneously]
29. Comparison: multi-core vs SMT
• Multi-core:
– Since there are several cores, each is smaller and not as powerful (but also easier to design and manufacture)
– However, great with thread-level parallelism
• SMT:
– Can have one large and fast superscalar core
– Great performance on a single thread
– Mostly still only exploits instruction-level parallelism
30. The memory hierarchy
• If simultaneous multithreading only:
– all caches shared
• Multi-core chips:
– L1 caches private
– L2 caches private in some architectures and shared in others
• Memory is always shared
34. Private vs shared caches
• Advantages of private:
– They are closer to the core, so faster access
– Reduces contention
• Advantages of shared:
– Threads on different cores can share the same cache data
– More cache space available if a single (or a few) high-performance thread runs on the system
35. The cache coherence problem
• Since we have private caches: how do we keep the data consistent across caches?
• Each core should perceive the memory as a monolithic array, shared by all the cores
36. The cache coherence problem
Suppose variable x initially contains 15213.
[Diagram: four cores, each with one or more levels of cache, on a multi-core chip; main memory holds x=15213]
37. The cache coherence problem
Core 1 reads x.
[Diagram: Core 1’s cache now holds x=15213; main memory holds x=15213]
38. The cache coherence problem
Core 2 reads x.
[Diagram: the caches of Core 1 and Core 2 each hold x=15213; main memory holds x=15213]
39. The cache coherence problem
Core 1 writes to x, setting it to 21660 (assuming write-through caches).
[Diagram: Core 1’s cache holds x=21660 while Core 2’s cache still holds x=15213; main memory holds x=21660]
40. The cache coherence problem
Core 2 attempts to read x… and gets a stale copy.
[Diagram: Core 2 reads x=15213 from its own cache while main memory holds x=21660]
41. Solutions for cache coherence
• This is a general problem with multiprocessors, not limited just to multi-core
• There exist many solution algorithms, coherence protocols, etc.
• A simple solution: an invalidation-based protocol with snooping
42. Inter-core bus
[Diagram: the four cores and their caches on the multi-core chip, connected to each other and to main memory by an inter-core bus]
43. Invalidation protocol with snooping
• Invalidation: if a core writes to a data item, all other copies of this data item in other caches are invalidated
• Snooping: all cores continuously “snoop” (monitor) the bus connecting the cores
44. The cache coherence problem
Revisited: Cores 1 and 2 have both read x.
[Diagram: the caches of Core 1 and Core 2 each hold x=15213; main memory holds x=15213]
45. The cache coherence problem
Core 1 writes to x, setting it to 21660 (assuming write-through caches). Core 1 sends an invalidation request over the inter-core bus, and Core 2’s copy is INVALIDATED.
[Diagram: Core 1’s cache holds x=21660; Core 2’s copy of x is invalidated; main memory holds x=21660]
46. The cache coherence problem
After invalidation:
[Diagram: only Core 1’s cache holds x=21660; main memory holds x=21660]
47. The cache coherence problem
Core 2 reads x. Its cache misses, and it loads the new copy.
[Diagram: the caches of Core 1 and Core 2 both hold x=21660; main memory holds x=21660]
48. Alternative to the invalidate protocol: the update protocol
Core 1 writes x=21660 (assuming write-through caches). Core 1 broadcasts the updated value over the inter-core bus, and Core 2’s copy is UPDATED.
[Diagram: the caches of Core 1 and Core 2 both hold x=21660; main memory holds x=21660]
49. Which do you think is better? Invalidation or update?
50. Invalidation vs update
• Multiple writes to the same location:
– invalidation: only the first write generates bus traffic
– update: must broadcast each write (which includes the new variable value)
• Invalidation generally performs better: it generates less bus traffic
51. Invalidation protocols
• This was just the basic invalidation protocol
• More sophisticated protocols use extra cache state bits
• MSI, MESI (Modified, Exclusive, Shared, Invalid)
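To make the state-bit idea concrete, here is a toy sketch of an MSI-style state machine for a single cache line, in C. This is illustrative only and much simplified relative to real MESI hardware; the state and event names are invented for this example.

#include <stdio.h>

/* Toy MSI coherence states for ONE cache line in ONE core's cache. */
typedef enum { INVALID, SHARED, MODIFIED } State;

/* Events this cache can see: its own core's reads/writes,
   plus traffic snooped from other cores on the inter-core bus. */
typedef enum { LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE } Event;

static State next_state(State s, Event e) {
    switch (e) {
    case LOCAL_READ:   return (s == INVALID) ? SHARED : s;  /* miss fetches a shared copy */
    case LOCAL_WRITE:  return MODIFIED;       /* gain ownership; others get invalidated */
    case REMOTE_READ:  return (s == MODIFIED) ? SHARED : s; /* supply data, demote to shared */
    case REMOTE_WRITE: return INVALID;        /* snooped invalidation request */
    }
    return s;
}

int main(void) {
    const char *name[] = { "INVALID", "SHARED", "MODIFIED" };
    State s = INVALID;
    Event trace[] = { LOCAL_READ, REMOTE_WRITE, LOCAL_READ, LOCAL_WRITE, REMOTE_READ };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        s = next_state(s, trace[i]);
        printf("after event %u: %s\n", i, name[s]);
    }
    return 0;
}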
52. Programming for multi-core
• Programmers must use threads or processes
• Spread the workload across multiple cores
• Write parallel algorithms
• The OS will map threads/processes to cores
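As a minimal sketch of this style of programming (not from the slides; the worker function and Slice struct are invented for illustration), the POSIX-threads program below splits an array sum across four threads, which the OS is then free to schedule on different cores:

#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static int data[N];

/* Each worker sums its own slice of the array. */
typedef struct { int lo, hi; long sum; } Slice;

static void *worker(void *arg) {
    Slice *s = arg;
    for (int i = s->lo; i < s->hi; i++)
        s->sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1;

    pthread_t tid[NTHREADS];
    Slice slice[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        slice[t] = (Slice){ t * (N / NTHREADS), (t + 1) * (N / NTHREADS), 0 };
        pthread_create(&tid[t], NULL, worker, &slice[t]);  /* OS maps threads to cores */
    }

    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += slice[t].sum;
    }
    printf("total = %ld\n", total);  /* prints 1000000 */
    return 0;
}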
53. Thread safety is very important
• Pre-emptive context switching: a context switch can happen AT ANY TIME
• True concurrency, not just uniprocessor time-slicing
• Concurrency bugs are exposed much faster with multi-core
54. However: need to use synchronization even if only time-slicing on a uniprocessor

int counter = 0;            /* shared by both threads */

void thread1() {
    int temp1 = counter;    /* this read-modify-write is NOT atomic */
    counter = temp1 + 1;
}

void thread2() {
    int temp2 = counter;    /* a context switch can occur here */
    counter = temp2 + 1;
}
55. Need to use synchronization even if only time-slicing on a uniprocessor

Interleaving 1 (gives counter = 2):
    temp1 = counter;
    counter = temp1 + 1;
    temp2 = counter;
    counter = temp2 + 1;

Interleaving 2 (gives counter = 1):
    temp1 = counter;
    temp2 = counter;
    counter = temp1 + 1;
    counter = temp2 + 1;
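A minimal fix, sketched here with a POSIX mutex (this code is not on the slides): holding the lock around the read-modify-write makes the increment atomic with respect to the other thread, so every interleaving now yields counter = 2.

#include <pthread.h>
#include <stdio.h>

int counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    pthread_mutex_lock(&lock);   /* only one thread at a time may enter */
    int temp = counter;          /* the read-modify-write is now atomic */
    counter = temp + 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* always 2 */
    return 0;
}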
56. Assigning threads to the cores
• Each thread/process has an affinity mask
• The affinity mask specifies which cores the thread is allowed to run on
• Different threads can have different masks
• Affinities are inherited across fork()
57. Affinity masks are bit vectors
• Example: 4-way multi-core, without SMT

    core 3   core 2   core 1   core 0
      1        1        0        1

• The process/thread is allowed to run on cores 0, 2, and 3, but not on core 1
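A tiny illustrative C program (not from the slides) decoding that same mask, 1101 in binary:

#include <stdio.h>

int main(void) {
    /* Affinity mask 1101 (binary): cores 0, 2, 3 allowed; core 1 not. */
    unsigned long mask = 0xDUL;
    for (int core = 0; core < 4; core++)
        printf("core %d: %s\n", core,
               (mask >> core) & 1UL ? "allowed" : "not allowed");
    return 0;
}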
58. Affinity masks when multi-core and SMT are combined
• Separate bits for each simultaneous thread
• Example: 4-way multi-core, 2 threads per core

    core 3        core 2        core 1        core 0
    thr1 thr0     thr1 thr0     thr1 thr0     thr1 thr0
     1    1        0    0        1    0        1    1

• Core 2 can’t run the process
• Core 1 can only use one simultaneous thread
59. Default affinities
• The default affinity mask is all 1s: all threads can run on all processors
• Then the OS scheduler decides what threads run on what core
• The OS scheduler detects skewed workloads, migrating threads to less busy processors
60. Process migration is costly
• Need to restart the execution pipeline
• Cached data is invalidated
• The OS scheduler tries to avoid migration as much as possible: it tends to keep a thread on the same core
• This is called soft affinity
61. Hard affinities
• The programmer can prescribe her own affinities (hard affinities)
• Rule of thumb: use the default scheduler unless there is a good reason not to
62. When to set your own affinities
• Two (or more) threads share data structures in memory:
– map them to the same core so that they can share the cache
• Real-time threads. Example: a thread running a robot controller:
– must not be context switched, or else the robot can go unstable
– dedicate an entire core just to this thread
[Photo source: Sensable.com]
63. Kernel scheduler API

#include <sched.h>
int sched_getaffinity(pid_t pid, unsigned int len, unsigned long *mask);

Retrieves the current affinity mask of process ‘pid’ and stores it into the space pointed to by ‘mask’.
‘len’ is the system word size: sizeof(unsigned long)
64. Kernel scheduler API

#include <sched.h>
int sched_setaffinity(pid_t pid, unsigned int len, unsigned long *mask);

Sets the current affinity mask of process ‘pid’ to *mask.
‘len’ is the system word size: sizeof(unsigned long)

To query the affinity of a running process:
[barbic@bonito ~]$ taskset -p 3935
pid 3935's current affinity mask: f
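The slides show the older raw-syscall signatures; in current glibc the wrappers take a cpu_set_t instead of a raw unsigned long. As a hedged, Linux-specific sketch (requires _GNU_SOURCE), this program pins the calling process to core 0 and then reads the mask back:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;

    /* Build a mask allowing only core 0 and apply it to this process
       (pid 0 means "the calling process"). */
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Read the mask back and list the cores we may run on. */
    if (sched_getaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_getaffinity");
        return 1;
    }
    for (int core = 0; core < CPU_SETSIZE; core++)
        if (CPU_ISSET(core, &set))
            printf("allowed to run on core %d\n", core);
    return 0;
}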
66. Legal licensing issues
• Will software vendors charge a separate license for each core, or only a single license per chip?
• Microsoft, Red Hat Linux, and SUSE Linux will license their OS per chip, not per core
67. Conclusion
• Multi-core chips are an important new trend in computer architecture
• Several new multi-core chips are in design phases
• Parallel programming techniques are likely to gain importance