Novel techniques are needed for high-performance applications to exploit massive local concurrency in many-core systems. Getting software applications to run faster on machines with more cores requires substantial restructuring of embedded software stacks, including applications, middleware, and the operating system (OS). Contemporary software stacks are not designed to exploit hundreds or thousands of cores. New OS and middleware mechanisms must be developed to handle scheduling, resource sharing, and communication in many-core systems. The solution must also provide high-level APIs to simplify the development of concurrent software. In this session, we describe new mechanisms for scheduling and communication for many-core embedded platforms.
1. Retargeting Embedded Software Stack for Many-Core Systems
2. Agenda
• What's happening in the many-core world?
• New Challenges
• Collaborative Research
  • Real-Time Innovations (RTI)
  • University of North Carolina
• What we (RTI) do, briefly!
• Research: Retargeting embedded software stack for many-core systems
• Components
  • Scalable multi-core scheduling
  • Scalable communication
  • Middleware modernization
3. Single-core → Multi-core → Many-Core
[Figure: transistor counts keep climbing while CPU clock speed and power consumption hit a wall circa 2004; the solution is multi-core and many-core chips tied together by an interconnect.]
5. Application Domains using Multi-core
• Defense
• Transportation
• Financial trading
• Telecommunications
• Factory automation
• Traffic control
• Medical imaging
• Simulation
6. Grand Challenge and Prize
• Scalable Applications
  • Running faster with more cores
• Inhibitors
  • Embedded software stack (OS, middleware, and apps) not designed for more than a handful of cores
    – One core maxed out, others idling!
    – Overuse of communication via shared memory
    – Severe cache-coherence overhead
  • Advanced techniques known only to experts
    – Programming languages and paradigms
  • Lack of design and debugging tools
8. Trends in concurrent programming (2/7)
• Message-passing instead of shared-memory
  "Do not communicate by sharing memory. Instead, share memory by communicating."
  – Google Go Documentation
• Costs less than shared-memory
• Scales better on many-core
  – Shown up to 80 cores (small data, messages sent to a single server)
  – Source: Andrew Baumann, et al., "The Multikernel: A New OS Architecture for Scalable Multicore Systems", SOSP '09
• Easier to verify and debug
• Bypass cache coherence
• Data locality is very important
  – Source: Silas Boyd-Wickizer, "Corey: An Operating System for Many Cores", USENIX 2008
9. Trends in concurrent programming (3/7)
• Shared-Nothing Partitioning
• Data partitioning
  – Single Instruction Multiple Data (SIMD)
  – a.k.a. "sharding" in DB circles
  – Matrix multiplication on GPGPU
  – Content-based filters on stock symbols ("IBM", "MSFT", "GOOG")
10. Trends in concurrent programming (4/7)
• Shared-Nothing Partitioning
• Functional partitioning
  – E.g., Staged Event-Driven Architecture (SEDA)
  – Split an application into an n-stage pipeline
  – Each stage executes concurrently
  – Explicit communication channels between stages
    • Channels can be monitored for bottlenecks
  – Used in Cassandra, Apache ServiceMix, etc.
11. Trends in concurrent programming (5/7)
• Erlang-Style Concurrency (Actor Model)
• Concurrency-Oriented Programming (COP)
• Fast asynchronous messaging
• Selective message reception
• Copying message-passing semantics (share-nothing concurrency)
• Process monitoring
• Fast process creation/destruction
• Ability to support >> 10,000 concurrent processes with largely unchanged characteristics
Source: http://ulf.wiger.net
12. Trends in concurrent programming (6/7)
• Consistency via Safely Shared Resources
• Replacing coarse-grained locking with fine-grained locking
• Using wait-free primitives
• Using cache-conscious algorithms
• Exploiting application-specific data locality
• New programming APIs
  – OpenCL, PPL, AMP, etc.
14. Explicit Multi-threading: Too much to worry about!
1. The Pillars of Concurrency (Aug 2007)
2. How Much Scalability Do You Have or Need? (Sep 2007)
3. Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)
4. Apply Critical Sections Consistently (Nov 2007)
5. Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)
6. Use Lock Hierarchies to Avoid Deadlock (Jan 2008)
7. Break Amdahl's Law! (Feb 2008)
8. Going Super-linear (Mar 2008)
9. Super Linearity and the Bigger Machine (Apr 2008)
10. Interrupt Politely (May 2008)
11. Maximize Locality, Minimize Contention (Jun 2008)
12. Choose Concurrency-Friendly Data Structures (Jul 2008)
13. The Many Faces of Deadlock (Aug 2008)
14. Lock-Free Code: A False Sense of Security (Sep 2008)
15. Writing Lock-Free Code: A Corrected Queue (Oct 2008)
16. Writing a Generalized Concurrent Queue (Nov 2008)
17. Understanding Parallel Performance (Dec 2008)
18. Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)
19. volatile vs. volatile (Feb 2009)
20. Sharing Is the Root of All Contention (Mar 2009)
21. Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)
22. Use Thread Pools Correctly: Keep Tasks Short and Non-blocking (Apr 2009)
23. Eliminate False Sharing (May 2009)
24. Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)
25. The Power of "In Progress" (Jul 2009)
26. Design for Many-core Systems (Aug 2009)
27. Avoid Exposing Concurrency – Hide It Inside Synchronous Methods (Oct 2009)
28. Prefer structured lifetimes – local, nested, bounded, deterministic (Nov 2009)
29. Prefer Futures to Baked-In "Async APIs" (Jan 2010)
30. Associate Mutexes with Data to Prevent Races (May 2010)
31. Prefer Using Active Objects Instead of Naked Threads (June 2010)
32. Prefer Using Futures or Callbacks to Communicate Asynchronous Results (August 2010)
33. Know When to Use an Active Object Instead of a Mutex (September 2010)
Source: Effective Concurrency, Herb Sutter
Source: POSA2: Patterns for Concurrent, Parallel, and Distributed Systems, Dr. Doug Schmidt
15. Threads are hard!
• Hazards: data race, deadlock, atomicity violation, order violation, forgotten synchronization, incorrect granularity, two-step dance, read and write tearing, priority inversion, lock-free reordering, lock convoys
• Patterns for achieving safety: immutability, purity, isolation
Source: MSDN Magazine, Joe Duffy
16.
17. Collaborative Research!
Real-Time Innovations, Sunnyvale, CA
Prof. James Anderson, IEEE Fellow, University of North Carolina
18.
19. Integrating Enterprise Systems with Edge Systems
[Diagram: on the enterprise side, a JMS app, a SQL app, and a web service connect through JMS, DB, and SOAP adapters; on the edge side, a temperature sensor connects through a socket adapter. All adapters plug into a data-centric messaging bus (RTPS), over which GetTemp requests and temperature responses flow.]
20. Data-Centric Messaging
• Based on the DDS standard (OMG)
  • DDS = Data Distribution Service
• DDS
  • is an API specification
  • for real-time systems
  • provides a publish-subscribe paradigm
  • provides quality-of-service tuning
  • uses an interoperable wire protocol (RTPS)
[Slide callouts: standards-based API for application developers; real-time publish-subscribe wire protocol; open protocol for interoperability; RTI Data Distribution Service]
21. DDS Communication Model
• Provides a "Global Data Space" that is accessible to all interested applications
• Data objects addressed by Domain, Topic, and Key
• Subscriptions are decoupled from Publications
• Contracts established by means of QoS
• Automatic discovery and configuration
[Diagram: participants publish and subscribe to Track instances (keys 1–3) and an Alarm topic through the Global Data Space.]
22. Data-Centric vs. Message-Centric Design
Data-Centric
• Infrastructure understands your data
  – What data schema(s) will be used
  – Which objects are distinct from which other objects
  – What their lifecycles are
  – How to attach behavior (e.g., filters, QoS) to individual objects
• Example technologies
  – DDS API
  – RTPS (DDSI) protocol
Message-Centric
• Infrastructure does not understand your data
  – Opaque contents vary from message to message
  – No object identity; messages indistinguishable
  – Ad-hoc lifecycle management
  – Behaviors can only apply to the whole data stream
• Example technologies
  – JMS API
  – AMQP protocol
23. Re-enabling the Free Lunch, Easily!
• Positioning applications to run faster on machines with more cores – enabling the free lunch!
• Three Pillars of Concurrency
  • Coarse-grained parallelism (functional partitioning)
  • Fine-grained parallelism (running a "for" loop in parallel)
  • Reducing the cost of resource sharing (improved locking)
24. Scalable Communication and Scheduling for Many-Core Systems
• Objectives
  • Create a Component Framework for Developing Scalable Many-core Applications
  • Develop Many-Core Resource Allocation and Scheduling Algorithms
  • Investigate Efficient Message-Passing Mechanisms for Component Dataflow
  • Architect DDS Middleware to Improve Internal Concurrency
  • Demonstrate ideas using a prototype
25. Component-based Software Engineering
• Facilitates separation of concerns
  • Functional partitioning to enable MIMD-style parallelism
  • Managed resource allocation and scheduling algorithms
  • Ease of application lifecycle management
• Component-based design
  • Naturally aligned with functional partitioning (pipeline)
  • Components are modular, cohesive, loosely coupled, and independently deployable
26. Component-based Software Engineering
• Message-passing communication
  • Isolation of state
  • Shared-nothing concurrency
  • Ease of validation
• Lifecycle management
  • Application design
  • Deployment
  • Resource allocation
  • Scheduling
• Deployment and configuration
  • Placement based on data-flow dependencies
  • Cache-conscious placement on cores
[Diagram: component graph transformed via formal models]
27. Scheduling Algorithms for Many-core
• Academic Research Partner
  • Real-Time Systems Group, Prof. James Anderson
  • University of North Carolina, Chapel Hill
• Processing Graph Method (PGM)
• Clustered scheduling on many-core
[Diagram: a processing graph (nodes G1–G7) mapped "N nodes to M cores" on a Tilera TILEPro64 multi-core processor. Source: Tilera.com]
28. Scheduling Algorithms for Many-core
• Key requirements
  • Efficiently utilizing the processing capacity within each cluster
  • Minimizing data movement across clusters
  • Exploiting data locality
• A many-core processor
  • An on-chip distributed system!
  • Cores are addressable
  • Send messages to other cores directly
  • On-chip networks (interconnect)
    – MIT RAW = 4 networks
    – Tilera iMesh = 6 networks
    – On-chip switches, routing algorithms, packet switching, multicast!, deadlock prevention
  • Sending messages to a distant core takes longer
[Image: Tilera iMesh architecture. Source: Tilera.com]
29. Message-passing over shared-memory
• Two key issues
  • Performance
  • Correctness
• Performance
  • Shared-memory does not scale on many-core
  • Full-chip cache coherence is expensive
    – Too much power
    – Too much bandwidth
    – Not all cores need to see the update
    – Data stalls reduce performance
Source: Ph.D. defense, Natalie Enright Jerger
30. Message-passing over shared-memory
• Correctness
  • Hard to achieve in explicit threading (even in task-based libraries)
  • Lock-based programs are not composable
"Perhaps the most fundamental objection [...] is that lock-based programs do not compose: correct fragments may fail when combined. For example, consider a hash table with thread-safe insert and delete operations. Now suppose that we want to delete one item A from table t1, and insert it into table t2; but the intermediate state (in which neither table contains the item) must not be visible to other threads. Unless the implementer of the hash table anticipates this need, there is simply no way to satisfy this requirement. [...] In short, operations that are individually correct (insert, delete) cannot be composed into larger correct operations."
– Tim Harris et al., "Composable Memory Transactions", Section 2: Background, pg. 2
• Message-passing
  • Composable
  • Easy to verify and debug
  • Observe in/out messages only
32. Core-Interconnect Transport for DDS
• RTI DDS supports many transports for messaging
  • UDP, TCP, shared-memory, zero-copy, etc.
• In the future: a "core-interconnect transport"!
  • Tilera provides the Tilera Multicore Components (TMC) library
  • A higher-level library for MIT RAW is in progress
33. Erlang-Style Concurrency: A Panacea?
• Actor Model
  • The OO programming of the concurrency world
• Concurrency-Oriented Programming (COP)
• Fast asynchronous messaging
• Selective message reception
• Copying message-passing semantics (share-nothing concurrency)
• Process monitoring
• Fast process creation/destruction
• Ability to support >> 10,000 concurrent processes with largely unchanged characteristics
Source: http://ulf.wiger.net
34. Actors using Data-Centric Messaging?
• Fast asynchronous messaging
  – < 100 micro-sec latency
  – Vendor-neutral but old (2006) results
  – Source: Ming Xiong, et al., Vanderbilt University
• Selective message reception
  – Standard DDS data partitioning: Domains, Partitions, Topics
  – Content-based Filter Topic (e.g., "key == 0xabcd")
  – Time-based Filter, Query conditions, Sample States, etc.
• Copying message-passing semantics
• "Process" monitoring
• Fast "process" creation/destruction
• >> 10,000 concurrent "processes"
35. Middleware Modernization
• Event-handling patterns
  • Reactor
    – Offers coarse-grained concurrency control
  • Proactor (asynchronous I/O)
    – Decouples threading from concurrency
• Concurrency patterns
  • Leader/Followers
    – Enhances CPU cache affinity, minimizes locking overhead, reduces latency
  • Half-Sync/Half-Async
    – Faster low-level system services
36. Middleware Modernization
• Effective Concurrency (Sutter)
  • Concurrency-friendly data structures
    – Fine-grained locking in linked lists
    – Skip lists for fast parallel search
    – But compactness is important too!
      • See the Going Native 2012 keynote by Dr. Stroustrup, slide #45 (Vector vs. List)
      • std::vector beats std::list in insertion and deletion!
      • Reason: linear search dominates; compact = cache-friendly
  • Data locality aspect
    – A first-class design concern
    – Avoid false sharing
  • Lock-free data structures (Java ConcurrentHashMap)
    – A new one will earn you a Ph.D.
  • Processor affinity and load balancing
    – E.g., pthread_setaffinity_np
37. Concluding Remarks
• Scalable Communication and Scheduling for Many-Core Systems
• Research
  • Create a Component Framework for Developing Scalable Many-core Applications
  • Develop Many-Core Resource Allocation and Scheduling Algorithms
  • Investigate Efficient Message-Passing Mechanisms for Component Dataflow
  • Architect DDS Middleware to Improve Internal Concurrency